May 16 00:05:18.886944 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:16:42 -00 2025
May 16 00:05:18.886966 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffa0077ec5e89092631d817251b58c64c9261c447bd6e8bcef43c52d5e74873e
May 16 00:05:18.886977 kernel: BIOS-provided physical RAM map:
May 16 00:05:18.886984 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 00:05:18.886990 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 00:05:18.886997 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 00:05:18.887005 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 00:05:18.887011 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 00:05:18.887017 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 00:05:18.887024 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 00:05:18.887030 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 16 00:05:18.887039 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 00:05:18.887045 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 00:05:18.887052 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 00:05:18.887060 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 00:05:18.887067 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 00:05:18.887076 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 16 00:05:18.887083 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 16 00:05:18.887090 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 16 00:05:18.887097 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 16 00:05:18.887104 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 00:05:18.887130 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 00:05:18.887136 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 00:05:18.887143 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 00:05:18.887151 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 00:05:18.887158 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 00:05:18.887165 kernel: NX (Execute Disable) protection: active
May 16 00:05:18.887174 kernel: APIC: Static calls initialized
May 16 00:05:18.887181 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 16 00:05:18.887188 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 16 00:05:18.887195 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 16 00:05:18.887202 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 16 00:05:18.887209 kernel: extended physical RAM map:
May 16 00:05:18.887216 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 00:05:18.887223 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 00:05:18.887230 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 00:05:18.887236 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 00:05:18.887243 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 00:05:18.887251 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 00:05:18.887260 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 00:05:18.887270 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 16 00:05:18.887278 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 16 00:05:18.887285 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 16 00:05:18.887292 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 16 00:05:18.887299 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 16 00:05:18.887309 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 00:05:18.887317 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 00:05:18.887324 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 00:05:18.887333 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 00:05:18.887341 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 00:05:18.887349 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 16 00:05:18.887358 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 16 00:05:18.887365 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 16 00:05:18.887372 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 16 00:05:18.887382 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 00:05:18.887389 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 00:05:18.887396 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 00:05:18.887403 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 00:05:18.887411 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 00:05:18.887418 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 00:05:18.887425 kernel: efi: EFI v2.7 by EDK II
May 16 00:05:18.887432 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 16 00:05:18.887440 kernel: random: crng init done
May 16 00:05:18.887447 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 16 00:05:18.887454 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 16 00:05:18.887461 kernel: secureboot: Secure boot disabled
May 16 00:05:18.887471 kernel: SMBIOS 2.8 present.
May 16 00:05:18.887478 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 16 00:05:18.887485 kernel: Hypervisor detected: KVM
May 16 00:05:18.887492 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 00:05:18.887500 kernel: kvm-clock: using sched offset of 2697864232 cycles
May 16 00:05:18.887507 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 00:05:18.887515 kernel: tsc: Detected 2794.748 MHz processor
May 16 00:05:18.887523 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 00:05:18.887530 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 00:05:18.887538 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 16 00:05:18.887548 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 16 00:05:18.887555 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 00:05:18.887562 kernel: Using GB pages for direct mapping
May 16 00:05:18.887570 kernel: ACPI: Early table checksum verification disabled
May 16 00:05:18.887577 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 16 00:05:18.887585 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 16 00:05:18.887592 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:05:18.887600 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:05:18.887607 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 16 00:05:18.887617 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:05:18.887624 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:05:18.887632 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:05:18.887639 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:05:18.887647 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 16 00:05:18.887654 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 16 00:05:18.887661 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 16 00:05:18.887669 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 16 00:05:18.887676 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 16 00:05:18.887686 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 16 00:05:18.887693 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 16 00:05:18.887700 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 16 00:05:18.887708 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 16 00:05:18.887715 kernel: No NUMA configuration found
May 16 00:05:18.887722 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 16 00:05:18.887730 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 16 00:05:18.887737 kernel: Zone ranges:
May 16 00:05:18.887745 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
May 16 00:05:18.887754 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cedbfff]
May 16 00:05:18.887761 kernel:   Normal   empty
May 16 00:05:18.887769 kernel: Movable zone start for each node
May 16 00:05:18.887776 kernel: Early memory node ranges
May 16 00:05:18.887784 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
May 16 00:05:18.887791 kernel:   node   0: [mem 0x0000000000100000-0x00000000007fffff]
May 16 00:05:18.887798 kernel:   node   0: [mem 0x0000000000808000-0x000000000080afff]
May 16 00:05:18.887805 kernel:   node   0: [mem 0x000000000080c000-0x0000000000810fff]
May 16 00:05:18.887813 kernel:   node   0: [mem 0x0000000000900000-0x000000009bd3efff]
May 16 00:05:18.887822 kernel:   node   0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 16 00:05:18.887830 kernel:   node   0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 16 00:05:18.887837 kernel:   node   0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 16 00:05:18.887844 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 16 00:05:18.887851 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 00:05:18.887859 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 16 00:05:18.887874 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 16 00:05:18.887883 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 00:05:18.887891 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 16 00:05:18.887898 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 16 00:05:18.887906 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 16 00:05:18.887914 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 16 00:05:18.887921 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 16 00:05:18.887931 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 00:05:18.887939 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 00:05:18.887946 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 00:05:18.887954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 00:05:18.887964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 00:05:18.887972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 00:05:18.887980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 00:05:18.887987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 00:05:18.887995 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 00:05:18.888003 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 16 00:05:18.888010 kernel: TSC deadline timer available
May 16 00:05:18.888018 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 16 00:05:18.888026 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 00:05:18.888033 kernel: kvm-guest: KVM setup pv remote TLB flush
May 16 00:05:18.888043 kernel: kvm-guest: setup PV sched yield
May 16 00:05:18.888051 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 16 00:05:18.888058 kernel: Booting paravirtualized kernel on KVM
May 16 00:05:18.888066 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 00:05:18.888074 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 16 00:05:18.888082 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 16 00:05:18.888090 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 16 00:05:18.888097 kernel: pcpu-alloc: [0] 0 1 2 3
May 16 00:05:18.888105 kernel: kvm-guest: PV spinlocks enabled
May 16 00:05:18.888132 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 16 00:05:18.888142 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffa0077ec5e89092631d817251b58c64c9261c447bd6e8bcef43c52d5e74873e
May 16 00:05:18.888150 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:05:18.888158 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 00:05:18.888166 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:05:18.888174 kernel: Fallback order for Node 0: 0
May 16 00:05:18.888182 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 16 00:05:18.888189 kernel: Policy zone: DMA32
May 16 00:05:18.888199 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:05:18.888207 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 177824K reserved, 0K cma-reserved)
May 16 00:05:18.888215 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 00:05:18.888223 kernel: ftrace: allocating 37922 entries in 149 pages
May 16 00:05:18.888231 kernel: ftrace: allocated 149 pages with 4 groups
May 16 00:05:18.888238 kernel: Dynamic Preempt: voluntary
May 16 00:05:18.888246 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:05:18.888258 kernel: rcu: RCU event tracing is enabled.
May 16 00:05:18.888266 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 00:05:18.888276 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:05:18.888284 kernel: Rude variant of Tasks RCU enabled.
May 16 00:05:18.888292 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:05:18.888300 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:05:18.888308 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 00:05:18.888316 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 16 00:05:18.888323 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 00:05:18.888331 kernel: Console: colour dummy device 80x25
May 16 00:05:18.888339 kernel: printk: console [ttyS0] enabled
May 16 00:05:18.888348 kernel: ACPI: Core revision 20230628
May 16 00:05:18.888357 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 16 00:05:18.888364 kernel: APIC: Switch to symmetric I/O mode setup
May 16 00:05:18.888372 kernel: x2apic enabled
May 16 00:05:18.888380 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 00:05:18.888388 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 16 00:05:18.888396 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 16 00:05:18.888403 kernel: kvm-guest: setup PV IPIs
May 16 00:05:18.888411 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 00:05:18.888421 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 16 00:05:18.888429 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 16 00:05:18.888437 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 16 00:05:18.888444 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 16 00:05:18.888452 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 16 00:05:18.888460 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 00:05:18.888468 kernel: Spectre V2 : Mitigation: Retpolines
May 16 00:05:18.888476 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 16 00:05:18.888484 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 16 00:05:18.888494 kernel: RETBleed: Mitigation: untrained return thunk
May 16 00:05:18.888502 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 00:05:18.888510 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 00:05:18.888518 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 16 00:05:18.888526 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 16 00:05:18.888534 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 16 00:05:18.888541 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 00:05:18.888549 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 00:05:18.888557 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 00:05:18.888567 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 00:05:18.888574 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 00:05:18.888582 kernel: Freeing SMP alternatives memory: 32K
May 16 00:05:18.888590 kernel: pid_max: default: 32768 minimum: 301
May 16 00:05:18.888597 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 16 00:05:18.888605 kernel: landlock: Up and running.
May 16 00:05:18.888613 kernel: SELinux: Initializing.
May 16 00:05:18.888620 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:05:18.888628 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:05:18.888638 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 16 00:05:18.888646 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:05:18.888654 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:05:18.888662 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:05:18.888670 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 16 00:05:18.888677 kernel: ... version:                0
May 16 00:05:18.888685 kernel: ... bit width:              48
May 16 00:05:18.888692 kernel: ... generic registers:      6
May 16 00:05:18.888702 kernel: ... value mask:             0000ffffffffffff
May 16 00:05:18.888710 kernel: ... max period:             00007fffffffffff
May 16 00:05:18.888718 kernel: ... fixed-purpose events:   0
May 16 00:05:18.888725 kernel: ... event mask:             000000000000003f
May 16 00:05:18.888733 kernel: signal: max sigframe size: 1776
May 16 00:05:18.888740 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:05:18.888748 kernel: rcu: 	Max phase no-delay instances is 400.
May 16 00:05:18.888756 kernel: smp: Bringing up secondary CPUs ...
May 16 00:05:18.888764 kernel: smpboot: x86: Booting SMP configuration:
May 16 00:05:18.888771 kernel: .... node  #0, CPUs:      #1 #2 #3
May 16 00:05:18.888781 kernel: smp: Brought up 1 node, 4 CPUs
May 16 00:05:18.888789 kernel: smpboot: Max logical packages: 1
May 16 00:05:18.888796 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 16 00:05:18.888804 kernel: devtmpfs: initialized
May 16 00:05:18.888811 kernel: x86/mm: Memory block size: 128MB
May 16 00:05:18.888819 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 16 00:05:18.888827 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 16 00:05:18.888835 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 16 00:05:18.888843 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 16 00:05:18.888853 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 16 00:05:18.888860 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 16 00:05:18.888868 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:05:18.888876 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 00:05:18.888884 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:05:18.888891 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:05:18.888899 kernel: audit: initializing netlink subsys (disabled)
May 16 00:05:18.888907 kernel: audit: type=2000 audit(1747353919.295:1): state=initialized audit_enabled=0 res=1
May 16 00:05:18.888916 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:05:18.888924 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 00:05:18.888932 kernel: cpuidle: using governor menu
May 16 00:05:18.888939 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:05:18.888947 kernel: dca service started, version 1.12.1
May 16 00:05:18.888955 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 16 00:05:18.888963 kernel: PCI: Using configuration type 1 for base access
May 16 00:05:18.888970 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 00:05:18.888978 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:05:18.888988 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 00:05:18.888996 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:05:18.889003 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 00:05:18.889011 kernel: ACPI: Added _OSI(Module Device)
May 16 00:05:18.889019 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:05:18.889026 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:05:18.889034 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:05:18.889042 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:05:18.889049 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 16 00:05:18.889059 kernel: ACPI: Interpreter enabled
May 16 00:05:18.889067 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 00:05:18.889075 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 00:05:18.889083 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 00:05:18.889090 kernel: PCI: Using E820 reservations for host bridge windows
May 16 00:05:18.889098 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 16 00:05:18.889124 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 00:05:18.889302 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:05:18.889437 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 16 00:05:18.889564 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 16 00:05:18.889574 kernel: PCI host bridge to bus 0000:00
May 16 00:05:18.889700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 00:05:18.889815 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 00:05:18.889926 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 00:05:18.890037 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 16 00:05:18.890179 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 16 00:05:18.890292 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 16 00:05:18.890403 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 00:05:18.890543 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 16 00:05:18.890677 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 16 00:05:18.890802 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 16 00:05:18.890932 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 16 00:05:18.891056 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 16 00:05:18.891289 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 16 00:05:18.891413 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 00:05:18.891550 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 16 00:05:18.891673 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 16 00:05:18.891796 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 16 00:05:18.891925 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 16 00:05:18.892057 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 16 00:05:18.892215 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 16 00:05:18.892340 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 16 00:05:18.892465 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 16 00:05:18.892598 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 16 00:05:18.892725 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 16 00:05:18.892854 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 16 00:05:18.892980 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 16 00:05:18.893104 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 16 00:05:18.893274 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 16 00:05:18.893398 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 16 00:05:18.893535 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 16 00:05:18.893660 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 16 00:05:18.893788 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 16 00:05:18.893919 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 16 00:05:18.894043 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 16 00:05:18.894053 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 00:05:18.894061 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 00:05:18.894069 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 00:05:18.894077 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 00:05:18.894084 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 16 00:05:18.894095 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 16 00:05:18.894103 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 16 00:05:18.894173 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 16 00:05:18.894181 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 16 00:05:18.894188 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 16 00:05:18.894197 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 16 00:05:18.894204 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 16 00:05:18.894212 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 16 00:05:18.894220 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 16 00:05:18.894230 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 16 00:05:18.894238 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 16 00:05:18.894246 kernel: iommu: Default domain type: Translated
May 16 00:05:18.894253 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 00:05:18.894261 kernel: efivars: Registered efivars operations
May 16 00:05:18.894269 kernel: PCI: Using ACPI for IRQ routing
May 16 00:05:18.894277 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 00:05:18.894284 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 16 00:05:18.894292 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 16 00:05:18.894302 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 16 00:05:18.894309 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 16 00:05:18.894317 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 16 00:05:18.894325 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 16 00:05:18.894332 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 16 00:05:18.894340 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 16 00:05:18.894466 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 16 00:05:18.894587 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 16 00:05:18.894715 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 00:05:18.894725 kernel: vgaarb: loaded
May 16 00:05:18.894733 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 16 00:05:18.894741 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 16 00:05:18.894749 kernel: clocksource: Switched to clocksource kvm-clock
May 16 00:05:18.894757 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:05:18.894765 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:05:18.894772 kernel: pnp: PnP ACPI init
May 16 00:05:18.894906 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 16 00:05:18.894921 kernel: pnp: PnP ACPI: found 6 devices
May 16 00:05:18.894929 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 00:05:18.894937 kernel: NET: Registered PF_INET protocol family
May 16 00:05:18.894945 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 00:05:18.894969 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 00:05:18.894979 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:05:18.894987 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:05:18.894997 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 00:05:18.895007 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 00:05:18.895015 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:05:18.895023 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:05:18.895031 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:05:18.895039 kernel: NET: Registered PF_XDP protocol family
May 16 00:05:18.895189 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 16 00:05:18.895313 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 16 00:05:18.895426 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 00:05:18.895545 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 00:05:18.895660 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 00:05:18.895788 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 16 00:05:18.895903 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 16 00:05:18.896014 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 16 00:05:18.896025 kernel: PCI: CLS 0 bytes, default 64
May 16 00:05:18.896033 kernel: Initialise system trusted keyrings
May 16 00:05:18.896041 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 00:05:18.896053 kernel: Key type asymmetric registered
May 16 00:05:18.896061 kernel: Asymmetric key parser 'x509' registered
May 16 00:05:18.896069 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 16 00:05:18.896077 kernel: io scheduler mq-deadline registered
May 16 00:05:18.896085 kernel: io scheduler kyber registered
May 16 00:05:18.896093 kernel: io scheduler bfq registered
May 16 00:05:18.896101 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 00:05:18.896169 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 16 00:05:18.896177 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 16 00:05:18.896189 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 16 00:05:18.896199 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 00:05:18.896207 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 00:05:18.896215 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 00:05:18.896223 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 00:05:18.896231 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 00:05:18.896370 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 00:05:18.896382 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 16 00:05:18.896505 kernel: rtc_cmos 00:04: registered as rtc0
May 16 00:05:18.896627 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T00:05:18 UTC (1747353918)
May 16 00:05:18.896744 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 16 00:05:18.896755 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 16 00:05:18.896763 kernel: efifb: probing for efifb
May 16 00:05:18.896771 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 16 00:05:18.896782 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 16 00:05:18.896790 kernel: efifb: scrolling: redraw
May 16 00:05:18.896798 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 16 00:05:18.896806 kernel: Console: switching to colour frame buffer device 160x50
May 16 00:05:18.896814 kernel: fb0: EFI VGA frame buffer device
May 16 00:05:18.896822 kernel: pstore: Using crash dump compression: deflate
May 16 00:05:18.896830 kernel: pstore: Registered efi_pstore as persistent store backend
May 16 00:05:18.896838 kernel: NET: Registered PF_INET6 protocol family
May 16 00:05:18.896846 kernel: Segment Routing with IPv6
May 16 00:05:18.896857 kernel: In-situ OAM (IOAM) with IPv6
May 16 00:05:18.896865 kernel: NET: Registered PF_PACKET protocol family
May 16 00:05:18.896873 kernel: Key type dns_resolver registered
May 16 00:05:18.896880 kernel: IPI shorthand broadcast: enabled
May 16 00:05:18.896888 kernel: sched_clock: Marking stable (576002503, 154179627)->(777793568, -47611438)
May 16 00:05:18.896896 kernel: registered taskstats version 1
May 16 00:05:18.896904 kernel: Loading compiled-in X.509 certificates
May 16 00:05:18.896912 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 22e80ca6ad28c00533ea5eb0843f23994a6e2a11'
May 16 00:05:18.896920 kernel: Key type .fscrypt registered
May 16 00:05:18.896931 kernel: Key type fscrypt-provisioning registered
May 16 00:05:18.896941 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 00:05:18.896949 kernel: ima: Allocated hash algorithm: sha1
May 16 00:05:18.896957 kernel: ima: No architecture policies found
May 16 00:05:18.896965 kernel: clk: Disabling unused clocks
May 16 00:05:18.896973 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 16 00:05:18.896981 kernel: Write protecting the kernel read-only data: 38912k
May 16 00:05:18.896989 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 16 00:05:18.896997 kernel: Run /init as init process
May 16 00:05:18.897007 kernel: with arguments:
May 16 00:05:18.897015 kernel: /init
May 16 00:05:18.897023 kernel: with environment:
May 16 00:05:18.897031 kernel: HOME=/
May 16 00:05:18.897038 kernel: TERM=linux
May 16 00:05:18.897046 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 00:05:18.897055 systemd[1]: Successfully made /usr/ read-only.
May 16 00:05:18.897066 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 00:05:18.897078 systemd[1]: Detected virtualization kvm.
May 16 00:05:18.897086 systemd[1]: Detected architecture x86-64.
May 16 00:05:18.897094 systemd[1]: Running in initrd.
May 16 00:05:18.897102 systemd[1]: No hostname configured, using default hostname.
May 16 00:05:18.897144 systemd[1]: Hostname set to .
May 16 00:05:18.897153 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:05:18.897161 systemd[1]: Queued start job for default target initrd.target.
May 16 00:05:18.897170 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 00:05:18.897182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 00:05:18.897191 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 00:05:18.897200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 00:05:18.897209 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 00:05:18.897218 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 00:05:18.897232 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 00:05:18.897243 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 00:05:18.897252 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 00:05:18.897261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 00:05:18.897269 systemd[1]: Reached target paths.target - Path Units.
May 16 00:05:18.897278 systemd[1]: Reached target slices.target - Slice Units.
May 16 00:05:18.897287 systemd[1]: Reached target swap.target - Swaps.
May 16 00:05:18.897295 systemd[1]: Reached target timers.target - Timer Units.
May 16 00:05:18.897304 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 00:05:18.897312 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 00:05:18.897323 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 00:05:18.897332 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 00:05:18.897340 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 00:05:18.897349 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 00:05:18.897357 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 00:05:18.897366 systemd[1]: Reached target sockets.target - Socket Units.
May 16 00:05:18.897375 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 00:05:18.897383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 00:05:18.897392 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 00:05:18.897403 systemd[1]: Starting systemd-fsck-usr.service...
May 16 00:05:18.897411 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 00:05:18.897420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 00:05:18.897428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:05:18.897437 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 00:05:18.897446 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 00:05:18.897457 systemd[1]: Finished systemd-fsck-usr.service.
May 16 00:05:18.897466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 00:05:18.897495 systemd-journald[193]: Collecting audit messages is disabled.
May 16 00:05:18.897518 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 00:05:18.897527 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 00:05:18.897535 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:05:18.897544 systemd-journald[193]: Journal started
May 16 00:05:18.897567 systemd-journald[193]: Runtime Journal (/run/log/journal/b80e3a1d928a41ee970d397839aea3ab) is 6M, max 48.2M, 42.2M free.
May 16 00:05:18.900195 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 00:05:18.902195 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 00:05:18.902494 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 00:05:18.906427 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 00:05:18.909340 systemd-modules-load[194]: Inserted module 'overlay'
May 16 00:05:18.917753 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 00:05:18.919207 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 00:05:18.920806 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 00:05:18.935975 dracut-cmdline[223]: dracut-dracut-053
May 16 00:05:18.938911 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffa0077ec5e89092631d817251b58c64c9261c447bd6e8bcef43c52d5e74873e
May 16 00:05:18.948144 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 00:05:18.950138 kernel: Bridge firewalling registered
May 16 00:05:18.950149 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 16 00:05:18.952543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 00:05:18.958343 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 00:05:18.967393 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 00:05:18.973295 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 00:05:19.006911 systemd-resolved[265]: Positive Trust Anchors:
May 16 00:05:19.006931 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:05:19.006961 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 00:05:19.009420 systemd-resolved[265]: Defaulting to hostname 'linux'.
May 16 00:05:19.010448 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 00:05:19.015823 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 00:05:19.039146 kernel: SCSI subsystem initialized
May 16 00:05:19.048143 kernel: Loading iSCSI transport class v2.0-870.
May 16 00:05:19.058142 kernel: iscsi: registered transport (tcp)
May 16 00:05:19.079221 kernel: iscsi: registered transport (qla4xxx)
May 16 00:05:19.079238 kernel: QLogic iSCSI HBA Driver
May 16 00:05:19.125928 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 00:05:19.137258 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 00:05:19.163742 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 00:05:19.163774 kernel: device-mapper: uevent: version 1.0.3
May 16 00:05:19.163786 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 16 00:05:19.206145 kernel: raid6: avx2x4 gen() 30398 MB/s
May 16 00:05:19.223134 kernel: raid6: avx2x2 gen() 30769 MB/s
May 16 00:05:19.240216 kernel: raid6: avx2x1 gen() 25195 MB/s
May 16 00:05:19.240234 kernel: raid6: using algorithm avx2x2 gen() 30769 MB/s
May 16 00:05:19.258236 kernel: raid6: .... xor() 19944 MB/s, rmw enabled
May 16 00:05:19.258307 kernel: raid6: using avx2x2 recovery algorithm
May 16 00:05:19.281151 kernel: xor: automatically using best checksumming function avx
May 16 00:05:19.431151 kernel: Btrfs loaded, zoned=no, fsverity=no
May 16 00:05:19.445561 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 16 00:05:19.457341 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 00:05:19.474388 systemd-udevd[414]: Using default interface naming scheme 'v255'.
May 16 00:05:19.480136 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 00:05:19.493356 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 16 00:05:19.506729 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
May 16 00:05:19.540533 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 00:05:19.552274 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 00:05:19.615163 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 00:05:19.624274 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 16 00:05:19.634928 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 16 00:05:19.636812 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 00:05:19.640419 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 00:05:19.642700 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 00:05:19.655358 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 16 00:05:19.658516 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 16 00:05:19.660133 kernel: cryptd: max_cpu_qlen set to 1000
May 16 00:05:19.663627 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 00:05:19.675330 kernel: AVX2 version of gcm_enc/dec engaged.
May 16 00:05:19.675345 kernel: AES CTR mode by8 optimization enabled
May 16 00:05:19.681324 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 16 00:05:19.687897 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 00:05:19.687921 kernel: GPT:9289727 != 19775487
May 16 00:05:19.687932 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 00:05:19.687948 kernel: GPT:9289727 != 19775487
May 16 00:05:19.687959 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 00:05:19.687969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:05:19.687979 kernel: libata version 3.00 loaded.
May 16 00:05:19.693939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 00:05:19.694761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 00:05:19.697187 kernel: ahci 0000:00:1f.2: version 3.0
May 16 00:05:19.700144 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 16 00:05:19.698780 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 00:05:19.705049 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 16 00:05:19.705254 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 16 00:05:19.700164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 00:05:19.700359 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:05:19.712223 kernel: scsi host0: ahci
May 16 00:05:19.712400 kernel: scsi host1: ahci
May 16 00:05:19.707923 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:05:19.719157 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (476)
May 16 00:05:19.719413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:05:19.723157 kernel: BTRFS: device fsid 7e35ecc6-4b22-44da-ae37-cf2eabf14492 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (463)
May 16 00:05:19.723173 kernel: scsi host2: ahci
May 16 00:05:19.727126 kernel: scsi host3: ahci
May 16 00:05:19.732131 kernel: scsi host4: ahci
May 16 00:05:19.734176 kernel: scsi host5: ahci
May 16 00:05:19.734348 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 16 00:05:19.734360 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 16 00:05:19.735860 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 16 00:05:19.735873 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 16 00:05:19.735888 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 16 00:05:19.738184 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 16 00:05:19.739904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:05:19.750488 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 16 00:05:19.771571 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 16 00:05:19.779222 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 16 00:05:19.781717 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 16 00:05:19.793780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 00:05:19.805249 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 16 00:05:19.807779 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 00:05:19.816403 disk-uuid[572]: Primary Header is updated.
May 16 00:05:19.816403 disk-uuid[572]: Secondary Entries is updated.
May 16 00:05:19.816403 disk-uuid[572]: Secondary Header is updated.
May 16 00:05:19.819128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:05:19.824160 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:05:19.834784 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 00:05:20.045229 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 16 00:05:20.045328 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 16 00:05:20.045340 kernel: ata3.00: applying bridge limits
May 16 00:05:20.047156 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 16 00:05:20.047242 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 16 00:05:20.048139 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 16 00:05:20.049138 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 16 00:05:20.049199 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 16 00:05:20.050137 kernel: ata3.00: configured for UDMA/100
May 16 00:05:20.051138 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 16 00:05:20.096146 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 16 00:05:20.096355 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 16 00:05:20.112137 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 16 00:05:20.825745 disk-uuid[574]: The operation has completed successfully.
May 16 00:05:20.826917 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:05:20.859517 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 00:05:20.859646 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 16 00:05:20.898239 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 16 00:05:20.901459 sh[597]: Success
May 16 00:05:20.914184 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 16 00:05:20.949564 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 16 00:05:20.963920 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 16 00:05:20.966015 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 16 00:05:20.978409 kernel: BTRFS info (device dm-0): first mount of filesystem 7e35ecc6-4b22-44da-ae37-cf2eabf14492
May 16 00:05:20.978450 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 16 00:05:20.978468 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 16 00:05:20.979604 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 16 00:05:20.980454 kernel: BTRFS info (device dm-0): using free space tree
May 16 00:05:20.985921 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 16 00:05:20.988704 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 16 00:05:21.003386 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 16 00:05:21.006161 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 16 00:05:21.023483 kernel: BTRFS info (device vda6): first mount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1
May 16 00:05:21.023514 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 00:05:21.023526 kernel: BTRFS info (device vda6): using free space tree
May 16 00:05:21.027160 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 00:05:21.032141 kernel: BTRFS info (device vda6): last unmount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1
May 16 00:05:21.037616 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 16 00:05:21.048268 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 16 00:05:21.163527 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 00:05:21.171798 ignition[682]: Ignition 2.20.0
May 16 00:05:21.172003 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 00:05:21.171809 ignition[682]: Stage: fetch-offline
May 16 00:05:21.171862 ignition[682]: no configs at "/usr/lib/ignition/base.d"
May 16 00:05:21.171872 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:05:21.171975 ignition[682]: parsed url from cmdline: ""
May 16 00:05:21.171980 ignition[682]: no config URL provided
May 16 00:05:21.171989 ignition[682]: reading system config file "/usr/lib/ignition/user.ign"
May 16 00:05:21.171999 ignition[682]: no config at "/usr/lib/ignition/user.ign"
May 16 00:05:21.172026 ignition[682]: op(1): [started] loading QEMU firmware config module
May 16 00:05:21.172032 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 00:05:21.185701 ignition[682]: op(1): [finished] loading QEMU firmware config module
May 16 00:05:21.203009 systemd-networkd[780]: lo: Link UP
May 16 00:05:21.203019 systemd-networkd[780]: lo: Gained carrier
May 16 00:05:21.204691 systemd-networkd[780]: Enumeration completed
May 16 00:05:21.205046 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:05:21.205050 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:05:21.205322 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 00:05:21.207642 systemd-networkd[780]: eth0: Link UP
May 16 00:05:21.207646 systemd-networkd[780]: eth0: Gained carrier
May 16 00:05:21.207653 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:05:21.208959 systemd[1]: Reached target network.target - Network.
May 16 00:05:21.232165 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:05:21.241652 ignition[682]: parsing config with SHA512: 12795db4472ffe6568633a5179e4ae7562fabae0bc24dddbc3a43af1f1546513188425e8ca529b962d256bd7197b93d9867690c548bd8b65bab7bad3cd4e170a
May 16 00:05:21.252620 unknown[682]: fetched base config from "system"
May 16 00:05:21.253587 unknown[682]: fetched user config from "qemu"
May 16 00:05:21.254037 ignition[682]: fetch-offline: fetch-offline passed
May 16 00:05:21.254160 ignition[682]: Ignition finished successfully
May 16 00:05:21.257099 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 00:05:21.258490 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 00:05:21.263271 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 16 00:05:21.286545 ignition[788]: Ignition 2.20.0
May 16 00:05:21.286556 ignition[788]: Stage: kargs
May 16 00:05:21.286706 ignition[788]: no configs at "/usr/lib/ignition/base.d"
May 16 00:05:21.286718 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:05:21.287774 ignition[788]: kargs: kargs passed
May 16 00:05:21.287827 ignition[788]: Ignition finished successfully
May 16 00:05:21.294104 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 16 00:05:21.307298 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 16 00:05:21.339943 ignition[797]: Ignition 2.20.0
May 16 00:05:21.339956 ignition[797]: Stage: disks
May 16 00:05:21.340160 ignition[797]: no configs at "/usr/lib/ignition/base.d"
May 16 00:05:21.340171 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:05:21.343955 ignition[797]: disks: disks passed
May 16 00:05:21.344010 ignition[797]: Ignition finished successfully
May 16 00:05:21.347679 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 16 00:05:21.348951 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 16 00:05:21.350877 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 16 00:05:21.352159 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 00:05:21.354212 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 00:05:21.356453 systemd[1]: Reached target basic.target - Basic System.
May 16 00:05:21.394244 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 16 00:05:21.430857 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 16 00:05:21.611075 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 16 00:05:21.621300 systemd[1]: Mounting sysroot.mount - /sysroot...
May 16 00:05:21.731164 kernel: EXT4-fs (vda9): mounted filesystem 14ea3086-9247-48be-9c0b-44ef9d324f10 r/w with ordered data mode. Quota mode: none.
May 16 00:05:21.731625 systemd[1]: Mounted sysroot.mount - /sysroot.
May 16 00:05:21.732458 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 16 00:05:21.744209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 00:05:21.746259 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 16 00:05:21.746936 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 16 00:05:21.746990 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 00:05:21.775097 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815)
May 16 00:05:21.775138 kernel: BTRFS info (device vda6): first mount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1
May 16 00:05:21.775150 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 00:05:21.775163 kernel: BTRFS info (device vda6): using free space tree
May 16 00:05:21.747021 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 00:05:21.779133 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 00:05:21.793313 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 00:05:21.799060 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 16 00:05:21.800486 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 16 00:05:21.836005 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
May 16 00:05:21.841646 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
May 16 00:05:21.845428 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
May 16 00:05:21.849525 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 00:05:21.938028 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 16 00:05:21.948190 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 16 00:05:21.951291 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 16 00:05:21.956137 kernel: BTRFS info (device vda6): last unmount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1
May 16 00:05:21.974502 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 16 00:05:21.977257 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 16 00:05:22.012697 ignition[933]: INFO : Ignition 2.20.0
May 16 00:05:22.012697 ignition[933]: INFO : Stage: mount
May 16 00:05:22.014754 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:05:22.014754 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:05:22.017610 ignition[933]: INFO : mount: mount passed
May 16 00:05:22.018429 ignition[933]: INFO : Ignition finished successfully
May 16 00:05:22.021090 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 16 00:05:22.036194 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 16 00:05:22.042996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 00:05:22.054141 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
May 16 00:05:22.054196 kernel: BTRFS info (device vda6): first mount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1
May 16 00:05:22.055955 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 00:05:22.056658 kernel: BTRFS info (device vda6): using free space tree
May 16 00:05:22.059132 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 00:05:22.061279 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 00:05:22.110125 ignition[959]: INFO : Ignition 2.20.0 May 16 00:05:22.110125 ignition[959]: INFO : Stage: files May 16 00:05:22.110125 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:05:22.110125 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:05:22.114158 ignition[959]: DEBUG : files: compiled without relabeling support, skipping May 16 00:05:22.114158 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:05:22.114158 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:05:22.114158 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:05:22.114158 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:05:22.114158 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:05:22.114130 unknown[959]: wrote ssh authorized keys file for user: core May 16 00:05:22.122963 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 16 00:05:22.122963 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 16 00:05:22.159428 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:05:22.314459 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 16 00:05:22.314459 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:05:22.318519 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 16 00:05:22.797788 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 00:05:22.933428 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:05:22.935803 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 16 00:05:23.070247 systemd-networkd[780]: eth0: Gained IPv6LL May 16 00:05:23.790362 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 00:05:24.251023 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:05:24.251023 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 00:05:24.255449 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:05:24.258291 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:05:24.258291 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 16 00:05:24.258291 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 16 00:05:24.263615 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:05:24.265915 ignition[959]: INFO : files: op(e): op(f): [finished] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:05:24.265915 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 16 00:05:24.269805 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:05:24.315240 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:05:24.320631 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:05:24.322325 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:05:24.322325 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 16 00:05:24.322325 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:05:24.322325 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:05:24.322325 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:05:24.322325 ignition[959]: INFO : files: files passed May 16 00:05:24.322325 ignition[959]: INFO : Ignition finished successfully May 16 00:05:24.333794 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 00:05:24.344274 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 00:05:24.346253 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 00:05:24.348118 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:05:24.348240 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 16 00:05:24.356236 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory May 16 00:05:24.359065 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:05:24.359065 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 00:05:24.362675 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:05:24.366740 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:05:24.367304 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 00:05:24.383244 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 00:05:24.414755 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:05:24.414885 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 00:05:24.415595 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 00:05:24.415854 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 00:05:24.416555 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 00:05:24.417365 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 00:05:24.442546 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:05:24.459438 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 00:05:24.472074 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 00:05:24.475277 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:05:24.475919 systemd[1]: Stopped target timers.target - Timer Units. 
May 16 00:05:24.476616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:05:24.476749 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:05:24.483329 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 00:05:24.486234 systemd[1]: Stopped target basic.target - Basic System. May 16 00:05:24.486905 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 00:05:24.487489 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:05:24.487818 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 00:05:24.488399 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 00:05:24.496523 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:05:24.499057 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 00:05:24.499454 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 00:05:24.499847 systemd[1]: Stopped target swap.target - Swaps. May 16 00:05:24.500453 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:05:24.500572 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 00:05:24.507521 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 00:05:24.508202 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:05:24.508659 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 00:05:24.508825 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:05:24.514460 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:05:24.514564 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 00:05:24.516705 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 16 00:05:24.516808 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:05:24.519742 systemd[1]: Stopped target paths.target - Path Units. May 16 00:05:24.521803 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:05:24.525197 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:05:24.525922 systemd[1]: Stopped target slices.target - Slice Units. May 16 00:05:24.528807 systemd[1]: Stopped target sockets.target - Socket Units. May 16 00:05:24.529166 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:05:24.529256 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:05:24.529730 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:05:24.529805 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:05:24.530186 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:05:24.530296 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:05:24.530753 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:05:24.530850 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 00:05:24.542236 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 00:05:24.542648 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:05:24.542747 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:05:24.543706 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 00:05:24.547392 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:05:24.547565 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:05:24.553983 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
May 16 00:05:24.554096 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:05:24.561692 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:05:24.561835 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 00:05:24.566357 ignition[1013]: INFO : Ignition 2.20.0 May 16 00:05:24.566357 ignition[1013]: INFO : Stage: umount May 16 00:05:24.568082 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:05:24.568082 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:05:24.568082 ignition[1013]: INFO : umount: umount passed May 16 00:05:24.568082 ignition[1013]: INFO : Ignition finished successfully May 16 00:05:24.569250 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:05:24.569380 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 00:05:24.572133 systemd[1]: Stopped target network.target - Network. May 16 00:05:24.573559 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:05:24.573620 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 00:05:24.576193 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:05:24.576257 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 00:05:24.578502 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:05:24.578563 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 00:05:24.580542 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 00:05:24.580590 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 00:05:24.582589 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 00:05:24.584670 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 00:05:24.587891 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 16 00:05:24.591792 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:05:24.591927 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 00:05:24.596694 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 00:05:24.596927 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:05:24.597056 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 00:05:24.600723 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 00:05:24.601495 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:05:24.601558 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 00:05:24.612184 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 00:05:24.613334 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:05:24.613387 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:05:24.615799 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:05:24.615849 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:05:24.618465 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:05:24.618514 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 00:05:24.621288 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 00:05:24.621335 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:05:24.623570 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:05:24.625551 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:05:24.625616 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
May 16 00:05:24.632996 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:05:24.633149 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 00:05:24.635203 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:05:24.635364 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:05:24.637926 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:05:24.638004 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 00:05:24.639989 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:05:24.640027 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:05:24.642226 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:05:24.642275 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 00:05:24.644453 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:05:24.644499 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 00:05:24.646325 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:05:24.646371 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:05:24.658230 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 00:05:24.659316 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:05:24.659370 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:05:24.661782 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:05:24.661830 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:05:24.664727 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
May 16 00:05:24.664791 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 00:05:24.665191 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:05:24.665289 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 00:05:24.847497 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:05:24.847711 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 00:05:24.849180 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 00:05:24.852169 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:05:24.852267 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 00:05:24.859273 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 00:05:24.867670 systemd[1]: Switching root. May 16 00:05:24.901357 systemd-journald[193]: Journal stopped May 16 00:05:26.376590 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). May 16 00:05:26.376668 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:05:26.376682 kernel: SELinux: policy capability open_perms=1 May 16 00:05:26.376693 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:05:26.376709 kernel: SELinux: policy capability always_check_network=0 May 16 00:05:26.376726 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:05:26.376737 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:05:26.376748 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:05:26.376761 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:05:26.376772 kernel: audit: type=1403 audit(1747353925.540:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:05:26.376787 systemd[1]: Successfully loaded SELinux policy in 45.231ms. May 16 00:05:26.376808 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.482ms. 
May 16 00:05:26.376820 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 00:05:26.376833 systemd[1]: Detected virtualization kvm. May 16 00:05:26.376845 systemd[1]: Detected architecture x86-64. May 16 00:05:26.376856 systemd[1]: Detected first boot. May 16 00:05:26.376874 systemd[1]: Initializing machine ID from VM UUID. May 16 00:05:26.376886 zram_generator::config[1060]: No configuration found. May 16 00:05:26.376901 kernel: Guest personality initialized and is inactive May 16 00:05:26.376912 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 16 00:05:26.376930 kernel: Initialized host personality May 16 00:05:26.376942 kernel: NET: Registered PF_VSOCK protocol family May 16 00:05:26.376954 systemd[1]: Populated /etc with preset unit settings. May 16 00:05:26.376966 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 16 00:05:26.376978 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 00:05:26.376990 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 00:05:26.377002 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 00:05:26.377016 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 00:05:26.377029 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 00:05:26.377042 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 00:05:26.377054 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 00:05:26.377066 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
May 16 00:05:26.377078 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 00:05:26.377090 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 00:05:26.377101 systemd[1]: Created slice user.slice - User and Session Slice. May 16 00:05:26.377129 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:05:26.377141 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:05:26.377159 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 00:05:26.377171 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 00:05:26.377184 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 00:05:26.377196 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:05:26.377208 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 00:05:26.377220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:05:26.377234 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 00:05:26.377246 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 00:05:26.377258 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 00:05:26.377276 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 00:05:26.377288 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:05:26.377300 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:05:26.377312 systemd[1]: Reached target slices.target - Slice Units. May 16 00:05:26.377325 systemd[1]: Reached target swap.target - Swaps. 
May 16 00:05:26.377337 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 00:05:26.377351 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 00:05:26.377363 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 00:05:26.377375 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:05:26.377387 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:05:26.377399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:05:26.377411 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 00:05:26.377424 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 00:05:26.377436 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 00:05:26.377448 systemd[1]: Mounting media.mount - External Media Directory... May 16 00:05:26.377463 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:05:26.377475 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 00:05:26.377487 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 00:05:26.377499 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 00:05:26.377511 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:05:26.377523 systemd[1]: Reached target machines.target - Containers. May 16 00:05:26.377535 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 00:05:26.377547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 16 00:05:26.377561 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:05:26.377573 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 00:05:26.377586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:05:26.377597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:05:26.377609 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:05:26.377621 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 00:05:26.377633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:05:26.377645 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:05:26.377657 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 00:05:26.377672 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 00:05:26.377683 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 00:05:26.377695 systemd[1]: Stopped systemd-fsck-usr.service. May 16 00:05:26.377707 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 00:05:26.377719 kernel: fuse: init (API version 7.39) May 16 00:05:26.377730 kernel: loop: module loaded May 16 00:05:26.377742 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:05:26.377753 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:05:26.377765 kernel: ACPI: bus type drm_connector registered May 16 00:05:26.377779 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
May 16 00:05:26.377791 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 00:05:26.377803 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 00:05:26.377814 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:05:26.377828 systemd[1]: verity-setup.service: Deactivated successfully. May 16 00:05:26.377841 systemd[1]: Stopped verity-setup.service. May 16 00:05:26.377853 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:05:26.377882 systemd-journald[1131]: Collecting audit messages is disabled. May 16 00:05:26.377905 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 00:05:26.377924 systemd-journald[1131]: Journal started May 16 00:05:26.377950 systemd-journald[1131]: Runtime Journal (/run/log/journal/b80e3a1d928a41ee970d397839aea3ab) is 6M, max 48.2M, 42.2M free. May 16 00:05:26.140991 systemd[1]: Queued start job for default target multi-user.target. May 16 00:05:26.155203 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 16 00:05:26.155689 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 00:05:26.381202 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:05:26.382734 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 00:05:26.384071 systemd[1]: Mounted media.mount - External Media Directory. May 16 00:05:26.385193 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 00:05:26.386388 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 00:05:26.387606 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 00:05:26.388891 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
May 16 00:05:26.390457 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:05:26.392292 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:05:26.392535 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 00:05:26.394133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:05:26.394477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:05:26.396003 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:05:26.396339 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:05:26.397783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:05:26.398019 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:05:26.399542 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:05:26.399759 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 00:05:26.401251 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:05:26.401473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:05:26.402939 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:05:26.404496 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 00:05:26.406322 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 00:05:26.408016 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 16 00:05:26.446031 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 00:05:26.454191 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 00:05:26.456587 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
May 16 00:05:26.457734 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 00:05:26.457761 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 00:05:26.459772 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 00:05:26.462171 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 00:05:26.468061 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 00:05:26.469982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:05:26.474423 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 00:05:26.476953 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 00:05:26.478509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:05:26.480461 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 00:05:26.481692 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:05:26.485570 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 00:05:26.492217 systemd-journald[1131]: Time spent on flushing to /var/log/journal/b80e3a1d928a41ee970d397839aea3ab is 32.673ms for 1053 entries.
May 16 00:05:26.492217 systemd-journald[1131]: System Journal (/var/log/journal/b80e3a1d928a41ee970d397839aea3ab) is 8M, max 195.6M, 187.6M free.
May 16 00:05:26.561530 systemd-journald[1131]: Received client request to flush runtime journal.
May 16 00:05:26.561584 kernel: loop0: detected capacity change from 0 to 138176
May 16 00:05:26.490097 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 00:05:26.493507 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 00:05:26.496838 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 00:05:26.498413 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 00:05:26.499717 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 00:05:26.501386 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 00:05:26.516287 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 16 00:05:26.524404 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 00:05:26.526174 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 00:05:26.537281 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 00:05:26.541037 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 16 00:05:26.564317 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 00:05:26.543624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 00:05:26.563349 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 00:05:26.569851 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 00:05:26.571682 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 00:05:26.579355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 00:05:26.587224 kernel: loop1: detected capacity change from 0 to 221472
May 16 00:05:26.603843 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 16 00:05:26.603864 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 16 00:05:26.610619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 00:05:26.615131 kernel: loop2: detected capacity change from 0 to 147912
May 16 00:05:26.701651 kernel: loop3: detected capacity change from 0 to 138176
May 16 00:05:26.719245 kernel: loop4: detected capacity change from 0 to 221472
May 16 00:05:26.727131 kernel: loop5: detected capacity change from 0 to 147912
May 16 00:05:26.738866 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 00:05:26.739542 (sd-merge)[1205]: Merged extensions into '/usr'.
May 16 00:05:26.746872 systemd[1]: Reload requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 00:05:26.746889 systemd[1]: Reloading...
May 16 00:05:26.834145 zram_generator::config[1236]: No configuration found.
May 16 00:05:26.976808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:05:26.979421 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 00:05:27.046147 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 00:05:27.046349 systemd[1]: Reloading finished in 298 ms.
May 16 00:05:27.074402 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 00:05:27.076075 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 00:05:27.090459 systemd[1]: Starting ensure-sysext.service...
May 16 00:05:27.092376 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 00:05:27.106123 systemd[1]: Reload requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)...
May 16 00:05:27.106139 systemd[1]: Reloading...
May 16 00:05:27.124617 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 00:05:27.124913 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 00:05:27.125912 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 00:05:27.126257 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
May 16 00:05:27.126341 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
May 16 00:05:27.130341 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
May 16 00:05:27.130352 systemd-tmpfiles[1271]: Skipping /boot
May 16 00:05:27.144906 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
May 16 00:05:27.144920 systemd-tmpfiles[1271]: Skipping /boot
May 16 00:05:27.194148 zram_generator::config[1300]: No configuration found.
May 16 00:05:27.306539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:05:27.377621 systemd[1]: Reloading finished in 271 ms.
May 16 00:05:27.393943 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 00:05:27.411761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 00:05:27.420912 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:05:27.423311 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 00:05:27.425615 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 00:05:27.432734 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 00:05:27.437392 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 00:05:27.443475 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 00:05:27.447555 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:05:27.447726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:05:27.448986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:05:27.453378 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:05:27.456030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:05:27.457231 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:05:27.457401 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 00:05:27.460220 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 00:05:27.461303 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:05:27.462593 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 00:05:27.465464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:05:27.465713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:05:27.468227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:05:27.468581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:05:27.470373 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:05:27.470593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:05:27.478236 systemd-udevd[1344]: Using default interface naming scheme 'v255'.
May 16 00:05:27.479850 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:05:27.480138 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:05:27.495550 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 00:05:27.500927 augenrules[1373]: No rules
May 16 00:05:27.500943 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 00:05:27.502893 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:05:27.503405 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:05:27.508747 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 00:05:27.510430 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 00:05:27.516932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:05:27.526303 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:05:27.527470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:05:27.529518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:05:27.533738 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 00:05:27.537376 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:05:27.543093 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:05:27.544411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:05:27.544556 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 00:05:27.547942 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 00:05:27.549029 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:05:27.558321 augenrules[1395]: /sbin/augenrules: No change
May 16 00:05:27.561724 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 00:05:27.563720 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 00:05:27.565546 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:05:27.565780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:05:27.567529 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:05:27.567861 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 00:05:27.568451 augenrules[1427]: No rules
May 16 00:05:27.569816 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:05:27.570068 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:05:27.572581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:05:27.572805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:05:27.574640 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:05:27.575146 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:05:27.586475 systemd[1]: Finished ensure-sysext.service.
May 16 00:05:27.601174 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1396)
May 16 00:05:27.604135 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 00:05:27.605128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:05:27.605255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:05:27.613269 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 00:05:27.614731 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:05:27.637910 systemd-resolved[1342]: Positive Trust Anchors:
May 16 00:05:27.637931 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:05:27.637962 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 00:05:27.644148 systemd-resolved[1342]: Defaulting to hostname 'linux'.
May 16 00:05:27.717032 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 00:05:27.728365 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 16 00:05:27.733140 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 16 00:05:27.740750 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 00:05:27.782323 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 16 00:05:27.782592 kernel: ACPI: button: Power Button [PWRF]
May 16 00:05:27.782607 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 16 00:05:27.786521 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 16 00:05:27.786748 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 16 00:05:27.794728 systemd-networkd[1413]: lo: Link UP
May 16 00:05:27.794740 systemd-networkd[1413]: lo: Gained carrier
May 16 00:05:27.796515 systemd-networkd[1413]: Enumeration completed
May 16 00:05:27.796615 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 00:05:27.797361 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:05:27.797371 systemd-networkd[1413]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:05:27.797908 systemd[1]: Reached target network.target - Network.
May 16 00:05:27.798070 systemd-networkd[1413]: eth0: Link UP
May 16 00:05:27.798075 systemd-networkd[1413]: eth0: Gained carrier
May 16 00:05:27.798087 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:05:27.808843 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 00:05:27.827234 systemd-networkd[1413]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:05:27.827853 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection.
May 16 00:05:29.259191 systemd-timesyncd[1443]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 00:05:29.259259 systemd-timesyncd[1443]: Initial clock synchronization to Fri 2025-05-16 00:05:29.259101 UTC.
May 16 00:05:29.259300 systemd-resolved[1342]: Clock change detected. Flushing caches.
May 16 00:05:29.273681 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 00:05:29.276737 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 00:05:29.286178 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 00:05:29.294919 systemd[1]: Reached target time-set.target - System Time Set.
May 16 00:05:29.300495 kernel: mousedev: PS/2 mouse device common for all mice
May 16 00:05:29.333472 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 00:05:29.337424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:05:29.341526 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 00:05:29.358018 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 00:05:29.369849 kernel: kvm_amd: TSC scaling supported
May 16 00:05:29.369929 kernel: kvm_amd: Nested Virtualization enabled
May 16 00:05:29.369943 kernel: kvm_amd: Nested Paging enabled
May 16 00:05:29.369956 kernel: kvm_amd: LBR virtualization supported
May 16 00:05:29.371008 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 16 00:05:29.371040 kernel: kvm_amd: Virtual GIF supported
May 16 00:05:29.373711 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 00:05:29.374551 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:05:29.379557 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 00:05:29.391612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:05:29.395465 kernel: EDAC MC: Ver: 3.0.0
May 16 00:05:29.421842 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 16 00:05:29.432628 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 16 00:05:29.439186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:05:29.443479 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:05:29.481718 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 16 00:05:29.483322 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 00:05:29.484520 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 00:05:29.485729 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 00:05:29.487063 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 00:05:29.488625 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 00:05:29.489828 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 00:05:29.491129 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 00:05:29.492450 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 00:05:29.492499 systemd[1]: Reached target paths.target - Path Units.
May 16 00:05:29.497686 systemd[1]: Reached target timers.target - Timer Units.
May 16 00:05:29.499546 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 00:05:29.502449 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 00:05:29.506140 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 00:05:29.507643 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 00:05:29.508913 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 00:05:29.512804 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 00:05:29.514300 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 00:05:29.516837 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 16 00:05:29.518536 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 00:05:29.519913 systemd[1]: Reached target sockets.target - Socket Units.
May 16 00:05:29.521056 systemd[1]: Reached target basic.target - Basic System.
May 16 00:05:29.522152 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 00:05:29.522187 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 00:05:29.523238 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 00:05:29.525419 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 00:05:29.529532 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 00:05:29.533573 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 00:05:29.534637 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 00:05:29.535763 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:05:29.536649 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 00:05:29.539800 jq[1480]: false
May 16 00:05:29.540644 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 00:05:29.543638 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 00:05:29.547713 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 00:05:29.564641 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 00:05:29.564640 dbus-daemon[1479]: [system] SELinux support is enabled
May 16 00:05:29.565814 extend-filesystems[1481]: Found loop3
May 16 00:05:29.565814 extend-filesystems[1481]: Found loop4
May 16 00:05:29.565814 extend-filesystems[1481]: Found loop5
May 16 00:05:29.565814 extend-filesystems[1481]: Found sr0
May 16 00:05:29.565814 extend-filesystems[1481]: Found vda
May 16 00:05:29.565814 extend-filesystems[1481]: Found vda1
May 16 00:05:29.565814 extend-filesystems[1481]: Found vda2
May 16 00:05:29.565814 extend-filesystems[1481]: Found vda3
May 16 00:05:29.565814 extend-filesystems[1481]: Found usr
May 16 00:05:29.565814 extend-filesystems[1481]: Found vda4
May 16 00:05:29.565814 extend-filesystems[1481]: Found vda6
May 16 00:05:29.585612 extend-filesystems[1481]: Found vda7
May 16 00:05:29.585612 extend-filesystems[1481]: Found vda9
May 16 00:05:29.585612 extend-filesystems[1481]: Checking size of /dev/vda9
May 16 00:05:29.567869 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 00:05:29.568561 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 00:05:29.570189 systemd[1]: Starting update-engine.service - Update Engine...
May 16 00:05:29.589379 jq[1497]: true
May 16 00:05:29.573272 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 00:05:29.576603 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 00:05:29.585189 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 16 00:05:29.587257 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 00:05:29.587541 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 00:05:29.587882 systemd[1]: motdgen.service: Deactivated successfully.
May 16 00:05:29.588128 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 00:05:29.593947 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 00:05:29.594301 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 00:05:29.595721 update_engine[1494]: I20250516 00:05:29.595642 1494 main.cc:92] Flatcar Update Engine starting
May 16 00:05:29.596962 update_engine[1494]: I20250516 00:05:29.596879 1494 update_check_scheduler.cc:74] Next update check in 4m37s
May 16 00:05:29.607788 jq[1502]: true
May 16 00:05:29.613212 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 00:05:29.618348 extend-filesystems[1481]: Resized partition /dev/vda9
May 16 00:05:29.619522 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 00:05:29.619566 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 00:05:29.620967 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 00:05:29.620992 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 00:05:29.621594 extend-filesystems[1517]: resize2fs 1.47.1 (20-May-2024)
May 16 00:05:29.623213 tar[1501]: linux-amd64/helm
May 16 00:05:29.624979 systemd[1]: Started update-engine.service - Update Engine.
May 16 00:05:29.631580 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 00:05:29.729388 systemd-logind[1492]: Watching system buttons on /dev/input/event1 (Power Button)
May 16 00:05:29.729419 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 16 00:05:29.732586 systemd-logind[1492]: New seat seat0.
May 16 00:05:29.734222 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 00:05:29.763615 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 00:05:29.763703 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1405)
May 16 00:05:29.821755 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 00:05:29.830266 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
May 16 00:05:29.830739 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 00:05:29.832129 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 00:05:29.839049 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 00:05:29.854411 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 00:05:29.854411 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 00:05:29.854411 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 00:05:29.859268 extend-filesystems[1481]: Resized filesystem in /dev/vda9
May 16 00:05:29.858998 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 00:05:29.859811 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 00:05:29.859279 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 00:05:29.905582 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 00:05:29.918477 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 00:05:29.927535 systemd[1]: issuegen.service: Deactivated successfully.
May 16 00:05:29.927833 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 00:05:29.936656 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 00:05:29.974177 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 00:05:29.981909 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 00:05:29.984493 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 16 00:05:29.985786 systemd[1]: Reached target getty.target - Login Prompts.
May 16 00:05:30.115394 containerd[1505]: time="2025-05-16T00:05:30.115220897Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 16 00:05:30.142726 containerd[1505]: time="2025-05-16T00:05:30.142657109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 16 00:05:30.145083 containerd[1505]: time="2025-05-16T00:05:30.145032695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 16 00:05:30.145083 containerd[1505]: time="2025-05-16T00:05:30.145070677Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 16 00:05:30.145155 containerd[1505]: time="2025-05-16T00:05:30.145088490Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 16 00:05:30.145362 containerd[1505]: time="2025-05-16T00:05:30.145331235Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 16 00:05:30.145362 containerd[1505]: time="2025-05-16T00:05:30.145354619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 16 00:05:30.145614 containerd[1505]: time="2025-05-16T00:05:30.145453825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:05:30.145614 containerd[1505]: time="2025-05-16T00:05:30.145474364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 16 00:05:30.145796 containerd[1505]: time="2025-05-16T00:05:30.145757194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:05:30.145796 containerd[1505]: time="2025-05-16T00:05:30.145783303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 16 00:05:30.145838 containerd[1505]: time="2025-05-16T00:05:30.145798492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:05:30.145838 containerd[1505]: time="2025-05-16T00:05:30.145809021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 16 00:05:30.145943 containerd[1505]: time="2025-05-16T00:05:30.145921032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 16 00:05:30.146222 containerd[1505]: time="2025-05-16T00:05:30.146188363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 16 00:05:30.146395 containerd[1505]: time="2025-05-16T00:05:30.146365094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:05:30.146395 containerd[1505]: time="2025-05-16T00:05:30.146382587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 16 00:05:30.146554 containerd[1505]: time="2025-05-16T00:05:30.146520376Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 16 00:05:30.146635 containerd[1505]: time="2025-05-16T00:05:30.146606187Z" level=info msg="metadata content store policy set" policy=shared
May 16 00:05:30.152848 containerd[1505]: time="2025-05-16T00:05:30.152796809Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 16 00:05:30.152898 containerd[1505]: time="2025-05-16T00:05:30.152867341Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 16 00:05:30.152939 containerd[1505]: time="2025-05-16T00:05:30.152917815Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 16 00:05:30.153135 containerd[1505]: time="2025-05-16T00:05:30.152949806Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 16 00:05:30.153135 containerd[1505]: time="2025-05-16T00:05:30.152971997Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 16 00:05:30.153231 containerd[1505]: time="2025-05-16T00:05:30.153191539Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 16 00:05:30.153576 containerd[1505]: time="2025-05-16T00:05:30.153553598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 16 00:05:30.153740 containerd[1505]: time="2025-05-16T00:05:30.153720090Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 16 00:05:30.153785 containerd[1505]: time="2025-05-16T00:05:30.153743655Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 16 00:05:30.153785 containerd[1505]: time="2025-05-16T00:05:30.153766117Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 16 00:05:30.153879 containerd[1505]: time="2025-05-16T00:05:30.153801713Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 16 00:05:30.153879 containerd[1505]: time="2025-05-16T00:05:30.153823975Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 16 00:05:30.153879 containerd[1505]: time="2025-05-16T00:05:30.153840566Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 16 00:05:30.153879 containerd[1505]: time="2025-05-16T00:05:30.153859251Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 16 00:05:30.153879 containerd[1505]: time="2025-05-16T00:05:30.153878788Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 16 00:05:30.154000 containerd[1505]: time="2025-05-16T00:05:30.153896812Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..."
type=io.containerd.service.v1 May 16 00:05:30.154000 containerd[1505]: time="2025-05-16T00:05:30.153918533Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:05:30.154000 containerd[1505]: time="2025-05-16T00:05:30.153937318Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:05:30.154000 containerd[1505]: time="2025-05-16T00:05:30.153967645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154000 containerd[1505]: time="2025-05-16T00:05:30.153986290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154140 containerd[1505]: time="2025-05-16T00:05:30.154003532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154140 containerd[1505]: time="2025-05-16T00:05:30.154019632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154140 containerd[1505]: time="2025-05-16T00:05:30.154039660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154140 containerd[1505]: time="2025-05-16T00:05:30.154069646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154140 containerd[1505]: time="2025-05-16T00:05:30.154086758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154140 containerd[1505]: time="2025-05-16T00:05:30.154103990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154140 containerd[1505]: time="2025-05-16T00:05:30.154120592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 May 16 00:05:30.154140 containerd[1505]: time="2025-05-16T00:05:30.154139737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154367 containerd[1505]: time="2025-05-16T00:05:30.154156499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154367 containerd[1505]: time="2025-05-16T00:05:30.154173000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154367 containerd[1505]: time="2025-05-16T00:05:30.154200431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154367 containerd[1505]: time="2025-05-16T00:05:30.154219998Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 16 00:05:30.154367 containerd[1505]: time="2025-05-16T00:05:30.154267637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154367 containerd[1505]: time="2025-05-16T00:05:30.154286904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154367 containerd[1505]: time="2025-05-16T00:05:30.154301731Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:05:30.154367 containerd[1505]: time="2025-05-16T00:05:30.154360612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:05:30.154587 containerd[1505]: time="2025-05-16T00:05:30.154382843Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 16 00:05:30.154587 containerd[1505]: time="2025-05-16T00:05:30.154397992Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:05:30.154587 containerd[1505]: time="2025-05-16T00:05:30.154415134Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 16 00:05:30.154587 containerd[1505]: time="2025-05-16T00:05:30.154429531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:05:30.154587 containerd[1505]: time="2025-05-16T00:05:30.154469726Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 16 00:05:30.154587 containerd[1505]: time="2025-05-16T00:05:30.154484905Z" level=info msg="NRI interface is disabled by configuration." May 16 00:05:30.154587 containerd[1505]: time="2025-05-16T00:05:30.154512256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 16 00:05:30.154960 containerd[1505]: time="2025-05-16T00:05:30.154899473Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:05:30.154960 containerd[1505]: time="2025-05-16T00:05:30.154970085Z" level=info msg="Connect containerd service" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.155039585Z" level=info msg="using legacy CRI server" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.155052570Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.155221016Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.156224718Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.156576248Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.156629137Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.156714086Z" level=info msg="Start subscribing containerd event" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.156767577Z" level=info msg="Start recovering state" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.156850192Z" level=info msg="Start event monitor" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.156875168Z" level=info msg="Start snapshots syncer" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.156887702Z" level=info msg="Start cni network conf syncer for default" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.156897851Z" level=info msg="Start streaming server" May 16 00:05:30.158935 containerd[1505]: time="2025-05-16T00:05:30.158327222Z" level=info msg="containerd successfully booted in 0.047516s" May 16 00:05:30.157069 systemd[1]: Started containerd.service - containerd container runtime. May 16 00:05:30.230321 tar[1501]: linux-amd64/LICENSE May 16 00:05:30.230480 tar[1501]: linux-amd64/README.md May 16 00:05:30.249737 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 00:05:30.644671 systemd-networkd[1413]: eth0: Gained IPv6LL May 16 00:05:30.647246 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 00:05:30.650632 systemd[1]: Reached target network-online.target - Network is Online. May 16 00:05:30.660667 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 00:05:30.663244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:05:30.665362 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 00:05:30.685057 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 00:05:30.685364 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 16 00:05:30.687176 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 00:05:30.690145 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 16 00:05:31.916694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:05:31.918562 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 00:05:31.920061 systemd[1]: Startup finished in 706ms (kernel) + 6.833s (initrd) + 4.994s (userspace) = 12.534s.
May 16 00:05:31.922694 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 00:05:32.624561 kubelet[1593]: E0516 00:05:32.624486 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:05:32.628697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:05:32.628890 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:05:32.629261 systemd[1]: kubelet.service: Consumed 1.823s CPU time, 267.3M memory peak.
May 16 00:05:33.650008 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 00:05:33.651300 systemd[1]: Started sshd@0-10.0.0.57:22-10.0.0.1:39122.service - OpenSSH per-connection server daemon (10.0.0.1:39122).
May 16 00:05:33.708458 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 39122 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:05:33.710907 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:05:33.717618 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 00:05:33.731724 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 00:05:33.738638 systemd-logind[1492]: New session 1 of user core.
May 16 00:05:33.744625 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 00:05:33.763674 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 00:05:33.766538 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 00:05:33.768614 systemd-logind[1492]: New session c1 of user core.
May 16 00:05:33.910741 systemd[1610]: Queued start job for default target default.target.
May 16 00:05:33.922993 systemd[1610]: Created slice app.slice - User Application Slice.
May 16 00:05:33.923023 systemd[1610]: Reached target paths.target - Paths.
May 16 00:05:33.923068 systemd[1610]: Reached target timers.target - Timers.
May 16 00:05:33.924932 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 00:05:33.939160 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 00:05:33.939341 systemd[1610]: Reached target sockets.target - Sockets.
May 16 00:05:33.939399 systemd[1610]: Reached target basic.target - Basic System.
May 16 00:05:33.939473 systemd[1610]: Reached target default.target - Main User Target.
May 16 00:05:33.939518 systemd[1610]: Startup finished in 164ms.
May 16 00:05:33.939769 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 00:05:33.941521 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 00:05:34.007914 systemd[1]: Started sshd@1-10.0.0.57:22-10.0.0.1:39134.service - OpenSSH per-connection server daemon (10.0.0.1:39134).
May 16 00:05:34.050226 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 39134 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:05:34.052010 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:05:34.056713 systemd-logind[1492]: New session 2 of user core.
May 16 00:05:34.074622 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 00:05:34.128383 sshd[1623]: Connection closed by 10.0.0.1 port 39134
May 16 00:05:34.128689 sshd-session[1621]: pam_unix(sshd:session): session closed for user core
May 16 00:05:34.147014 systemd[1]: sshd@1-10.0.0.57:22-10.0.0.1:39134.service: Deactivated successfully.
May 16 00:05:34.148936 systemd[1]: session-2.scope: Deactivated successfully.
May 16 00:05:34.150631 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit.
May 16 00:05:34.151900 systemd[1]: Started sshd@2-10.0.0.57:22-10.0.0.1:39140.service - OpenSSH per-connection server daemon (10.0.0.1:39140).
May 16 00:05:34.152862 systemd-logind[1492]: Removed session 2.
May 16 00:05:34.208118 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 39140 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:05:34.209496 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:05:34.214135 systemd-logind[1492]: New session 3 of user core.
May 16 00:05:34.222641 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 00:05:34.271846 sshd[1631]: Connection closed by 10.0.0.1 port 39140
May 16 00:05:34.272236 sshd-session[1628]: pam_unix(sshd:session): session closed for user core
May 16 00:05:34.284955 systemd[1]: sshd@2-10.0.0.57:22-10.0.0.1:39140.service: Deactivated successfully.
May 16 00:05:34.286794 systemd[1]: session-3.scope: Deactivated successfully.
May 16 00:05:34.288761 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit.
May 16 00:05:34.303745 systemd[1]: Started sshd@3-10.0.0.57:22-10.0.0.1:39150.service - OpenSSH per-connection server daemon (10.0.0.1:39150).
May 16 00:05:34.304850 systemd-logind[1492]: Removed session 3.
May 16 00:05:34.342511 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 39150 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:05:34.343989 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:05:34.348461 systemd-logind[1492]: New session 4 of user core.
May 16 00:05:34.357611 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 00:05:34.412402 sshd[1639]: Connection closed by 10.0.0.1 port 39150
May 16 00:05:34.412861 sshd-session[1636]: pam_unix(sshd:session): session closed for user core
May 16 00:05:34.422234 systemd[1]: sshd@3-10.0.0.57:22-10.0.0.1:39150.service: Deactivated successfully.
May 16 00:05:34.424089 systemd[1]: session-4.scope: Deactivated successfully.
May 16 00:05:34.425704 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit.
May 16 00:05:34.437722 systemd[1]: Started sshd@4-10.0.0.57:22-10.0.0.1:39154.service - OpenSSH per-connection server daemon (10.0.0.1:39154).
May 16 00:05:34.438663 systemd-logind[1492]: Removed session 4.
May 16 00:05:34.476404 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 39154 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:05:34.477914 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:05:34.482467 systemd-logind[1492]: New session 5 of user core.
May 16 00:05:34.492570 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 00:05:34.550680 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 00:05:34.551005 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:05:34.567548 sudo[1648]: pam_unix(sudo:session): session closed for user root
May 16 00:05:34.569143 sshd[1647]: Connection closed by 10.0.0.1 port 39154
May 16 00:05:34.569635 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
May 16 00:05:34.586397 systemd[1]: sshd@4-10.0.0.57:22-10.0.0.1:39154.service: Deactivated successfully.
May 16 00:05:34.588339 systemd[1]: session-5.scope: Deactivated successfully.
May 16 00:05:34.590132 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit.
May 16 00:05:34.604906 systemd[1]: Started sshd@5-10.0.0.57:22-10.0.0.1:39156.service - OpenSSH per-connection server daemon (10.0.0.1:39156).
May 16 00:05:34.606222 systemd-logind[1492]: Removed session 5.
May 16 00:05:34.643227 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 39156 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:05:34.644854 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:05:34.649242 systemd-logind[1492]: New session 6 of user core.
May 16 00:05:34.658625 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 00:05:34.712012 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 00:05:34.712335 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:05:34.716794 sudo[1658]: pam_unix(sudo:session): session closed for user root
May 16 00:05:34.723322 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 00:05:34.723742 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:05:34.743851 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:05:34.776862 augenrules[1680]: No rules
May 16 00:05:34.778535 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:05:34.778802 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:05:34.780280 sudo[1657]: pam_unix(sudo:session): session closed for user root
May 16 00:05:34.781899 sshd[1656]: Connection closed by 10.0.0.1 port 39156
May 16 00:05:34.782334 sshd-session[1653]: pam_unix(sshd:session): session closed for user core
May 16 00:05:34.793121 systemd[1]: sshd@5-10.0.0.57:22-10.0.0.1:39156.service: Deactivated successfully.
May 16 00:05:34.794801 systemd[1]: session-6.scope: Deactivated successfully.
May 16 00:05:34.796363 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit.
May 16 00:05:34.806669 systemd[1]: Started sshd@6-10.0.0.57:22-10.0.0.1:39170.service - OpenSSH per-connection server daemon (10.0.0.1:39170).
May 16 00:05:34.807496 systemd-logind[1492]: Removed session 6.
May 16 00:05:34.845259 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 39170 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:05:34.847033 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:05:34.851693 systemd-logind[1492]: New session 7 of user core.
May 16 00:05:34.867556 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 00:05:34.920600 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 00:05:34.920913 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:05:35.754689 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 00:05:35.754786 (dockerd)[1711]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 00:05:36.420690 dockerd[1711]: time="2025-05-16T00:05:36.420591360Z" level=info msg="Starting up"
May 16 00:05:36.992458 dockerd[1711]: time="2025-05-16T00:05:36.992383959Z" level=info msg="Loading containers: start."
May 16 00:05:37.168460 kernel: Initializing XFRM netlink socket
May 16 00:05:37.251023 systemd-networkd[1413]: docker0: Link UP
May 16 00:05:37.288204 dockerd[1711]: time="2025-05-16T00:05:37.288152346Z" level=info msg="Loading containers: done."
May 16 00:05:37.357010 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3660084989-merged.mount: Deactivated successfully.
May 16 00:05:37.359043 dockerd[1711]: time="2025-05-16T00:05:37.358983675Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 00:05:37.359149 dockerd[1711]: time="2025-05-16T00:05:37.359118959Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 16 00:05:37.359299 dockerd[1711]: time="2025-05-16T00:05:37.359275823Z" level=info msg="Daemon has completed initialization"
May 16 00:05:37.401634 dockerd[1711]: time="2025-05-16T00:05:37.401521715Z" level=info msg="API listen on /run/docker.sock"
May 16 00:05:37.401699 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 00:05:38.252406 containerd[1505]: time="2025-05-16T00:05:38.252360795Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 16 00:05:39.223716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641339025.mount: Deactivated successfully.
May 16 00:05:40.960413 containerd[1505]: time="2025-05-16T00:05:40.960339750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:40.969867 containerd[1505]: time="2025-05-16T00:05:40.969801748Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845"
May 16 00:05:41.001007 containerd[1505]: time="2025-05-16T00:05:41.000874602Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:41.013872 containerd[1505]: time="2025-05-16T00:05:41.013799365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:41.015035 containerd[1505]: time="2025-05-16T00:05:41.014972105Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 2.762567527s"
May 16 00:05:41.015102 containerd[1505]: time="2025-05-16T00:05:41.015045612Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 16 00:05:41.015875 containerd[1505]: time="2025-05-16T00:05:41.015816919Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 16 00:05:42.576313 containerd[1505]: time="2025-05-16T00:05:42.576253061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:42.577038 containerd[1505]: time="2025-05-16T00:05:42.576985956Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522"
May 16 00:05:42.578668 containerd[1505]: time="2025-05-16T00:05:42.578637694Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:42.583212 containerd[1505]: time="2025-05-16T00:05:42.583168573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:42.584082 containerd[1505]: time="2025-05-16T00:05:42.584047842Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.568190798s"
May 16 00:05:42.584082 containerd[1505]: time="2025-05-16T00:05:42.584081024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 16 00:05:42.584784 containerd[1505]: time="2025-05-16T00:05:42.584741423Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 16 00:05:42.879368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 00:05:42.887675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:05:43.086105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:05:43.091265 (kubelet)[1974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 00:05:43.358525 kubelet[1974]: E0516 00:05:43.358358 1974 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:05:43.365241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:05:43.365459 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:05:43.365853 systemd[1]: kubelet.service: Consumed 475ms CPU time, 109.1M memory peak.
May 16 00:05:44.801941 containerd[1505]: time="2025-05-16T00:05:44.801881194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:44.804708 containerd[1505]: time="2025-05-16T00:05:44.804661789Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311"
May 16 00:05:44.807100 containerd[1505]: time="2025-05-16T00:05:44.807076829Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:44.810409 containerd[1505]: time="2025-05-16T00:05:44.810357833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:44.811287 containerd[1505]: time="2025-05-16T00:05:44.811258513Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 2.226467307s"
May 16 00:05:44.811287 containerd[1505]: time="2025-05-16T00:05:44.811285964Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 16 00:05:44.811920 containerd[1505]: time="2025-05-16T00:05:44.811873376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 16 00:05:45.838326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132419868.mount: Deactivated successfully.
May 16 00:05:46.469618 containerd[1505]: time="2025-05-16T00:05:46.469550985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:46.470586 containerd[1505]: time="2025-05-16T00:05:46.470547655Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623"
May 16 00:05:46.472045 containerd[1505]: time="2025-05-16T00:05:46.472010749Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:46.474240 containerd[1505]: time="2025-05-16T00:05:46.474205316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:46.474925 containerd[1505]: time="2025-05-16T00:05:46.474895941Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.662989673s"
May 16 00:05:46.474958 containerd[1505]: time="2025-05-16T00:05:46.474927801Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 16 00:05:46.475398 containerd[1505]: time="2025-05-16T00:05:46.475369419Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 16 00:05:46.987075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484399065.mount: Deactivated successfully.
May 16 00:05:48.869874 containerd[1505]: time="2025-05-16T00:05:48.869802501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:48.870737 containerd[1505]: time="2025-05-16T00:05:48.870698402Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 16 00:05:48.872245 containerd[1505]: time="2025-05-16T00:05:48.872205509Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:48.875394 containerd[1505]: time="2025-05-16T00:05:48.875357381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:05:48.876786 containerd[1505]: time="2025-05-16T00:05:48.876720888Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.401317646s" May 16 00:05:48.876786 containerd[1505]: time="2025-05-16T00:05:48.876784828Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 00:05:48.877416 containerd[1505]: time="2025-05-16T00:05:48.877367982Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:05:49.407798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725988847.mount: Deactivated successfully. May 16 00:05:49.413504 containerd[1505]: time="2025-05-16T00:05:49.413455359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:05:49.414218 containerd[1505]: time="2025-05-16T00:05:49.414141896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 16 00:05:49.415331 containerd[1505]: time="2025-05-16T00:05:49.415297855Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:05:49.418063 containerd[1505]: time="2025-05-16T00:05:49.418017786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:05:49.418792 containerd[1505]: time="2025-05-16T00:05:49.418766250Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 541.360838ms" May 16 
00:05:49.418847 containerd[1505]: time="2025-05-16T00:05:49.418794313Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 00:05:49.419288 containerd[1505]: time="2025-05-16T00:05:49.419267971Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 16 00:05:51.264575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316958794.mount: Deactivated successfully. May 16 00:05:53.615926 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 00:05:53.630652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:05:53.790618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:05:53.797961 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:05:54.159479 kubelet[2070]: E0516 00:05:54.159415 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:05:54.163952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:05:54.164169 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:05:54.164530 systemd[1]: kubelet.service: Consumed 295ms CPU time, 112.7M memory peak. 
May 16 00:05:57.412794 containerd[1505]: time="2025-05-16T00:05:57.412704894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:05:57.452716 containerd[1505]: time="2025-05-16T00:05:57.452629041Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 16 00:05:57.486867 containerd[1505]: time="2025-05-16T00:05:57.486799555Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:05:57.513273 containerd[1505]: time="2025-05-16T00:05:57.513210424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:05:57.514756 containerd[1505]: time="2025-05-16T00:05:57.514702563Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 8.095339212s" May 16 00:05:57.514756 containerd[1505]: time="2025-05-16T00:05:57.514739783Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 16 00:06:00.164663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:06:00.164999 systemd[1]: kubelet.service: Consumed 295ms CPU time, 112.7M memory peak. May 16 00:06:00.180515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:06:00.252798 systemd[1]: Reload requested from client PID 2151 ('systemctl') (unit session-7.scope)... 
May 16 00:06:00.254964 systemd[1]: Reloading... May 16 00:06:00.445427 zram_generator::config[2198]: No configuration found. May 16 00:06:01.058997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:06:01.182084 systemd[1]: Reloading finished in 924 ms. May 16 00:06:01.240024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:06:01.246269 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:06:01.247469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:06:01.247920 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:06:01.248174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:06:01.248209 systemd[1]: kubelet.service: Consumed 177ms CPU time, 98.3M memory peak. May 16 00:06:01.250665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:06:01.427989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:06:01.432565 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:06:01.541781 kubelet[2246]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:06:01.541781 kubelet[2246]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 16 00:06:01.541781 kubelet[2246]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:06:01.542244 kubelet[2246]: I0516 00:06:01.541829 2246 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:06:02.113922 kubelet[2246]: I0516 00:06:02.113856 2246 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 00:06:02.113922 kubelet[2246]: I0516 00:06:02.113904 2246 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:06:02.114248 kubelet[2246]: I0516 00:06:02.114222 2246 server.go:934] "Client rotation is on, will bootstrap in background" May 16 00:06:02.139159 kubelet[2246]: E0516 00:06:02.139098 2246 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" May 16 00:06:02.145009 kubelet[2246]: I0516 00:06:02.144957 2246 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:06:02.153187 kubelet[2246]: E0516 00:06:02.153146 2246 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:06:02.153187 kubelet[2246]: I0516 00:06:02.153185 2246 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 16 00:06:02.159839 kubelet[2246]: I0516 00:06:02.159802 2246 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:06:02.160532 kubelet[2246]: I0516 00:06:02.160503 2246 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 00:06:02.160748 kubelet[2246]: I0516 00:06:02.160688 2246 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:06:02.160928 kubelet[2246]: I0516 00:06:02.160732 2246 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerRe
servedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:06:02.161085 kubelet[2246]: I0516 00:06:02.160931 2246 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:06:02.161085 kubelet[2246]: I0516 00:06:02.160941 2246 container_manager_linux.go:300] "Creating device plugin manager" May 16 00:06:02.161085 kubelet[2246]: I0516 00:06:02.161073 2246 state_mem.go:36] "Initialized new in-memory state store" May 16 00:06:02.582684 kubelet[2246]: I0516 00:06:02.582598 2246 kubelet.go:408] "Attempting to sync node with API server" May 16 00:06:02.582684 kubelet[2246]: I0516 00:06:02.582697 2246 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:06:02.583277 kubelet[2246]: I0516 00:06:02.582756 2246 kubelet.go:314] "Adding apiserver pod source" May 16 00:06:02.583277 kubelet[2246]: I0516 00:06:02.582789 2246 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:06:02.584233 kubelet[2246]: W0516 00:06:02.584111 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused May 16 00:06:02.584233 kubelet[2246]: E0516 00:06:02.584194 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" May 16 00:06:02.585694 kubelet[2246]: W0516 00:06:02.585627 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused May 16 00:06:02.585694 kubelet[2246]: E0516 00:06:02.585697 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" May 16 00:06:02.686578 kubelet[2246]: I0516 00:06:02.685246 2246 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 00:06:02.686578 kubelet[2246]: I0516 00:06:02.685999 2246 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:06:02.686578 kubelet[2246]: W0516 00:06:02.686084 2246 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 16 00:06:02.690316 kubelet[2246]: I0516 00:06:02.690189 2246 server.go:1274] "Started kubelet" May 16 00:06:02.690482 kubelet[2246]: I0516 00:06:02.690299 2246 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:06:02.691911 kubelet[2246]: I0516 00:06:02.691886 2246 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:06:02.691911 kubelet[2246]: I0516 00:06:02.691909 2246 server.go:449] "Adding debug handlers to kubelet server" May 16 00:06:02.694159 kubelet[2246]: I0516 00:06:02.694115 2246 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:06:02.694533 kubelet[2246]: I0516 00:06:02.694516 2246 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:06:02.694659 kubelet[2246]: I0516 00:06:02.694640 2246 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 00:06:02.694845 kubelet[2246]: E0516 00:06:02.694825 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:06:02.694923 kubelet[2246]: I0516 00:06:02.694907 2246 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 00:06:02.695019 kubelet[2246]: I0516 00:06:02.695001 2246 reconciler.go:26] "Reconciler: start to sync state" May 16 00:06:02.705517 kubelet[2246]: I0516 00:06:02.705495 2246 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:06:02.712539 kubelet[2246]: E0516 00:06:02.711634 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="200ms" May 16 00:06:02.712539 kubelet[2246]: W0516 00:06:02.712417 2246 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused May 16 00:06:02.712539 kubelet[2246]: E0516 00:06:02.712496 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" May 16 00:06:02.714514 kubelet[2246]: I0516 00:06:02.714215 2246 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:06:02.714514 kubelet[2246]: E0516 00:06:02.712421 2246 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd92bd2ff40e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:06:02.689806565 +0000 UTC m=+1.251689078,LastTimestamp:2025-05-16 00:06:02.689806565 +0000 UTC m=+1.251689078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:06:02.715386 kubelet[2246]: I0516 00:06:02.715358 2246 factory.go:221] Registration of the containerd container factory successfully May 16 00:06:02.715386 kubelet[2246]: I0516 00:06:02.715381 2246 factory.go:221] Registration of the 
systemd container factory successfully May 16 00:06:02.731236 kubelet[2246]: I0516 00:06:02.731125 2246 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 00:06:02.731236 kubelet[2246]: I0516 00:06:02.731144 2246 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 00:06:02.731236 kubelet[2246]: I0516 00:06:02.731161 2246 state_mem.go:36] "Initialized new in-memory state store" May 16 00:06:02.731470 kubelet[2246]: I0516 00:06:02.731284 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:06:02.733026 kubelet[2246]: I0516 00:06:02.732962 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:06:02.733026 kubelet[2246]: I0516 00:06:02.733008 2246 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 00:06:02.733200 kubelet[2246]: I0516 00:06:02.733052 2246 kubelet.go:2321] "Starting kubelet main sync loop" May 16 00:06:02.733200 kubelet[2246]: E0516 00:06:02.733107 2246 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:06:02.795302 kubelet[2246]: E0516 00:06:02.795196 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:06:02.833848 kubelet[2246]: E0516 00:06:02.833648 2246 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 00:06:02.860230 kubelet[2246]: W0516 00:06:02.860139 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused May 16 00:06:02.860316 kubelet[2246]: E0516 00:06:02.860239 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" May 16 00:06:02.878759 kubelet[2246]: I0516 00:06:02.878711 2246 policy_none.go:49] "None policy: Start" May 16 00:06:02.879778 kubelet[2246]: I0516 00:06:02.879734 2246 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:06:02.879778 kubelet[2246]: I0516 00:06:02.879764 2246 state_mem.go:35] "Initializing new in-memory state store" May 16 00:06:02.895328 kubelet[2246]: E0516 00:06:02.895287 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:06:02.906064 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 00:06:02.913084 kubelet[2246]: E0516 00:06:02.913029 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="400ms" May 16 00:06:02.919279 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 00:06:02.922856 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 16 00:06:02.935425 kubelet[2246]: I0516 00:06:02.935392 2246 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:06:02.935643 kubelet[2246]: I0516 00:06:02.935620 2246 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:06:02.935679 kubelet[2246]: I0516 00:06:02.935638 2246 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:06:02.935914 kubelet[2246]: I0516 00:06:02.935895 2246 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:06:02.946108 kubelet[2246]: E0516 00:06:02.946074 2246 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:06:03.036640 kubelet[2246]: I0516 00:06:03.036600 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:06:03.037367 kubelet[2246]: E0516 00:06:03.037080 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" May 16 00:06:03.043347 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 16 00:06:03.056371 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. May 16 00:06:03.074852 systemd[1]: Created slice kubepods-burstable-pod68e5d9bf3288308ee5e626f828b98d95.slice - libcontainer container kubepods-burstable-pod68e5d9bf3288308ee5e626f828b98d95.slice. 
May 16 00:06:03.097589 kubelet[2246]: I0516 00:06:03.097377 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:06:03.097589 kubelet[2246]: I0516 00:06:03.097421 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:06:03.097589 kubelet[2246]: I0516 00:06:03.097471 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 00:06:03.097589 kubelet[2246]: I0516 00:06:03.097489 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68e5d9bf3288308ee5e626f828b98d95-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"68e5d9bf3288308ee5e626f828b98d95\") " pod="kube-system/kube-apiserver-localhost" May 16 00:06:03.097589 kubelet[2246]: I0516 00:06:03.097505 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" 
May 16 00:06:03.097825 kubelet[2246]: I0516 00:06:03.097518 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:06:03.097825 kubelet[2246]: I0516 00:06:03.097566 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:06:03.097825 kubelet[2246]: I0516 00:06:03.097611 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68e5d9bf3288308ee5e626f828b98d95-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"68e5d9bf3288308ee5e626f828b98d95\") " pod="kube-system/kube-apiserver-localhost"
May 16 00:06:03.097825 kubelet[2246]: I0516 00:06:03.097636 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68e5d9bf3288308ee5e626f828b98d95-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"68e5d9bf3288308ee5e626f828b98d95\") " pod="kube-system/kube-apiserver-localhost"
May 16 00:06:03.239010 kubelet[2246]: I0516 00:06:03.238963 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 00:06:03.239478 kubelet[2246]: E0516 00:06:03.239407 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
May 16 00:06:03.314694 kubelet[2246]: E0516 00:06:03.314605 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="800ms"
May 16 00:06:03.354785 kubelet[2246]: E0516 00:06:03.354633 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:03.355510 containerd[1505]: time="2025-05-16T00:06:03.355460044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}"
May 16 00:06:03.372651 kubelet[2246]: E0516 00:06:03.372590 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:03.373150 containerd[1505]: time="2025-05-16T00:06:03.373103226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}"
May 16 00:06:03.378324 kubelet[2246]: E0516 00:06:03.378280 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:03.378671 containerd[1505]: time="2025-05-16T00:06:03.378595220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:68e5d9bf3288308ee5e626f828b98d95,Namespace:kube-system,Attempt:0,}"
May 16 00:06:03.461810 kubelet[2246]: W0516 00:06:03.461673 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
May 16 00:06:03.461810 kubelet[2246]: E0516 00:06:03.461768 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
May 16 00:06:03.526183 kubelet[2246]: W0516 00:06:03.526074 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
May 16 00:06:03.526183 kubelet[2246]: E0516 00:06:03.526134 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
May 16 00:06:03.598660 kubelet[2246]: W0516 00:06:03.598584 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
May 16 00:06:03.598660 kubelet[2246]: E0516 00:06:03.598650 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
May 16 00:06:03.641473 kubelet[2246]: I0516 00:06:03.641319 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 00:06:03.641811 kubelet[2246]: E0516 00:06:03.641751 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
May 16 00:06:04.115413 kubelet[2246]: E0516 00:06:04.115329 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="1.6s"
May 16 00:06:04.235938 kubelet[2246]: E0516 00:06:04.235893 2246 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
May 16 00:06:04.356416 kubelet[2246]: W0516 00:06:04.356344 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
May 16 00:06:04.356416 kubelet[2246]: E0516 00:06:04.356411 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
May 16 00:06:04.443455 kubelet[2246]: I0516 00:06:04.443317 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 00:06:04.443670 kubelet[2246]: E0516 00:06:04.443633 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
May 16 00:06:05.015667 kubelet[2246]: E0516 00:06:05.015520 2246 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd92bd2ff40e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:06:02.689806565 +0000 UTC m=+1.251689078,LastTimestamp:2025-05-16 00:06:02.689806565 +0000 UTC m=+1.251689078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 00:06:05.223662 kubelet[2246]: W0516 00:06:05.223603 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
May 16 00:06:05.223662 kubelet[2246]: E0516 00:06:05.223657 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
May 16 00:06:05.716467 kubelet[2246]: E0516 00:06:05.716358 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="3.2s"
May 16 00:06:05.748206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749562280.mount: Deactivated successfully.
May 16 00:06:05.865937 containerd[1505]: time="2025-05-16T00:06:05.865841173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 00:06:05.956921 containerd[1505]: time="2025-05-16T00:06:05.956830387Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 16 00:06:05.960693 containerd[1505]: time="2025-05-16T00:06:05.960646161Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 00:06:05.963671 containerd[1505]: time="2025-05-16T00:06:05.963641250Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 00:06:05.964559 containerd[1505]: time="2025-05-16T00:06:05.964489248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 16 00:06:05.965933 containerd[1505]: time="2025-05-16T00:06:05.965890407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 00:06:05.966963 containerd[1505]: time="2025-05-16T00:06:05.966795455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 16 00:06:05.967974 containerd[1505]: time="2025-05-16T00:06:05.967924361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 00:06:05.968899 containerd[1505]: time="2025-05-16T00:06:05.968874215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.613282468s"
May 16 00:06:05.972138 containerd[1505]: time="2025-05-16T00:06:05.972095999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.598867722s"
May 16 00:06:05.973121 containerd[1505]: time="2025-05-16T00:06:05.973088915Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.59442939s"
May 16 00:06:06.045776 kubelet[2246]: I0516 00:06:06.045720 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 00:06:06.046310 kubelet[2246]: E0516 00:06:06.046102 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
May 16 00:06:06.261113 containerd[1505]: time="2025-05-16T00:06:06.260166489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:06:06.261113 containerd[1505]: time="2025-05-16T00:06:06.260202409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:06:06.261113 containerd[1505]: time="2025-05-16T00:06:06.260212728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:06:06.261113 containerd[1505]: time="2025-05-16T00:06:06.260311607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:06:06.261113 containerd[1505]: time="2025-05-16T00:06:06.258076934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:06:06.261113 containerd[1505]: time="2025-05-16T00:06:06.260204012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:06:06.261113 containerd[1505]: time="2025-05-16T00:06:06.260431517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:06:06.261113 containerd[1505]: time="2025-05-16T00:06:06.260531148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:06:06.262387 containerd[1505]: time="2025-05-16T00:06:06.261487721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:06:06.262387 containerd[1505]: time="2025-05-16T00:06:06.261537747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:06:06.262387 containerd[1505]: time="2025-05-16T00:06:06.261547947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:06:06.262387 containerd[1505]: time="2025-05-16T00:06:06.261624043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:06:06.332711 systemd[1]: Started cri-containerd-3d13c34e7d8658727e9ba138be11f8c73938fcbafa6343281387ac9148ca0279.scope - libcontainer container 3d13c34e7d8658727e9ba138be11f8c73938fcbafa6343281387ac9148ca0279.
May 16 00:06:06.336872 systemd[1]: Started cri-containerd-241544cc249e50fcc00c347c4bb0e673dbdf8b5185f5a44dc6e79929848a66d4.scope - libcontainer container 241544cc249e50fcc00c347c4bb0e673dbdf8b5185f5a44dc6e79929848a66d4.
May 16 00:06:06.338225 systemd[1]: Started cri-containerd-b33ec25b08954c6f98fff54239feddb2be1290daa8188f1caae1d877c9570702.scope - libcontainer container b33ec25b08954c6f98fff54239feddb2be1290daa8188f1caae1d877c9570702.
May 16 00:06:06.350489 kubelet[2246]: W0516 00:06:06.350377 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
May 16 00:06:06.350597 kubelet[2246]: E0516 00:06:06.350504 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
May 16 00:06:06.398281 containerd[1505]: time="2025-05-16T00:06:06.398192692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d13c34e7d8658727e9ba138be11f8c73938fcbafa6343281387ac9148ca0279\""
May 16 00:06:06.399774 kubelet[2246]: E0516 00:06:06.399713 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:06.401885 containerd[1505]: time="2025-05-16T00:06:06.401804185Z" level=info msg="CreateContainer within sandbox \"3d13c34e7d8658727e9ba138be11f8c73938fcbafa6343281387ac9148ca0279\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 16 00:06:06.419143 containerd[1505]: time="2025-05-16T00:06:06.415527229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"241544cc249e50fcc00c347c4bb0e673dbdf8b5185f5a44dc6e79929848a66d4\""
May 16 00:06:06.419143 containerd[1505]: time="2025-05-16T00:06:06.418491631Z" level=info msg="CreateContainer within sandbox \"241544cc249e50fcc00c347c4bb0e673dbdf8b5185f5a44dc6e79929848a66d4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 16 00:06:06.419349 kubelet[2246]: E0516 00:06:06.416358 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:06.430788 containerd[1505]: time="2025-05-16T00:06:06.430742403Z" level=info msg="CreateContainer within sandbox \"3d13c34e7d8658727e9ba138be11f8c73938fcbafa6343281387ac9148ca0279\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"155dfe338f6d0d82f14c6876e574dcaeaf6919481afa3362478fbbd55fdceae9\""
May 16 00:06:06.433139 containerd[1505]: time="2025-05-16T00:06:06.432698422Z" level=info msg="StartContainer for \"155dfe338f6d0d82f14c6876e574dcaeaf6919481afa3362478fbbd55fdceae9\""
May 16 00:06:06.441748 containerd[1505]: time="2025-05-16T00:06:06.441688704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:68e5d9bf3288308ee5e626f828b98d95,Namespace:kube-system,Attempt:0,} returns sandbox id \"b33ec25b08954c6f98fff54239feddb2be1290daa8188f1caae1d877c9570702\""
May 16 00:06:06.442878 containerd[1505]: time="2025-05-16T00:06:06.442842296Z" level=info msg="CreateContainer within sandbox \"241544cc249e50fcc00c347c4bb0e673dbdf8b5185f5a44dc6e79929848a66d4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f20a9dd99da9eb3e3e7ff92e6ecd949d5459c2aaccb64944293034578cfbb7e1\""
May 16 00:06:06.443088 kubelet[2246]: E0516 00:06:06.443058 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:06.443889 containerd[1505]: time="2025-05-16T00:06:06.443852552Z" level=info msg="StartContainer for \"f20a9dd99da9eb3e3e7ff92e6ecd949d5459c2aaccb64944293034578cfbb7e1\""
May 16 00:06:06.445884 containerd[1505]: time="2025-05-16T00:06:06.445728457Z" level=info msg="CreateContainer within sandbox \"b33ec25b08954c6f98fff54239feddb2be1290daa8188f1caae1d877c9570702\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 16 00:06:06.469687 systemd[1]: Started cri-containerd-155dfe338f6d0d82f14c6876e574dcaeaf6919481afa3362478fbbd55fdceae9.scope - libcontainer container 155dfe338f6d0d82f14c6876e574dcaeaf6919481afa3362478fbbd55fdceae9.
May 16 00:06:06.478590 systemd[1]: Started cri-containerd-f20a9dd99da9eb3e3e7ff92e6ecd949d5459c2aaccb64944293034578cfbb7e1.scope - libcontainer container f20a9dd99da9eb3e3e7ff92e6ecd949d5459c2aaccb64944293034578cfbb7e1.
May 16 00:06:06.558579 containerd[1505]: time="2025-05-16T00:06:06.558399942Z" level=info msg="StartContainer for \"155dfe338f6d0d82f14c6876e574dcaeaf6919481afa3362478fbbd55fdceae9\" returns successfully"
May 16 00:06:06.558579 containerd[1505]: time="2025-05-16T00:06:06.558509913Z" level=info msg="CreateContainer within sandbox \"b33ec25b08954c6f98fff54239feddb2be1290daa8188f1caae1d877c9570702\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bedcec9be0bf8ea2031ebef348c2d5de4721f16a82ffa3561169badbc672f528\""
May 16 00:06:06.558822 containerd[1505]: time="2025-05-16T00:06:06.558624001Z" level=info msg="StartContainer for \"f20a9dd99da9eb3e3e7ff92e6ecd949d5459c2aaccb64944293034578cfbb7e1\" returns successfully"
May 16 00:06:06.560917 containerd[1505]: time="2025-05-16T00:06:06.559935675Z" level=info msg="StartContainer for \"bedcec9be0bf8ea2031ebef348c2d5de4721f16a82ffa3561169badbc672f528\""
May 16 00:06:06.595666 systemd[1]: Started cri-containerd-bedcec9be0bf8ea2031ebef348c2d5de4721f16a82ffa3561169badbc672f528.scope - libcontainer container bedcec9be0bf8ea2031ebef348c2d5de4721f16a82ffa3561169badbc672f528.
May 16 00:06:06.610084 kubelet[2246]: W0516 00:06:06.610002 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
May 16 00:06:06.610233 kubelet[2246]: E0516 00:06:06.610097 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError"
May 16 00:06:06.646190 containerd[1505]: time="2025-05-16T00:06:06.646130400Z" level=info msg="StartContainer for \"bedcec9be0bf8ea2031ebef348c2d5de4721f16a82ffa3561169badbc672f528\" returns successfully"
May 16 00:06:06.743865 kubelet[2246]: E0516 00:06:06.743832 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:06.751651 kubelet[2246]: E0516 00:06:06.751630 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:06.752337 kubelet[2246]: E0516 00:06:06.751831 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:07.751902 kubelet[2246]: E0516 00:06:07.751866 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:08.120189 kubelet[2246]: E0516 00:06:08.120047 2246 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 16 00:06:08.461929 kubelet[2246]: E0516 00:06:08.461889 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:08.493903 kubelet[2246]: E0516 00:06:08.493869 2246 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 16 00:06:08.586534 kubelet[2246]: I0516 00:06:08.586431 2246 apiserver.go:52] "Watching apiserver"
May 16 00:06:08.595048 kubelet[2246]: I0516 00:06:08.595004 2246 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 16 00:06:08.992347 kubelet[2246]: E0516 00:06:08.992274 2246 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 16 00:06:08.992347 kubelet[2246]: E0516 00:06:08.992298 2246 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 16 00:06:09.247989 kubelet[2246]: I0516 00:06:09.247878 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 00:06:09.254938 kubelet[2246]: I0516 00:06:09.254893 2246 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 16 00:06:09.254938 kubelet[2246]: E0516 00:06:09.254929 2246 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 16 00:06:10.373164 systemd[1]: Reload requested from client PID 2528 ('systemctl') (unit session-7.scope)...
May 16 00:06:10.373182 systemd[1]: Reloading...
May 16 00:06:10.679485 zram_generator::config[2605]: No configuration found.
May 16 00:06:10.685040 kubelet[2246]: E0516 00:06:10.684983 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:10.755522 kubelet[2246]: E0516 00:06:10.755488 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:06:10.768140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:06:10.896919 systemd[1]: Reloading finished in 523 ms.
May 16 00:06:10.926969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:06:10.948980 systemd[1]: kubelet.service: Deactivated successfully.
May 16 00:06:10.949293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:06:10.949351 systemd[1]: kubelet.service: Consumed 1.239s CPU time, 135.8M memory peak.
May 16 00:06:10.956829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:06:11.142177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:06:11.147113 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 00:06:11.183769 kubelet[2617]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 00:06:11.183769 kubelet[2617]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 16 00:06:11.183769 kubelet[2617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 00:06:11.184265 kubelet[2617]: I0516 00:06:11.183844 2617 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 00:06:11.192796 kubelet[2617]: I0516 00:06:11.192764 2617 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 16 00:06:11.192796 kubelet[2617]: I0516 00:06:11.192791 2617 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 00:06:11.193034 kubelet[2617]: I0516 00:06:11.193019 2617 server.go:934] "Client rotation is on, will bootstrap in background"
May 16 00:06:11.194245 kubelet[2617]: I0516 00:06:11.194222 2617 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 16 00:06:11.196880 kubelet[2617]: I0516 00:06:11.196453 2617 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 00:06:11.203539 kubelet[2617]: E0516 00:06:11.202050 2617 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 16 00:06:11.203539 kubelet[2617]: I0516 00:06:11.202145 2617 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 16 00:06:11.213965 kubelet[2617]: I0516 00:06:11.213895 2617 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 00:06:11.214134 kubelet[2617]: I0516 00:06:11.214016 2617 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 16 00:06:11.214161 kubelet[2617]: I0516 00:06:11.214133 2617 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 00:06:11.215187 kubelet[2617]: I0516 00:06:11.214160 2617 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 00:06:11.215187 kubelet[2617]: I0516 00:06:11.214322 2617 topology_manager.go:138] "Creating topology manager with none policy"
May 16 00:06:11.215187 kubelet[2617]: I0516 00:06:11.214331 2617 container_manager_linux.go:300] "Creating device plugin manager"
May 16 00:06:11.215187 kubelet[2617]: I0516 00:06:11.214357 2617 state_mem.go:36] "Initialized new in-memory state store"
May 16 00:06:11.215187 kubelet[2617]: I0516 00:06:11.214500 2617 kubelet.go:408] "Attempting to sync node with API server"
May 16 00:06:11.215391 kubelet[2617]: I0516 00:06:11.214511 2617 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 00:06:11.215391 kubelet[2617]: I0516 00:06:11.214536 2617 kubelet.go:314] "Adding apiserver pod source"
May 16 00:06:11.215391 kubelet[2617]: I0516 00:06:11.214546 2617 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 00:06:11.215391 kubelet[2617]: I0516 00:06:11.215115 2617 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 16 00:06:11.215580 kubelet[2617]: I0516 00:06:11.215563 2617 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 00:06:11.216185 kubelet[2617]: I0516 00:06:11.215982 2617 server.go:1274] "Started kubelet"
May 16 00:06:11.216538 kubelet[2617]: I0516 00:06:11.216513 2617 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 16 00:06:11.217473 kubelet[2617]: I0516 00:06:11.217452 2617 server.go:449] "Adding debug handlers to kubelet server"
May 16 00:06:11.219339 kubelet[2617]: I0516 00:06:11.216516 2617 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 00:06:11.219568 kubelet[2617]: I0516 00:06:11.219547 2617 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 00:06:11.221107 kubelet[2617]: I0516 00:06:11.221092 2617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 00:06:11.225659 kubelet[2617]: E0516 00:06:11.225604 2617 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 00:06:11.226455 kubelet[2617]: I0516 00:06:11.226127 2617 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 00:06:11.226547 kubelet[2617]: E0516 00:06:11.226521 2617 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:06:11.226652 kubelet[2617]: I0516 00:06:11.226632 2617 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 16 00:06:11.226874 kubelet[2617]: I0516 00:06:11.226861 2617 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 16 00:06:11.227483 kubelet[2617]: I0516 00:06:11.227471 2617 reconciler.go:26] "Reconciler: start to sync state"
May 16 00:06:11.227882 kubelet[2617]: I0516 00:06:11.227862 2617 factory.go:221] Registration of the systemd container factory successfully
May 16 00:06:11.228280 kubelet[2617]: I0516 00:06:11.227949 2617 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 00:06:11.229904 kubelet[2617]: I0516 00:06:11.229871 2617 factory.go:221] Registration of the containerd container factory successfully
May 16 00:06:11.237487 kubelet[2617]: I0516 00:06:11.237433 2617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 16 00:06:11.239135 kubelet[2617]: I0516 00:06:11.238865 2617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 16 00:06:11.239135 kubelet[2617]: I0516 00:06:11.238895 2617 status_manager.go:217] "Starting to sync pod status with apiserver"
May 16 00:06:11.239135 kubelet[2617]: I0516 00:06:11.238917 2617 kubelet.go:2321] "Starting kubelet main sync loop"
May 16 00:06:11.239135 kubelet[2617]: E0516 00:06:11.238958 2617 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 00:06:11.269311 kubelet[2617]: I0516 00:06:11.269271 2617 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 16 00:06:11.269311 kubelet[2617]: I0516 00:06:11.269295 2617 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 16 00:06:11.269311 kubelet[2617]: I0516 00:06:11.269316 2617 state_mem.go:36] "Initialized new in-memory state store"
May 16 00:06:11.269556 kubelet[2617]: I0516 00:06:11.269492 2617 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 16 00:06:11.269556 kubelet[2617]: I0516 00:06:11.269503 2617 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 16 00:06:11.269556 kubelet[2617]: I0516 00:06:11.269522 2617 policy_none.go:49] "None policy: Start"
May 16 00:06:11.270181 kubelet[2617]: I0516 00:06:11.269966 2617 memory_manager.go:170] "Starting memorymanager" policy="None"
May 16 00:06:11.270181 kubelet[2617]: I0516 00:06:11.269987 2617 state_mem.go:35] "Initializing new in-memory state store"
May 16 00:06:11.270181 kubelet[2617]: I0516 00:06:11.270112 2617 state_mem.go:75] "Updated machine memory state"
May 16 00:06:11.274673 kubelet[2617]: I0516 00:06:11.274637 2617 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 16 00:06:11.274952 kubelet[2617]: I0516 00:06:11.274832 2617 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 00:06:11.274952 kubelet[2617]: I0516 00:06:11.274848 2617 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 00:06:11.275061 kubelet[2617]: I0516 00:06:11.275040 2617 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 00:06:11.348546 kubelet[2617]: E0516 00:06:11.348492 2617 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 16 00:06:11.372308 sudo[2653]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 16 00:06:11.372705 sudo[2653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 16 00:06:11.380576 kubelet[2617]: I0516 00:06:11.380549 2617 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 00:06:11.390058 kubelet[2617]: I0516 00:06:11.390006 2617 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 16 00:06:11.390185 kubelet[2617]: I0516 00:06:11.390166 2617 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 16 00:06:11.428182 kubelet[2617]: I0516 00:06:11.427976 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:06:11.428182 kubelet[2617]: I0516 00:06:11.428009 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:06:11.428182 kubelet[2617]: I0516 00:06:11.428037 2617 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:06:11.428182 kubelet[2617]: I0516 00:06:11.428054 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68e5d9bf3288308ee5e626f828b98d95-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"68e5d9bf3288308ee5e626f828b98d95\") " pod="kube-system/kube-apiserver-localhost" May 16 00:06:11.428182 kubelet[2617]: I0516 00:06:11.428070 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68e5d9bf3288308ee5e626f828b98d95-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"68e5d9bf3288308ee5e626f828b98d95\") " pod="kube-system/kube-apiserver-localhost" May 16 00:06:11.428401 kubelet[2617]: I0516 00:06:11.428085 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:06:11.428401 kubelet[2617]: I0516 00:06:11.428098 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:06:11.428401 kubelet[2617]: I0516 
00:06:11.428112 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 00:06:11.428401 kubelet[2617]: I0516 00:06:11.428128 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68e5d9bf3288308ee5e626f828b98d95-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"68e5d9bf3288308ee5e626f828b98d95\") " pod="kube-system/kube-apiserver-localhost" May 16 00:06:11.646154 kubelet[2617]: E0516 00:06:11.646113 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:11.649356 kubelet[2617]: E0516 00:06:11.649217 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:11.649356 kubelet[2617]: E0516 00:06:11.649276 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:11.876690 sudo[2653]: pam_unix(sudo:session): session closed for user root May 16 00:06:12.217945 kubelet[2617]: I0516 00:06:12.217899 2617 apiserver.go:52] "Watching apiserver" May 16 00:06:12.227343 kubelet[2617]: I0516 00:06:12.227296 2617 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 00:06:12.250148 kubelet[2617]: E0516 00:06:12.249664 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 16 00:06:12.250148 kubelet[2617]: E0516 00:06:12.249777 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:12.250425 kubelet[2617]: E0516 00:06:12.250407 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:12.266253 kubelet[2617]: I0516 00:06:12.266191 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.26616979 podStartE2EDuration="2.26616979s" podCreationTimestamp="2025-05-16 00:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:06:12.266051245 +0000 UTC m=+1.115221199" watchObservedRunningTime="2025-05-16 00:06:12.26616979 +0000 UTC m=+1.115339735" May 16 00:06:12.273069 kubelet[2617]: I0516 00:06:12.272998 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.272973973 podStartE2EDuration="1.272973973s" podCreationTimestamp="2025-05-16 00:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:06:12.272937193 +0000 UTC m=+1.122107137" watchObservedRunningTime="2025-05-16 00:06:12.272973973 +0000 UTC m=+1.122143917" May 16 00:06:12.290132 kubelet[2617]: I0516 00:06:12.290055 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.289997349 podStartE2EDuration="1.289997349s" podCreationTimestamp="2025-05-16 00:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:06:12.282157425 +0000 UTC m=+1.131327379" watchObservedRunningTime="2025-05-16 00:06:12.289997349 +0000 UTC m=+1.139167293" May 16 00:06:13.085232 sudo[1692]: pam_unix(sudo:session): session closed for user root May 16 00:06:13.086628 sshd[1691]: Connection closed by 10.0.0.1 port 39170 May 16 00:06:13.087204 sshd-session[1688]: pam_unix(sshd:session): session closed for user core May 16 00:06:13.091628 systemd[1]: sshd@6-10.0.0.57:22-10.0.0.1:39170.service: Deactivated successfully. May 16 00:06:13.093981 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:06:13.094229 systemd[1]: session-7.scope: Consumed 4.994s CPU time, 251.3M memory peak. May 16 00:06:13.095522 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit. May 16 00:06:13.096433 systemd-logind[1492]: Removed session 7. May 16 00:06:13.251195 kubelet[2617]: E0516 00:06:13.251160 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:14.387753 update_engine[1494]: I20250516 00:06:14.387648 1494 update_attempter.cc:509] Updating boot flags... 
May 16 00:06:14.465478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2703) May 16 00:06:14.513457 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2700) May 16 00:06:14.554708 kubelet[2617]: E0516 00:06:14.553361 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:14.555538 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2700) May 16 00:06:14.578148 kubelet[2617]: I0516 00:06:14.578115 2617 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:06:14.578697 containerd[1505]: time="2025-05-16T00:06:14.578652317Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 00:06:14.579095 kubelet[2617]: I0516 00:06:14.578855 2617 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:06:15.540469 systemd[1]: Created slice kubepods-besteffort-pod2198d78b_f46f_4325_9e27_326ffc82b84a.slice - libcontainer container kubepods-besteffort-pod2198d78b_f46f_4325_9e27_326ffc82b84a.slice. 
May 16 00:06:15.551478 kubelet[2617]: I0516 00:06:15.551413 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-etc-cni-netd\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551478 kubelet[2617]: I0516 00:06:15.551472 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-host-proc-sys-net\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551478 kubelet[2617]: I0516 00:06:15.551490 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2198d78b-f46f-4325-9e27-326ffc82b84a-kube-proxy\") pod \"kube-proxy-5lqkt\" (UID: \"2198d78b-f46f-4325-9e27-326ffc82b84a\") " pod="kube-system/kube-proxy-5lqkt" May 16 00:06:15.551721 kubelet[2617]: I0516 00:06:15.551505 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdc88c46-0efa-4255-bf67-afa530d0e584-hubble-tls\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551721 kubelet[2617]: I0516 00:06:15.551519 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdc88c46-0efa-4255-bf67-afa530d0e584-clustermesh-secrets\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551721 kubelet[2617]: I0516 00:06:15.551532 2617 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8wj9\" (UniqueName: \"kubernetes.io/projected/fdc88c46-0efa-4255-bf67-afa530d0e584-kube-api-access-j8wj9\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551721 kubelet[2617]: I0516 00:06:15.551546 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-cgroup\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551721 kubelet[2617]: I0516 00:06:15.551560 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-lib-modules\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551721 kubelet[2617]: I0516 00:06:15.551574 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-xtables-lock\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551891 kubelet[2617]: I0516 00:06:15.551586 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-config-path\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551891 kubelet[2617]: I0516 00:06:15.551601 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-run\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551891 kubelet[2617]: I0516 00:06:15.551613 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-host-proc-sys-kernel\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551891 kubelet[2617]: I0516 00:06:15.551625 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cni-path\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551891 kubelet[2617]: I0516 00:06:15.551638 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-hostproc\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.551891 kubelet[2617]: I0516 00:06:15.551659 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-bpf-maps\") pod \"cilium-fk7mc\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") " pod="kube-system/cilium-fk7mc" May 16 00:06:15.552074 kubelet[2617]: I0516 00:06:15.551671 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkglk\" (UniqueName: \"kubernetes.io/projected/2198d78b-f46f-4325-9e27-326ffc82b84a-kube-api-access-gkglk\") pod \"kube-proxy-5lqkt\" (UID: 
\"2198d78b-f46f-4325-9e27-326ffc82b84a\") " pod="kube-system/kube-proxy-5lqkt" May 16 00:06:15.552074 kubelet[2617]: I0516 00:06:15.551687 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2198d78b-f46f-4325-9e27-326ffc82b84a-xtables-lock\") pod \"kube-proxy-5lqkt\" (UID: \"2198d78b-f46f-4325-9e27-326ffc82b84a\") " pod="kube-system/kube-proxy-5lqkt" May 16 00:06:15.552074 kubelet[2617]: I0516 00:06:15.551705 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2198d78b-f46f-4325-9e27-326ffc82b84a-lib-modules\") pod \"kube-proxy-5lqkt\" (UID: \"2198d78b-f46f-4325-9e27-326ffc82b84a\") " pod="kube-system/kube-proxy-5lqkt" May 16 00:06:15.565702 systemd[1]: Created slice kubepods-burstable-podfdc88c46_0efa_4255_bf67_afa530d0e584.slice - libcontainer container kubepods-burstable-podfdc88c46_0efa_4255_bf67_afa530d0e584.slice. May 16 00:06:15.627642 systemd[1]: Created slice kubepods-besteffort-podd4d3b9d9_ae56_4554_bac7_7dffc7d59d5f.slice - libcontainer container kubepods-besteffort-podd4d3b9d9_ae56_4554_bac7_7dffc7d59d5f.slice. 
May 16 00:06:15.652088 kubelet[2617]: I0516 00:06:15.652003 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrzb8\" (UniqueName: \"kubernetes.io/projected/d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f-kube-api-access-nrzb8\") pod \"cilium-operator-5d85765b45-qd7bv\" (UID: \"d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f\") " pod="kube-system/cilium-operator-5d85765b45-qd7bv" May 16 00:06:15.652668 kubelet[2617]: I0516 00:06:15.652557 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f-cilium-config-path\") pod \"cilium-operator-5d85765b45-qd7bv\" (UID: \"d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f\") " pod="kube-system/cilium-operator-5d85765b45-qd7bv" May 16 00:06:15.864347 kubelet[2617]: E0516 00:06:15.864238 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:15.864805 containerd[1505]: time="2025-05-16T00:06:15.864764598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5lqkt,Uid:2198d78b-f46f-4325-9e27-326ffc82b84a,Namespace:kube-system,Attempt:0,}" May 16 00:06:15.871259 kubelet[2617]: E0516 00:06:15.871236 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:15.871599 containerd[1505]: time="2025-05-16T00:06:15.871575389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fk7mc,Uid:fdc88c46-0efa-4255-bf67-afa530d0e584,Namespace:kube-system,Attempt:0,}" May 16 00:06:15.930755 kubelet[2617]: E0516 00:06:15.930714 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:15.931330 containerd[1505]: time="2025-05-16T00:06:15.931291931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qd7bv,Uid:d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f,Namespace:kube-system,Attempt:0,}" May 16 00:06:16.065333 containerd[1505]: time="2025-05-16T00:06:16.065214285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:06:16.065333 containerd[1505]: time="2025-05-16T00:06:16.065277775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:06:16.065333 containerd[1505]: time="2025-05-16T00:06:16.065287764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:06:16.066182 containerd[1505]: time="2025-05-16T00:06:16.065409064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:06:16.071969 containerd[1505]: time="2025-05-16T00:06:16.071842871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:06:16.071969 containerd[1505]: time="2025-05-16T00:06:16.071917822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:06:16.071969 containerd[1505]: time="2025-05-16T00:06:16.071934094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:06:16.072089 containerd[1505]: time="2025-05-16T00:06:16.072017822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:06:16.074753 containerd[1505]: time="2025-05-16T00:06:16.074198350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:06:16.074753 containerd[1505]: time="2025-05-16T00:06:16.074297167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:06:16.074753 containerd[1505]: time="2025-05-16T00:06:16.074324549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:06:16.074964 containerd[1505]: time="2025-05-16T00:06:16.074658772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:06:16.087617 systemd[1]: Started cri-containerd-a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350.scope - libcontainer container a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350. May 16 00:06:16.092602 systemd[1]: Started cri-containerd-42c5a2650ba3385e5ef1fcd1bbbf0c532adb06fdb62bf739bd0871bdfde2cce6.scope - libcontainer container 42c5a2650ba3385e5ef1fcd1bbbf0c532adb06fdb62bf739bd0871bdfde2cce6. May 16 00:06:16.095294 systemd[1]: Started cri-containerd-b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52.scope - libcontainer container b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52. 
May 16 00:06:16.119836 containerd[1505]: time="2025-05-16T00:06:16.119518122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fk7mc,Uid:fdc88c46-0efa-4255-bf67-afa530d0e584,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\"" May 16 00:06:16.120237 kubelet[2617]: E0516 00:06:16.120211 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:16.121934 containerd[1505]: time="2025-05-16T00:06:16.121897245Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:06:16.130177 containerd[1505]: time="2025-05-16T00:06:16.130146386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5lqkt,Uid:2198d78b-f46f-4325-9e27-326ffc82b84a,Namespace:kube-system,Attempt:0,} returns sandbox id \"42c5a2650ba3385e5ef1fcd1bbbf0c532adb06fdb62bf739bd0871bdfde2cce6\"" May 16 00:06:16.131051 kubelet[2617]: E0516 00:06:16.131018 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:16.133635 containerd[1505]: time="2025-05-16T00:06:16.133587725Z" level=info msg="CreateContainer within sandbox \"42c5a2650ba3385e5ef1fcd1bbbf0c532adb06fdb62bf739bd0871bdfde2cce6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:06:16.146897 containerd[1505]: time="2025-05-16T00:06:16.146854797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qd7bv,Uid:d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52\"" May 16 00:06:16.147609 kubelet[2617]: E0516 00:06:16.147578 2617 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:16.154649 containerd[1505]: time="2025-05-16T00:06:16.154597957Z" level=info msg="CreateContainer within sandbox \"42c5a2650ba3385e5ef1fcd1bbbf0c532adb06fdb62bf739bd0871bdfde2cce6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"72ca2c57653eeec18186c1bff8d2cd478716729672ff2f749cb5b5105307f718\"" May 16 00:06:16.155081 containerd[1505]: time="2025-05-16T00:06:16.155046137Z" level=info msg="StartContainer for \"72ca2c57653eeec18186c1bff8d2cd478716729672ff2f749cb5b5105307f718\"" May 16 00:06:16.187576 systemd[1]: Started cri-containerd-72ca2c57653eeec18186c1bff8d2cd478716729672ff2f749cb5b5105307f718.scope - libcontainer container 72ca2c57653eeec18186c1bff8d2cd478716729672ff2f749cb5b5105307f718. May 16 00:06:16.223937 containerd[1505]: time="2025-05-16T00:06:16.223894499Z" level=info msg="StartContainer for \"72ca2c57653eeec18186c1bff8d2cd478716729672ff2f749cb5b5105307f718\" returns successfully" May 16 00:06:16.262320 kubelet[2617]: E0516 00:06:16.261475 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:16.270564 kubelet[2617]: I0516 00:06:16.270482 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5lqkt" podStartSLOduration=1.270402797 podStartE2EDuration="1.270402797s" podCreationTimestamp="2025-05-16 00:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:06:16.27029894 +0000 UTC m=+5.119468884" watchObservedRunningTime="2025-05-16 00:06:16.270402797 +0000 UTC m=+5.119572741" May 16 00:06:16.569747 kubelet[2617]: E0516 00:06:16.569712 2617 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:17.263003 kubelet[2617]: E0516 00:06:17.262964 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:18.264626 kubelet[2617]: E0516 00:06:18.264594 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:18.616860 kubelet[2617]: E0516 00:06:18.616665 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:19.266540 kubelet[2617]: E0516 00:06:19.266512 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:23.170574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1032146644.mount: Deactivated successfully. 
May 16 00:06:24.805878 kubelet[2617]: E0516 00:06:24.805826 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:27.192840 containerd[1505]: time="2025-05-16T00:06:27.192777525Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:06:27.193574 containerd[1505]: time="2025-05-16T00:06:27.193533672Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 00:06:27.194743 containerd[1505]: time="2025-05-16T00:06:27.194687748Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:06:27.196994 containerd[1505]: time="2025-05-16T00:06:27.196955266Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.07492628s" May 16 00:06:27.196994 containerd[1505]: time="2025-05-16T00:06:27.196985522Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 00:06:27.204867 containerd[1505]: time="2025-05-16T00:06:27.204772231Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 00:06:27.222070 containerd[1505]: time="2025-05-16T00:06:27.222022560Z" level=info msg="CreateContainer within sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:06:27.235966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231694112.mount: Deactivated successfully. May 16 00:06:27.237289 containerd[1505]: time="2025-05-16T00:06:27.237242780Z" level=info msg="CreateContainer within sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\"" May 16 00:06:27.237922 containerd[1505]: time="2025-05-16T00:06:27.237876475Z" level=info msg="StartContainer for \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\"" May 16 00:06:27.269583 systemd[1]: Started cri-containerd-09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1.scope - libcontainer container 09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1. May 16 00:06:27.298481 containerd[1505]: time="2025-05-16T00:06:27.298423114Z" level=info msg="StartContainer for \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\" returns successfully" May 16 00:06:27.309199 systemd[1]: cri-containerd-09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1.scope: Deactivated successfully. 
May 16 00:06:28.105218 containerd[1505]: time="2025-05-16T00:06:28.105133183Z" level=info msg="shim disconnected" id=09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1 namespace=k8s.io May 16 00:06:28.105218 containerd[1505]: time="2025-05-16T00:06:28.105207223Z" level=warning msg="cleaning up after shim disconnected" id=09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1 namespace=k8s.io May 16 00:06:28.105218 containerd[1505]: time="2025-05-16T00:06:28.105216590Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:06:28.233428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1-rootfs.mount: Deactivated successfully. May 16 00:06:28.290595 kubelet[2617]: E0516 00:06:28.290547 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:28.292795 containerd[1505]: time="2025-05-16T00:06:28.292743658Z" level=info msg="CreateContainer within sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:06:28.314431 containerd[1505]: time="2025-05-16T00:06:28.314342245Z" level=info msg="CreateContainer within sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\"" May 16 00:06:28.315551 containerd[1505]: time="2025-05-16T00:06:28.315518193Z" level=info msg="StartContainer for \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\"" May 16 00:06:28.345563 systemd[1]: Started cri-containerd-f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63.scope - libcontainer container 
f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63. May 16 00:06:28.374799 containerd[1505]: time="2025-05-16T00:06:28.374631879Z" level=info msg="StartContainer for \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\" returns successfully" May 16 00:06:28.388234 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:06:28.388714 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:06:28.389022 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 00:06:28.394797 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:06:28.395011 systemd[1]: cri-containerd-f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63.scope: Deactivated successfully. May 16 00:06:28.412244 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:06:28.415463 containerd[1505]: time="2025-05-16T00:06:28.415376309Z" level=info msg="shim disconnected" id=f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63 namespace=k8s.io May 16 00:06:28.415576 containerd[1505]: time="2025-05-16T00:06:28.415459556Z" level=warning msg="cleaning up after shim disconnected" id=f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63 namespace=k8s.io May 16 00:06:28.415576 containerd[1505]: time="2025-05-16T00:06:28.415472590Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:06:29.233329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63-rootfs.mount: Deactivated successfully. 
May 16 00:06:29.292233 kubelet[2617]: E0516 00:06:29.292095 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:29.295193 containerd[1505]: time="2025-05-16T00:06:29.295140053Z" level=info msg="CreateContainer within sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:06:29.310276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211177521.mount: Deactivated successfully. May 16 00:06:29.316073 containerd[1505]: time="2025-05-16T00:06:29.316019922Z" level=info msg="CreateContainer within sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\"" May 16 00:06:29.316687 containerd[1505]: time="2025-05-16T00:06:29.316655670Z" level=info msg="StartContainer for \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\"" May 16 00:06:29.322737 containerd[1505]: time="2025-05-16T00:06:29.322694413Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:06:29.323648 containerd[1505]: time="2025-05-16T00:06:29.323602355Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 16 00:06:29.325414 containerd[1505]: time="2025-05-16T00:06:29.325378282Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:06:29.326495 containerd[1505]: 
time="2025-05-16T00:06:29.326367707Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.121563005s" May 16 00:06:29.326495 containerd[1505]: time="2025-05-16T00:06:29.326396171Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 16 00:06:29.330793 containerd[1505]: time="2025-05-16T00:06:29.330697359Z" level=info msg="CreateContainer within sandbox \"b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 00:06:29.345100 containerd[1505]: time="2025-05-16T00:06:29.345055243Z" level=info msg="CreateContainer within sandbox \"b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\"" May 16 00:06:29.345521 containerd[1505]: time="2025-05-16T00:06:29.345487528Z" level=info msg="StartContainer for \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\"" May 16 00:06:29.347738 systemd[1]: Started cri-containerd-9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a.scope - libcontainer container 9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a. May 16 00:06:29.382577 systemd[1]: Started cri-containerd-a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6.scope - libcontainer container a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6. 
May 16 00:06:29.400046 containerd[1505]: time="2025-05-16T00:06:29.400007390Z" level=info msg="StartContainer for \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\" returns successfully" May 16 00:06:29.400633 systemd[1]: cri-containerd-9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a.scope: Deactivated successfully. May 16 00:06:29.410502 containerd[1505]: time="2025-05-16T00:06:29.410304260Z" level=info msg="StartContainer for \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\" returns successfully" May 16 00:06:29.718802 containerd[1505]: time="2025-05-16T00:06:29.718720731Z" level=info msg="shim disconnected" id=9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a namespace=k8s.io May 16 00:06:29.718802 containerd[1505]: time="2025-05-16T00:06:29.718784931Z" level=warning msg="cleaning up after shim disconnected" id=9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a namespace=k8s.io May 16 00:06:29.718802 containerd[1505]: time="2025-05-16T00:06:29.718794359Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:06:30.234126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a-rootfs.mount: Deactivated successfully. 
May 16 00:06:30.301705 kubelet[2617]: E0516 00:06:30.301668 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:30.301705 kubelet[2617]: E0516 00:06:30.301683 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:30.303577 containerd[1505]: time="2025-05-16T00:06:30.303532916Z" level=info msg="CreateContainer within sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:06:30.508708 containerd[1505]: time="2025-05-16T00:06:30.508579683Z" level=info msg="CreateContainer within sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\"" May 16 00:06:30.511100 kubelet[2617]: I0516 00:06:30.511045 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-qd7bv" podStartSLOduration=2.33183613 podStartE2EDuration="15.511027897s" podCreationTimestamp="2025-05-16 00:06:15 +0000 UTC" firstStartedPulling="2025-05-16 00:06:16.148388306 +0000 UTC m=+4.997558250" lastFinishedPulling="2025-05-16 00:06:29.327580073 +0000 UTC m=+18.176750017" observedRunningTime="2025-05-16 00:06:30.510726499 +0000 UTC m=+19.359896443" watchObservedRunningTime="2025-05-16 00:06:30.511027897 +0000 UTC m=+19.360197841" May 16 00:06:30.511460 containerd[1505]: time="2025-05-16T00:06:30.511412271Z" level=info msg="StartContainer for \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\"" May 16 00:06:30.573598 systemd[1]: Started 
cri-containerd-ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c.scope - libcontainer container ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c. May 16 00:06:30.598482 systemd[1]: cri-containerd-ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c.scope: Deactivated successfully. May 16 00:06:30.600617 containerd[1505]: time="2025-05-16T00:06:30.600576956Z" level=info msg="StartContainer for \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\" returns successfully" May 16 00:06:30.634426 containerd[1505]: time="2025-05-16T00:06:30.634361181Z" level=info msg="shim disconnected" id=ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c namespace=k8s.io May 16 00:06:30.634426 containerd[1505]: time="2025-05-16T00:06:30.634419440Z" level=warning msg="cleaning up after shim disconnected" id=ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c namespace=k8s.io May 16 00:06:30.634426 containerd[1505]: time="2025-05-16T00:06:30.634428848Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:06:31.233284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c-rootfs.mount: Deactivated successfully. 
May 16 00:06:31.303679 kubelet[2617]: E0516 00:06:31.303614 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:31.303679 kubelet[2617]: E0516 00:06:31.303614 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:31.305255 containerd[1505]: time="2025-05-16T00:06:31.305182966Z" level=info msg="CreateContainer within sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:06:31.324733 containerd[1505]: time="2025-05-16T00:06:31.324693713Z" level=info msg="CreateContainer within sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\"" May 16 00:06:31.325431 containerd[1505]: time="2025-05-16T00:06:31.325400434Z" level=info msg="StartContainer for \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\"" May 16 00:06:31.361690 systemd[1]: Started cri-containerd-7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05.scope - libcontainer container 7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05. 
May 16 00:06:31.392153 containerd[1505]: time="2025-05-16T00:06:31.392099157Z" level=info msg="StartContainer for \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\" returns successfully" May 16 00:06:31.537397 kubelet[2617]: I0516 00:06:31.537290 2617 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 16 00:06:31.574508 systemd[1]: Created slice kubepods-burstable-pod0b7c7a19_2294_4d7e_8401_973fdfaf804e.slice - libcontainer container kubepods-burstable-pod0b7c7a19_2294_4d7e_8401_973fdfaf804e.slice. May 16 00:06:31.587324 systemd[1]: Created slice kubepods-burstable-podc8a3eb4d_2e6d_43fb_a983_86beb8ae07f0.slice - libcontainer container kubepods-burstable-podc8a3eb4d_2e6d_43fb_a983_86beb8ae07f0.slice. May 16 00:06:31.761568 kubelet[2617]: I0516 00:06:31.761522 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8a3eb4d-2e6d-43fb-a983-86beb8ae07f0-config-volume\") pod \"coredns-7c65d6cfc9-z9gb7\" (UID: \"c8a3eb4d-2e6d-43fb-a983-86beb8ae07f0\") " pod="kube-system/coredns-7c65d6cfc9-z9gb7" May 16 00:06:31.761568 kubelet[2617]: I0516 00:06:31.761570 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fv8g\" (UniqueName: \"kubernetes.io/projected/c8a3eb4d-2e6d-43fb-a983-86beb8ae07f0-kube-api-access-2fv8g\") pod \"coredns-7c65d6cfc9-z9gb7\" (UID: \"c8a3eb4d-2e6d-43fb-a983-86beb8ae07f0\") " pod="kube-system/coredns-7c65d6cfc9-z9gb7" May 16 00:06:31.761792 kubelet[2617]: I0516 00:06:31.761593 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b7c7a19-2294-4d7e-8401-973fdfaf804e-config-volume\") pod \"coredns-7c65d6cfc9-2ml7v\" (UID: \"0b7c7a19-2294-4d7e-8401-973fdfaf804e\") " pod="kube-system/coredns-7c65d6cfc9-2ml7v" May 16 00:06:31.761792 
kubelet[2617]: I0516 00:06:31.761611 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk9vr\" (UniqueName: \"kubernetes.io/projected/0b7c7a19-2294-4d7e-8401-973fdfaf804e-kube-api-access-qk9vr\") pod \"coredns-7c65d6cfc9-2ml7v\" (UID: \"0b7c7a19-2294-4d7e-8401-973fdfaf804e\") " pod="kube-system/coredns-7c65d6cfc9-2ml7v" May 16 00:06:31.880170 kubelet[2617]: E0516 00:06:31.880059 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:31.881237 containerd[1505]: time="2025-05-16T00:06:31.881186074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2ml7v,Uid:0b7c7a19-2294-4d7e-8401-973fdfaf804e,Namespace:kube-system,Attempt:0,}" May 16 00:06:31.889664 kubelet[2617]: E0516 00:06:31.889622 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:31.890204 containerd[1505]: time="2025-05-16T00:06:31.890121567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z9gb7,Uid:c8a3eb4d-2e6d-43fb-a983-86beb8ae07f0,Namespace:kube-system,Attempt:0,}" May 16 00:06:32.308112 kubelet[2617]: E0516 00:06:32.308081 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:32.320815 kubelet[2617]: I0516 00:06:32.320720 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fk7mc" podStartSLOduration=6.238358133 podStartE2EDuration="17.320698394s" podCreationTimestamp="2025-05-16 00:06:15 +0000 UTC" firstStartedPulling="2025-05-16 00:06:16.121469725 +0000 UTC m=+4.970639669" lastFinishedPulling="2025-05-16 00:06:27.203809966 +0000 
UTC m=+16.052979930" observedRunningTime="2025-05-16 00:06:32.320330521 +0000 UTC m=+21.169500495" watchObservedRunningTime="2025-05-16 00:06:32.320698394 +0000 UTC m=+21.169868338" May 16 00:06:33.309639 kubelet[2617]: E0516 00:06:33.309604 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:33.581825 systemd-networkd[1413]: cilium_host: Link UP May 16 00:06:33.582048 systemd-networkd[1413]: cilium_net: Link UP May 16 00:06:33.582298 systemd-networkd[1413]: cilium_net: Gained carrier May 16 00:06:33.583208 systemd-networkd[1413]: cilium_host: Gained carrier May 16 00:06:33.583594 systemd-networkd[1413]: cilium_net: Gained IPv6LL May 16 00:06:33.680659 systemd-networkd[1413]: cilium_vxlan: Link UP May 16 00:06:33.680670 systemd-networkd[1413]: cilium_vxlan: Gained carrier May 16 00:06:33.890482 kernel: NET: Registered PF_ALG protocol family May 16 00:06:34.311262 kubelet[2617]: E0516 00:06:34.311229 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:34.550628 systemd-networkd[1413]: lxc_health: Link UP May 16 00:06:34.561128 systemd-networkd[1413]: lxc_health: Gained carrier May 16 00:06:34.587848 systemd-networkd[1413]: cilium_host: Gained IPv6LL May 16 00:06:34.836647 systemd-networkd[1413]: cilium_vxlan: Gained IPv6LL May 16 00:06:34.933060 systemd-networkd[1413]: lxc8cf89d724408: Link UP May 16 00:06:34.934472 kernel: eth0: renamed from tmp78027 May 16 00:06:34.945159 systemd-networkd[1413]: lxc8cf89d724408: Gained carrier May 16 00:06:34.973123 systemd-networkd[1413]: lxcdca38aef33f8: Link UP May 16 00:06:34.973467 kernel: eth0: renamed from tmp97334 May 16 00:06:34.981638 systemd-networkd[1413]: lxcdca38aef33f8: Gained carrier May 16 00:06:35.873030 kubelet[2617]: E0516 00:06:35.872982 
2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:36.236765 systemd[1]: Started sshd@7-10.0.0.57:22-10.0.0.1:40568.service - OpenSSH per-connection server daemon (10.0.0.1:40568). May 16 00:06:36.245102 systemd-networkd[1413]: lxcdca38aef33f8: Gained IPv6LL May 16 00:06:36.286574 sshd[3847]: Accepted publickey for core from 10.0.0.1 port 40568 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:06:36.287994 sshd-session[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:06:36.292105 systemd-logind[1492]: New session 8 of user core. May 16 00:06:36.301547 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 00:06:36.313812 kubelet[2617]: E0516 00:06:36.313761 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:36.424968 sshd[3850]: Connection closed by 10.0.0.1 port 40568 May 16 00:06:36.425299 sshd-session[3847]: pam_unix(sshd:session): session closed for user core May 16 00:06:36.428724 systemd[1]: sshd@7-10.0.0.57:22-10.0.0.1:40568.service: Deactivated successfully. May 16 00:06:36.430649 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:06:36.431293 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit. May 16 00:06:36.432049 systemd-logind[1492]: Removed session 8. 
May 16 00:06:36.436581 systemd-networkd[1413]: lxc8cf89d724408: Gained IPv6LL May 16 00:06:36.564576 systemd-networkd[1413]: lxc_health: Gained IPv6LL May 16 00:06:37.315846 kubelet[2617]: E0516 00:06:37.315796 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:38.393705 containerd[1505]: time="2025-05-16T00:06:38.393613020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:06:38.393705 containerd[1505]: time="2025-05-16T00:06:38.393658245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:06:38.393705 containerd[1505]: time="2025-05-16T00:06:38.393670077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:06:38.394340 containerd[1505]: time="2025-05-16T00:06:38.394290584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:06:38.397148 containerd[1505]: time="2025-05-16T00:06:38.395849447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:06:38.397148 containerd[1505]: time="2025-05-16T00:06:38.395903469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:06:38.397148 containerd[1505]: time="2025-05-16T00:06:38.395922985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:06:38.397148 containerd[1505]: time="2025-05-16T00:06:38.396007434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:06:38.426598 systemd[1]: Started cri-containerd-78027decd2a9f5b517f2ae9729ad40ced971d5746f84e522a6a858d019152986.scope - libcontainer container 78027decd2a9f5b517f2ae9729ad40ced971d5746f84e522a6a858d019152986. May 16 00:06:38.428585 systemd[1]: Started cri-containerd-973343c3d5e956f083ccc0e42c97735c342f7da271342e566dea268f474f28ba.scope - libcontainer container 973343c3d5e956f083ccc0e42c97735c342f7da271342e566dea268f474f28ba. May 16 00:06:38.440576 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:06:38.442629 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:06:38.467133 containerd[1505]: time="2025-05-16T00:06:38.466872996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2ml7v,Uid:0b7c7a19-2294-4d7e-8401-973fdfaf804e,Namespace:kube-system,Attempt:0,} returns sandbox id \"78027decd2a9f5b517f2ae9729ad40ced971d5746f84e522a6a858d019152986\"" May 16 00:06:38.468271 kubelet[2617]: E0516 00:06:38.468242 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:38.470059 containerd[1505]: time="2025-05-16T00:06:38.470011199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-z9gb7,Uid:c8a3eb4d-2e6d-43fb-a983-86beb8ae07f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"973343c3d5e956f083ccc0e42c97735c342f7da271342e566dea268f474f28ba\"" May 16 00:06:38.471414 kubelet[2617]: E0516 00:06:38.470853 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:38.471883 containerd[1505]: time="2025-05-16T00:06:38.471844147Z" 
level=info msg="CreateContainer within sandbox \"78027decd2a9f5b517f2ae9729ad40ced971d5746f84e522a6a858d019152986\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:06:38.473928 containerd[1505]: time="2025-05-16T00:06:38.473902649Z" level=info msg="CreateContainer within sandbox \"973343c3d5e956f083ccc0e42c97735c342f7da271342e566dea268f474f28ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:06:38.495521 containerd[1505]: time="2025-05-16T00:06:38.495467250Z" level=info msg="CreateContainer within sandbox \"973343c3d5e956f083ccc0e42c97735c342f7da271342e566dea268f474f28ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9a44377dcd66eacf21b3b45e895364cecd9f08cad6fa4c01cd16c9b8fde888c\"" May 16 00:06:38.496080 containerd[1505]: time="2025-05-16T00:06:38.496042171Z" level=info msg="StartContainer for \"d9a44377dcd66eacf21b3b45e895364cecd9f08cad6fa4c01cd16c9b8fde888c\"" May 16 00:06:38.510175 containerd[1505]: time="2025-05-16T00:06:38.510044799Z" level=info msg="CreateContainer within sandbox \"78027decd2a9f5b517f2ae9729ad40ced971d5746f84e522a6a858d019152986\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ce6d67929fc6ef37c40408f2d891751c78b10cb1f9ab11e9cfc63aede238f4f\"" May 16 00:06:38.511204 containerd[1505]: time="2025-05-16T00:06:38.511121795Z" level=info msg="StartContainer for \"2ce6d67929fc6ef37c40408f2d891751c78b10cb1f9ab11e9cfc63aede238f4f\"" May 16 00:06:38.522620 systemd[1]: Started cri-containerd-d9a44377dcd66eacf21b3b45e895364cecd9f08cad6fa4c01cd16c9b8fde888c.scope - libcontainer container d9a44377dcd66eacf21b3b45e895364cecd9f08cad6fa4c01cd16c9b8fde888c. May 16 00:06:38.548613 systemd[1]: Started cri-containerd-2ce6d67929fc6ef37c40408f2d891751c78b10cb1f9ab11e9cfc63aede238f4f.scope - libcontainer container 2ce6d67929fc6ef37c40408f2d891751c78b10cb1f9ab11e9cfc63aede238f4f. 
May 16 00:06:38.574546 containerd[1505]: time="2025-05-16T00:06:38.574247209Z" level=info msg="StartContainer for \"d9a44377dcd66eacf21b3b45e895364cecd9f08cad6fa4c01cd16c9b8fde888c\" returns successfully" May 16 00:06:38.578901 containerd[1505]: time="2025-05-16T00:06:38.578870535Z" level=info msg="StartContainer for \"2ce6d67929fc6ef37c40408f2d891751c78b10cb1f9ab11e9cfc63aede238f4f\" returns successfully" May 16 00:06:39.321225 kubelet[2617]: E0516 00:06:39.321090 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:39.338834 kubelet[2617]: E0516 00:06:39.338310 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:39.343469 kubelet[2617]: I0516 00:06:39.342233 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-z9gb7" podStartSLOduration=24.342213561 podStartE2EDuration="24.342213561s" podCreationTimestamp="2025-05-16 00:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:06:39.331947774 +0000 UTC m=+28.181117718" watchObservedRunningTime="2025-05-16 00:06:39.342213561 +0000 UTC m=+28.191383505" May 16 00:06:39.353523 kubelet[2617]: I0516 00:06:39.353330 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2ml7v" podStartSLOduration=24.353308516 podStartE2EDuration="24.353308516s" podCreationTimestamp="2025-05-16 00:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:06:39.351711753 +0000 UTC m=+28.200881697" watchObservedRunningTime="2025-05-16 00:06:39.353308516 +0000 UTC 
m=+28.202478480" May 16 00:06:40.340288 kubelet[2617]: E0516 00:06:40.340252 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:40.340288 kubelet[2617]: E0516 00:06:40.340251 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:41.342916 kubelet[2617]: E0516 00:06:41.342870 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:41.343367 kubelet[2617]: E0516 00:06:41.343167 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:06:41.437113 systemd[1]: Started sshd@8-10.0.0.57:22-10.0.0.1:40576.service - OpenSSH per-connection server daemon (10.0.0.1:40576). May 16 00:06:41.486431 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 40576 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo May 16 00:06:41.488316 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:06:41.492657 systemd-logind[1492]: New session 9 of user core. May 16 00:06:41.502591 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 00:06:41.673837 sshd[4046]: Connection closed by 10.0.0.1 port 40576 May 16 00:06:41.674203 sshd-session[4044]: pam_unix(sshd:session): session closed for user core May 16 00:06:41.678228 systemd[1]: sshd@8-10.0.0.57:22-10.0.0.1:40576.service: Deactivated successfully. May 16 00:06:41.680754 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:06:41.681734 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit. 
May 16 00:06:41.682823 systemd-logind[1492]: Removed session 9.
May 16 00:06:46.687296 systemd[1]: Started sshd@9-10.0.0.57:22-10.0.0.1:36152.service - OpenSSH per-connection server daemon (10.0.0.1:36152).
May 16 00:06:46.733354 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 36152 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:06:46.735254 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:06:46.740024 systemd-logind[1492]: New session 10 of user core.
May 16 00:06:46.753587 systemd[1]: Started session-10.scope - Session 10 of User core.
May 16 00:06:46.871084 sshd[4064]: Connection closed by 10.0.0.1 port 36152
May 16 00:06:46.871459 sshd-session[4062]: pam_unix(sshd:session): session closed for user core
May 16 00:06:46.875288 systemd[1]: sshd@9-10.0.0.57:22-10.0.0.1:36152.service: Deactivated successfully.
May 16 00:06:46.878003 systemd[1]: session-10.scope: Deactivated successfully.
May 16 00:06:46.878683 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit.
May 16 00:06:46.879712 systemd-logind[1492]: Removed session 10.
May 16 00:06:51.885839 systemd[1]: Started sshd@10-10.0.0.57:22-10.0.0.1:36160.service - OpenSSH per-connection server daemon (10.0.0.1:36160).
May 16 00:06:51.927750 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 36160 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:06:51.929377 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:06:51.933340 systemd-logind[1492]: New session 11 of user core.
May 16 00:06:51.949555 systemd[1]: Started session-11.scope - Session 11 of User core.
May 16 00:06:52.062110 sshd[4081]: Connection closed by 10.0.0.1 port 36160
May 16 00:06:52.062557 sshd-session[4079]: pam_unix(sshd:session): session closed for user core
May 16 00:06:52.079336 systemd[1]: sshd@10-10.0.0.57:22-10.0.0.1:36160.service: Deactivated successfully.
May 16 00:06:52.081397 systemd[1]: session-11.scope: Deactivated successfully.
May 16 00:06:52.082955 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit.
May 16 00:06:52.091706 systemd[1]: Started sshd@11-10.0.0.57:22-10.0.0.1:36176.service - OpenSSH per-connection server daemon (10.0.0.1:36176).
May 16 00:06:52.092585 systemd-logind[1492]: Removed session 11.
May 16 00:06:52.129941 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 36176 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:06:52.131344 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:06:52.135715 systemd-logind[1492]: New session 12 of user core.
May 16 00:06:52.146572 systemd[1]: Started session-12.scope - Session 12 of User core.
May 16 00:06:52.299079 sshd[4097]: Connection closed by 10.0.0.1 port 36176
May 16 00:06:52.301093 sshd-session[4094]: pam_unix(sshd:session): session closed for user core
May 16 00:06:52.314319 systemd[1]: sshd@11-10.0.0.57:22-10.0.0.1:36176.service: Deactivated successfully.
May 16 00:06:52.316321 systemd[1]: session-12.scope: Deactivated successfully.
May 16 00:06:52.318064 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit.
May 16 00:06:52.322675 systemd[1]: Started sshd@12-10.0.0.57:22-10.0.0.1:36184.service - OpenSSH per-connection server daemon (10.0.0.1:36184).
May 16 00:06:52.323642 systemd-logind[1492]: Removed session 12.
May 16 00:06:52.366620 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 36184 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:06:52.368018 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:06:52.372348 systemd-logind[1492]: New session 13 of user core.
May 16 00:06:52.382577 systemd[1]: Started session-13.scope - Session 13 of User core.
May 16 00:06:52.501822 sshd[4110]: Connection closed by 10.0.0.1 port 36184
May 16 00:06:52.502193 sshd-session[4107]: pam_unix(sshd:session): session closed for user core
May 16 00:06:52.506521 systemd[1]: sshd@12-10.0.0.57:22-10.0.0.1:36184.service: Deactivated successfully.
May 16 00:06:52.508777 systemd[1]: session-13.scope: Deactivated successfully.
May 16 00:06:52.509420 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit.
May 16 00:06:52.510321 systemd-logind[1492]: Removed session 13.
May 16 00:06:57.542965 systemd[1]: Started sshd@13-10.0.0.57:22-10.0.0.1:35640.service - OpenSSH per-connection server daemon (10.0.0.1:35640).
May 16 00:06:57.633413 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 35640 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:06:57.634188 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:06:57.647680 systemd-logind[1492]: New session 14 of user core.
May 16 00:06:57.653735 systemd[1]: Started session-14.scope - Session 14 of User core.
May 16 00:06:57.846403 sshd[4127]: Connection closed by 10.0.0.1 port 35640
May 16 00:06:57.847121 sshd-session[4125]: pam_unix(sshd:session): session closed for user core
May 16 00:06:57.852200 systemd[1]: sshd@13-10.0.0.57:22-10.0.0.1:35640.service: Deactivated successfully.
May 16 00:06:57.855065 systemd[1]: session-14.scope: Deactivated successfully.
May 16 00:06:57.856406 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit.
May 16 00:06:57.857490 systemd-logind[1492]: Removed session 14.
May 16 00:07:02.862772 systemd[1]: Started sshd@14-10.0.0.57:22-10.0.0.1:35642.service - OpenSSH per-connection server daemon (10.0.0.1:35642).
May 16 00:07:02.904247 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 35642 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:02.905778 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:02.909980 systemd-logind[1492]: New session 15 of user core.
May 16 00:07:02.918591 systemd[1]: Started session-15.scope - Session 15 of User core.
May 16 00:07:03.029026 sshd[4142]: Connection closed by 10.0.0.1 port 35642
May 16 00:07:03.029358 sshd-session[4140]: pam_unix(sshd:session): session closed for user core
May 16 00:07:03.032743 systemd[1]: sshd@14-10.0.0.57:22-10.0.0.1:35642.service: Deactivated successfully.
May 16 00:07:03.034613 systemd[1]: session-15.scope: Deactivated successfully.
May 16 00:07:03.035243 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit.
May 16 00:07:03.036098 systemd-logind[1492]: Removed session 15.
May 16 00:07:08.041634 systemd[1]: Started sshd@15-10.0.0.57:22-10.0.0.1:54120.service - OpenSSH per-connection server daemon (10.0.0.1:54120).
May 16 00:07:08.086035 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 54120 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:08.088246 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:08.093497 systemd-logind[1492]: New session 16 of user core.
May 16 00:07:08.101658 systemd[1]: Started session-16.scope - Session 16 of User core.
May 16 00:07:08.221133 sshd[4157]: Connection closed by 10.0.0.1 port 54120
May 16 00:07:08.221551 sshd-session[4155]: pam_unix(sshd:session): session closed for user core
May 16 00:07:08.235222 systemd[1]: sshd@15-10.0.0.57:22-10.0.0.1:54120.service: Deactivated successfully.
May 16 00:07:08.237453 systemd[1]: session-16.scope: Deactivated successfully.
May 16 00:07:08.239233 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit.
May 16 00:07:08.245825 systemd[1]: Started sshd@16-10.0.0.57:22-10.0.0.1:54122.service - OpenSSH per-connection server daemon (10.0.0.1:54122).
May 16 00:07:08.246941 systemd-logind[1492]: Removed session 16.
May 16 00:07:08.289243 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 54122 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:08.290899 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:08.295543 systemd-logind[1492]: New session 17 of user core.
May 16 00:07:08.307670 systemd[1]: Started session-17.scope - Session 17 of User core.
May 16 00:07:08.607492 sshd[4173]: Connection closed by 10.0.0.1 port 54122
May 16 00:07:08.607990 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
May 16 00:07:08.618072 systemd[1]: sshd@16-10.0.0.57:22-10.0.0.1:54122.service: Deactivated successfully.
May 16 00:07:08.620367 systemd[1]: session-17.scope: Deactivated successfully.
May 16 00:07:08.622565 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit.
May 16 00:07:08.634772 systemd[1]: Started sshd@17-10.0.0.57:22-10.0.0.1:54126.service - OpenSSH per-connection server daemon (10.0.0.1:54126).
May 16 00:07:08.635899 systemd-logind[1492]: Removed session 17.
May 16 00:07:08.681869 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 54126 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:08.684033 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:08.689295 systemd-logind[1492]: New session 18 of user core.
May 16 00:07:08.698671 systemd[1]: Started session-18.scope - Session 18 of User core.
May 16 00:07:10.028727 sshd[4188]: Connection closed by 10.0.0.1 port 54126
May 16 00:07:10.029176 sshd-session[4185]: pam_unix(sshd:session): session closed for user core
May 16 00:07:10.040841 systemd[1]: sshd@17-10.0.0.57:22-10.0.0.1:54126.service: Deactivated successfully.
May 16 00:07:10.042990 systemd[1]: session-18.scope: Deactivated successfully.
May 16 00:07:10.045648 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit.
May 16 00:07:10.053123 systemd[1]: Started sshd@18-10.0.0.57:22-10.0.0.1:54130.service - OpenSSH per-connection server daemon (10.0.0.1:54130).
May 16 00:07:10.055745 systemd-logind[1492]: Removed session 18.
May 16 00:07:10.094025 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 54130 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:10.095520 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:10.099763 systemd-logind[1492]: New session 19 of user core.
May 16 00:07:10.106560 systemd[1]: Started session-19.scope - Session 19 of User core.
May 16 00:07:10.361480 sshd[4210]: Connection closed by 10.0.0.1 port 54130
May 16 00:07:10.362669 sshd-session[4207]: pam_unix(sshd:session): session closed for user core
May 16 00:07:10.371886 systemd[1]: sshd@18-10.0.0.57:22-10.0.0.1:54130.service: Deactivated successfully.
May 16 00:07:10.374061 systemd[1]: session-19.scope: Deactivated successfully.
May 16 00:07:10.375010 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit.
May 16 00:07:10.385742 systemd[1]: Started sshd@19-10.0.0.57:22-10.0.0.1:54144.service - OpenSSH per-connection server daemon (10.0.0.1:54144).
May 16 00:07:10.386947 systemd-logind[1492]: Removed session 19.
May 16 00:07:10.423873 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 54144 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:10.425700 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:10.432237 systemd-logind[1492]: New session 20 of user core.
May 16 00:07:10.448587 systemd[1]: Started session-20.scope - Session 20 of User core.
May 16 00:07:10.565008 sshd[4223]: Connection closed by 10.0.0.1 port 54144
May 16 00:07:10.565404 sshd-session[4220]: pam_unix(sshd:session): session closed for user core
May 16 00:07:10.570241 systemd[1]: sshd@19-10.0.0.57:22-10.0.0.1:54144.service: Deactivated successfully.
May 16 00:07:10.572616 systemd[1]: session-20.scope: Deactivated successfully.
May 16 00:07:10.573322 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit.
May 16 00:07:10.574216 systemd-logind[1492]: Removed session 20.
May 16 00:07:15.578691 systemd[1]: Started sshd@20-10.0.0.57:22-10.0.0.1:46816.service - OpenSSH per-connection server daemon (10.0.0.1:46816).
May 16 00:07:15.621681 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 46816 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:15.623153 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:15.627471 systemd-logind[1492]: New session 21 of user core.
May 16 00:07:15.640575 systemd[1]: Started session-21.scope - Session 21 of User core.
May 16 00:07:15.747875 sshd[4240]: Connection closed by 10.0.0.1 port 46816
May 16 00:07:15.748228 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
May 16 00:07:15.752384 systemd[1]: sshd@20-10.0.0.57:22-10.0.0.1:46816.service: Deactivated successfully.
May 16 00:07:15.754316 systemd[1]: session-21.scope: Deactivated successfully.
May 16 00:07:15.754984 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit.
May 16 00:07:15.755770 systemd-logind[1492]: Removed session 21.
May 16 00:07:20.764476 systemd[1]: Started sshd@21-10.0.0.57:22-10.0.0.1:46820.service - OpenSSH per-connection server daemon (10.0.0.1:46820).
May 16 00:07:20.808505 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 46820 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:20.810484 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:20.814733 systemd-logind[1492]: New session 22 of user core.
May 16 00:07:20.824567 systemd[1]: Started session-22.scope - Session 22 of User core.
May 16 00:07:20.938161 sshd[4260]: Connection closed by 10.0.0.1 port 46820
May 16 00:07:20.938534 sshd-session[4258]: pam_unix(sshd:session): session closed for user core
May 16 00:07:20.942890 systemd[1]: sshd@21-10.0.0.57:22-10.0.0.1:46820.service: Deactivated successfully.
May 16 00:07:20.944827 systemd[1]: session-22.scope: Deactivated successfully.
May 16 00:07:20.945495 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit.
May 16 00:07:20.946668 systemd-logind[1492]: Removed session 22.
May 16 00:07:25.950184 systemd[1]: Started sshd@22-10.0.0.57:22-10.0.0.1:33536.service - OpenSSH per-connection server daemon (10.0.0.1:33536).
May 16 00:07:25.992293 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 33536 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:25.993668 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:25.997384 systemd-logind[1492]: New session 23 of user core.
May 16 00:07:26.007564 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 00:07:26.114919 sshd[4276]: Connection closed by 10.0.0.1 port 33536
May 16 00:07:26.115299 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
May 16 00:07:26.118970 systemd[1]: sshd@22-10.0.0.57:22-10.0.0.1:33536.service: Deactivated successfully.
May 16 00:07:26.121295 systemd[1]: session-23.scope: Deactivated successfully.
May 16 00:07:26.122053 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit.
May 16 00:07:26.122959 systemd-logind[1492]: Removed session 23.
May 16 00:07:31.127263 systemd[1]: Started sshd@23-10.0.0.57:22-10.0.0.1:33550.service - OpenSSH per-connection server daemon (10.0.0.1:33550).
May 16 00:07:31.171327 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 33550 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:31.172932 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:31.176973 systemd-logind[1492]: New session 24 of user core.
May 16 00:07:31.186567 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 00:07:31.291721 sshd[4291]: Connection closed by 10.0.0.1 port 33550
May 16 00:07:31.292173 sshd-session[4289]: pam_unix(sshd:session): session closed for user core
May 16 00:07:31.306585 systemd[1]: sshd@23-10.0.0.57:22-10.0.0.1:33550.service: Deactivated successfully.
May 16 00:07:31.308852 systemd[1]: session-24.scope: Deactivated successfully.
May 16 00:07:31.310579 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit.
May 16 00:07:31.318712 systemd[1]: Started sshd@24-10.0.0.57:22-10.0.0.1:33560.service - OpenSSH per-connection server daemon (10.0.0.1:33560).
May 16 00:07:31.319598 systemd-logind[1492]: Removed session 24.
May 16 00:07:31.357663 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 33560 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:31.359107 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:31.364021 systemd-logind[1492]: New session 25 of user core.
May 16 00:07:31.371564 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 00:07:32.705596 containerd[1505]: time="2025-05-16T00:07:32.705516228Z" level=info msg="StopContainer for \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\" with timeout 30 (s)"
May 16 00:07:32.716342 containerd[1505]: time="2025-05-16T00:07:32.716175174Z" level=info msg="Stop container \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\" with signal terminated"
May 16 00:07:32.729370 systemd[1]: cri-containerd-a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6.scope: Deactivated successfully.
May 16 00:07:32.740883 containerd[1505]: time="2025-05-16T00:07:32.740788135Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 00:07:32.742344 containerd[1505]: time="2025-05-16T00:07:32.742311089Z" level=info msg="StopContainer for \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\" with timeout 2 (s)"
May 16 00:07:32.742672 containerd[1505]: time="2025-05-16T00:07:32.742590671Z" level=info msg="Stop container \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\" with signal terminated"
May 16 00:07:32.749414 systemd-networkd[1413]: lxc_health: Link DOWN
May 16 00:07:32.749424 systemd-networkd[1413]: lxc_health: Lost carrier
May 16 00:07:32.761780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6-rootfs.mount: Deactivated successfully.
May 16 00:07:32.766701 systemd[1]: cri-containerd-7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05.scope: Deactivated successfully.
May 16 00:07:32.767083 systemd[1]: cri-containerd-7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05.scope: Consumed 6.968s CPU time, 124.8M memory peak, 240K read from disk, 13.3M written to disk.
May 16 00:07:32.783202 containerd[1505]: time="2025-05-16T00:07:32.783110420Z" level=info msg="shim disconnected" id=a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6 namespace=k8s.io
May 16 00:07:32.783202 containerd[1505]: time="2025-05-16T00:07:32.783186695Z" level=warning msg="cleaning up after shim disconnected" id=a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6 namespace=k8s.io
May 16 00:07:32.783202 containerd[1505]: time="2025-05-16T00:07:32.783199400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:07:32.786355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05-rootfs.mount: Deactivated successfully.
May 16 00:07:32.791494 containerd[1505]: time="2025-05-16T00:07:32.791238903Z" level=info msg="shim disconnected" id=7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05 namespace=k8s.io
May 16 00:07:32.791494 containerd[1505]: time="2025-05-16T00:07:32.791306892Z" level=warning msg="cleaning up after shim disconnected" id=7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05 namespace=k8s.io
May 16 00:07:32.791494 containerd[1505]: time="2025-05-16T00:07:32.791318114Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:07:32.803954 containerd[1505]: time="2025-05-16T00:07:32.803902189Z" level=info msg="StopContainer for \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\" returns successfully"
May 16 00:07:32.808599 containerd[1505]: time="2025-05-16T00:07:32.808535290Z" level=info msg="StopPodSandbox for \"b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52\""
May 16 00:07:32.810614 containerd[1505]: time="2025-05-16T00:07:32.810576600Z" level=info msg="StopContainer for \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\" returns successfully"
May 16 00:07:32.811262 containerd[1505]: time="2025-05-16T00:07:32.811150965Z" level=info msg="StopPodSandbox for \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\""
May 16 00:07:32.817357 containerd[1505]: time="2025-05-16T00:07:32.811193155Z" level=info msg="Container to stop \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:07:32.817357 containerd[1505]: time="2025-05-16T00:07:32.817342194Z" level=info msg="Container to stop \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:07:32.817357 containerd[1505]: time="2025-05-16T00:07:32.817356362Z" level=info msg="Container to stop \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:07:32.817587 containerd[1505]: time="2025-05-16T00:07:32.817369477Z" level=info msg="Container to stop \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:07:32.817587 containerd[1505]: time="2025-05-16T00:07:32.817382232Z" level=info msg="Container to stop \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:07:32.820496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350-shm.mount: Deactivated successfully.
May 16 00:07:32.825945 containerd[1505]: time="2025-05-16T00:07:32.808585545Z" level=info msg="Container to stop \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:07:32.828692 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52-shm.mount: Deactivated successfully.
May 16 00:07:32.829776 systemd[1]: cri-containerd-a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350.scope: Deactivated successfully.
May 16 00:07:32.836282 systemd[1]: cri-containerd-b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52.scope: Deactivated successfully.
May 16 00:07:32.851342 containerd[1505]: time="2025-05-16T00:07:32.851274569Z" level=info msg="shim disconnected" id=a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350 namespace=k8s.io
May 16 00:07:32.851342 containerd[1505]: time="2025-05-16T00:07:32.851331378Z" level=warning msg="cleaning up after shim disconnected" id=a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350 namespace=k8s.io
May 16 00:07:32.851342 containerd[1505]: time="2025-05-16T00:07:32.851342288Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:07:32.868482 containerd[1505]: time="2025-05-16T00:07:32.866387804Z" level=info msg="shim disconnected" id=b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52 namespace=k8s.io
May 16 00:07:32.868482 containerd[1505]: time="2025-05-16T00:07:32.866462076Z" level=warning msg="cleaning up after shim disconnected" id=b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52 namespace=k8s.io
May 16 00:07:32.868482 containerd[1505]: time="2025-05-16T00:07:32.866473047Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:07:32.875297 containerd[1505]: time="2025-05-16T00:07:32.875229136Z" level=info msg="TearDown network for sandbox \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" successfully"
May 16 00:07:32.875297 containerd[1505]: time="2025-05-16T00:07:32.875263451Z" level=info msg="StopPodSandbox for \"a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350\" returns successfully"
May 16 00:07:32.886183 containerd[1505]: time="2025-05-16T00:07:32.886122818Z" level=info msg="TearDown network for sandbox \"b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52\" successfully"
May 16 00:07:32.886183 containerd[1505]: time="2025-05-16T00:07:32.886169487Z" level=info msg="StopPodSandbox for \"b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52\" returns successfully"
May 16 00:07:33.011022 kubelet[2617]: I0516 00:07:33.010955 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-host-proc-sys-kernel\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.011720 kubelet[2617]: I0516 00:07:33.011683 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-xtables-lock\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.011720 kubelet[2617]: I0516 00:07:33.011718 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-host-proc-sys-net\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.011720 kubelet[2617]: I0516 00:07:33.011112 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:07:33.012003 kubelet[2617]: I0516 00:07:33.011740 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-run\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012003 kubelet[2617]: I0516 00:07:33.011774 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdc88c46-0efa-4255-bf67-afa530d0e584-hubble-tls\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012003 kubelet[2617]: I0516 00:07:33.011787 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:07:33.012003 kubelet[2617]: I0516 00:07:33.011783 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:07:33.012003 kubelet[2617]: I0516 00:07:33.011798 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-config-path\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012003 kubelet[2617]: I0516 00:07:33.011861 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-bpf-maps\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012215 kubelet[2617]: I0516 00:07:33.011900 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cni-path\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012215 kubelet[2617]: I0516 00:07:33.011933 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f-cilium-config-path\") pod \"d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f\" (UID: \"d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f\") "
May 16 00:07:33.012215 kubelet[2617]: I0516 00:07:33.011958 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrzb8\" (UniqueName: \"kubernetes.io/projected/d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f-kube-api-access-nrzb8\") pod \"d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f\" (UID: \"d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f\") "
May 16 00:07:33.012215 kubelet[2617]: I0516 00:07:33.011980 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdc88c46-0efa-4255-bf67-afa530d0e584-clustermesh-secrets\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012215 kubelet[2617]: I0516 00:07:33.011999 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-hostproc\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012215 kubelet[2617]: I0516 00:07:33.012017 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-etc-cni-netd\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012408 kubelet[2617]: I0516 00:07:33.012039 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8wj9\" (UniqueName: \"kubernetes.io/projected/fdc88c46-0efa-4255-bf67-afa530d0e584-kube-api-access-j8wj9\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012408 kubelet[2617]: I0516 00:07:33.012059 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-cgroup\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012408 kubelet[2617]: I0516 00:07:33.012077 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-lib-modules\") pod \"fdc88c46-0efa-4255-bf67-afa530d0e584\" (UID: \"fdc88c46-0efa-4255-bf67-afa530d0e584\") "
May 16 00:07:33.012408 kubelet[2617]: I0516 00:07:33.012115 2617 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 16 00:07:33.012408 kubelet[2617]: I0516 00:07:33.012128 2617 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 16 00:07:33.012408 kubelet[2617]: I0516 00:07:33.012142 2617 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 16 00:07:33.012627 kubelet[2617]: I0516 00:07:33.011815 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:07:33.012627 kubelet[2617]: I0516 00:07:33.012171 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:07:33.012627 kubelet[2617]: I0516 00:07:33.012202 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:07:33.012627 kubelet[2617]: I0516 00:07:33.012223 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cni-path" (OuterVolumeSpecName: "cni-path") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:07:33.012627 kubelet[2617]: I0516 00:07:33.012393 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-hostproc" (OuterVolumeSpecName: "hostproc") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:07:33.015252 kubelet[2617]: I0516 00:07:33.015200 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:07:33.015473 kubelet[2617]: I0516 00:07:33.015416 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "cilium-cgroup".
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:07:33.016932 kubelet[2617]: I0516 00:07:33.016884 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:07:33.017642 kubelet[2617]: I0516 00:07:33.017617 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f-kube-api-access-nrzb8" (OuterVolumeSpecName: "kube-api-access-nrzb8") pod "d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f" (UID: "d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f"). InnerVolumeSpecName "kube-api-access-nrzb8". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:07:33.017758 kubelet[2617]: I0516 00:07:33.017653 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f" (UID: "d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:07:33.017834 kubelet[2617]: I0516 00:07:33.017785 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc88c46-0efa-4255-bf67-afa530d0e584-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:07:33.018268 kubelet[2617]: I0516 00:07:33.018231 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc88c46-0efa-4255-bf67-afa530d0e584-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:07:33.019396 kubelet[2617]: I0516 00:07:33.019360 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc88c46-0efa-4255-bf67-afa530d0e584-kube-api-access-j8wj9" (OuterVolumeSpecName: "kube-api-access-j8wj9") pod "fdc88c46-0efa-4255-bf67-afa530d0e584" (UID: "fdc88c46-0efa-4255-bf67-afa530d0e584"). InnerVolumeSpecName "kube-api-access-j8wj9". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:07:33.112771 kubelet[2617]: I0516 00:07:33.112708 2617 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdc88c46-0efa-4255-bf67-afa530d0e584-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.112771 kubelet[2617]: I0516 00:07:33.112749 2617 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.112771 kubelet[2617]: I0516 00:07:33.112761 2617 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.112771 kubelet[2617]: I0516 00:07:33.112769 2617 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.112771 kubelet[2617]: I0516 00:07:33.112778 2617 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.112771 kubelet[2617]: I0516 00:07:33.112786 2617 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrzb8\" (UniqueName: \"kubernetes.io/projected/d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f-kube-api-access-nrzb8\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.112771 kubelet[2617]: I0516 00:07:33.112795 2617 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdc88c46-0efa-4255-bf67-afa530d0e584-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.113114 kubelet[2617]: I0516 00:07:33.112803 2617 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.113114 kubelet[2617]: I0516 00:07:33.112812 2617 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.113114 kubelet[2617]: I0516 00:07:33.112820 2617 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8wj9\" (UniqueName: \"kubernetes.io/projected/fdc88c46-0efa-4255-bf67-afa530d0e584-kube-api-access-j8wj9\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.113114 kubelet[2617]: I0516 00:07:33.112828 2617 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-cgroup\") on node 
\"localhost\" DevicePath \"\"" May 16 00:07:33.113114 kubelet[2617]: I0516 00:07:33.112836 2617 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.113114 kubelet[2617]: I0516 00:07:33.112844 2617 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdc88c46-0efa-4255-bf67-afa530d0e584-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:07:33.248551 systemd[1]: Removed slice kubepods-besteffort-podd4d3b9d9_ae56_4554_bac7_7dffc7d59d5f.slice - libcontainer container kubepods-besteffort-podd4d3b9d9_ae56_4554_bac7_7dffc7d59d5f.slice. May 16 00:07:33.249798 systemd[1]: Removed slice kubepods-burstable-podfdc88c46_0efa_4255_bf67_afa530d0e584.slice - libcontainer container kubepods-burstable-podfdc88c46_0efa_4255_bf67_afa530d0e584.slice. May 16 00:07:33.249883 systemd[1]: kubepods-burstable-podfdc88c46_0efa_4255_bf67_afa530d0e584.slice: Consumed 7.074s CPU time, 125.1M memory peak, 260K read from disk, 13.3M written to disk. 
May 16 00:07:33.445967 kubelet[2617]: I0516 00:07:33.445110 2617 scope.go:117] "RemoveContainer" containerID="a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6"
May 16 00:07:33.452209 containerd[1505]: time="2025-05-16T00:07:33.452163420Z" level=info msg="RemoveContainer for \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\""
May 16 00:07:33.459339 containerd[1505]: time="2025-05-16T00:07:33.459298965Z" level=info msg="RemoveContainer for \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\" returns successfully"
May 16 00:07:33.459729 kubelet[2617]: I0516 00:07:33.459640 2617 scope.go:117] "RemoveContainer" containerID="a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6"
May 16 00:07:33.459993 containerd[1505]: time="2025-05-16T00:07:33.459928153Z" level=error msg="ContainerStatus for \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\": not found"
May 16 00:07:33.468617 kubelet[2617]: E0516 00:07:33.468581 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\": not found" containerID="a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6"
May 16 00:07:33.468743 kubelet[2617]: I0516 00:07:33.468623 2617 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6"} err="failed to get container status \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4506b79d6772c2feeed354d6a1627a14a52a3c79d73fd8ee26c738ba50a64e6\": not found"
May 16 00:07:33.468790 kubelet[2617]: I0516 00:07:33.468706 2617 scope.go:117] "RemoveContainer" containerID="7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05"
May 16 00:07:33.470123 containerd[1505]: time="2025-05-16T00:07:33.470081898Z" level=info msg="RemoveContainer for \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\""
May 16 00:07:33.477565 containerd[1505]: time="2025-05-16T00:07:33.477511914Z" level=info msg="RemoveContainer for \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\" returns successfully"
May 16 00:07:33.477790 kubelet[2617]: I0516 00:07:33.477762 2617 scope.go:117] "RemoveContainer" containerID="ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c"
May 16 00:07:33.478930 containerd[1505]: time="2025-05-16T00:07:33.478872697Z" level=info msg="RemoveContainer for \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\""
May 16 00:07:33.482777 containerd[1505]: time="2025-05-16T00:07:33.482717212Z" level=info msg="RemoveContainer for \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\" returns successfully"
May 16 00:07:33.482987 kubelet[2617]: I0516 00:07:33.482954 2617 scope.go:117] "RemoveContainer" containerID="9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a"
May 16 00:07:33.484506 containerd[1505]: time="2025-05-16T00:07:33.484464842Z" level=info msg="RemoveContainer for \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\""
May 16 00:07:33.490294 containerd[1505]: time="2025-05-16T00:07:33.490239172Z" level=info msg="RemoveContainer for \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\" returns successfully"
May 16 00:07:33.490559 kubelet[2617]: I0516 00:07:33.490534 2617 scope.go:117] "RemoveContainer" containerID="f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63"
May 16 00:07:33.491669 containerd[1505]: time="2025-05-16T00:07:33.491640082Z" level=info msg="RemoveContainer for \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\""
May 16 00:07:33.495323 containerd[1505]: time="2025-05-16T00:07:33.495276349Z" level=info msg="RemoveContainer for \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\" returns successfully"
May 16 00:07:33.495652 kubelet[2617]: I0516 00:07:33.495604 2617 scope.go:117] "RemoveContainer" containerID="09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1"
May 16 00:07:33.496874 containerd[1505]: time="2025-05-16T00:07:33.496817566Z" level=info msg="RemoveContainer for \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\""
May 16 00:07:33.565571 containerd[1505]: time="2025-05-16T00:07:33.565520632Z" level=info msg="RemoveContainer for \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\" returns successfully"
May 16 00:07:33.565804 kubelet[2617]: I0516 00:07:33.565771 2617 scope.go:117] "RemoveContainer" containerID="7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05"
May 16 00:07:33.566062 containerd[1505]: time="2025-05-16T00:07:33.566023209Z" level=error msg="ContainerStatus for \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\": not found"
May 16 00:07:33.566160 kubelet[2617]: E0516 00:07:33.566141 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\": not found" containerID="7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05"
May 16 00:07:33.566204 kubelet[2617]: I0516 00:07:33.566169 2617 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05"} err="failed to get container status \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d8125a934a6bf485a14d7d33e1965931da8809914a59c9f739775a12e26ce05\": not found"
May 16 00:07:33.566204 kubelet[2617]: I0516 00:07:33.566195 2617 scope.go:117] "RemoveContainer" containerID="ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c"
May 16 00:07:33.566419 containerd[1505]: time="2025-05-16T00:07:33.566383715Z" level=error msg="ContainerStatus for \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\": not found"
May 16 00:07:33.566590 kubelet[2617]: E0516 00:07:33.566563 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\": not found" containerID="ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c"
May 16 00:07:33.566631 kubelet[2617]: I0516 00:07:33.566602 2617 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c"} err="failed to get container status \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad0620b46cfc48c81cd58210b517bff838c55cbf56337eb1385c7587d798723c\": not found"
May 16 00:07:33.566664 kubelet[2617]: I0516 00:07:33.566632 2617 scope.go:117] "RemoveContainer" containerID="9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a"
May 16 00:07:33.566857 containerd[1505]: time="2025-05-16T00:07:33.566829845Z" level=error msg="ContainerStatus for \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\": not found"
May 16 00:07:33.566965 kubelet[2617]: E0516 00:07:33.566937 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\": not found" containerID="9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a"
May 16 00:07:33.566965 kubelet[2617]: I0516 00:07:33.566961 2617 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a"} err="failed to get container status \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c32d3281d5181ae202393f58d9641beeec3592606fa06f1af096219477d7d9a\": not found"
May 16 00:07:33.566965 kubelet[2617]: I0516 00:07:33.566977 2617 scope.go:117] "RemoveContainer" containerID="f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63"
May 16 00:07:33.567164 containerd[1505]: time="2025-05-16T00:07:33.567123014Z" level=error msg="ContainerStatus for \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\": not found"
May 16 00:07:33.567232 kubelet[2617]: E0516 00:07:33.567212 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\": not found" containerID="f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63"
May 16 00:07:33.567267 kubelet[2617]: I0516 00:07:33.567239 2617 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63"} err="failed to get container status \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5ecdd34a4d4fba00a7eb5ad43769089508215b9d4a7159db007d538620e6f63\": not found"
May 16 00:07:33.567267 kubelet[2617]: I0516 00:07:33.567254 2617 scope.go:117] "RemoveContainer" containerID="09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1"
May 16 00:07:33.567409 containerd[1505]: time="2025-05-16T00:07:33.567383401Z" level=error msg="ContainerStatus for \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\": not found"
May 16 00:07:33.567488 kubelet[2617]: E0516 00:07:33.567470 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\": not found" containerID="09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1"
May 16 00:07:33.567522 kubelet[2617]: I0516 00:07:33.567490 2617 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1"} err="failed to get container status \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"09c56db0ece2dc043453c4861f466402123fb9a918b8e5188c0e17e1659d28b1\": not found"
May 16 00:07:33.716020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b296a492caea28f008a4155ff283df3a88a21ef644f47f77d7a043c52f9a6f52-rootfs.mount: Deactivated successfully.
May 16 00:07:33.716141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3b2f984a6e99198bd00222155d090d4f45721e504ecd103243c231f5088f350-rootfs.mount: Deactivated successfully.
May 16 00:07:33.716229 systemd[1]: var-lib-kubelet-pods-d4d3b9d9\x2dae56\x2d4554\x2dbac7\x2d7dffc7d59d5f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnrzb8.mount: Deactivated successfully.
May 16 00:07:33.716334 systemd[1]: var-lib-kubelet-pods-fdc88c46\x2d0efa\x2d4255\x2dbf67\x2dafa530d0e584-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj8wj9.mount: Deactivated successfully.
May 16 00:07:33.716605 systemd[1]: var-lib-kubelet-pods-fdc88c46\x2d0efa\x2d4255\x2dbf67\x2dafa530d0e584-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 16 00:07:33.716692 systemd[1]: var-lib-kubelet-pods-fdc88c46\x2d0efa\x2d4255\x2dbf67\x2dafa530d0e584-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 16 00:07:34.675017 sshd[4307]: Connection closed by 10.0.0.1 port 33560
May 16 00:07:34.675557 sshd-session[4304]: pam_unix(sshd:session): session closed for user core
May 16 00:07:34.686395 systemd[1]: sshd@24-10.0.0.57:22-10.0.0.1:33560.service: Deactivated successfully.
May 16 00:07:34.688384 systemd[1]: session-25.scope: Deactivated successfully.
May 16 00:07:34.689993 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit.
May 16 00:07:34.699929 systemd[1]: Started sshd@25-10.0.0.57:22-10.0.0.1:55410.service - OpenSSH per-connection server daemon (10.0.0.1:55410).
May 16 00:07:34.700913 systemd-logind[1492]: Removed session 25.
May 16 00:07:34.737511 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 55410 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:34.739055 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:34.744529 systemd-logind[1492]: New session 26 of user core.
May 16 00:07:34.759715 systemd[1]: Started session-26.scope - Session 26 of User core.
May 16 00:07:35.241772 kubelet[2617]: I0516 00:07:35.241721 2617 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f" path="/var/lib/kubelet/pods/d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f/volumes"
May 16 00:07:35.242302 kubelet[2617]: I0516 00:07:35.242274 2617 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc88c46-0efa-4255-bf67-afa530d0e584" path="/var/lib/kubelet/pods/fdc88c46-0efa-4255-bf67-afa530d0e584/volumes"
May 16 00:07:35.594832 sshd[4474]: Connection closed by 10.0.0.1 port 55410
May 16 00:07:35.595080 sshd-session[4471]: pam_unix(sshd:session): session closed for user core
May 16 00:07:35.607289 systemd[1]: sshd@25-10.0.0.57:22-10.0.0.1:55410.service: Deactivated successfully.
May 16 00:07:35.609208 systemd[1]: session-26.scope: Deactivated successfully.
May 16 00:07:35.610718 systemd-logind[1492]: Session 26 logged out. Waiting for processes to exit.
May 16 00:07:35.618685 systemd[1]: Started sshd@26-10.0.0.57:22-10.0.0.1:55422.service - OpenSSH per-connection server daemon (10.0.0.1:55422).
May 16 00:07:35.619628 systemd-logind[1492]: Removed session 26.
May 16 00:07:35.660190 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 55422 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:35.660031 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:35.666733 systemd-logind[1492]: New session 27 of user core.
May 16 00:07:35.671970 kubelet[2617]: E0516 00:07:35.671911 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdc88c46-0efa-4255-bf67-afa530d0e584" containerName="apply-sysctl-overwrites"
May 16 00:07:35.671970 kubelet[2617]: E0516 00:07:35.671944 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdc88c46-0efa-4255-bf67-afa530d0e584" containerName="mount-bpf-fs"
May 16 00:07:35.671970 kubelet[2617]: E0516 00:07:35.671953 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f" containerName="cilium-operator"
May 16 00:07:35.671970 kubelet[2617]: E0516 00:07:35.671959 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdc88c46-0efa-4255-bf67-afa530d0e584" containerName="clean-cilium-state"
May 16 00:07:35.671970 kubelet[2617]: E0516 00:07:35.671966 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdc88c46-0efa-4255-bf67-afa530d0e584" containerName="mount-cgroup"
May 16 00:07:35.671970 kubelet[2617]: E0516 00:07:35.671972 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdc88c46-0efa-4255-bf67-afa530d0e584" containerName="cilium-agent"
May 16 00:07:35.672227 kubelet[2617]: I0516 00:07:35.672008 2617 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc88c46-0efa-4255-bf67-afa530d0e584" containerName="cilium-agent"
May 16 00:07:35.672227 kubelet[2617]: I0516 00:07:35.672015 2617 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d3b9d9-ae56-4554-bac7-7dffc7d59d5f" containerName="cilium-operator"
May 16 00:07:35.677654 systemd[1]: Started session-27.scope - Session 27 of User core.
May 16 00:07:35.690403 systemd[1]: Created slice kubepods-burstable-poddf89dded_835c_4d56_8490_f4d5dae5ef69.slice - libcontainer container kubepods-burstable-poddf89dded_835c_4d56_8490_f4d5dae5ef69.slice.
May 16 00:07:35.738642 sshd[4488]: Connection closed by 10.0.0.1 port 55422
May 16 00:07:35.739011 sshd-session[4485]: pam_unix(sshd:session): session closed for user core
May 16 00:07:35.751577 systemd[1]: sshd@26-10.0.0.57:22-10.0.0.1:55422.service: Deactivated successfully.
May 16 00:07:35.753771 systemd[1]: session-27.scope: Deactivated successfully.
May 16 00:07:35.755480 systemd-logind[1492]: Session 27 logged out. Waiting for processes to exit.
May 16 00:07:35.761679 systemd[1]: Started sshd@27-10.0.0.57:22-10.0.0.1:55428.service - OpenSSH per-connection server daemon (10.0.0.1:55428).
May 16 00:07:35.762464 systemd-logind[1492]: Removed session 27.
May 16 00:07:35.799943 sshd[4494]: Accepted publickey for core from 10.0.0.1 port 55428 ssh2: RSA SHA256:piiwWI58B4gY/CseqZU4sdTMx+nAU1M4z6TZx2ovOQo
May 16 00:07:35.801277 sshd-session[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:07:35.806121 systemd-logind[1492]: New session 28 of user core.
May 16 00:07:35.817584 systemd[1]: Started session-28.scope - Session 28 of User core.
May 16 00:07:35.827772 kubelet[2617]: I0516 00:07:35.827733 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df89dded-835c-4d56-8490-f4d5dae5ef69-etc-cni-netd\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.827880 kubelet[2617]: I0516 00:07:35.827783 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df89dded-835c-4d56-8490-f4d5dae5ef69-hubble-tls\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.827880 kubelet[2617]: I0516 00:07:35.827819 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk5x8\" (UniqueName: \"kubernetes.io/projected/df89dded-835c-4d56-8490-f4d5dae5ef69-kube-api-access-wk5x8\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.827880 kubelet[2617]: I0516 00:07:35.827839 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df89dded-835c-4d56-8490-f4d5dae5ef69-cilium-run\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.827880 kubelet[2617]: I0516 00:07:35.827856 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df89dded-835c-4d56-8490-f4d5dae5ef69-bpf-maps\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.827992 kubelet[2617]: I0516 00:07:35.827887 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df89dded-835c-4d56-8490-f4d5dae5ef69-hostproc\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.827992 kubelet[2617]: I0516 00:07:35.827915 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df89dded-835c-4d56-8490-f4d5dae5ef69-cni-path\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.827992 kubelet[2617]: I0516 00:07:35.827933 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df89dded-835c-4d56-8490-f4d5dae5ef69-host-proc-sys-kernel\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.827992 kubelet[2617]: I0516 00:07:35.827966 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df89dded-835c-4d56-8490-f4d5dae5ef69-cilium-cgroup\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.827992 kubelet[2617]: I0516 00:07:35.827983 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df89dded-835c-4d56-8490-f4d5dae5ef69-host-proc-sys-net\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.828108 kubelet[2617]: I0516 00:07:35.828005 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/df89dded-835c-4d56-8490-f4d5dae5ef69-cilium-ipsec-secrets\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.828108 kubelet[2617]: I0516 00:07:35.828087 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df89dded-835c-4d56-8490-f4d5dae5ef69-lib-modules\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.828162 kubelet[2617]: I0516 00:07:35.828139 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df89dded-835c-4d56-8490-f4d5dae5ef69-clustermesh-secrets\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.828188 kubelet[2617]: I0516 00:07:35.828175 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df89dded-835c-4d56-8490-f4d5dae5ef69-xtables-lock\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.828216 kubelet[2617]: I0516 00:07:35.828209 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df89dded-835c-4d56-8490-f4d5dae5ef69-cilium-config-path\") pod \"cilium-dpht4\" (UID: \"df89dded-835c-4d56-8490-f4d5dae5ef69\") " pod="kube-system/cilium-dpht4"
May 16 00:07:35.994188 kubelet[2617]: E0516 00:07:35.994140 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:35.994775 containerd[1505]: time="2025-05-16T00:07:35.994687544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dpht4,Uid:df89dded-835c-4d56-8490-f4d5dae5ef69,Namespace:kube-system,Attempt:0,}"
May 16 00:07:36.015609 containerd[1505]: time="2025-05-16T00:07:36.015495228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:07:36.015609 containerd[1505]: time="2025-05-16T00:07:36.015583065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:07:36.015609 containerd[1505]: time="2025-05-16T00:07:36.015597543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:07:36.015777 containerd[1505]: time="2025-05-16T00:07:36.015679408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:07:36.041657 systemd[1]: Started cri-containerd-ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f.scope - libcontainer container ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f.
May 16 00:07:36.065626 containerd[1505]: time="2025-05-16T00:07:36.065581830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dpht4,Uid:df89dded-835c-4d56-8490-f4d5dae5ef69,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\""
May 16 00:07:36.066211 kubelet[2617]: E0516 00:07:36.066191 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:36.073608 containerd[1505]: time="2025-05-16T00:07:36.073563140Z" level=info msg="CreateContainer within sandbox \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 00:07:36.086068 containerd[1505]: time="2025-05-16T00:07:36.086032381Z" level=info msg="CreateContainer within sandbox \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"45929f8a6815080b61552eb9cff93569becaed2316042e8d7cbb1d2bea863353\""
May 16 00:07:36.086466 containerd[1505]: time="2025-05-16T00:07:36.086432462Z" level=info msg="StartContainer for \"45929f8a6815080b61552eb9cff93569becaed2316042e8d7cbb1d2bea863353\""
May 16 00:07:36.117689 systemd[1]: Started cri-containerd-45929f8a6815080b61552eb9cff93569becaed2316042e8d7cbb1d2bea863353.scope - libcontainer container 45929f8a6815080b61552eb9cff93569becaed2316042e8d7cbb1d2bea863353.
May 16 00:07:36.146759 containerd[1505]: time="2025-05-16T00:07:36.146707486Z" level=info msg="StartContainer for \"45929f8a6815080b61552eb9cff93569becaed2316042e8d7cbb1d2bea863353\" returns successfully"
May 16 00:07:36.156046 systemd[1]: cri-containerd-45929f8a6815080b61552eb9cff93569becaed2316042e8d7cbb1d2bea863353.scope: Deactivated successfully.
May 16 00:07:36.195123 containerd[1505]: time="2025-05-16T00:07:36.195027195Z" level=info msg="shim disconnected" id=45929f8a6815080b61552eb9cff93569becaed2316042e8d7cbb1d2bea863353 namespace=k8s.io
May 16 00:07:36.195123 containerd[1505]: time="2025-05-16T00:07:36.195098220Z" level=warning msg="cleaning up after shim disconnected" id=45929f8a6815080b61552eb9cff93569becaed2316042e8d7cbb1d2bea863353 namespace=k8s.io
May 16 00:07:36.195123 containerd[1505]: time="2025-05-16T00:07:36.195108981Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:07:36.299590 kubelet[2617]: E0516 00:07:36.299454 2617 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 00:07:36.461725 kubelet[2617]: E0516 00:07:36.461293 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:36.473007 containerd[1505]: time="2025-05-16T00:07:36.472934026Z" level=info msg="CreateContainer within sandbox \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 00:07:36.487201 containerd[1505]: time="2025-05-16T00:07:36.487136566Z" level=info msg="CreateContainer within sandbox \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4a4f20e54d8d32b041c8e79a4ee4189ce7e1d800fd58013c59faae60aad5c32c\""
May 16 00:07:36.487875 containerd[1505]: time="2025-05-16T00:07:36.487693305Z" level=info msg="StartContainer for \"4a4f20e54d8d32b041c8e79a4ee4189ce7e1d800fd58013c59faae60aad5c32c\""
May 16 00:07:36.515686 systemd[1]: Started cri-containerd-4a4f20e54d8d32b041c8e79a4ee4189ce7e1d800fd58013c59faae60aad5c32c.scope - libcontainer container 4a4f20e54d8d32b041c8e79a4ee4189ce7e1d800fd58013c59faae60aad5c32c.
May 16 00:07:36.542638 containerd[1505]: time="2025-05-16T00:07:36.542587617Z" level=info msg="StartContainer for \"4a4f20e54d8d32b041c8e79a4ee4189ce7e1d800fd58013c59faae60aad5c32c\" returns successfully"
May 16 00:07:36.549242 systemd[1]: cri-containerd-4a4f20e54d8d32b041c8e79a4ee4189ce7e1d800fd58013c59faae60aad5c32c.scope: Deactivated successfully.
May 16 00:07:36.573145 containerd[1505]: time="2025-05-16T00:07:36.573018643Z" level=info msg="shim disconnected" id=4a4f20e54d8d32b041c8e79a4ee4189ce7e1d800fd58013c59faae60aad5c32c namespace=k8s.io
May 16 00:07:36.573145 containerd[1505]: time="2025-05-16T00:07:36.573086201Z" level=warning msg="cleaning up after shim disconnected" id=4a4f20e54d8d32b041c8e79a4ee4189ce7e1d800fd58013c59faae60aad5c32c namespace=k8s.io
May 16 00:07:36.573145 containerd[1505]: time="2025-05-16T00:07:36.573096010Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:07:37.464667 kubelet[2617]: E0516 00:07:37.464637 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:37.466218 containerd[1505]: time="2025-05-16T00:07:37.466182492Z" level=info msg="CreateContainer within sandbox \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 00:07:37.800531 containerd[1505]: time="2025-05-16T00:07:37.800393413Z" level=info msg="CreateContainer within sandbox \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1c8f98afdf5737e6b9283b56b5d1a2bae69b9652791b4117e546a58e790361ae\""
May 16 00:07:37.801169 containerd[1505]: time="2025-05-16T00:07:37.801122741Z" level=info msg="StartContainer for \"1c8f98afdf5737e6b9283b56b5d1a2bae69b9652791b4117e546a58e790361ae\""
May 16 00:07:37.833604 systemd[1]: Started cri-containerd-1c8f98afdf5737e6b9283b56b5d1a2bae69b9652791b4117e546a58e790361ae.scope - libcontainer container 1c8f98afdf5737e6b9283b56b5d1a2bae69b9652791b4117e546a58e790361ae.
May 16 00:07:37.887639 systemd[1]: cri-containerd-1c8f98afdf5737e6b9283b56b5d1a2bae69b9652791b4117e546a58e790361ae.scope: Deactivated successfully.
May 16 00:07:37.898562 containerd[1505]: time="2025-05-16T00:07:37.898489521Z" level=info msg="StartContainer for \"1c8f98afdf5737e6b9283b56b5d1a2bae69b9652791b4117e546a58e790361ae\" returns successfully"
May 16 00:07:37.934158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c8f98afdf5737e6b9283b56b5d1a2bae69b9652791b4117e546a58e790361ae-rootfs.mount: Deactivated successfully.
May 16 00:07:38.021095 containerd[1505]: time="2025-05-16T00:07:38.021002241Z" level=info msg="shim disconnected" id=1c8f98afdf5737e6b9283b56b5d1a2bae69b9652791b4117e546a58e790361ae namespace=k8s.io
May 16 00:07:38.021095 containerd[1505]: time="2025-05-16T00:07:38.021063086Z" level=warning msg="cleaning up after shim disconnected" id=1c8f98afdf5737e6b9283b56b5d1a2bae69b9652791b4117e546a58e790361ae namespace=k8s.io
May 16 00:07:38.021095 containerd[1505]: time="2025-05-16T00:07:38.021073687Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:07:38.033051 containerd[1505]: time="2025-05-16T00:07:38.032995420Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:07:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 16 00:07:38.467966 kubelet[2617]: E0516 00:07:38.467936 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:38.469726 containerd[1505]: time="2025-05-16T00:07:38.469676787Z" level=info msg="CreateContainer within sandbox \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 00:07:39.207245 containerd[1505]: time="2025-05-16T00:07:39.207180898Z" level=info msg="CreateContainer within sandbox \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"095625e35468357b288cf514652ff052c69fed5d2f79ec5bd58ddb67d68985fb\""
May 16 00:07:39.207957 containerd[1505]: time="2025-05-16T00:07:39.207848518Z" level=info msg="StartContainer for \"095625e35468357b288cf514652ff052c69fed5d2f79ec5bd58ddb67d68985fb\""
May 16 00:07:39.239596 systemd[1]: Started cri-containerd-095625e35468357b288cf514652ff052c69fed5d2f79ec5bd58ddb67d68985fb.scope - libcontainer container 095625e35468357b288cf514652ff052c69fed5d2f79ec5bd58ddb67d68985fb.
May 16 00:07:39.240728 kubelet[2617]: E0516 00:07:39.240706 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:39.263224 systemd[1]: cri-containerd-095625e35468357b288cf514652ff052c69fed5d2f79ec5bd58ddb67d68985fb.scope: Deactivated successfully.
May 16 00:07:39.389506 containerd[1505]: time="2025-05-16T00:07:39.388067243Z" level=info msg="StartContainer for \"095625e35468357b288cf514652ff052c69fed5d2f79ec5bd58ddb67d68985fb\" returns successfully"
May 16 00:07:39.405614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-095625e35468357b288cf514652ff052c69fed5d2f79ec5bd58ddb67d68985fb-rootfs.mount: Deactivated successfully.
May 16 00:07:39.472421 kubelet[2617]: E0516 00:07:39.471945 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:39.567757 containerd[1505]: time="2025-05-16T00:07:39.567668414Z" level=info msg="shim disconnected" id=095625e35468357b288cf514652ff052c69fed5d2f79ec5bd58ddb67d68985fb namespace=k8s.io
May 16 00:07:39.567757 containerd[1505]: time="2025-05-16T00:07:39.567752724Z" level=warning msg="cleaning up after shim disconnected" id=095625e35468357b288cf514652ff052c69fed5d2f79ec5bd58ddb67d68985fb namespace=k8s.io
May 16 00:07:39.567757 containerd[1505]: time="2025-05-16T00:07:39.567762593Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:07:40.475705 kubelet[2617]: E0516 00:07:40.475660 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:40.480337 containerd[1505]: time="2025-05-16T00:07:40.477864867Z" level=info msg="CreateContainer within sandbox \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 00:07:40.565618 containerd[1505]: time="2025-05-16T00:07:40.565555379Z" level=info msg="CreateContainer within sandbox \"ef049cc4dbd1776f387f70a24fb8bb4d22c3218a021cd64688c5e26eff91ff3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad5922a129f39e8fc722cd83bfc52ce7951ecf3a0530a7a2807de66d61704391\""
May 16 00:07:40.566284 containerd[1505]: time="2025-05-16T00:07:40.566195055Z" level=info msg="StartContainer for \"ad5922a129f39e8fc722cd83bfc52ce7951ecf3a0530a7a2807de66d61704391\""
May 16 00:07:40.600692 systemd[1]: Started cri-containerd-ad5922a129f39e8fc722cd83bfc52ce7951ecf3a0530a7a2807de66d61704391.scope - libcontainer container ad5922a129f39e8fc722cd83bfc52ce7951ecf3a0530a7a2807de66d61704391.
May 16 00:07:40.750997 containerd[1505]: time="2025-05-16T00:07:40.750871628Z" level=info msg="StartContainer for \"ad5922a129f39e8fc722cd83bfc52ce7951ecf3a0530a7a2807de66d61704391\" returns successfully"
May 16 00:07:41.095471 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 16 00:07:41.240312 kubelet[2617]: E0516 00:07:41.240278 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:41.480379 kubelet[2617]: E0516 00:07:41.480344 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:41.495970 kubelet[2617]: I0516 00:07:41.495801 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dpht4" podStartSLOduration=6.495781855 podStartE2EDuration="6.495781855s" podCreationTimestamp="2025-05-16 00:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:07:41.495185301 +0000 UTC m=+90.344355235" watchObservedRunningTime="2025-05-16 00:07:41.495781855 +0000 UTC m=+90.344951799"
May 16 00:07:42.482166 kubelet[2617]: E0516 00:07:42.482132 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:44.214960 systemd-networkd[1413]: lxc_health: Link UP
May 16 00:07:44.229211 systemd-networkd[1413]: lxc_health: Gained carrier
May 16 00:07:44.240700 kubelet[2617]: E0516 00:07:44.239743 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:45.995512 kubelet[2617]: E0516 00:07:45.995465 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:46.004626 systemd-networkd[1413]: lxc_health: Gained IPv6LL
May 16 00:07:46.299432 systemd[1]: run-containerd-runc-k8s.io-ad5922a129f39e8fc722cd83bfc52ce7951ecf3a0530a7a2807de66d61704391-runc.t3euoM.mount: Deactivated successfully.
May 16 00:07:46.489010 kubelet[2617]: E0516 00:07:46.488981 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:47.491081 kubelet[2617]: E0516 00:07:47.491041 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:48.239621 kubelet[2617]: E0516 00:07:48.239589 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:07:48.403855 systemd[1]: run-containerd-runc-k8s.io-ad5922a129f39e8fc722cd83bfc52ce7951ecf3a0530a7a2807de66d61704391-runc.pRfXNh.mount: Deactivated successfully.
May 16 00:07:52.651056 sshd[4498]: Connection closed by 10.0.0.1 port 55428
May 16 00:07:52.651547 sshd-session[4494]: pam_unix(sshd:session): session closed for user core
May 16 00:07:52.655365 systemd[1]: sshd@27-10.0.0.57:22-10.0.0.1:55428.service: Deactivated successfully.
May 16 00:07:52.657668 systemd[1]: session-28.scope: Deactivated successfully.
May 16 00:07:52.658403 systemd-logind[1492]: Session 28 logged out. Waiting for processes to exit.
May 16 00:07:52.659502 systemd-logind[1492]: Removed session 28.