May 27 17:39:24.895327 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025 May 27 17:39:24.895368 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 17:39:24.895380 kernel: BIOS-provided physical RAM map: May 27 17:39:24.895386 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable May 27 17:39:24.895392 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved May 27 17:39:24.895399 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable May 27 17:39:24.895414 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved May 27 17:39:24.895421 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable May 27 17:39:24.895427 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved May 27 17:39:24.895434 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data May 27 17:39:24.895441 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS May 27 17:39:24.895449 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable May 27 17:39:24.895456 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved May 27 17:39:24.895462 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS May 27 17:39:24.895470 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable May 27 17:39:24.895477 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved May 27 17:39:24.895486 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 27 17:39:24.895493 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 27 17:39:24.895500 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 27 17:39:24.895507 kernel: NX (Execute Disable) protection: active May 27 17:39:24.895513 kernel: APIC: Static calls initialized May 27 17:39:24.895520 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable May 27 17:39:24.895527 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable May 27 17:39:24.895534 kernel: extended physical RAM map: May 27 17:39:24.895541 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable May 27 17:39:24.895548 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved May 27 17:39:24.895555 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable May 27 17:39:24.895564 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved May 27 17:39:24.895575 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable May 27 17:39:24.895582 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable May 27 17:39:24.895589 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable May 27 17:39:24.895596 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable May 27 17:39:24.895603 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable May 27 17:39:24.895610 kernel: reserve setup_data: [mem 
0x000000009b8ed000-0x000000009bb6cfff] reserved May 27 17:39:24.895617 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data May 27 17:39:24.895624 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS May 27 17:39:24.895631 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable May 27 17:39:24.895637 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved May 27 17:39:24.895647 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS May 27 17:39:24.895656 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable May 27 17:39:24.895668 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved May 27 17:39:24.895678 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 27 17:39:24.895689 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 27 17:39:24.895697 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 27 17:39:24.895708 kernel: efi: EFI v2.7 by EDK II May 27 17:39:24.895715 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 May 27 17:39:24.895722 kernel: random: crng init done May 27 17:39:24.895730 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 May 27 17:39:24.895737 kernel: secureboot: Secure boot enabled May 27 17:39:24.895744 kernel: SMBIOS 2.8 present. May 27 17:39:24.895751 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 May 27 17:39:24.895758 kernel: DMI: Memory slots populated: 1/1 May 27 17:39:24.895765 kernel: Hypervisor detected: KVM May 27 17:39:24.895772 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 27 17:39:24.895780 kernel: kvm-clock: using sched offset of 5683100558 cycles May 27 17:39:24.895789 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 27 17:39:24.895797 kernel: tsc: Detected 2794.748 MHz processor May 27 17:39:24.895804 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 27 17:39:24.895812 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 27 17:39:24.895819 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 May 27 17:39:24.895827 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 27 17:39:24.895834 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 27 17:39:24.895842 kernel: Using GB pages for direct mapping May 27 17:39:24.895849 kernel: ACPI: Early table checksum verification disabled May 27 17:39:24.895858 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) May 27 17:39:24.895866 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 27 17:39:24.895873 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:39:24.895908 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:39:24.895915 kernel: ACPI: FACS 0x000000009BBDD000 000040 May 27 17:39:24.895923 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:39:24.895930 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:39:24.895937 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:39:24.895945 kernel: ACPI: WAET 0x000000009BB75000 
000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:39:24.895955 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 27 17:39:24.895962 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] May 27 17:39:24.895970 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] May 27 17:39:24.895977 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] May 27 17:39:24.895984 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] May 27 17:39:24.895992 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] May 27 17:39:24.895999 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] May 27 17:39:24.896006 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] May 27 17:39:24.896013 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] May 27 17:39:24.896023 kernel: No NUMA configuration found May 27 17:39:24.896030 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] May 27 17:39:24.896038 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] May 27 17:39:24.896046 kernel: Zone ranges: May 27 17:39:24.896053 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 27 17:39:24.896060 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] May 27 17:39:24.896068 kernel: Normal empty May 27 17:39:24.896075 kernel: Device empty May 27 17:39:24.896082 kernel: Movable zone start for each node May 27 17:39:24.896092 kernel: Early memory node ranges May 27 17:39:24.896099 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] May 27 17:39:24.896106 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] May 27 17:39:24.896114 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] May 27 17:39:24.896121 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] May 27 17:39:24.896128 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] May 27 17:39:24.896136 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] May 27 17:39:24.896143 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 27 17:39:24.896150 kernel: On node 0, zone DMA: 32 pages in unavailable ranges May 27 17:39:24.896158 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 27 17:39:24.896168 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 27 17:39:24.896175 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges May 27 17:39:24.896182 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges May 27 17:39:24.896190 kernel: ACPI: PM-Timer IO Port: 0x608 May 27 17:39:24.896197 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 27 17:39:24.896205 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 27 17:39:24.896212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 27 17:39:24.896219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 27 17:39:24.896227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 27 17:39:24.896236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 27 17:39:24.896243 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 27 17:39:24.896251 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 27 17:39:24.896258 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 27 17:39:24.896265 kernel: TSC deadline timer available May 27 17:39:24.896273 kernel: CPU topo: Max. 
logical packages: 1 May 27 17:39:24.896280 kernel: CPU topo: Max. logical dies: 1 May 27 17:39:24.896287 kernel: CPU topo: Max. dies per package: 1 May 27 17:39:24.896303 kernel: CPU topo: Max. threads per core: 1 May 27 17:39:24.896311 kernel: CPU topo: Num. cores per package: 4 May 27 17:39:24.896318 kernel: CPU topo: Num. threads per package: 4 May 27 17:39:24.896327 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs May 27 17:39:24.896339 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 27 17:39:24.896349 kernel: kvm-guest: KVM setup pv remote TLB flush May 27 17:39:24.896359 kernel: kvm-guest: setup PV sched yield May 27 17:39:24.896367 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices May 27 17:39:24.896374 kernel: Booting paravirtualized kernel on KVM May 27 17:39:24.896384 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 27 17:39:24.896392 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 27 17:39:24.896400 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 May 27 17:39:24.896416 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 May 27 17:39:24.896424 kernel: pcpu-alloc: [0] 0 1 2 3 May 27 17:39:24.896432 kernel: kvm-guest: PV spinlocks enabled May 27 17:39:24.896440 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 27 17:39:24.896449 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 17:39:24.896459 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 27 17:39:24.896467 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 27 17:39:24.896475 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 17:39:24.896482 kernel: Fallback order for Node 0: 0 May 27 17:39:24.896490 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 May 27 17:39:24.896498 kernel: Policy zone: DMA32 May 27 17:39:24.896505 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 17:39:24.896513 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 27 17:39:24.896521 kernel: ftrace: allocating 40081 entries in 157 pages May 27 17:39:24.896530 kernel: ftrace: allocated 157 pages with 5 groups May 27 17:39:24.896538 kernel: Dynamic Preempt: voluntary May 27 17:39:24.896545 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 17:39:24.896554 kernel: rcu: RCU event tracing is enabled. May 27 17:39:24.896561 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 27 17:39:24.896569 kernel: Trampoline variant of Tasks RCU enabled. May 27 17:39:24.896577 kernel: Rude variant of Tasks RCU enabled. May 27 17:39:24.896585 kernel: Tracing variant of Tasks RCU enabled. May 27 17:39:24.896593 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 27 17:39:24.896602 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 27 17:39:24.896610 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
May 27 17:39:24.896618 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 17:39:24.896626 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 27 17:39:24.896633 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 27 17:39:24.896641 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 27 17:39:24.896649 kernel: Console: colour dummy device 80x25 May 27 17:39:24.896657 kernel: printk: legacy console [ttyS0] enabled May 27 17:39:24.896665 kernel: ACPI: Core revision 20240827 May 27 17:39:24.896675 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 27 17:39:24.896682 kernel: APIC: Switch to symmetric I/O mode setup May 27 17:39:24.896690 kernel: x2apic enabled May 27 17:39:24.896698 kernel: APIC: Switched APIC routing to: physical x2apic May 27 17:39:24.896706 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 27 17:39:24.896723 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 27 17:39:24.896738 kernel: kvm-guest: setup PV IPIs May 27 17:39:24.896747 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 27 17:39:24.896755 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 27 17:39:24.896765 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 27 17:39:24.896773 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 27 17:39:24.896781 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 27 17:39:24.896788 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 27 17:39:24.896796 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 27 17:39:24.896804 kernel: Spectre V2 : Mitigation: Retpolines May 27 17:39:24.896812 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 27 17:39:24.896819 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 27 17:39:24.896827 kernel: RETBleed: Mitigation: untrained return thunk May 27 17:39:24.896837 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 27 17:39:24.896845 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 27 17:39:24.896853 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 27 17:39:24.896861 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 27 17:39:24.896869 kernel: x86/bugs: return thunk changed May 27 17:39:24.896953 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 27 17:39:24.896961 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 27 17:39:24.896969 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 27 17:39:24.896979 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 27 17:39:24.896987 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 27 17:39:24.896995 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
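
[Editor's note] The firmware memory map near the top of this log (the "BIOS-e820:" lines) can be tallied to cross-check how much RAM the firmware reports for each range type. A minimal Python sketch, assuming the log has been saved to a file such as boot.log (the filename and helper name are illustrative, not part of the log):

    import re
    from pathlib import Path

    # Matches entries like:
    # BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
    E820_RE = re.compile(
        r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] "
        r"(usable|reserved|ACPI data|ACPI NVS)"
    )

    def e820_totals(log_text: str) -> dict:
        """Sum the size of each e820 range type; ranges are inclusive."""
        totals: dict[str, int] = {}
        for start, end, kind in E820_RE.findall(log_text):
            size = int(end, 16) - int(start, 16) + 1
            totals[kind] = totals.get(kind, 0) + size
        return totals

    if __name__ == "__main__":
        totals = e820_totals(Path("boot.log").read_text())
        for kind, size in sorted(totals.items()):
            print(f"{kind:9s} {size / 2**20:10.1f} MiB")
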
May 27 17:39:24.897002 kernel: Freeing SMP alternatives memory: 32K May 27 17:39:24.897010 kernel: pid_max: default: 32768 minimum: 301 May 27 17:39:24.897018 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 17:39:24.897025 kernel: landlock: Up and running. May 27 17:39:24.897033 kernel: SELinux: Initializing. May 27 17:39:24.897043 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 17:39:24.897055 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 27 17:39:24.897064 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 27 17:39:24.897072 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 27 17:39:24.897079 kernel: ... version: 0 May 27 17:39:24.897087 kernel: ... bit width: 48 May 27 17:39:24.897094 kernel: ... generic registers: 6 May 27 17:39:24.897102 kernel: ... value mask: 0000ffffffffffff May 27 17:39:24.897110 kernel: ... max period: 00007fffffffffff May 27 17:39:24.897117 kernel: ... fixed-purpose events: 0 May 27 17:39:24.897128 kernel: ... event mask: 000000000000003f May 27 17:39:24.897135 kernel: signal: max sigframe size: 1776 May 27 17:39:24.897143 kernel: rcu: Hierarchical SRCU implementation. May 27 17:39:24.897151 kernel: rcu: Max phase no-delay instances is 400. May 27 17:39:24.897159 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 27 17:39:24.897166 kernel: smp: Bringing up secondary CPUs ... May 27 17:39:24.897174 kernel: smpboot: x86: Booting SMP configuration: May 27 17:39:24.897182 kernel: .... node #0, CPUs: #1 #2 #3 May 27 17:39:24.897190 kernel: smp: Brought up 1 node, 4 CPUs May 27 17:39:24.897197 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 27 17:39:24.897207 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 137064K reserved, 0K cma-reserved) May 27 17:39:24.897215 kernel: devtmpfs: initialized May 27 17:39:24.897223 kernel: x86/mm: Memory block size: 128MB May 27 17:39:24.897230 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) May 27 17:39:24.897238 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) May 27 17:39:24.897246 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 17:39:24.897255 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 27 17:39:24.897266 kernel: pinctrl core: initialized pinctrl subsystem May 27 17:39:24.897278 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 17:39:24.897288 kernel: audit: initializing netlink subsys (disabled) May 27 17:39:24.897296 kernel: audit: type=2000 audit(1748367563.625:1): state=initialized audit_enabled=0 res=1 May 27 17:39:24.897304 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 17:39:24.897312 kernel: thermal_sys: Registered thermal governor 'user_space' May 27 17:39:24.897319 kernel: cpuidle: using governor menu May 27 17:39:24.897327 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 17:39:24.897335 kernel: dca service started, version 1.12.1 May 27 17:39:24.897343 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] May 27 17:39:24.897352 kernel: PCI: Using configuration type 1 for base access May 27 17:39:24.897360 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. May 27 17:39:24.897368 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 27 17:39:24.897375 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 27 17:39:24.897383 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 17:39:24.897391 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 27 17:39:24.897398 kernel: ACPI: Added _OSI(Module Device) May 27 17:39:24.897414 kernel: ACPI: Added _OSI(Processor Device) May 27 17:39:24.897421 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 17:39:24.897431 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 17:39:24.897439 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 27 17:39:24.897446 kernel: ACPI: Interpreter enabled May 27 17:39:24.897454 kernel: ACPI: PM: (supports S0 S5) May 27 17:39:24.897461 kernel: ACPI: Using IOAPIC for interrupt routing May 27 17:39:24.897469 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 27 17:39:24.897477 kernel: PCI: Using E820 reservations for host bridge windows May 27 17:39:24.897485 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 27 17:39:24.897492 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 27 17:39:24.897678 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 27 17:39:24.897800 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 27 17:39:24.897961 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 27 17:39:24.897973 kernel: PCI host bridge to bus 0000:00 May 27 17:39:24.898099 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 27 17:39:24.898207 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 27 17:39:24.898329 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 27 17:39:24.898465 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] May 27 17:39:24.898617 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] May 27 17:39:24.898738 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] May 27 17:39:24.898872 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 27 17:39:24.899041 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint May 27 17:39:24.899166 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint May 27 17:39:24.899294 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] May 27 17:39:24.899421 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] May 27 17:39:24.899538 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] May 27 17:39:24.899659 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 27 17:39:24.899814 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 27 17:39:24.899953 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] May 27 17:39:24.900071 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] May 27 17:39:24.900192 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] May 27 17:39:24.900326 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 27 17:39:24.900454 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] May 27 17:39:24.900583 kernel: pci 
0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] May 27 17:39:24.900704 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] May 27 17:39:24.900830 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 27 17:39:24.900969 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] May 27 17:39:24.901086 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] May 27 17:39:24.901201 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] May 27 17:39:24.901326 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] May 27 17:39:24.901460 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint May 27 17:39:24.901577 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 27 17:39:24.901712 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint May 27 17:39:24.901839 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] May 27 17:39:24.901971 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] May 27 17:39:24.902098 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint May 27 17:39:24.902215 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] May 27 17:39:24.902226 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 27 17:39:24.902234 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 27 17:39:24.902242 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 27 17:39:24.902249 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 27 17:39:24.902265 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 27 17:39:24.902275 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 27 17:39:24.902284 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 27 17:39:24.902292 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 27 17:39:24.902300 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 27 17:39:24.902307 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 27 17:39:24.902315 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 27 17:39:24.902323 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 27 17:39:24.902333 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 27 17:39:24.902341 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 27 17:39:24.902349 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 27 17:39:24.902357 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 27 17:39:24.902365 kernel: iommu: Default domain type: Translated May 27 17:39:24.902373 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 27 17:39:24.902380 kernel: efivars: Registered efivars operations May 27 17:39:24.902388 kernel: PCI: Using ACPI for IRQ routing May 27 17:39:24.902396 kernel: PCI: pci_cache_line_size set to 64 bytes May 27 17:39:24.902404 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] May 27 17:39:24.902420 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] May 27 17:39:24.902427 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] May 27 17:39:24.902435 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] May 27 17:39:24.902443 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] May 27 17:39:24.902565 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 27 17:39:24.902680 kernel: pci 0000:00:01.0: vgaarb: bridge control 
possible May 27 17:39:24.902794 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 27 17:39:24.902804 kernel: vgaarb: loaded May 27 17:39:24.902815 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 27 17:39:24.902823 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 27 17:39:24.902831 kernel: clocksource: Switched to clocksource kvm-clock May 27 17:39:24.902838 kernel: VFS: Disk quotas dquot_6.6.0 May 27 17:39:24.902846 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 17:39:24.902854 kernel: pnp: PnP ACPI init May 27 17:39:24.903008 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved May 27 17:39:24.903021 kernel: pnp: PnP ACPI: found 6 devices May 27 17:39:24.903032 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 27 17:39:24.903040 kernel: NET: Registered PF_INET protocol family May 27 17:39:24.903048 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 27 17:39:24.903056 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 27 17:39:24.903063 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 27 17:39:24.903071 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 27 17:39:24.903079 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 27 17:39:24.903087 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 27 17:39:24.903094 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 17:39:24.903105 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 27 17:39:24.903113 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 27 17:39:24.903120 kernel: NET: Registered PF_XDP protocol family May 27 17:39:24.903237 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window May 27 17:39:24.903364 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned May 27 17:39:24.903492 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 27 17:39:24.903633 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 27 17:39:24.903763 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 27 17:39:24.903902 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] May 27 17:39:24.904039 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] May 27 17:39:24.904171 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] May 27 17:39:24.904185 kernel: PCI: CLS 0 bytes, default 64 May 27 17:39:24.904197 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 27 17:39:24.904207 kernel: Initialise system trusted keyrings May 27 17:39:24.904217 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 27 17:39:24.904227 kernel: Key type asymmetric registered May 27 17:39:24.904238 kernel: Asymmetric key parser 'x509' registered May 27 17:39:24.904271 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 27 17:39:24.904284 kernel: io scheduler mq-deadline registered May 27 17:39:24.904295 kernel: io scheduler kyber registered May 27 17:39:24.904305 kernel: io scheduler bfq registered May 27 17:39:24.904316 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 27 17:39:24.904327 
kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 27 17:39:24.904337 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 27 17:39:24.904348 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 27 17:39:24.904358 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 17:39:24.904368 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 27 17:39:24.904379 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 27 17:39:24.904387 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 27 17:39:24.904395 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 27 17:39:24.904530 kernel: rtc_cmos 00:04: RTC can wake from S4 May 27 17:39:24.904655 kernel: rtc_cmos 00:04: registered as rtc0 May 27 17:39:24.904670 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 27 17:39:24.904777 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T17:39:24 UTC (1748367564) May 27 17:39:24.904904 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 27 17:39:24.904915 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 27 17:39:24.904924 kernel: efifb: probing for efifb May 27 17:39:24.904932 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 27 17:39:24.904940 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 27 17:39:24.904948 kernel: efifb: scrolling: redraw May 27 17:39:24.904955 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 27 17:39:24.904963 kernel: Console: switching to colour frame buffer device 160x50 May 27 17:39:24.904971 kernel: fb0: EFI VGA frame buffer device May 27 17:39:24.904983 kernel: pstore: Using crash dump compression: deflate May 27 17:39:24.904993 kernel: pstore: Registered efi_pstore as persistent store backend May 27 17:39:24.905001 kernel: NET: Registered PF_INET6 protocol family May 27 17:39:24.905009 kernel: Segment Routing with IPv6 May 27 17:39:24.905017 kernel: In-situ OAM (IOAM) with IPv6 May 27 17:39:24.905027 kernel: NET: Registered PF_PACKET protocol family May 27 17:39:24.905035 kernel: Key type dns_resolver registered May 27 17:39:24.905042 kernel: IPI shorthand broadcast: enabled May 27 17:39:24.905051 kernel: sched_clock: Marking stable (2958001880, 147007143)->(3126419423, -21410400) May 27 17:39:24.905059 kernel: registered taskstats version 1 May 27 17:39:24.905066 kernel: Loading compiled-in X.509 certificates May 27 17:39:24.905075 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c' May 27 17:39:24.905083 kernel: Demotion targets for Node 0: null May 27 17:39:24.905090 kernel: Key type .fscrypt registered May 27 17:39:24.905101 kernel: Key type fscrypt-provisioning registered May 27 17:39:24.905109 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 17:39:24.905116 kernel: ima: Allocated hash algorithm: sha1 May 27 17:39:24.905124 kernel: ima: No architecture policies found May 27 17:39:24.905132 kernel: clk: Disabling unused clocks May 27 17:39:24.905140 kernel: Warning: unable to open an initial console. 
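
[Editor's note] The rtc_cmos entry above pairs a wall-clock timestamp with a Unix epoch ("setting system clock to 2025-05-27T17:39:24 UTC (1748367564)"). The correspondence is easy to verify with a small Python check:

    from datetime import datetime, timezone

    # Epoch value printed by rtc_cmos in this log.
    rtc_epoch = 1748367564
    print(datetime.fromtimestamp(rtc_epoch, tz=timezone.utc).isoformat())
    # prints 2025-05-27T17:39:24+00:00, matching the logged wall-clock time
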
May 27 17:39:24.905149 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 17:39:24.905157 kernel: Write protecting the kernel read-only data: 24576k May 27 17:39:24.905165 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 17:39:24.905175 kernel: Run /init as init process May 27 17:39:24.905183 kernel: with arguments: May 27 17:39:24.905191 kernel: /init May 27 17:39:24.905199 kernel: with environment: May 27 17:39:24.905206 kernel: HOME=/ May 27 17:39:24.905214 kernel: TERM=linux May 27 17:39:24.905222 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 17:39:24.905231 systemd[1]: Successfully made /usr/ read-only. May 27 17:39:24.905244 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:39:24.905255 systemd[1]: Detected virtualization kvm. May 27 17:39:24.905266 systemd[1]: Detected architecture x86-64. May 27 17:39:24.905277 systemd[1]: Running in initrd. May 27 17:39:24.905288 systemd[1]: No hostname configured, using default hostname. May 27 17:39:24.905297 systemd[1]: Hostname set to . May 27 17:39:24.905305 systemd[1]: Initializing machine ID from VM UUID. May 27 17:39:24.905313 systemd[1]: Queued start job for default target initrd.target. May 27 17:39:24.905324 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:39:24.905333 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:39:24.905342 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 17:39:24.905351 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:39:24.905359 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 17:39:24.905369 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 17:39:24.905381 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 17:39:24.905389 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 17:39:24.905398 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:39:24.905414 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:39:24.905422 systemd[1]: Reached target paths.target - Path Units. May 27 17:39:24.905431 systemd[1]: Reached target slices.target - Slice Units. May 27 17:39:24.905440 systemd[1]: Reached target swap.target - Swaps. May 27 17:39:24.905448 systemd[1]: Reached target timers.target - Timer Units. May 27 17:39:24.905456 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:39:24.905467 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:39:24.905476 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 17:39:24.905484 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 17:39:24.905493 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 27 17:39:24.905501 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:39:24.905510 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:39:24.905518 systemd[1]: Reached target sockets.target - Socket Units. May 27 17:39:24.905527 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 17:39:24.905538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:39:24.905546 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 17:39:24.905555 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 17:39:24.905564 systemd[1]: Starting systemd-fsck-usr.service... May 27 17:39:24.905573 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:39:24.905581 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:39:24.905590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:39:24.905599 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 17:39:24.905610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:39:24.905618 systemd[1]: Finished systemd-fsck-usr.service. May 27 17:39:24.905655 systemd-journald[220]: Collecting audit messages is disabled. May 27 17:39:24.905678 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 17:39:24.905687 systemd-journald[220]: Journal started May 27 17:39:24.905705 systemd-journald[220]: Runtime Journal (/run/log/journal/7d1c3773d8984884a39e444b967bf36e) is 6M, max 48.2M, 42.2M free. May 27 17:39:24.896763 systemd-modules-load[222]: Inserted module 'overlay' May 27 17:39:24.918980 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:39:24.921085 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:39:24.924910 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 17:39:24.925397 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 17:39:24.928628 kernel: Bridge firewalling registered May 27 17:39:24.927722 systemd-modules-load[222]: Inserted module 'br_netfilter' May 27 17:39:24.929830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:39:24.936101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:39:24.936552 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 17:39:24.939659 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:39:24.944030 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:39:24.945243 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 17:39:24.950865 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:39:24.953099 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:39:24.956020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 27 17:39:24.957167 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:39:24.963771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:39:24.972549 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 17:39:24.999105 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 17:39:25.009938 systemd-resolved[257]: Positive Trust Anchors: May 27 17:39:25.009959 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 17:39:25.009998 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 17:39:25.013267 systemd-resolved[257]: Defaulting to hostname 'linux'. May 27 17:39:25.014390 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 17:39:25.020725 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 17:39:25.123923 kernel: SCSI subsystem initialized May 27 17:39:25.135911 kernel: Loading iSCSI transport class v2.0-870. May 27 17:39:25.146917 kernel: iscsi: registered transport (tcp) May 27 17:39:25.171297 kernel: iscsi: registered transport (qla4xxx) May 27 17:39:25.171374 kernel: QLogic iSCSI HBA Driver May 27 17:39:25.192333 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:39:25.217025 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:39:25.218457 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:39:25.266776 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 17:39:25.269520 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 17:39:25.322906 kernel: raid6: avx2x4 gen() 24197 MB/s May 27 17:39:25.339902 kernel: raid6: avx2x2 gen() 27468 MB/s May 27 17:39:25.357009 kernel: raid6: avx2x1 gen() 23701 MB/s May 27 17:39:25.357036 kernel: raid6: using algorithm avx2x2 gen() 27468 MB/s May 27 17:39:25.374980 kernel: raid6: .... xor() 19692 MB/s, rmw enabled May 27 17:39:25.375003 kernel: raid6: using avx2x2 recovery algorithm May 27 17:39:25.394909 kernel: xor: automatically using best checksumming function avx May 27 17:39:25.564918 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 17:39:25.572088 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 17:39:25.575205 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
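
[Editor's note] The dracut-cmdline entry above repeats the kernel command line, including the dm-verity parameters (verity.usr / verity.usrhash) backing the read-only /usr mount. A minimal sketch of splitting such a command line into key/value pairs; the parse_cmdline helper is illustrative, and the string below is copied from this log:

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into a dict; bare flags map to True,
        repeated keys keep the last value."""
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
        "flatcar.first_boot=detected verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb"
        "0c07df9999cb4e3041f3adad1b1101efdea101"
    )
    params = parse_cmdline(cmdline)
    print(params["root"], params["verity.usr"], params["verity.usrhash"])
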
May 27 17:39:25.610546 systemd-udevd[473]: Using default interface naming scheme 'v255'. May 27 17:39:25.616125 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:39:25.619951 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 17:39:25.660716 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation May 27 17:39:25.693717 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:39:25.695479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:39:25.768891 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:39:25.770321 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 17:39:25.805914 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 27 17:39:25.813806 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 17:39:25.822019 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 17:39:25.822051 kernel: GPT:9289727 != 19775487 May 27 17:39:25.822068 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 17:39:25.822083 kernel: GPT:9289727 != 19775487 May 27 17:39:25.822096 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 17:39:25.822111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:39:25.830929 kernel: libata version 3.00 loaded. May 27 17:39:25.835929 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 27 17:39:25.836902 kernel: cryptd: max_cpu_qlen set to 1000 May 27 17:39:25.845338 kernel: ahci 0000:00:1f.2: version 3.0 May 27 17:39:25.845592 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 27 17:39:25.851936 kernel: AES CTR mode by8 optimization enabled May 27 17:39:25.852078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:39:25.858793 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 27 17:39:25.859009 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 27 17:39:25.859148 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 27 17:39:25.852232 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:39:25.855416 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:39:25.869733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:39:25.883253 kernel: scsi host0: ahci May 27 17:39:25.883531 kernel: scsi host1: ahci May 27 17:39:25.885350 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
May 27 17:39:25.889680 kernel: scsi host2: ahci May 27 17:39:25.890814 kernel: scsi host3: ahci May 27 17:39:25.896186 kernel: scsi host4: ahci May 27 17:39:25.896422 kernel: scsi host5: ahci May 27 17:39:25.896646 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 27 17:39:25.898918 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 27 17:39:25.898947 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 27 17:39:25.900268 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 27 17:39:25.900294 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 27 17:39:25.902119 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 27 17:39:25.922042 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 17:39:25.931845 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 17:39:25.941558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 17:39:25.949581 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 17:39:25.949842 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 17:39:25.951130 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 17:39:25.955372 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:39:25.955442 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:39:25.959452 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:39:25.964495 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:39:25.967050 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 17:39:25.973664 disk-uuid[635]: Primary Header is updated. May 27 17:39:25.973664 disk-uuid[635]: Secondary Entries is updated. May 27 17:39:25.973664 disk-uuid[635]: Secondary Header is updated. May 27 17:39:25.977933 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:39:25.982898 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:39:25.985552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 17:39:26.210566 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 27 17:39:26.210648 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 27 17:39:26.210664 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 27 17:39:26.211907 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 27 17:39:26.212923 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 27 17:39:26.213922 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 27 17:39:26.215386 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 27 17:39:26.215447 kernel: ata3.00: applying bridge limits May 27 17:39:26.215473 kernel: ata3.00: configured for UDMA/100 May 27 17:39:26.216930 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 27 17:39:26.274915 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 27 17:39:26.275259 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 27 17:39:26.301282 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 27 17:39:26.718331 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 17:39:26.721534 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:39:26.722893 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:39:26.723463 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:39:26.724899 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 17:39:26.753046 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 17:39:26.983973 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 17:39:26.984236 disk-uuid[639]: The operation has completed successfully. May 27 17:39:27.015214 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 17:39:27.015382 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 17:39:27.051354 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 17:39:27.084073 sh[670]: Success May 27 17:39:27.104406 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 17:39:27.104495 kernel: device-mapper: uevent: version 1.0.3 May 27 17:39:27.104513 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 17:39:27.116930 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 27 17:39:27.152534 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 17:39:27.157248 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 17:39:27.173313 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 17:39:27.182341 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 17:39:27.182382 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (682) May 27 17:39:27.183817 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd May 27 17:39:27.183843 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 17:39:27.184808 kernel: BTRFS info (device dm-0): using free-space-tree May 27 17:39:27.190846 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 17:39:27.192014 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
May 27 17:39:27.193259 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 17:39:27.195460 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 17:39:27.198416 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 17:39:27.230750 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (716) May 27 17:39:27.230807 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:39:27.230818 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:39:27.232344 kernel: BTRFS info (device vda6): using free-space-tree May 27 17:39:27.239912 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:39:27.240710 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 17:39:27.243014 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 17:39:27.329656 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:39:27.332341 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 17:39:27.600990 systemd-networkd[851]: lo: Link UP May 27 17:39:27.601492 systemd-networkd[851]: lo: Gained carrier May 27 17:39:27.603039 systemd-networkd[851]: Enumeration completed May 27 17:39:27.603411 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:39:27.603416 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:39:27.604814 systemd-networkd[851]: eth0: Link UP May 27 17:39:27.604818 systemd-networkd[851]: eth0: Gained carrier May 27 17:39:27.604828 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:39:27.605083 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 17:39:27.611325 systemd[1]: Reached target network.target - Network. 
May 27 17:39:27.628945 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 17:39:27.741912 ignition[758]: Ignition 2.21.0 May 27 17:39:27.741937 ignition[758]: Stage: fetch-offline May 27 17:39:27.741991 ignition[758]: no configs at "/usr/lib/ignition/base.d" May 27 17:39:27.742004 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:39:27.742118 ignition[758]: parsed url from cmdline: "" May 27 17:39:27.742123 ignition[758]: no config URL provided May 27 17:39:27.742132 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" May 27 17:39:27.742144 ignition[758]: no config at "/usr/lib/ignition/user.ign" May 27 17:39:27.742171 ignition[758]: op(1): [started] loading QEMU firmware config module May 27 17:39:27.742176 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 17:39:27.758675 ignition[758]: op(1): [finished] loading QEMU firmware config module May 27 17:39:27.798936 ignition[758]: parsing config with SHA512: 480bbe544fde12677e12190dd6f55f6e28043269724abee75e2030fb0acff5f5355566184a9494d1251a2af434dfeb8b94d2755c5d008fc1649a9ca044674be6 May 27 17:39:27.803299 unknown[758]: fetched base config from "system" May 27 17:39:27.803930 unknown[758]: fetched user config from "qemu" May 27 17:39:27.804416 ignition[758]: fetch-offline: fetch-offline passed May 27 17:39:27.804500 ignition[758]: Ignition finished successfully May 27 17:39:27.810017 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:39:27.811555 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 17:39:27.812540 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 17:39:27.859077 ignition[865]: Ignition 2.21.0 May 27 17:39:27.859098 ignition[865]: Stage: kargs May 27 17:39:27.859222 ignition[865]: no configs at "/usr/lib/ignition/base.d" May 27 17:39:27.859232 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:39:27.861492 ignition[865]: kargs: kargs passed May 27 17:39:27.861997 ignition[865]: Ignition finished successfully May 27 17:39:27.866390 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 17:39:27.869439 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 17:39:27.905559 ignition[873]: Ignition 2.21.0 May 27 17:39:27.905575 ignition[873]: Stage: disks May 27 17:39:27.905729 ignition[873]: no configs at "/usr/lib/ignition/base.d" May 27 17:39:27.905742 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:39:27.908670 ignition[873]: disks: disks passed May 27 17:39:27.908726 ignition[873]: Ignition finished successfully May 27 17:39:27.914463 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 17:39:27.915025 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 17:39:27.917291 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 17:39:27.917656 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:39:27.918214 systemd[1]: Reached target sysinit.target - System Initialization. May 27 17:39:27.918593 systemd[1]: Reached target basic.target - Basic System. May 27 17:39:27.928140 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
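
[Editor's note] Ignition logs the SHA-512 of the config it pulled over the QEMU fw_cfg interface ("parsing config with SHA512: 480bbe..."). Assuming that digest is computed over the raw config bytes, it can be reproduced from a local copy of the same config for comparison; a minimal sketch (the path argument is an assumption, not taken from this log):

    import hashlib
    import sys

    def config_sha512(path: str) -> str:
        """SHA-512 hex digest of a config file, for comparison with the value
        Ignition prints as 'parsing config with SHA512: ...'."""
        with open(path, "rb") as fh:
            return hashlib.sha512(fh.read()).hexdigest()

    if __name__ == "__main__":
        print(config_sha512(sys.argv[1]))
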
May 27 17:39:27.975980 systemd-fsck[884]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 17:39:27.984610 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 17:39:27.986100 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 17:39:28.107910 kernel: EXT4-fs (vda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none. May 27 17:39:28.109082 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 17:39:28.110166 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 17:39:28.113209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 17:39:28.114782 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 17:39:28.115749 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 17:39:28.115801 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 17:39:28.115830 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:39:28.135313 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 17:39:28.137595 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 17:39:28.143901 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (892) May 27 17:39:28.146413 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:39:28.146439 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:39:28.146453 kernel: BTRFS info (device vda6): using free-space-tree May 27 17:39:28.152054 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 17:39:28.178684 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory May 27 17:39:28.183474 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory May 27 17:39:28.188384 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory May 27 17:39:28.192903 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory May 27 17:39:28.292097 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 17:39:28.294159 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 17:39:28.296228 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 17:39:28.327464 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 17:39:28.328969 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:39:28.342017 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 17:39:28.436524 ignition[1006]: INFO : Ignition 2.21.0 May 27 17:39:28.436524 ignition[1006]: INFO : Stage: mount May 27 17:39:28.438264 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:39:28.438264 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:39:28.440383 ignition[1006]: INFO : mount: mount passed May 27 17:39:28.440383 ignition[1006]: INFO : Ignition finished successfully May 27 17:39:28.444718 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 17:39:28.447718 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 17:39:28.469843 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
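The e2fsck summary logged at the top of this stretch ("ROOT: clean, 15/553520 files, 52789/553472 blocks") encodes inode and block usage of the ROOT filesystem. A small parser for that line (a sketch that only handles the "clean" form shown here):

    import re

    SUMMARY = "ROOT: clean, 15/553520 files, 52789/553472 blocks"

    def parse_fsck_summary(line: str) -> dict:
        """Pull used/total counts out of an e2fsck 'clean' summary line."""
        m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
        if not m:
            raise ValueError("not a recognised e2fsck summary: " + line)
        files_used, files_total, blocks_used, blocks_total = map(int, m.groups())
        return {
            "inode_use_pct": 100.0 * files_used / files_total,
            "block_use_pct": 100.0 * blocks_used / blocks_total,
        }

    print(parse_fsck_summary(SUMMARY))
    # With the figures above this reports roughly 0.003% of inodes and
    # about 9.5% of blocks in use on the freshly checked ROOT filesystem.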
May 27 17:39:28.494496 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1018) May 27 17:39:28.494525 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 17:39:28.494538 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 17:39:28.495374 kernel: BTRFS info (device vda6): using free-space-tree May 27 17:39:28.499543 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 17:39:28.538700 ignition[1035]: INFO : Ignition 2.21.0 May 27 17:39:28.538700 ignition[1035]: INFO : Stage: files May 27 17:39:28.540424 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:39:28.540424 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:39:28.542776 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping May 27 17:39:28.542776 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 17:39:28.542776 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 17:39:28.546821 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 17:39:28.546821 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 17:39:28.546821 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 17:39:28.545863 unknown[1035]: wrote ssh authorized keys file for user: core May 27 17:39:28.552049 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 17:39:28.552049 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 27 17:39:28.676834 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 17:39:28.987101 systemd-networkd[851]: eth0: Gained IPv6LL May 27 17:39:29.008773 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 17:39:29.010926 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 17:39:29.010926 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 27 17:39:29.505510 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 27 17:39:29.658054 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 17:39:29.658054 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 27 17:39:29.661933 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 27 17:39:29.661933 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 17:39:29.661933 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 17:39:29.661933 ignition[1035]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:39:29.668743 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 17:39:29.668743 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:39:29.672150 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 17:39:29.678667 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:39:29.680614 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 17:39:29.682602 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 17:39:29.685177 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 17:39:29.685177 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 17:39:29.685177 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 27 17:39:30.318353 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 27 17:39:31.132594 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 17:39:31.132594 ignition[1035]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 27 17:39:31.137497 ignition[1035]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:39:31.181781 ignition[1035]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 17:39:31.181781 ignition[1035]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 27 17:39:31.181781 ignition[1035]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 27 17:39:31.187595 ignition[1035]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 17:39:31.187595 ignition[1035]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 17:39:31.187595 ignition[1035]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 27 17:39:31.187595 ignition[1035]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 27 17:39:31.212243 ignition[1035]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 17:39:31.223203 ignition[1035]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 17:39:31.225524 
ignition[1035]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 27 17:39:31.225524 ignition[1035]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 27 17:39:31.225524 ignition[1035]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 27 17:39:31.225524 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 17:39:31.225524 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 17:39:31.225524 ignition[1035]: INFO : files: files passed May 27 17:39:31.225524 ignition[1035]: INFO : Ignition finished successfully May 27 17:39:31.230959 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 17:39:31.240773 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 17:39:31.244322 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 17:39:31.274224 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 17:39:31.274456 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 17:39:31.301925 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory May 27 17:39:31.307111 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:39:31.307111 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 17:39:31.310707 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 17:39:31.312292 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 17:39:31.314682 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 17:39:31.316498 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 17:39:31.380676 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 17:39:31.381929 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 17:39:31.385156 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 17:39:31.387297 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 17:39:31.389578 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 17:39:31.392165 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 17:39:31.449813 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 17:39:31.461452 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 17:39:31.492504 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 17:39:31.508097 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:39:31.510580 systemd[1]: Stopped target timers.target - Timer Units. May 27 17:39:31.513309 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 17:39:31.513452 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
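The files stage that finishes above fetched remote artifacts (helm, cilium, the kubernetes sysext image) and wrote them under /sysroot. As a conceptual sketch only: Ignition is written in Go and additionally handles retries, optional checksums, and SELinux labels (the log notes it was compiled without relabeling support), but the core of a files-stage entry amounts to "fetch a URL and place it beneath the sysroot". The function name below is hypothetical; the path and URL are taken from op(3) in the log.

    import os
    import urllib.request

    def write_file_under_sysroot(sysroot: str, dest: str, url: str, mode: int = 0o644) -> None:
        """Fetch a URL and write it beneath a sysroot prefix, creating parent dirs."""
        target = os.path.join(sysroot, dest.lstrip("/"))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        with urllib.request.urlopen(url) as resp, open(target, "wb") as out:
            out.write(resp.read())
        os.chmod(target, mode)

    if __name__ == "__main__":
        # Mirrors op(3) above; note this performs a real download if run.
        write_file_under_sysroot(
            "/sysroot",
            "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
        )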
May 27 17:39:31.554008 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 17:39:31.554358 systemd[1]: Stopped target basic.target - Basic System. May 27 17:39:31.554741 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 17:39:31.555305 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 17:39:31.555683 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 17:39:31.556236 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 17:39:31.556645 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 17:39:31.557191 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 17:39:31.557634 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 17:39:31.558185 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 17:39:31.558530 systemd[1]: Stopped target swap.target - Swaps. May 27 17:39:31.558906 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 17:39:31.559039 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 17:39:31.559850 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 17:39:31.560468 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:39:31.560816 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 17:39:31.560949 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:39:31.561378 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 17:39:31.561479 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 17:39:31.611332 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 17:39:31.611602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 17:39:31.612447 systemd[1]: Stopped target paths.target - Path Units. May 27 17:39:31.615283 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 17:39:31.618978 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:39:31.619634 systemd[1]: Stopped target slices.target - Slice Units. May 27 17:39:31.619970 systemd[1]: Stopped target sockets.target - Socket Units. May 27 17:39:31.620490 systemd[1]: iscsid.socket: Deactivated successfully. May 27 17:39:31.620580 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:39:31.625659 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 17:39:31.625740 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:39:31.627797 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 17:39:31.627964 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 17:39:31.629713 systemd[1]: ignition-files.service: Deactivated successfully. May 27 17:39:31.629851 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 17:39:31.633375 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 17:39:31.638334 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 17:39:31.639375 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 17:39:31.639509 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 27 17:39:31.642521 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 17:39:31.642729 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:39:31.654017 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 17:39:31.654141 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 17:39:31.673656 ignition[1090]: INFO : Ignition 2.21.0 May 27 17:39:31.674920 ignition[1090]: INFO : Stage: umount May 27 17:39:31.674920 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:39:31.674920 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 17:39:31.676590 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 17:39:31.679450 ignition[1090]: INFO : umount: umount passed May 27 17:39:31.680481 ignition[1090]: INFO : Ignition finished successfully May 27 17:39:31.683924 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 17:39:31.684092 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 17:39:31.685848 systemd[1]: Stopped target network.target - Network. May 27 17:39:31.687791 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 17:39:31.687868 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 17:39:31.688344 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 17:39:31.688398 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 17:39:31.688719 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 17:39:31.688776 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 17:39:31.689273 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 17:39:31.689327 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 17:39:31.689752 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 17:39:31.696945 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 17:39:31.697556 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 17:39:31.697688 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 17:39:31.702112 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 17:39:31.702186 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 17:39:31.708777 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 17:39:31.708969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 17:39:31.714215 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 17:39:31.714494 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 17:39:31.714650 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 17:39:31.719846 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 17:39:31.721278 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 17:39:31.727716 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 17:39:31.727775 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 17:39:31.732828 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 17:39:31.733308 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 27 17:39:31.733380 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:39:31.733731 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 17:39:31.733786 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 17:39:31.739305 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 17:39:31.739383 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 17:39:31.740057 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 17:39:31.740110 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:39:31.751033 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:39:31.753997 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 17:39:31.754093 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 17:39:31.765477 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 17:39:31.765634 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 17:39:31.778556 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 17:39:31.778753 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:39:31.779678 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 17:39:31.779749 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 17:39:31.782318 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 17:39:31.782360 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:39:31.795307 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 17:39:31.795396 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 17:39:31.797756 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 17:39:31.797802 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 17:39:31.801602 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 17:39:31.801669 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:39:31.803410 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 17:39:31.807199 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 17:39:31.807265 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:39:31.817741 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 17:39:31.817826 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:39:31.821119 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 27 17:39:31.821184 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 17:39:31.824618 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 17:39:31.824683 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:39:31.825414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:39:31.825471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 17:39:31.831802 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 27 17:39:31.831898 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 27 17:39:31.831959 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 27 17:39:31.832024 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 17:39:31.854132 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 17:39:31.854292 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 17:39:31.854946 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 17:39:31.861838 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 17:39:31.897868 systemd[1]: Switching root. May 27 17:39:31.949915 systemd-journald[220]: Journal stopped May 27 17:39:34.174403 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). May 27 17:39:34.174478 kernel: SELinux: policy capability network_peer_controls=1 May 27 17:39:34.174502 kernel: SELinux: policy capability open_perms=1 May 27 17:39:34.174516 kernel: SELinux: policy capability extended_socket_class=1 May 27 17:39:34.174530 kernel: SELinux: policy capability always_check_network=0 May 27 17:39:34.174550 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 17:39:34.174565 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 17:39:34.174579 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 17:39:34.174597 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 17:39:34.174612 kernel: SELinux: policy capability userspace_initial_context=0 May 27 17:39:34.174631 kernel: audit: type=1403 audit(1748367573.288:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 17:39:34.174647 systemd[1]: Successfully loaded SELinux policy in 55.931ms. May 27 17:39:34.174676 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.734ms. May 27 17:39:34.174692 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:39:34.174707 systemd[1]: Detected virtualization kvm. May 27 17:39:34.174722 systemd[1]: Detected architecture x86-64. May 27 17:39:34.174737 systemd[1]: Detected first boot. May 27 17:39:34.174756 systemd[1]: Initializing machine ID from VM UUID. May 27 17:39:34.174771 zram_generator::config[1134]: No configuration found. May 27 17:39:34.174793 kernel: Guest personality initialized and is inactive May 27 17:39:34.174808 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 17:39:34.174821 kernel: Initialized host personality May 27 17:39:34.174835 kernel: NET: Registered PF_VSOCK protocol family May 27 17:39:34.174849 systemd[1]: Populated /etc with preset unit settings. May 27 17:39:34.174865 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 17:39:34.174916 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 17:39:34.174931 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
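The systemd version banner above lists the build-time feature flags as +NAME / -NAME tokens. A small sketch that splits such a string into enabled and disabled sets, using the exact string from the log:

    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
                "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
                "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

    def split_features(feature_string: str):
        """Separate a systemd build-feature string into (enabled, disabled) sets."""
        tokens = feature_string.split()
        enabled = {t[1:] for t in tokens if t.startswith("+")}
        disabled = {t[1:] for t in tokens if t.startswith("-")}
        return enabled, disabled

    enabled, disabled = split_features(FEATURES)
    print("SELINUX" in enabled, "APPARMOR" in disabled)   # True True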
May 27 17:39:34.174946 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 17:39:34.174961 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 17:39:34.174976 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 17:39:34.174991 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 17:39:34.175024 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 17:39:34.175061 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 17:39:34.175081 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 17:39:34.175101 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 17:39:34.175116 systemd[1]: Created slice user.slice - User and Session Slice. May 27 17:39:34.175132 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:39:34.175148 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:39:34.175163 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 17:39:34.175179 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 17:39:34.175204 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 17:39:34.175223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:39:34.175238 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 17:39:34.175254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:39:34.175269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:39:34.175284 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 17:39:34.175307 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 17:39:34.175327 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 17:39:34.175346 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 17:39:34.175361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:39:34.175379 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:39:34.175394 systemd[1]: Reached target slices.target - Slice Units. May 27 17:39:34.175415 systemd[1]: Reached target swap.target - Swaps. May 27 17:39:34.175440 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 17:39:34.175458 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 17:39:34.175474 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 17:39:34.175489 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 17:39:34.175505 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:39:34.175521 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:39:34.175537 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 17:39:34.175555 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
May 27 17:39:34.175571 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 17:39:34.175586 systemd[1]: Mounting media.mount - External Media Directory... May 27 17:39:34.175602 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:39:34.175617 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 17:39:34.175633 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 17:39:34.175648 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 17:39:34.175663 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 17:39:34.175681 systemd[1]: Reached target machines.target - Containers. May 27 17:39:34.175696 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 17:39:34.175712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:39:34.175727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 17:39:34.175742 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 17:39:34.175758 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:39:34.175773 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 17:39:34.175788 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:39:34.175804 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 17:39:34.175822 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:39:34.175838 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 17:39:34.175853 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 17:39:34.175869 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 17:39:34.175923 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 17:39:34.175939 systemd[1]: Stopped systemd-fsck-usr.service. May 27 17:39:34.175955 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:39:34.175971 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:39:34.175989 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:39:34.176003 kernel: fuse: init (API version 7.41) May 27 17:39:34.176018 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:39:34.176037 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 17:39:34.176052 kernel: loop: module loaded May 27 17:39:34.176066 kernel: ACPI: bus type drm_connector registered May 27 17:39:34.176083 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 17:39:34.176100 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:39:34.176115 systemd[1]: verity-setup.service: Deactivated successfully. 
May 27 17:39:34.176130 systemd[1]: Stopped verity-setup.service. May 27 17:39:34.176146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:39:34.176164 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 17:39:34.176179 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 17:39:34.176204 systemd[1]: Mounted media.mount - External Media Directory. May 27 17:39:34.176220 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 17:39:34.176236 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 17:39:34.176251 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 17:39:34.176267 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 17:39:34.176282 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:39:34.176305 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 17:39:34.176321 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 17:39:34.176370 systemd-journald[1212]: Collecting audit messages is disabled. May 27 17:39:34.176398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:39:34.176414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:39:34.176430 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 17:39:34.176446 systemd-journald[1212]: Journal started May 27 17:39:34.176477 systemd-journald[1212]: Runtime Journal (/run/log/journal/7d1c3773d8984884a39e444b967bf36e) is 6M, max 48.2M, 42.2M free. May 27 17:39:33.851259 systemd[1]: Queued start job for default target multi-user.target. May 27 17:39:33.877256 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 17:39:33.877805 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 17:39:34.178384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 17:39:34.181040 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:39:34.183284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:39:34.183598 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:39:34.185515 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 17:39:34.185755 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 17:39:34.187513 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:39:34.187770 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:39:34.189387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:39:34.191139 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:39:34.192925 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 17:39:34.194836 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 17:39:34.212561 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:39:34.215723 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 17:39:34.218992 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
May 27 17:39:34.220307 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 17:39:34.220362 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:39:34.224028 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 17:39:34.235428 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 17:39:34.238097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:39:34.239973 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 17:39:34.243064 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 17:39:34.244523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 17:39:34.246424 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 17:39:34.249044 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 17:39:34.250357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:39:34.255082 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 17:39:34.265985 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 17:39:34.281866 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 17:39:34.283582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 17:39:34.287369 systemd-journald[1212]: Time spent on flushing to /var/log/journal/7d1c3773d8984884a39e444b967bf36e is 30.937ms for 1049 entries. May 27 17:39:34.287369 systemd-journald[1212]: System Journal (/var/log/journal/7d1c3773d8984884a39e444b967bf36e) is 8M, max 195.6M, 187.6M free. May 27 17:39:34.336557 systemd-journald[1212]: Received client request to flush runtime journal. May 27 17:39:34.336606 kernel: loop0: detected capacity change from 0 to 113872 May 27 17:39:34.297335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:39:34.317338 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 17:39:34.320846 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 17:39:34.326032 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 17:39:34.327803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:39:34.349691 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. May 27 17:39:34.349712 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. May 27 17:39:34.365308 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 17:39:34.367449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 17:39:34.380039 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 17:39:34.381701 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 17:39:34.398974 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
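A quick back-of-the-envelope check on the journald statistics above, under the assumption that the quoted 30.937 ms covers only the 1049 flushed entries and that the size figures are the quota values journald reports:

    flush_ms, entries = 30.937, 1049
    print(f"{flush_ms / entries * 1000:.1f} us per flushed entry")   # ~29.5 us

    journal_max_mb, journal_free_mb = 195.6, 187.6
    used_pct = 100 * (1 - journal_free_mb / journal_max_mb)
    print(f"{used_pct:.1f}% of the system journal quota in use")     # ~4.1%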
May 27 17:39:34.401613 kernel: loop1: detected capacity change from 0 to 229808 May 27 17:39:34.426018 kernel: loop2: detected capacity change from 0 to 146240 May 27 17:39:34.425981 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 17:39:34.429680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:39:34.460822 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. May 27 17:39:34.461204 kernel: loop3: detected capacity change from 0 to 113872 May 27 17:39:34.460843 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. May 27 17:39:34.467408 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:39:34.473920 kernel: loop4: detected capacity change from 0 to 229808 May 27 17:39:34.493239 kernel: loop5: detected capacity change from 0 to 146240 May 27 17:39:34.507574 (sd-merge)[1278]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 27 17:39:34.508585 (sd-merge)[1278]: Merged extensions into '/usr'. May 27 17:39:34.515607 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... May 27 17:39:34.515631 systemd[1]: Reloading... May 27 17:39:34.623107 zram_generator::config[1305]: No configuration found. May 27 17:39:34.802721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:39:34.827905 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 17:39:34.896726 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 17:39:34.897412 systemd[1]: Reloading finished in 380 ms. May 27 17:39:34.924208 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 17:39:34.926333 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 17:39:34.941612 systemd[1]: Starting ensure-sysext.service... May 27 17:39:34.944017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:39:34.968721 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)... May 27 17:39:34.968747 systemd[1]: Reloading... May 27 17:39:35.054907 zram_generator::config[1366]: No configuration found. May 27 17:39:35.185926 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 17:39:35.185983 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 17:39:35.186403 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 17:39:35.186725 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 17:39:35.187786 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 17:39:35.188181 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. May 27 17:39:35.188278 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. May 27 17:39:35.193361 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. 
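The sd-merge messages above show systemd-sysext combining the containerd-flatcar, docker-flatcar and kubernetes extension images with the base /usr tree. The snippet below is only a toy model of the layering idea (extensions contribute files on top of the base, with later layers winning on conflicts); it is not the read-only overlayfs mechanism sysext actually uses:

    def merge_layers(base: dict, extensions: list) -> dict:
        """Overlay extension file maps onto a base /usr map; later layers win."""
        merged = dict(base)
        for _name, files in extensions:
            merged.update(files)
        return merged

    base = {"/usr/bin/systemctl": "base"}
    exts = [
        ("containerd-flatcar", {"/usr/bin/containerd": "containerd-flatcar"}),
        ("docker-flatcar", {"/usr/bin/docker": "docker-flatcar"}),
        ("kubernetes", {"/usr/bin/kubelet": "kubernetes"}),
    ]
    print(sorted(merge_layers(base, exts)))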
May 27 17:39:35.193378 systemd-tmpfiles[1343]: Skipping /boot May 27 17:39:35.207995 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. May 27 17:39:35.208016 systemd-tmpfiles[1343]: Skipping /boot May 27 17:39:35.222760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:39:35.308856 systemd[1]: Reloading finished in 339 ms. May 27 17:39:35.325471 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 17:39:35.351132 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:39:35.363936 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:39:35.367758 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 17:39:35.370687 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 17:39:35.392821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 17:39:35.396723 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:39:35.401841 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 17:39:35.408534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:39:35.408801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:39:35.414196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:39:35.420071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:39:35.426263 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:39:35.427843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:39:35.428059 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:39:35.435738 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 17:39:35.437973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:39:35.441199 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 17:39:35.444810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:39:35.445357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:39:35.450166 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:39:35.450592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:39:35.472743 augenrules[1439]: No rules May 27 17:39:35.494028 systemd-udevd[1413]: Using default interface naming scheme 'v255'. May 27 17:39:35.495030 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:39:35.495382 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
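The "Duplicate line for path ..., ignoring" warnings in this stretch come from systemd-tmpfiles noticing the same path declared in more than one tmpfiles.d fragment. A rough stand-alone way to spot such duplicates, as a simplification (the real parser also merges /etc and /run fragments and compares more than just the path field):

    import glob
    import collections

    def duplicate_tmpfiles_paths(pattern: str = "/usr/lib/tmpfiles.d/*.conf") -> dict:
        """Report paths declared by more than one tmpfiles.d line, with locations."""
        seen = collections.defaultdict(list)
        for conf in sorted(glob.glob(pattern)):
            with open(conf, encoding="utf-8", errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    fields = line.split()
                    if len(fields) >= 2:
                        seen[fields[1]].append(f"{conf}:{lineno}")
        return {path: locs for path, locs in seen.items() if len(locs) > 1}

    print(duplicate_tmpfiles_paths())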
May 27 17:39:35.497445 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 17:39:35.499470 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:39:35.499782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:39:35.516722 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 17:39:35.521845 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:39:35.524085 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:39:35.525278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:39:35.527016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:39:35.530337 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 17:39:35.541804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:39:35.549786 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:39:35.551339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:39:35.551503 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:39:35.557012 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 17:39:35.558418 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 17:39:35.558604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 17:39:35.560308 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:39:35.562364 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 17:39:35.564441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:39:35.564680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:39:35.566366 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 17:39:35.566621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 17:39:35.575558 systemd[1]: Finished ensure-sysext.service. May 27 17:39:35.588005 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 17:39:35.594608 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 17:39:35.596334 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:39:35.596562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:39:35.599532 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:39:35.600418 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:39:35.605203 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 27 17:39:35.606030 augenrules[1453]: /sbin/augenrules: No change May 27 17:39:35.607994 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 17:39:35.615503 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 17:39:35.665697 augenrules[1514]: No rules May 27 17:39:35.667296 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:39:35.668178 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:39:35.675433 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 17:39:35.771967 kernel: mousedev: PS/2 mouse device common for all mice May 27 17:39:35.775089 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 17:39:35.777869 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 17:39:35.831217 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 17:39:35.838908 kernel: ACPI: button: Power Button [PWRF] May 27 17:39:35.842945 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 27 17:39:35.843596 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 17:39:35.843760 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 17:39:35.858081 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 17:39:35.903935 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 17:39:35.906063 systemd[1]: Reached target time-set.target - System Time Set. May 27 17:39:35.920415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:39:35.940429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:39:35.940994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:39:35.947194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:39:35.978588 systemd-resolved[1412]: Positive Trust Anchors: May 27 17:39:35.978614 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 17:39:35.978646 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 17:39:35.984049 systemd-resolved[1412]: Defaulting to hostname 'linux'. May 27 17:39:35.985718 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 17:39:35.987069 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 17:39:35.991282 systemd-networkd[1500]: lo: Link UP May 27 17:39:35.991297 systemd-networkd[1500]: lo: Gained carrier May 27 17:39:35.994807 systemd-networkd[1500]: Enumeration completed May 27 17:39:35.994921 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 17:39:35.996276 systemd[1]: Reached target network.target - Network. 
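systemd-resolved's negative trust anchor list above names zones for which DNSSEC validation is not attempted (private-use and reverse-lookup zones). A simplified suffix-match check against a subset of that list; the real logic lives inside resolved's DNSSEC code and also covers all the RFC 1918 and link-local reverse zones shown in the log:

    NEGATIVE_ANCHORS = {
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
        "ipv4only.arpa", "resolver.arpa", "corp", "home", "internal",
        "intranet", "lan", "local", "private", "test",
    }

    def under_negative_anchor(name: str, anchors=NEGATIVE_ANCHORS) -> bool:
        """True if name equals or is a subdomain of any anchor (label-wise suffix match)."""
        labels = name.rstrip(".").lower().split(".")
        return any(".".join(labels[i:]) in anchors for i in range(len(labels)))

    print(under_negative_anchor("printer.lan"))   # True
    print(under_negative_anchor("example.org"))   # False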
May 27 17:39:35.998973 systemd-networkd[1500]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:39:35.999004 systemd-networkd[1500]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:39:36.001171 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 17:39:36.001847 systemd-networkd[1500]: eth0: Link UP May 27 17:39:36.002159 systemd-networkd[1500]: eth0: Gained carrier May 27 17:39:36.002181 systemd-networkd[1500]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:39:36.006434 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 17:39:36.025963 systemd-networkd[1500]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 17:39:36.029090 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. May 27 17:39:38.141058 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 27 17:39:38.141114 systemd-timesyncd[1501]: Initial clock synchronization to Tue 2025-05-27 17:39:38.140844 UTC. May 27 17:39:38.141175 systemd-resolved[1412]: Clock change detected. Flushing caches. May 27 17:39:38.167997 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 17:39:38.186823 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:39:38.188326 systemd[1]: Reached target sysinit.target - System Initialization. May 27 17:39:38.191233 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 17:39:38.192653 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 17:39:38.194076 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 17:39:38.195790 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 17:39:38.197147 kernel: kvm_amd: TSC scaling supported May 27 17:39:38.200371 kernel: kvm_amd: Nested Virtualization enabled May 27 17:39:38.200398 kernel: kvm_amd: Nested Paging enabled May 27 17:39:38.200440 kernel: kvm_amd: LBR virtualization supported May 27 17:39:38.200465 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 27 17:39:38.200487 kernel: kvm_amd: Virtual GIF supported May 27 17:39:38.202199 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 17:39:38.203600 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 17:39:38.204977 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 17:39:38.205029 systemd[1]: Reached target paths.target - Path Units. May 27 17:39:38.206409 systemd[1]: Reached target timers.target - Timer Units. May 27 17:39:38.210783 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 17:39:38.215939 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 17:39:38.219991 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
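Note the journal timestamps jump from 17:39:36.029 to 17:39:38.141 at the point where systemd-timesyncd applies the time obtained from 10.0.0.1, and systemd-resolved reacts with "Clock change detected. Flushing caches." Treating the two logged timestamps as before/after readings gives an upper bound on the step, since some real time also elapsed between the entries:

    from datetime import datetime

    before = datetime.fromisoformat("2025-05-27 17:39:36.029090")
    after = datetime.fromisoformat("2025-05-27 17:39:38.140844")
    print(f"apparent forward step: {(after - before).total_seconds():.3f} s")   # ~2.112 s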
May 27 17:39:38.221548 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 17:39:38.222914 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 17:39:38.233320 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 17:39:38.234899 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 17:39:38.237246 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 17:39:38.239344 systemd[1]: Reached target sockets.target - Socket Units. May 27 17:39:38.240966 systemd[1]: Reached target basic.target - Basic System. May 27 17:39:38.242476 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 17:39:38.242552 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 17:39:38.243919 systemd[1]: Starting containerd.service - containerd container runtime... May 27 17:39:38.246993 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 17:39:38.249882 kernel: EDAC MC: Ver: 3.0.0 May 27 17:39:38.261204 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 17:39:38.271305 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 17:39:38.273992 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 17:39:38.275171 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 17:39:38.277999 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 17:39:38.288960 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 17:39:38.291385 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 17:39:38.292773 jq[1567]: false May 27 17:39:38.295977 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 17:39:38.298534 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache May 27 17:39:38.297737 oslogin_cache_refresh[1569]: Refreshing passwd entry cache May 27 17:39:38.299227 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 17:39:38.303983 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 17:39:38.306259 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 17:39:38.306786 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 17:39:38.308032 systemd[1]: Starting update-engine.service - Update Engine... May 27 17:39:38.311068 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 17:39:38.316362 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
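With dbus.socket listening and dbus.service started above, the services appearing in this log become reachable over the system bus. As a sketch only, assuming the godbus/dbus Go library, the following lists the well-known names currently owned on the bus; on this host that includes org.freedesktop.systemd1 and, once the corresponding services are up, org.freedesktop.resolve1 and org.freedesktop.timesync1.

package main

import (
	"fmt"

	"github.com/godbus/dbus/v5"
)

func main() {
	// Connect to the system bus provided by dbus.socket / dbus.service.
	conn, err := dbus.SystemBus()
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Ask the bus daemon which well-known names are currently owned.
	var names []string
	if err := conn.BusObject().Call("org.freedesktop.DBus.ListNames", 0).Store(&names); err != nil {
		panic(err)
	}
	for _, n := range names {
		fmt.Println(n)
	}
}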
May 27 17:39:38.320040 extend-filesystems[1568]: Found loop3 May 27 17:39:38.320040 extend-filesystems[1568]: Found loop4 May 27 17:39:38.320040 extend-filesystems[1568]: Found loop5 May 27 17:39:38.320040 extend-filesystems[1568]: Found sr0 May 27 17:39:38.320040 extend-filesystems[1568]: Found vda May 27 17:39:38.320040 extend-filesystems[1568]: Found vda1 May 27 17:39:38.320040 extend-filesystems[1568]: Found vda2 May 27 17:39:38.320040 extend-filesystems[1568]: Found vda3 May 27 17:39:38.320040 extend-filesystems[1568]: Found usr May 27 17:39:38.320040 extend-filesystems[1568]: Found vda4 May 27 17:39:38.320040 extend-filesystems[1568]: Found vda6 May 27 17:39:38.320040 extend-filesystems[1568]: Found vda7 May 27 17:39:38.320040 extend-filesystems[1568]: Found vda9 May 27 17:39:38.320040 extend-filesystems[1568]: Checking size of /dev/vda9 May 27 17:39:38.320000 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 17:39:38.323447 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 17:39:38.334530 jq[1582]: true May 27 17:39:38.323866 systemd[1]: motdgen.service: Deactivated successfully. May 27 17:39:38.324097 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 17:39:38.327314 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 17:39:38.327549 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 17:39:38.345388 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 17:39:38.346719 update_engine[1581]: I20250527 17:39:38.346621 1581 main.cc:92] Flatcar Update Engine starting May 27 17:39:38.348487 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting May 27 17:39:38.348487 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 17:39:38.348487 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache May 27 17:39:38.347972 oslogin_cache_refresh[1569]: Failure getting users, quitting May 27 17:39:38.347996 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 17:39:38.348055 oslogin_cache_refresh[1569]: Refreshing group entry cache May 27 17:39:38.349409 jq[1590]: true May 27 17:39:38.352489 extend-filesystems[1568]: Resized partition /dev/vda9 May 27 17:39:38.356987 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting May 27 17:39:38.356987 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 17:39:38.355601 oslogin_cache_refresh[1569]: Failure getting groups, quitting May 27 17:39:38.355613 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 17:39:38.362056 extend-filesystems[1604]: resize2fs 1.47.2 (1-Jan-2025) May 27 17:39:38.362413 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 17:39:38.389223 dbus-daemon[1564]: [system] SELinux support is enabled May 27 17:39:38.392399 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 17:39:38.394035 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 27 17:39:38.399819 tar[1588]: linux-amd64/LICENSE May 27 17:39:38.399819 tar[1588]: linux-amd64/helm May 27 17:39:38.398175 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 17:39:38.398202 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 17:39:38.399560 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 17:39:38.399580 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 17:39:38.404271 systemd[1]: Started update-engine.service - Update Engine. May 27 17:39:38.405618 update_engine[1581]: I20250527 17:39:38.404927 1581 update_check_scheduler.cc:74] Next update check in 3m31s May 27 17:39:38.405871 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 27 17:39:38.415800 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 17:39:38.439799 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 27 17:39:38.461848 extend-filesystems[1604]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 17:39:38.461848 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 17:39:38.461848 extend-filesystems[1604]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 27 17:39:38.470451 extend-filesystems[1568]: Resized filesystem in /dev/vda9 May 27 17:39:38.467145 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 17:39:38.467467 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 17:39:38.483042 bash[1623]: Updated "/home/core/.ssh/authorized_keys" May 27 17:39:38.485088 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 17:39:38.487358 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 27 17:39:38.489048 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button) May 27 17:39:38.489434 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 17:39:38.490998 systemd-logind[1577]: New seat seat0. May 27 17:39:38.492137 systemd[1]: Started systemd-logind.service - User Login Management. 
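For scale, the block counts in the resize above translate to sizes as follows (4 KiB blocks): 553,472 x 4,096 B is roughly 2.1 GiB before the resize and 1,864,699 x 4,096 B is roughly 7.1 GiB afterwards, consistent with the extend-filesystems first-boot step growing /dev/vda9 (mounted on /) to fill the available disk.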
May 27 17:39:38.534881 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 17:39:38.815829 containerd[1591]: time="2025-05-27T17:39:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 17:39:38.816527 containerd[1591]: time="2025-05-27T17:39:38.816459027Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 17:39:38.834924 containerd[1591]: time="2025-05-27T17:39:38.834793210Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="26.149µs" May 27 17:39:38.834924 containerd[1591]: time="2025-05-27T17:39:38.834899009Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 17:39:38.834924 containerd[1591]: time="2025-05-27T17:39:38.834936248Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 17:39:38.835397 containerd[1591]: time="2025-05-27T17:39:38.835348852Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 17:39:38.835397 containerd[1591]: time="2025-05-27T17:39:38.835390180Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 17:39:38.835460 containerd[1591]: time="2025-05-27T17:39:38.835432740Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:39:38.835581 containerd[1591]: time="2025-05-27T17:39:38.835543507Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:39:38.835581 containerd[1591]: time="2025-05-27T17:39:38.835566801Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:39:38.836033 containerd[1591]: time="2025-05-27T17:39:38.835992018Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:39:38.836033 containerd[1591]: time="2025-05-27T17:39:38.836017186Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:39:38.836033 containerd[1591]: time="2025-05-27T17:39:38.836030851Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:39:38.836139 containerd[1591]: time="2025-05-27T17:39:38.836041752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 17:39:38.836230 containerd[1591]: time="2025-05-27T17:39:38.836193657Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 17:39:38.836586 containerd[1591]: time="2025-05-27T17:39:38.836550756Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:39:38.836623 containerd[1591]: time="2025-05-27T17:39:38.836597364Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:39:38.836623 containerd[1591]: time="2025-05-27T17:39:38.836611270Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 17:39:38.836695 containerd[1591]: time="2025-05-27T17:39:38.836669218Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 17:39:38.837256 containerd[1591]: time="2025-05-27T17:39:38.837195666Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 17:39:38.837368 containerd[1591]: time="2025-05-27T17:39:38.837337922Z" level=info msg="metadata content store policy set" policy=shared May 27 17:39:38.923715 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 17:39:38.929114 containerd[1591]: time="2025-05-27T17:39:38.928824322Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.930995634Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.931214505Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.931241655Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.931269397Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.931283294Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.931299734Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.931343977Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.931385896Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.931415562Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.931464563Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 17:39:38.931887 containerd[1591]: time="2025-05-27T17:39:38.931499820Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 17:39:38.932454 containerd[1591]: time="2025-05-27T17:39:38.932111136Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 17:39:38.932454 containerd[1591]: time="2025-05-27T17:39:38.932159326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 17:39:38.932454 containerd[1591]: time="2025-05-27T17:39:38.932316441Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 17:39:38.932454 containerd[1591]: time="2025-05-27T17:39:38.932358861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 17:39:38.932454 containerd[1591]: time="2025-05-27T17:39:38.932386883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 17:39:38.932685 containerd[1591]: time="2025-05-27T17:39:38.932459580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 17:39:38.932685 containerd[1591]: time="2025-05-27T17:39:38.932491970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 17:39:38.932685 containerd[1591]: time="2025-05-27T17:39:38.932517799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 17:39:38.932685 containerd[1591]: time="2025-05-27T17:39:38.932577060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 17:39:38.932796 containerd[1591]: time="2025-05-27T17:39:38.932734776Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 17:39:38.932796 containerd[1591]: time="2025-05-27T17:39:38.932764321Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 17:39:38.933090 containerd[1591]: time="2025-05-27T17:39:38.932974365Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 17:39:38.933177 containerd[1591]: time="2025-05-27T17:39:38.933129295Z" level=info msg="Start snapshots syncer" May 27 17:39:38.934881 containerd[1591]: time="2025-05-27T17:39:38.933226969Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 17:39:38.934881 containerd[1591]: time="2025-05-27T17:39:38.933711307Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 17:39:38.935263 containerd[1591]: time="2025-05-27T17:39:38.933796166Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 17:39:38.936008 containerd[1591]: time="2025-05-27T17:39:38.935976886Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 17:39:38.936426 containerd[1591]: time="2025-05-27T17:39:38.936274444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 17:39:38.936986 containerd[1591]: time="2025-05-27T17:39:38.936942627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 17:39:38.937202 containerd[1591]: time="2025-05-27T17:39:38.936988032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 17:39:38.937202 containerd[1591]: time="2025-05-27T17:39:38.937016305Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 17:39:38.937202 containerd[1591]: time="2025-05-27T17:39:38.937089362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 17:39:38.937202 containerd[1591]: time="2025-05-27T17:39:38.937125049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 17:39:38.937202 containerd[1591]: time="2025-05-27T17:39:38.937152110Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 17:39:38.937372 containerd[1591]: time="2025-05-27T17:39:38.937256245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 17:39:38.937372 containerd[1591]: 
time="2025-05-27T17:39:38.937278206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 17:39:38.937444 containerd[1591]: time="2025-05-27T17:39:38.937399393Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 17:39:38.939167 containerd[1591]: time="2025-05-27T17:39:38.938640571Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:39:38.939233 containerd[1591]: time="2025-05-27T17:39:38.939171356Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:39:38.939233 containerd[1591]: time="2025-05-27T17:39:38.939183900Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:39:38.939276 containerd[1591]: time="2025-05-27T17:39:38.939251357Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:39:38.939276 containerd[1591]: time="2025-05-27T17:39:38.939263740Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 17:39:38.939276 containerd[1591]: time="2025-05-27T17:39:38.939275542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 17:39:38.939423 containerd[1591]: time="2025-05-27T17:39:38.939301330Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 17:39:38.939585 containerd[1591]: time="2025-05-27T17:39:38.939559524Z" level=info msg="runtime interface created" May 27 17:39:38.939585 containerd[1591]: time="2025-05-27T17:39:38.939577278Z" level=info msg="created NRI interface" May 27 17:39:38.949878 containerd[1591]: time="2025-05-27T17:39:38.948750000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 17:39:38.949878 containerd[1591]: time="2025-05-27T17:39:38.949242774Z" level=info msg="Connect containerd service" May 27 17:39:38.949878 containerd[1591]: time="2025-05-27T17:39:38.949291796Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 17:39:38.950661 containerd[1591]: time="2025-05-27T17:39:38.950625788Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:39:38.971351 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 17:39:38.975201 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 17:39:38.993426 systemd[1]: issuegen.service: Deactivated successfully. May 27 17:39:38.993712 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 17:39:38.997130 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 17:39:39.041385 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 17:39:39.044628 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 17:39:39.049125 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 17:39:39.050830 systemd[1]: Reached target getty.target - Login Prompts. 
May 27 17:39:39.187389 containerd[1591]: time="2025-05-27T17:39:39.187179600Z" level=info msg="Start subscribing containerd event" May 27 17:39:39.187570 containerd[1591]: time="2025-05-27T17:39:39.187480735Z" level=info msg="Start recovering state" May 27 17:39:39.187669 containerd[1591]: time="2025-05-27T17:39:39.187656545Z" level=info msg="Start event monitor" May 27 17:39:39.187690 containerd[1591]: time="2025-05-27T17:39:39.187677945Z" level=info msg="Start cni network conf syncer for default" May 27 17:39:39.187690 containerd[1591]: time="2025-05-27T17:39:39.187685219Z" level=info msg="Start streaming server" May 27 17:39:39.187726 containerd[1591]: time="2025-05-27T17:39:39.187707651Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 17:39:39.187726 containerd[1591]: time="2025-05-27T17:39:39.187717269Z" level=info msg="runtime interface starting up..." May 27 17:39:39.187726 containerd[1591]: time="2025-05-27T17:39:39.187724142Z" level=info msg="starting plugins..." May 27 17:39:39.187782 containerd[1591]: time="2025-05-27T17:39:39.187742626Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 17:39:39.187944 containerd[1591]: time="2025-05-27T17:39:39.187918326Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 17:39:39.188042 containerd[1591]: time="2025-05-27T17:39:39.188007303Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 17:39:39.188250 systemd[1]: Started containerd.service - containerd container runtime. May 27 17:39:39.189415 containerd[1591]: time="2025-05-27T17:39:39.188930103Z" level=info msg="containerd successfully booted in 0.375552s" May 27 17:39:39.194454 tar[1588]: linux-amd64/README.md May 27 17:39:39.227640 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 17:39:39.674067 systemd-networkd[1500]: eth0: Gained IPv6LL May 27 17:39:39.677433 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 17:39:39.679487 systemd[1]: Reached target network-online.target - Network is Online. May 27 17:39:39.682323 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 17:39:39.685048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:39:39.687384 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 17:39:39.725072 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 17:39:39.727038 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 17:39:39.727345 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 17:39:39.730953 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 17:39:40.071612 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:39:40.074522 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:48236.service - OpenSSH per-connection server daemon (10.0.0.1:48236). May 27 17:39:40.170191 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 48236 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:39:40.172291 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:40.178589 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 17:39:40.183822 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
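With containerd now serving on /run/containerd/containerd.sock and the "k8s.io" namespace registered (see the entries above), the runtime can be inspected programmatically. A small sketch using the containerd Go client follows; the import path matches the 1.x client, and containerd 2.x moved it under .../containerd/v2/client, so adjust for the release in use.

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket path as reported in the "serving..." entries above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The CRI plugin keeps its resources under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ver, err := client.Version(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("containerd", ver.Version, ver.Revision)

	imgs, err := client.ListImages(ctx)
	if err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.Name())
	}
}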
May 27 17:39:40.193449 systemd-logind[1577]: New session 1 of user core. May 27 17:39:40.218757 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:39:40.223686 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 17:39:40.239748 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:39:40.242374 systemd-logind[1577]: New session c1 of user core. May 27 17:39:40.460418 systemd[1695]: Queued start job for default target default.target. May 27 17:39:40.471399 systemd[1695]: Created slice app.slice - User Application Slice. May 27 17:39:40.471431 systemd[1695]: Reached target paths.target - Paths. May 27 17:39:40.471493 systemd[1695]: Reached target timers.target - Timers. May 27 17:39:40.473189 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:39:40.486828 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:39:40.487015 systemd[1695]: Reached target sockets.target - Sockets. May 27 17:39:40.487082 systemd[1695]: Reached target basic.target - Basic System. May 27 17:39:40.487133 systemd[1695]: Reached target default.target - Main User Target. May 27 17:39:40.487173 systemd[1695]: Startup finished in 195ms. May 27 17:39:40.487687 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 17:39:40.490692 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:39:40.557075 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:48248.service - OpenSSH per-connection server daemon (10.0.0.1:48248). May 27 17:39:40.613476 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 48248 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:39:40.615171 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:40.619835 systemd-logind[1577]: New session 2 of user core. May 27 17:39:40.630003 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 17:39:40.687036 sshd[1708]: Connection closed by 10.0.0.1 port 48248 May 27 17:39:40.687491 sshd-session[1706]: pam_unix(sshd:session): session closed for user core May 27 17:39:40.700376 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:48248.service: Deactivated successfully. May 27 17:39:40.702198 systemd[1]: session-2.scope: Deactivated successfully. May 27 17:39:40.702964 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. May 27 17:39:40.705474 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:48250.service - OpenSSH per-connection server daemon (10.0.0.1:48250). May 27 17:39:40.708258 systemd-logind[1577]: Removed session 2. May 27 17:39:40.757644 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 48250 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:39:40.759353 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:40.764194 systemd-logind[1577]: New session 3 of user core. May 27 17:39:40.874199 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:39:40.928667 sshd[1716]: Connection closed by 10.0.0.1 port 48250 May 27 17:39:40.930069 sshd-session[1714]: pam_unix(sshd:session): session closed for user core May 27 17:39:40.934026 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:48250.service: Deactivated successfully. May 27 17:39:40.936186 systemd[1]: session-3.scope: Deactivated successfully. 
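The sshd entries above show the core user authenticating with an RSA public key from 10.0.0.1, matching the authorized_keys update logged earlier. For illustration, assuming golang.org/x/crypto/ssh and a matching private key file (the key path below is hypothetical), a client performing the same kind of publickey login and running a single command might be sketched as:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical path to the private key whose fingerprint sshd logs above.
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a sketch, not for production
	}

	// Node address as acquired via DHCP earlier in the log.
	client, err := ssh.Dial("tcp", "10.0.0.45:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("systemctl is-system-running")
	fmt.Printf("%s (err: %v)\n", out, err)
}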
May 27 17:39:40.937040 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. May 27 17:39:40.938512 systemd-logind[1577]: Removed session 3. May 27 17:39:41.187199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:39:41.189023 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 17:39:41.190785 systemd[1]: Startup finished in 3.043s (kernel) + 8.627s (initrd) + 5.844s (userspace) = 17.515s. May 27 17:39:41.193579 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:39:41.924753 kubelet[1726]: E0527 17:39:41.924676 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:39:41.929208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:39:41.929452 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:39:41.929928 systemd[1]: kubelet.service: Consumed 2.148s CPU time, 267.9M memory peak. May 27 17:39:50.940642 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:42408.service - OpenSSH per-connection server daemon (10.0.0.1:42408). May 27 17:39:50.978222 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 42408 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:39:50.979665 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:50.984158 systemd-logind[1577]: New session 4 of user core. May 27 17:39:50.993992 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 17:39:51.049173 sshd[1741]: Connection closed by 10.0.0.1 port 42408 May 27 17:39:51.049457 sshd-session[1739]: pam_unix(sshd:session): session closed for user core May 27 17:39:51.059697 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:42408.service: Deactivated successfully. May 27 17:39:51.061483 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:39:51.062313 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. May 27 17:39:51.065061 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:42410.service - OpenSSH per-connection server daemon (10.0.0.1:42410). May 27 17:39:51.065693 systemd-logind[1577]: Removed session 4. May 27 17:39:51.116776 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 42410 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:39:51.118082 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:51.122498 systemd-logind[1577]: New session 5 of user core. May 27 17:39:51.132166 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 17:39:51.183390 sshd[1749]: Connection closed by 10.0.0.1 port 42410 May 27 17:39:51.183808 sshd-session[1747]: pam_unix(sshd:session): session closed for user core May 27 17:39:51.196557 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:42410.service: Deactivated successfully. May 27 17:39:51.198300 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:39:51.199094 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. 
May 27 17:39:51.201977 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:42412.service - OpenSSH per-connection server daemon (10.0.0.1:42412). May 27 17:39:51.202521 systemd-logind[1577]: Removed session 5. May 27 17:39:51.244844 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 42412 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:39:51.246368 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:51.251117 systemd-logind[1577]: New session 6 of user core. May 27 17:39:51.265055 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 17:39:51.320645 sshd[1757]: Connection closed by 10.0.0.1 port 42412 May 27 17:39:51.321035 sshd-session[1755]: pam_unix(sshd:session): session closed for user core May 27 17:39:51.329588 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:42412.service: Deactivated successfully. May 27 17:39:51.331459 systemd[1]: session-6.scope: Deactivated successfully. May 27 17:39:51.332288 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. May 27 17:39:51.335070 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:42428.service - OpenSSH per-connection server daemon (10.0.0.1:42428). May 27 17:39:51.335655 systemd-logind[1577]: Removed session 6. May 27 17:39:51.388444 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 42428 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:39:51.390050 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:51.394437 systemd-logind[1577]: New session 7 of user core. May 27 17:39:51.407001 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 17:39:51.464270 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 17:39:51.464582 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:39:51.482286 sudo[1767]: pam_unix(sudo:session): session closed for user root May 27 17:39:51.484125 sshd[1766]: Connection closed by 10.0.0.1 port 42428 May 27 17:39:51.484548 sshd-session[1763]: pam_unix(sshd:session): session closed for user core May 27 17:39:51.501764 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:42428.service: Deactivated successfully. May 27 17:39:51.503602 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:39:51.504423 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. May 27 17:39:51.507346 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:42436.service - OpenSSH per-connection server daemon (10.0.0.1:42436). May 27 17:39:51.507953 systemd-logind[1577]: Removed session 7. May 27 17:39:51.560156 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 42436 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:39:51.561543 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:51.566109 systemd-logind[1577]: New session 8 of user core. May 27 17:39:51.580007 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 27 17:39:51.634644 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 17:39:51.635008 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:39:51.641554 sudo[1777]: pam_unix(sudo:session): session closed for user root May 27 17:39:51.648129 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 17:39:51.648514 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:39:51.660032 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:39:51.710889 augenrules[1799]: No rules May 27 17:39:51.712458 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:39:51.712719 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:39:51.714007 sudo[1776]: pam_unix(sudo:session): session closed for user root May 27 17:39:51.715510 sshd[1775]: Connection closed by 10.0.0.1 port 42436 May 27 17:39:51.715698 sshd-session[1773]: pam_unix(sshd:session): session closed for user core May 27 17:39:51.733779 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:42436.service: Deactivated successfully. May 27 17:39:51.735606 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:39:51.736431 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. May 27 17:39:51.739727 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:42446.service - OpenSSH per-connection server daemon (10.0.0.1:42446). May 27 17:39:51.740393 systemd-logind[1577]: Removed session 8. May 27 17:39:51.787548 sshd[1808]: Accepted publickey for core from 10.0.0.1 port 42446 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:39:51.789012 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:39:51.793521 systemd-logind[1577]: New session 9 of user core. May 27 17:39:51.803984 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 17:39:51.856424 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:39:51.856750 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:39:52.160489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 17:39:52.162031 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 17:39:52.163185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:39:52.186286 (dockerd)[1832]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:39:52.413779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:39:52.421510 dockerd[1832]: time="2025-05-27T17:39:52.421458157Z" level=info msg="Starting up" May 27 17:39:52.422935 dockerd[1832]: time="2025-05-27T17:39:52.422906313Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:39:52.429220 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:39:53.082917 kubelet[1851]: E0527 17:39:53.082792 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:39:53.090304 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:39:53.090501 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:39:53.090949 systemd[1]: kubelet.service: Consumed 302ms CPU time, 111.4M memory peak. May 27 17:39:53.211973 dockerd[1832]: time="2025-05-27T17:39:53.211891682Z" level=info msg="Loading containers: start." May 27 17:39:53.283901 kernel: Initializing XFRM netlink socket May 27 17:39:53.647202 systemd-networkd[1500]: docker0: Link UP May 27 17:39:53.718785 dockerd[1832]: time="2025-05-27T17:39:53.718708554Z" level=info msg="Loading containers: done." May 27 17:39:53.734516 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck247683396-merged.mount: Deactivated successfully. May 27 17:39:53.735152 dockerd[1832]: time="2025-05-27T17:39:53.735108300Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:39:53.735223 dockerd[1832]: time="2025-05-27T17:39:53.735192217Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:39:53.735339 dockerd[1832]: time="2025-05-27T17:39:53.735307092Z" level=info msg="Initializing buildkit" May 27 17:39:53.765794 dockerd[1832]: time="2025-05-27T17:39:53.765722643Z" level=info msg="Completed buildkit initialization" May 27 17:39:53.772397 dockerd[1832]: time="2025-05-27T17:39:53.772353548Z" level=info msg="Daemon has completed initialization" May 27 17:39:53.772711 dockerd[1832]: time="2025-05-27T17:39:53.772677415Z" level=info msg="API listen on /run/docker.sock" May 27 17:39:53.772896 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 17:39:54.474494 containerd[1591]: time="2025-05-27T17:39:54.474440712Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 17:39:55.106224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1839640535.mount: Deactivated successfully. 
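Once dockerd reports "API listen on /run/docker.sock" above, the engine can be queried over that socket. A small sketch with the Docker Engine Go SDK (github.com/docker/docker/client), assuming it runs as root or as a member of the docker group; FromEnv falls back to the default unix:///var/run/docker.sock, which resolves to /run/docker.sock on systemd hosts where /var/run is a symlink.

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx := context.Background()

	ver, err := cli.ServerVersion(ctx)
	if err != nil {
		panic(err)
	}
	// The log above reports version=28.0.1 with the overlay2 storage driver.
	fmt.Println("engine:", ver.Version, "api:", ver.APIVersion)

	info, err := cli.Info(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("storage driver:", info.Driver, "containers:", info.Containers)
}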
May 27 17:39:56.194849 containerd[1591]: time="2025-05-27T17:39:56.194786524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:56.195582 containerd[1591]: time="2025-05-27T17:39:56.195514119Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403" May 27 17:39:56.196614 containerd[1591]: time="2025-05-27T17:39:56.196577964Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:56.198932 containerd[1591]: time="2025-05-27T17:39:56.198904888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:56.199708 containerd[1591]: time="2025-05-27T17:39:56.199642341Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.725160091s" May 27 17:39:56.199755 containerd[1591]: time="2025-05-27T17:39:56.199708595Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 27 17:39:56.200288 containerd[1591]: time="2025-05-27T17:39:56.200257114Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 17:39:58.095882 containerd[1591]: time="2025-05-27T17:39:58.095773093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:58.097033 containerd[1591]: time="2025-05-27T17:39:58.097008399Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390" May 27 17:39:58.098499 containerd[1591]: time="2025-05-27T17:39:58.098467987Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:58.101615 containerd[1591]: time="2025-05-27T17:39:58.101545007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:58.102644 containerd[1591]: time="2025-05-27T17:39:58.102598583Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.902290814s" May 27 17:39:58.102716 containerd[1591]: time="2025-05-27T17:39:58.102664497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 27 17:39:58.103410 
containerd[1591]: time="2025-05-27T17:39:58.103366343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 17:39:59.628031 containerd[1591]: time="2025-05-27T17:39:59.627970710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:59.628917 containerd[1591]: time="2025-05-27T17:39:59.628885015Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960" May 27 17:39:59.629950 containerd[1591]: time="2025-05-27T17:39:59.629917010Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:59.632634 containerd[1591]: time="2025-05-27T17:39:59.632600443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:39:59.633645 containerd[1591]: time="2025-05-27T17:39:59.633600157Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.530200362s" May 27 17:39:59.633703 containerd[1591]: time="2025-05-27T17:39:59.633647376Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 27 17:39:59.634499 containerd[1591]: time="2025-05-27T17:39:59.634458738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 17:40:01.259203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578157464.mount: Deactivated successfully. 
May 27 17:40:01.577058 containerd[1591]: time="2025-05-27T17:40:01.576914445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:01.577779 containerd[1591]: time="2025-05-27T17:40:01.577744282Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 27 17:40:01.579309 containerd[1591]: time="2025-05-27T17:40:01.579272908Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:01.582075 containerd[1591]: time="2025-05-27T17:40:01.582026322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:01.583049 containerd[1591]: time="2025-05-27T17:40:01.582835910Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.948345713s" May 27 17:40:01.583049 containerd[1591]: time="2025-05-27T17:40:01.582919126Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 27 17:40:01.583716 containerd[1591]: time="2025-05-27T17:40:01.583650808Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 17:40:02.100783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264422848.mount: Deactivated successfully. May 27 17:40:03.231585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 17:40:03.233805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:40:03.502843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:40:03.506572 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:40:03.567771 kubelet[2182]: E0527 17:40:03.567692 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:40:03.572913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:40:03.573132 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:40:03.573567 systemd[1]: kubelet.service: Consumed 292ms CPU time, 110.5M memory peak. 
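The repeated kubelet failures above all have the same cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style node that file is generated by kubeadm init or kubeadm join, so the unit keeps hitting its scheduled restarts until that happens. Purely as an illustration of what a minimal KubeletConfiguration looks like (the field values are assumptions for the sketch, not what this node will actually receive), the following writes one:

package main

import "os"

// Illustrative only: on a real node kubeadm writes this file during
// "kubeadm init" / "kubeadm join"; the values below are assumptions.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		panic(err)
	}
	// Path taken from the error messages in the log above.
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		panic(err)
	}
}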
May 27 17:40:04.216627 containerd[1591]: time="2025-05-27T17:40:04.216535237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:04.217704 containerd[1591]: time="2025-05-27T17:40:04.217602548Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" May 27 17:40:04.219707 containerd[1591]: time="2025-05-27T17:40:04.219651400Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:04.224156 containerd[1591]: time="2025-05-27T17:40:04.224068715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:04.225139 containerd[1591]: time="2025-05-27T17:40:04.225075162Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.641389438s" May 27 17:40:04.225139 containerd[1591]: time="2025-05-27T17:40:04.225116339Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 27 17:40:04.225946 containerd[1591]: time="2025-05-27T17:40:04.225723568Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 17:40:04.785644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2896476808.mount: Deactivated successfully. 
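The image pulls above go through containerd's CRI plugin rather than the Docker engine. The same ImageService API the kubelet uses can be exercised directly over the containerd socket; a sketch with k8s.io/cri-api and gRPC follows, assuming a recent grpc-go (grpc.NewClient; older releases use grpc.Dial) and reusing the coredns reference pulled above.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same socket the kubelet is pointed at; CRI is served over gRPC on it.
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	images := runtimeapi.NewImageServiceClient(conn)
	resp, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.12.0"},
	})
	if err != nil {
		panic(err)
	}
	// Returns the resolved image reference (digest) once the pull completes.
	fmt.Println("pulled:", resp.ImageRef)
}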
May 27 17:40:04.793516 containerd[1591]: time="2025-05-27T17:40:04.793420122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:40:04.794287 containerd[1591]: time="2025-05-27T17:40:04.794232215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 17:40:04.795697 containerd[1591]: time="2025-05-27T17:40:04.795651306Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:40:04.797956 containerd[1591]: time="2025-05-27T17:40:04.797916885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:40:04.798416 containerd[1591]: time="2025-05-27T17:40:04.798372128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 572.617643ms" May 27 17:40:04.798448 containerd[1591]: time="2025-05-27T17:40:04.798416862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 17:40:04.799048 containerd[1591]: time="2025-05-27T17:40:04.799006839Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 17:40:07.912129 containerd[1591]: time="2025-05-27T17:40:07.912044335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:07.912898 containerd[1591]: time="2025-05-27T17:40:07.912836401Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 27 17:40:07.914054 containerd[1591]: time="2025-05-27T17:40:07.914017035Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:07.917132 containerd[1591]: time="2025-05-27T17:40:07.917103513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:07.920055 containerd[1591]: time="2025-05-27T17:40:07.919980759Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.120932893s" May 27 17:40:07.920055 containerd[1591]: time="2025-05-27T17:40:07.920051061Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 27 17:40:10.996272 systemd[1]: Stopped kubelet.service - kubelet: The 
Kubernetes Node Agent. May 27 17:40:10.996438 systemd[1]: kubelet.service: Consumed 292ms CPU time, 110.5M memory peak. May 27 17:40:10.998602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:40:11.020262 systemd[1]: Reload requested from client PID 2244 ('systemctl') (unit session-9.scope)... May 27 17:40:11.020278 systemd[1]: Reloading... May 27 17:40:11.104966 zram_generator::config[2290]: No configuration found. May 27 17:40:11.231677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:40:11.350048 systemd[1]: Reloading finished in 329 ms. May 27 17:40:11.432776 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 17:40:11.432931 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 17:40:11.433277 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:40:11.433336 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.1M memory peak. May 27 17:40:11.435191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:40:11.610265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:40:11.629169 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:40:11.700077 kubelet[2335]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:40:11.700077 kubelet[2335]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:40:11.700077 kubelet[2335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
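The kubelet's deprecation warnings above all point at the config file passed via --config instead of command-line flags. Below is a rough sketch of what that migration could look like, rendered as a Python dict dumped to JSON purely for illustration; the field names containerRuntimeEndpoint, volumePluginDir and staticPodPath are assumptions taken from the KubeletConfiguration v1beta1 schema and should be checked against the documentation URL printed in the log, and --pod-infra-container-image has no config-file equivalent at all (per the message above, the image garbage collector takes the sandbox image from the CRI runtime instead):

```python
import json

# Hypothetical config-file replacement for the deprecated flags logged above.
# Values mirror what this host appears to use; adjust before relying on them.
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # --container-runtime-endpoint -> containerRuntimeEndpoint (assumed field name)
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # --volume-plugin-dir -> volumePluginDir; the kubelet recreates this
    # Flexvolume directory a few entries further down in the journal
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    # matches the "Adding static pod path" entry that follows
    "staticPodPath": "/etc/kubernetes/manifests",
}

if __name__ == "__main__":
    with open("kubelet-config.json", "w") as f:
        json.dump(kubelet_config, f, indent=2)
    print("wrote kubelet-config.json (illustrative only)")
```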
May 27 17:40:11.700496 kubelet[2335]: I0527 17:40:11.700126 2335 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:40:12.180938 kubelet[2335]: I0527 17:40:12.180849 2335 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:40:12.180938 kubelet[2335]: I0527 17:40:12.180918 2335 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:40:12.181250 kubelet[2335]: I0527 17:40:12.181224 2335 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:40:12.216884 kubelet[2335]: I0527 17:40:12.216805 2335 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:40:12.217031 kubelet[2335]: E0527 17:40:12.216922 2335 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 17:40:12.228041 kubelet[2335]: I0527 17:40:12.227988 2335 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:40:12.235359 kubelet[2335]: I0527 17:40:12.235306 2335 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 17:40:12.235725 kubelet[2335]: I0527 17:40:12.235669 2335 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:40:12.235931 kubelet[2335]: I0527 17:40:12.235707 2335 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:40:12.236091 kubelet[2335]: I0527 17:40:12.235933 2335 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:40:12.236091 kubelet[2335]: I0527 17:40:12.235946 2335 
container_manager_linux.go:303] "Creating device plugin manager" May 27 17:40:12.237184 kubelet[2335]: I0527 17:40:12.237153 2335 state_mem.go:36] "Initialized new in-memory state store" May 27 17:40:12.241916 kubelet[2335]: I0527 17:40:12.241882 2335 kubelet.go:480] "Attempting to sync node with API server" May 27 17:40:12.241916 kubelet[2335]: I0527 17:40:12.241916 2335 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:40:12.242034 kubelet[2335]: I0527 17:40:12.241949 2335 kubelet.go:386] "Adding apiserver pod source" May 27 17:40:12.244681 kubelet[2335]: I0527 17:40:12.244485 2335 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:40:12.286561 kubelet[2335]: E0527 17:40:12.286503 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 17:40:12.288355 kubelet[2335]: E0527 17:40:12.288282 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 17:40:12.288895 kubelet[2335]: I0527 17:40:12.288841 2335 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:40:12.289664 kubelet[2335]: I0527 17:40:12.289619 2335 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 17:40:12.290693 kubelet[2335]: W0527 17:40:12.290652 2335 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
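The container-manager line above embeds the entire node config as a JSON object after nodeConfig=, including the hard-eviction thresholds (memory.available under 100Mi, nodefs.available under 10%, and so on). A small sketch that pulls that object out of a journal line and prints the thresholds; json.JSONDecoder.raw_decode parses exactly one JSON value and ignores the log text that continues after the closing brace:

```python
import json
import sys

def node_config(line):
    """Return the JSON object following 'nodeConfig=' in a kubelet log line, or None."""
    marker = "nodeConfig="
    start = line.find(marker)
    if start == -1:
        return None
    # raw_decode stops at the end of the JSON object and ignores the rest.
    obj, _ = json.JSONDecoder().raw_decode(line[start + len(marker):])
    return obj

if __name__ == "__main__":
    for line in sys.stdin:
        cfg = node_config(line)
        if not cfg:
            continue
        for t in cfg.get("HardEvictionThresholds", []):
            value = t["Value"]
            limit = value.get("Quantity") or f'{value.get("Percentage", 0):.0%}'
            print(f'{t["Signal"]} {t["Operator"]} {limit}')
```

Fed the journal above, it would print the five signals shown in the nodeConfig blob (memory.available, nodefs.available, nodefs.inodesFree, imagefs.available, imagefs.inodesFree) with their quantity or percentage limits.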
May 27 17:40:12.296917 kubelet[2335]: I0527 17:40:12.296892 2335 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:40:12.297029 kubelet[2335]: I0527 17:40:12.296957 2335 server.go:1289] "Started kubelet" May 27 17:40:12.298719 kubelet[2335]: I0527 17:40:12.298483 2335 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:40:12.300320 kubelet[2335]: I0527 17:40:12.299619 2335 server.go:317] "Adding debug handlers to kubelet server" May 27 17:40:12.300461 kubelet[2335]: I0527 17:40:12.300430 2335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:40:12.304832 kubelet[2335]: I0527 17:40:12.304492 2335 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:40:12.307675 kubelet[2335]: I0527 17:40:12.307303 2335 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:40:12.307675 kubelet[2335]: E0527 17:40:12.307461 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:12.307675 kubelet[2335]: I0527 17:40:12.307647 2335 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:40:12.307806 kubelet[2335]: I0527 17:40:12.307749 2335 reconciler.go:26] "Reconciler: start to sync state" May 27 17:40:12.309783 kubelet[2335]: E0527 17:40:12.308533 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 17:40:12.309783 kubelet[2335]: E0527 17:40:12.308654 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms" May 27 17:40:12.309783 kubelet[2335]: I0527 17:40:12.308954 2335 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:40:12.309783 kubelet[2335]: E0527 17:40:12.307000 2335 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1843731483c171e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 17:40:12.296917472 +0000 UTC m=+0.660600336,LastTimestamp:2025-05-27 17:40:12.296917472 +0000 UTC m=+0.660600336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 17:40:12.309783 kubelet[2335]: I0527 17:40:12.309428 2335 factory.go:223] Registration of the systemd container factory successfully May 27 17:40:12.309783 kubelet[2335]: I0527 17:40:12.309527 2335 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:40:12.310031 kubelet[2335]: 
I0527 17:40:12.309735 2335 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:40:12.311369 kubelet[2335]: E0527 17:40:12.311343 2335 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:40:12.312412 kubelet[2335]: I0527 17:40:12.312385 2335 factory.go:223] Registration of the containerd container factory successfully May 27 17:40:12.329137 kubelet[2335]: I0527 17:40:12.329101 2335 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:40:12.329137 kubelet[2335]: I0527 17:40:12.329121 2335 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:40:12.329137 kubelet[2335]: I0527 17:40:12.329141 2335 state_mem.go:36] "Initialized new in-memory state store" May 27 17:40:12.330211 kubelet[2335]: I0527 17:40:12.330167 2335 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 17:40:12.331681 kubelet[2335]: I0527 17:40:12.331588 2335 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 17:40:12.331681 kubelet[2335]: I0527 17:40:12.331653 2335 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:40:12.331681 kubelet[2335]: I0527 17:40:12.331688 2335 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:40:12.331681 kubelet[2335]: I0527 17:40:12.331704 2335 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:40:12.331915 kubelet[2335]: E0527 17:40:12.331769 2335 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:40:12.332510 kubelet[2335]: E0527 17:40:12.332480 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 17:40:12.408436 kubelet[2335]: E0527 17:40:12.408352 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:12.432955 kubelet[2335]: E0527 17:40:12.432756 2335 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 17:40:12.508598 kubelet[2335]: E0527 17:40:12.508495 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:12.510345 kubelet[2335]: E0527 17:40:12.510294 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" May 27 17:40:12.608663 kubelet[2335]: E0527 17:40:12.608594 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:12.633848 kubelet[2335]: E0527 17:40:12.633783 2335 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 17:40:12.709709 kubelet[2335]: E0527 17:40:12.709547 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" May 27 17:40:12.809789 kubelet[2335]: E0527 17:40:12.809718 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:12.910369 kubelet[2335]: E0527 17:40:12.910309 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:12.910821 kubelet[2335]: E0527 17:40:12.910779 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="800ms" May 27 17:40:13.011526 kubelet[2335]: E0527 17:40:13.011445 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:13.034809 kubelet[2335]: E0527 17:40:13.034728 2335 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 17:40:13.112526 kubelet[2335]: E0527 17:40:13.112455 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:13.213190 kubelet[2335]: E0527 17:40:13.213095 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:13.301839 kubelet[2335]: I0527 17:40:13.301693 2335 policy_none.go:49] "None policy: Start" May 27 17:40:13.301839 kubelet[2335]: I0527 17:40:13.301749 2335 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:40:13.301839 kubelet[2335]: I0527 17:40:13.301775 2335 state_mem.go:35] "Initializing new in-memory state store" May 27 17:40:13.314052 kubelet[2335]: E0527 17:40:13.313981 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:13.415073 kubelet[2335]: E0527 17:40:13.415006 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:13.436140 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 17:40:13.456869 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 17:40:13.461032 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 17:40:13.476336 kubelet[2335]: E0527 17:40:13.476206 2335 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:40:13.476700 kubelet[2335]: I0527 17:40:13.476561 2335 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:40:13.476700 kubelet[2335]: I0527 17:40:13.476593 2335 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:40:13.477360 kubelet[2335]: I0527 17:40:13.476935 2335 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:40:13.478131 kubelet[2335]: E0527 17:40:13.478099 2335 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 17:40:13.478195 kubelet[2335]: E0527 17:40:13.478161 2335 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 17:40:13.479094 kubelet[2335]: E0527 17:40:13.479053 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 17:40:13.557609 kubelet[2335]: E0527 17:40:13.557430 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 17:40:13.579235 kubelet[2335]: I0527 17:40:13.579170 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:40:13.579723 kubelet[2335]: E0527 17:40:13.579664 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" May 27 17:40:13.672066 kubelet[2335]: E0527 17:40:13.672014 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 17:40:13.697836 kubelet[2335]: E0527 17:40:13.697789 2335 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 17:40:13.711648 kubelet[2335]: E0527 17:40:13.711605 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="1.6s" May 27 17:40:13.781473 kubelet[2335]: I0527 17:40:13.781416 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:40:13.781831 kubelet[2335]: E0527 17:40:13.781791 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" May 27 17:40:13.918504 kubelet[2335]: I0527 17:40:13.918304 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 17:40:13.943638 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice. 
May 27 17:40:13.954097 kubelet[2335]: E0527 17:40:13.954045 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:40:14.016246 systemd[1]: Created slice kubepods-burstable-pod4d8010abdb8c9bf0a771271053b4255f.slice - libcontainer container kubepods-burstable-pod4d8010abdb8c9bf0a771271053b4255f.slice. May 27 17:40:14.018441 kubelet[2335]: E0527 17:40:14.018409 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:40:14.018655 kubelet[2335]: I0527 17:40:14.018631 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d8010abdb8c9bf0a771271053b4255f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d8010abdb8c9bf0a771271053b4255f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:40:14.018697 kubelet[2335]: I0527 17:40:14.018651 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d8010abdb8c9bf0a771271053b4255f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d8010abdb8c9bf0a771271053b4255f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:40:14.018697 kubelet[2335]: I0527 17:40:14.018678 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d8010abdb8c9bf0a771271053b4255f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4d8010abdb8c9bf0a771271053b4255f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:40:14.119050 kubelet[2335]: I0527 17:40:14.118992 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:40:14.119186 kubelet[2335]: I0527 17:40:14.119076 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:40:14.119186 kubelet[2335]: I0527 17:40:14.119111 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:40:14.119186 kubelet[2335]: I0527 17:40:14.119136 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:40:14.119272 kubelet[2335]: I0527 17:40:14.119188 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:40:14.178069 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice. May 27 17:40:14.180550 kubelet[2335]: E0527 17:40:14.180517 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:40:14.183786 kubelet[2335]: I0527 17:40:14.183755 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:40:14.184157 kubelet[2335]: E0527 17:40:14.184119 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" May 27 17:40:14.255453 kubelet[2335]: E0527 17:40:14.255389 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:14.256238 containerd[1591]: time="2025-05-27T17:40:14.256202801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 27 17:40:14.319639 kubelet[2335]: E0527 17:40:14.319583 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:14.320170 containerd[1591]: time="2025-05-27T17:40:14.320136324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4d8010abdb8c9bf0a771271053b4255f,Namespace:kube-system,Attempt:0,}" May 27 17:40:14.323450 kubelet[2335]: E0527 17:40:14.323420 2335 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 17:40:14.475663 containerd[1591]: time="2025-05-27T17:40:14.475130828Z" level=info msg="connecting to shim 75112bcd4fe265a0d06a3c13a8d05db14788af3f4d675b5926bd89ccc9cc1ace" address="unix:///run/containerd/s/0d97745bed99384205c4f7780a5d081c75adcf1612027d7ee62bd5e221afd8f7" namespace=k8s.io protocol=ttrpc version=3 May 27 17:40:14.481447 kubelet[2335]: E0527 17:40:14.481400 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:14.482600 containerd[1591]: time="2025-05-27T17:40:14.482396284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 27 17:40:14.486924 containerd[1591]: time="2025-05-27T17:40:14.486146060Z" level=info msg="connecting to shim 0981a58917eade729863de65bf2831d9884cbfa98661b41bfb50a7427757cb26" address="unix:///run/containerd/s/191d67db8e1d097741e7f94568c56ea6e527a07c2cae313ac5c7b1630e94ff8e" namespace=k8s.io 
protocol=ttrpc version=3 May 27 17:40:14.581176 systemd[1]: Started cri-containerd-0981a58917eade729863de65bf2831d9884cbfa98661b41bfb50a7427757cb26.scope - libcontainer container 0981a58917eade729863de65bf2831d9884cbfa98661b41bfb50a7427757cb26. May 27 17:40:14.583610 systemd[1]: Started cri-containerd-75112bcd4fe265a0d06a3c13a8d05db14788af3f4d675b5926bd89ccc9cc1ace.scope - libcontainer container 75112bcd4fe265a0d06a3c13a8d05db14788af3f4d675b5926bd89ccc9cc1ace. May 27 17:40:14.589472 containerd[1591]: time="2025-05-27T17:40:14.589402470Z" level=info msg="connecting to shim 7337244f656ba089dd6b560f752358f0ce7f8b1ab94e32d6fa7f2e4e2b4fe627" address="unix:///run/containerd/s/6540056056cb6897f1b1448bdf92189a46d511a89a4bc662f99d9c63a9bfdd39" namespace=k8s.io protocol=ttrpc version=3 May 27 17:40:14.702016 systemd[1]: Started cri-containerd-7337244f656ba089dd6b560f752358f0ce7f8b1ab94e32d6fa7f2e4e2b4fe627.scope - libcontainer container 7337244f656ba089dd6b560f752358f0ce7f8b1ab94e32d6fa7f2e4e2b4fe627. May 27 17:40:14.718535 containerd[1591]: time="2025-05-27T17:40:14.718493174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4d8010abdb8c9bf0a771271053b4255f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0981a58917eade729863de65bf2831d9884cbfa98661b41bfb50a7427757cb26\"" May 27 17:40:14.720261 kubelet[2335]: E0527 17:40:14.720238 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:14.722847 containerd[1591]: time="2025-05-27T17:40:14.722813942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"75112bcd4fe265a0d06a3c13a8d05db14788af3f4d675b5926bd89ccc9cc1ace\"" May 27 17:40:14.724422 kubelet[2335]: E0527 17:40:14.724361 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:14.729679 containerd[1591]: time="2025-05-27T17:40:14.729173657Z" level=info msg="CreateContainer within sandbox \"0981a58917eade729863de65bf2831d9884cbfa98661b41bfb50a7427757cb26\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:40:14.732880 containerd[1591]: time="2025-05-27T17:40:14.732446383Z" level=info msg="CreateContainer within sandbox \"75112bcd4fe265a0d06a3c13a8d05db14788af3f4d675b5926bd89ccc9cc1ace\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:40:14.743071 containerd[1591]: time="2025-05-27T17:40:14.743027786Z" level=info msg="Container 3bb39b647c165b97586c75c3e4851efc78b8e2f3d348d4f390d40a2e0f720d17: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:14.754379 containerd[1591]: time="2025-05-27T17:40:14.754337221Z" level=info msg="CreateContainer within sandbox \"0981a58917eade729863de65bf2831d9884cbfa98661b41bfb50a7427757cb26\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3bb39b647c165b97586c75c3e4851efc78b8e2f3d348d4f390d40a2e0f720d17\"" May 27 17:40:14.755897 containerd[1591]: time="2025-05-27T17:40:14.755838451Z" level=info msg="Container 6c206a6dde16fba219811514433f70d164668ea1e18234b5c0fdf4c37799f0f8: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:14.756270 containerd[1591]: time="2025-05-27T17:40:14.756217886Z" level=info msg="StartContainer for 
\"3bb39b647c165b97586c75c3e4851efc78b8e2f3d348d4f390d40a2e0f720d17\"" May 27 17:40:14.757616 containerd[1591]: time="2025-05-27T17:40:14.757590028Z" level=info msg="connecting to shim 3bb39b647c165b97586c75c3e4851efc78b8e2f3d348d4f390d40a2e0f720d17" address="unix:///run/containerd/s/191d67db8e1d097741e7f94568c56ea6e527a07c2cae313ac5c7b1630e94ff8e" protocol=ttrpc version=3 May 27 17:40:14.759612 containerd[1591]: time="2025-05-27T17:40:14.759564592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"7337244f656ba089dd6b560f752358f0ce7f8b1ab94e32d6fa7f2e4e2b4fe627\"" May 27 17:40:14.761086 kubelet[2335]: E0527 17:40:14.760985 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:14.767202 containerd[1591]: time="2025-05-27T17:40:14.767161121Z" level=info msg="CreateContainer within sandbox \"7337244f656ba089dd6b560f752358f0ce7f8b1ab94e32d6fa7f2e4e2b4fe627\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 17:40:14.771099 containerd[1591]: time="2025-05-27T17:40:14.771067608Z" level=info msg="CreateContainer within sandbox \"75112bcd4fe265a0d06a3c13a8d05db14788af3f4d675b5926bd89ccc9cc1ace\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6c206a6dde16fba219811514433f70d164668ea1e18234b5c0fdf4c37799f0f8\"" May 27 17:40:14.773034 containerd[1591]: time="2025-05-27T17:40:14.773001884Z" level=info msg="StartContainer for \"6c206a6dde16fba219811514433f70d164668ea1e18234b5c0fdf4c37799f0f8\"" May 27 17:40:14.774947 containerd[1591]: time="2025-05-27T17:40:14.774920190Z" level=info msg="connecting to shim 6c206a6dde16fba219811514433f70d164668ea1e18234b5c0fdf4c37799f0f8" address="unix:///run/containerd/s/0d97745bed99384205c4f7780a5d081c75adcf1612027d7ee62bd5e221afd8f7" protocol=ttrpc version=3 May 27 17:40:14.781042 containerd[1591]: time="2025-05-27T17:40:14.781003618Z" level=info msg="Container ac8358aab752b0434cade7eac03a5452b8549229cdb74ecbc18181973070688f: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:14.791508 containerd[1591]: time="2025-05-27T17:40:14.791458680Z" level=info msg="CreateContainer within sandbox \"7337244f656ba089dd6b560f752358f0ce7f8b1ab94e32d6fa7f2e4e2b4fe627\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac8358aab752b0434cade7eac03a5452b8549229cdb74ecbc18181973070688f\"" May 27 17:40:14.792054 containerd[1591]: time="2025-05-27T17:40:14.792018319Z" level=info msg="StartContainer for \"ac8358aab752b0434cade7eac03a5452b8549229cdb74ecbc18181973070688f\"" May 27 17:40:14.793185 containerd[1591]: time="2025-05-27T17:40:14.793148961Z" level=info msg="connecting to shim ac8358aab752b0434cade7eac03a5452b8549229cdb74ecbc18181973070688f" address="unix:///run/containerd/s/6540056056cb6897f1b1448bdf92189a46d511a89a4bc662f99d9c63a9bfdd39" protocol=ttrpc version=3 May 27 17:40:14.795727 systemd[1]: Started cri-containerd-3bb39b647c165b97586c75c3e4851efc78b8e2f3d348d4f390d40a2e0f720d17.scope - libcontainer container 3bb39b647c165b97586c75c3e4851efc78b8e2f3d348d4f390d40a2e0f720d17. May 27 17:40:14.807130 systemd[1]: Started cri-containerd-6c206a6dde16fba219811514433f70d164668ea1e18234b5c0fdf4c37799f0f8.scope - libcontainer container 6c206a6dde16fba219811514433f70d164668ea1e18234b5c0fdf4c37799f0f8. 
May 27 17:40:14.825137 systemd[1]: Started cri-containerd-ac8358aab752b0434cade7eac03a5452b8549229cdb74ecbc18181973070688f.scope - libcontainer container ac8358aab752b0434cade7eac03a5452b8549229cdb74ecbc18181973070688f. May 27 17:40:14.931654 containerd[1591]: time="2025-05-27T17:40:14.931591168Z" level=info msg="StartContainer for \"6c206a6dde16fba219811514433f70d164668ea1e18234b5c0fdf4c37799f0f8\" returns successfully" May 27 17:40:14.932174 containerd[1591]: time="2025-05-27T17:40:14.931930897Z" level=info msg="StartContainer for \"ac8358aab752b0434cade7eac03a5452b8549229cdb74ecbc18181973070688f\" returns successfully" May 27 17:40:14.933880 containerd[1591]: time="2025-05-27T17:40:14.933493082Z" level=info msg="StartContainer for \"3bb39b647c165b97586c75c3e4851efc78b8e2f3d348d4f390d40a2e0f720d17\" returns successfully" May 27 17:40:14.986220 kubelet[2335]: I0527 17:40:14.986187 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:40:15.343509 kubelet[2335]: E0527 17:40:15.343186 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:40:15.343509 kubelet[2335]: E0527 17:40:15.343328 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:15.347968 kubelet[2335]: E0527 17:40:15.346439 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:40:15.348486 kubelet[2335]: E0527 17:40:15.348436 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:15.350810 kubelet[2335]: E0527 17:40:15.350708 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:40:15.350937 kubelet[2335]: E0527 17:40:15.350919 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:16.120191 kubelet[2335]: E0527 17:40:16.120113 2335 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 17:40:16.199915 kubelet[2335]: I0527 17:40:16.199867 2335 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:40:16.199915 kubelet[2335]: E0527 17:40:16.199918 2335 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 27 17:40:16.209903 kubelet[2335]: E0527 17:40:16.209865 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:16.310263 kubelet[2335]: E0527 17:40:16.310211 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:16.352195 kubelet[2335]: E0527 17:40:16.352153 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:40:16.352318 kubelet[2335]: E0527 17:40:16.352283 2335 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:16.352318 kubelet[2335]: E0527 17:40:16.352288 2335 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:40:16.352439 kubelet[2335]: E0527 17:40:16.352416 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:16.410988 kubelet[2335]: E0527 17:40:16.410868 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:16.511884 kubelet[2335]: E0527 17:40:16.511798 2335 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:40:16.608178 kubelet[2335]: I0527 17:40:16.608132 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:40:16.614307 kubelet[2335]: E0527 17:40:16.614279 2335 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 17:40:16.614307 kubelet[2335]: I0527 17:40:16.614301 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:40:16.615551 kubelet[2335]: E0527 17:40:16.615534 2335 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 17:40:16.615551 kubelet[2335]: I0527 17:40:16.615549 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:40:16.616823 kubelet[2335]: E0527 17:40:16.616802 2335 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 17:40:17.247605 kubelet[2335]: I0527 17:40:17.247561 2335 apiserver.go:52] "Watching apiserver" May 27 17:40:17.308368 kubelet[2335]: I0527 17:40:17.308323 2335 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:40:17.354749 kubelet[2335]: I0527 17:40:17.354714 2335 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:40:17.358760 kubelet[2335]: E0527 17:40:17.358716 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:18.355880 kubelet[2335]: E0527 17:40:18.355837 2335 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:18.570370 systemd[1]: Reload requested from client PID 2618 ('systemctl') (unit session-9.scope)... May 27 17:40:18.570385 systemd[1]: Reloading... May 27 17:40:18.645893 zram_generator::config[2664]: No configuration found. 
May 27 17:40:18.730668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:40:18.860290 systemd[1]: Reloading finished in 289 ms. May 27 17:40:18.884577 kubelet[2335]: I0527 17:40:18.884539 2335 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:40:18.884716 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:40:18.899043 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:40:18.899354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:40:18.899405 systemd[1]: kubelet.service: Consumed 1.286s CPU time, 130.6M memory peak. May 27 17:40:18.901313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:40:19.111469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:40:19.125272 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:40:19.160978 kubelet[2706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:40:19.160978 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:40:19.160978 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:40:19.161359 kubelet[2706]: I0527 17:40:19.161020 2706 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:40:19.168174 kubelet[2706]: I0527 17:40:19.167543 2706 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:40:19.168174 kubelet[2706]: I0527 17:40:19.167576 2706 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:40:19.168174 kubelet[2706]: I0527 17:40:19.168146 2706 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:40:19.169889 kubelet[2706]: I0527 17:40:19.169868 2706 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 17:40:19.171791 kubelet[2706]: I0527 17:40:19.171775 2706 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:40:19.174817 kubelet[2706]: I0527 17:40:19.174778 2706 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:40:19.179667 kubelet[2706]: I0527 17:40:19.179646 2706 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:40:19.179945 kubelet[2706]: I0527 17:40:19.179920 2706 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:40:19.180075 kubelet[2706]: I0527 17:40:19.179945 2706 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:40:19.180163 kubelet[2706]: I0527 17:40:19.180082 2706 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:40:19.180163 kubelet[2706]: I0527 17:40:19.180091 2706 container_manager_linux.go:303] "Creating device plugin manager" May 27 17:40:19.180163 kubelet[2706]: I0527 17:40:19.180138 2706 state_mem.go:36] "Initialized new in-memory state store" May 27 17:40:19.180335 kubelet[2706]: I0527 17:40:19.180323 2706 kubelet.go:480] "Attempting to sync node with API server" May 27 17:40:19.180368 kubelet[2706]: I0527 17:40:19.180337 2706 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:40:19.180368 kubelet[2706]: I0527 17:40:19.180359 2706 kubelet.go:386] "Adding apiserver pod source" May 27 17:40:19.180413 kubelet[2706]: I0527 17:40:19.180369 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:40:19.181715 kubelet[2706]: I0527 17:40:19.181690 2706 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:40:19.182354 kubelet[2706]: I0527 17:40:19.182332 2706 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 17:40:19.186317 kubelet[2706]: I0527 17:40:19.186292 2706 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:40:19.186378 kubelet[2706]: I0527 17:40:19.186342 2706 server.go:1289] "Started kubelet" May 27 17:40:19.189627 kubelet[2706]: I0527 17:40:19.189605 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:40:19.192511 kubelet[2706]: I0527 17:40:19.192476 
2706 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:40:19.193560 kubelet[2706]: I0527 17:40:19.193545 2706 server.go:317] "Adding debug handlers to kubelet server" May 27 17:40:19.193840 kubelet[2706]: I0527 17:40:19.193820 2706 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:40:19.193929 kubelet[2706]: I0527 17:40:19.193915 2706 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:40:19.194054 kubelet[2706]: I0527 17:40:19.194040 2706 reconciler.go:26] "Reconciler: start to sync state" May 27 17:40:19.194455 kubelet[2706]: E0527 17:40:19.194410 2706 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:40:19.194500 kubelet[2706]: I0527 17:40:19.194431 2706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:40:19.194887 kubelet[2706]: I0527 17:40:19.194729 2706 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:40:19.194887 kubelet[2706]: I0527 17:40:19.194777 2706 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:40:19.195758 kubelet[2706]: I0527 17:40:19.195533 2706 factory.go:223] Registration of the systemd container factory successfully May 27 17:40:19.195758 kubelet[2706]: I0527 17:40:19.195600 2706 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:40:19.198780 kubelet[2706]: I0527 17:40:19.198760 2706 factory.go:223] Registration of the containerd container factory successfully May 27 17:40:19.208982 kubelet[2706]: I0527 17:40:19.208933 2706 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 17:40:19.210455 kubelet[2706]: I0527 17:40:19.210360 2706 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 17:40:19.210455 kubelet[2706]: I0527 17:40:19.210380 2706 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:40:19.211266 kubelet[2706]: I0527 17:40:19.210399 2706 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
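The kubelet's container-factory registrations above succeed for systemd and containerd but fail for CRI-O because /var/run/crio/crio.sock does not exist on this host; earlier in the log, systemd also rewrote the legacy /var/run/docker.sock listener to /run/docker.sock. A small sketch that reports which of those well-known runtime sockets are actually present; the candidate path list is an assumption for this particular host and can be edited freely:

```python
import os
import stat

# Socket paths mentioned in, or implied by, the journal above; adjust per host.
CANDIDATE_SOCKETS = [
    "/run/containerd/containerd.sock",
    "/var/run/crio/crio.sock",
    "/run/docker.sock",
]

def is_socket(path):
    """True if path exists and is a unix socket."""
    try:
        return stat.S_ISSOCK(os.stat(path).st_mode)
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    for path in CANDIDATE_SOCKETS:
        print(f"{path}: {'present' if is_socket(path) else 'missing'}")
```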
May 27 17:40:19.211331 kubelet[2706]: I0527 17:40:19.211321 2706 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:40:19.211440 kubelet[2706]: E0527 17:40:19.211408 2706 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:40:19.232171 kubelet[2706]: I0527 17:40:19.232133 2706 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:40:19.232171 kubelet[2706]: I0527 17:40:19.232150 2706 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:40:19.232171 kubelet[2706]: I0527 17:40:19.232170 2706 state_mem.go:36] "Initialized new in-memory state store" May 27 17:40:19.232347 kubelet[2706]: I0527 17:40:19.232291 2706 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:40:19.232347 kubelet[2706]: I0527 17:40:19.232303 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:40:19.232347 kubelet[2706]: I0527 17:40:19.232319 2706 policy_none.go:49] "None policy: Start" May 27 17:40:19.232347 kubelet[2706]: I0527 17:40:19.232330 2706 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:40:19.232347 kubelet[2706]: I0527 17:40:19.232339 2706 state_mem.go:35] "Initializing new in-memory state store" May 27 17:40:19.232460 kubelet[2706]: I0527 17:40:19.232425 2706 state_mem.go:75] "Updated machine memory state" May 27 17:40:19.236113 kubelet[2706]: E0527 17:40:19.236088 2706 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:40:19.236290 kubelet[2706]: I0527 17:40:19.236259 2706 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:40:19.236290 kubelet[2706]: I0527 17:40:19.236270 2706 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:40:19.236580 kubelet[2706]: I0527 17:40:19.236562 2706 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:40:19.238778 kubelet[2706]: E0527 17:40:19.237897 2706 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 17:40:19.312632 kubelet[2706]: I0527 17:40:19.312592 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:40:19.312907 kubelet[2706]: I0527 17:40:19.312613 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:40:19.313090 kubelet[2706]: I0527 17:40:19.313056 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:40:19.319727 kubelet[2706]: E0527 17:40:19.319676 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 17:40:19.342663 kubelet[2706]: I0527 17:40:19.342640 2706 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:40:19.347688 kubelet[2706]: I0527 17:40:19.347672 2706 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 17:40:19.347763 kubelet[2706]: I0527 17:40:19.347739 2706 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:40:19.396082 kubelet[2706]: I0527 17:40:19.395942 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d8010abdb8c9bf0a771271053b4255f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d8010abdb8c9bf0a771271053b4255f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:40:19.396082 kubelet[2706]: I0527 17:40:19.395972 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d8010abdb8c9bf0a771271053b4255f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4d8010abdb8c9bf0a771271053b4255f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:40:19.396082 kubelet[2706]: I0527 17:40:19.395990 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:40:19.396082 kubelet[2706]: I0527 17:40:19.396004 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:40:19.396082 kubelet[2706]: I0527 17:40:19.396018 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:40:19.396315 kubelet[2706]: I0527 17:40:19.396031 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:40:19.396315 kubelet[2706]: I0527 17:40:19.396043 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d8010abdb8c9bf0a771271053b4255f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d8010abdb8c9bf0a771271053b4255f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:40:19.396315 kubelet[2706]: I0527 17:40:19.396057 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:40:19.396315 kubelet[2706]: I0527 17:40:19.396071 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 17:40:19.572691 sudo[2746]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 17:40:19.573036 sudo[2746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 17:40:19.618014 kubelet[2706]: E0527 17:40:19.617982 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:19.620122 kubelet[2706]: E0527 17:40:19.620077 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:19.620276 kubelet[2706]: E0527 17:40:19.620244 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:20.182434 kubelet[2706]: I0527 17:40:20.182382 2706 apiserver.go:52] "Watching apiserver" May 27 17:40:20.194365 kubelet[2706]: I0527 17:40:20.194333 2706 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:40:20.220080 kubelet[2706]: I0527 17:40:20.219954 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:40:20.220080 kubelet[2706]: I0527 17:40:20.219963 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:40:20.220229 kubelet[2706]: E0527 17:40:20.220181 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:20.363595 kubelet[2706]: E0527 17:40:20.363436 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 17:40:20.363738 kubelet[2706]: E0527 17:40:20.363636 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 17:40:20.364335 kubelet[2706]: E0527 17:40:20.363777 2706 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:20.364335 kubelet[2706]: E0527 17:40:20.363907 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:20.390242 kubelet[2706]: I0527 17:40:20.390184 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.390159115 podStartE2EDuration="1.390159115s" podCreationTimestamp="2025-05-27 17:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:40:20.352718016 +0000 UTC m=+1.221760524" watchObservedRunningTime="2025-05-27 17:40:20.390159115 +0000 UTC m=+1.259201623" May 27 17:40:20.442336 sudo[2746]: pam_unix(sudo:session): session closed for user root May 27 17:40:20.446942 kubelet[2706]: I0527 17:40:20.446877 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.446840975 podStartE2EDuration="1.446840975s" podCreationTimestamp="2025-05-27 17:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:40:20.39023528 +0000 UTC m=+1.259277788" watchObservedRunningTime="2025-05-27 17:40:20.446840975 +0000 UTC m=+1.315883483" May 27 17:40:20.447200 kubelet[2706]: I0527 17:40:20.447042 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.447037518 podStartE2EDuration="3.447037518s" podCreationTimestamp="2025-05-27 17:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:40:20.446987112 +0000 UTC m=+1.316029620" watchObservedRunningTime="2025-05-27 17:40:20.447037518 +0000 UTC m=+1.316080026" May 27 17:40:21.221378 kubelet[2706]: E0527 17:40:21.221338 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:21.221378 kubelet[2706]: E0527 17:40:21.221340 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:21.598412 sudo[1811]: pam_unix(sudo:session): session closed for user root May 27 17:40:21.600182 sshd[1810]: Connection closed by 10.0.0.1 port 42446 May 27 17:40:21.600822 sshd-session[1808]: pam_unix(sshd:session): session closed for user core May 27 17:40:21.604643 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:42446.service: Deactivated successfully. May 27 17:40:21.606618 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:40:21.606831 systemd[1]: session-9.scope: Consumed 4.910s CPU time, 266M memory peak. May 27 17:40:21.607980 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. May 27 17:40:21.609267 systemd-logind[1577]: Removed session 9. 
May 27 17:40:22.225313 kubelet[2706]: E0527 17:40:22.225258 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:23.677102 update_engine[1581]: I20250527 17:40:23.676996 1581 update_attempter.cc:509] Updating boot flags... May 27 17:40:23.856381 kubelet[2706]: E0527 17:40:23.856338 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:24.894179 kubelet[2706]: I0527 17:40:24.894138 2706 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:40:24.894719 containerd[1591]: time="2025-05-27T17:40:24.894596505Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:40:24.895017 kubelet[2706]: I0527 17:40:24.894907 2706 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:40:25.510843 kubelet[2706]: E0527 17:40:25.510753 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:25.813214 systemd[1]: Created slice kubepods-besteffort-pod0a50b008_e1c3_4f4f_ba0c_406455232151.slice - libcontainer container kubepods-besteffort-pod0a50b008_e1c3_4f4f_ba0c_406455232151.slice. May 27 17:40:25.827526 systemd[1]: Created slice kubepods-burstable-pod06bf7627_546b_4828_9257_60abefe87ce8.slice - libcontainer container kubepods-burstable-pod06bf7627_546b_4828_9257_60abefe87ce8.slice. May 27 17:40:25.841365 kubelet[2706]: I0527 17:40:25.841310 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a50b008-e1c3-4f4f-ba0c-406455232151-kube-proxy\") pod \"kube-proxy-6rl56\" (UID: \"0a50b008-e1c3-4f4f-ba0c-406455232151\") " pod="kube-system/kube-proxy-6rl56" May 27 17:40:25.841365 kubelet[2706]: I0527 17:40:25.841356 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a50b008-e1c3-4f4f-ba0c-406455232151-xtables-lock\") pod \"kube-proxy-6rl56\" (UID: \"0a50b008-e1c3-4f4f-ba0c-406455232151\") " pod="kube-system/kube-proxy-6rl56" May 27 17:40:25.841365 kubelet[2706]: I0527 17:40:25.841377 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cilium-cgroup\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841621 kubelet[2706]: I0527 17:40:25.841414 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-host-proc-sys-kernel\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841621 kubelet[2706]: I0527 17:40:25.841461 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06bf7627-546b-4828-9257-60abefe87ce8-hubble-tls\") pod \"cilium-qkv4n\" 
(UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841621 kubelet[2706]: I0527 17:40:25.841484 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2gd\" (UniqueName: \"kubernetes.io/projected/06bf7627-546b-4828-9257-60abefe87ce8-kube-api-access-8z2gd\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841621 kubelet[2706]: I0527 17:40:25.841506 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-hostproc\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841621 kubelet[2706]: I0527 17:40:25.841527 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-xtables-lock\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841621 kubelet[2706]: I0527 17:40:25.841556 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06bf7627-546b-4828-9257-60abefe87ce8-cilium-config-path\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841747 kubelet[2706]: I0527 17:40:25.841580 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-host-proc-sys-net\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841747 kubelet[2706]: I0527 17:40:25.841600 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cilium-run\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841747 kubelet[2706]: I0527 17:40:25.841617 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-bpf-maps\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841747 kubelet[2706]: I0527 17:40:25.841631 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cni-path\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841747 kubelet[2706]: I0527 17:40:25.841646 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a50b008-e1c3-4f4f-ba0c-406455232151-lib-modules\") pod \"kube-proxy-6rl56\" (UID: \"0a50b008-e1c3-4f4f-ba0c-406455232151\") " pod="kube-system/kube-proxy-6rl56" May 27 17:40:25.841747 kubelet[2706]: I0527 17:40:25.841676 2706 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hdqw\" (UniqueName: \"kubernetes.io/projected/0a50b008-e1c3-4f4f-ba0c-406455232151-kube-api-access-7hdqw\") pod \"kube-proxy-6rl56\" (UID: \"0a50b008-e1c3-4f4f-ba0c-406455232151\") " pod="kube-system/kube-proxy-6rl56" May 27 17:40:25.841885 kubelet[2706]: I0527 17:40:25.841692 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-etc-cni-netd\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841885 kubelet[2706]: I0527 17:40:25.841705 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-lib-modules\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:25.841885 kubelet[2706]: I0527 17:40:25.841720 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06bf7627-546b-4828-9257-60abefe87ce8-clustermesh-secrets\") pod \"cilium-qkv4n\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " pod="kube-system/cilium-qkv4n" May 27 17:40:26.045258 systemd[1]: Created slice kubepods-besteffort-podf4f44833_f5bd_4e9d_8a1c_34df019a161e.slice - libcontainer container kubepods-besteffort-podf4f44833_f5bd_4e9d_8a1c_34df019a161e.slice. May 27 17:40:26.125272 kubelet[2706]: E0527 17:40:26.125122 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:26.126091 containerd[1591]: time="2025-05-27T17:40:26.126020445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6rl56,Uid:0a50b008-e1c3-4f4f-ba0c-406455232151,Namespace:kube-system,Attempt:0,}" May 27 17:40:26.130295 kubelet[2706]: E0527 17:40:26.130251 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:26.131446 containerd[1591]: time="2025-05-27T17:40:26.131087661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qkv4n,Uid:06bf7627-546b-4828-9257-60abefe87ce8,Namespace:kube-system,Attempt:0,}" May 27 17:40:26.143064 kubelet[2706]: I0527 17:40:26.143017 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4f44833-f5bd-4e9d-8a1c-34df019a161e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-psmf4\" (UID: \"f4f44833-f5bd-4e9d-8a1c-34df019a161e\") " pod="kube-system/cilium-operator-6c4d7847fc-psmf4" May 27 17:40:26.143169 kubelet[2706]: I0527 17:40:26.143072 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hplqz\" (UniqueName: \"kubernetes.io/projected/f4f44833-f5bd-4e9d-8a1c-34df019a161e-kube-api-access-hplqz\") pod \"cilium-operator-6c4d7847fc-psmf4\" (UID: \"f4f44833-f5bd-4e9d-8a1c-34df019a161e\") " pod="kube-system/cilium-operator-6c4d7847fc-psmf4" May 27 17:40:26.177501 containerd[1591]: time="2025-05-27T17:40:26.177423085Z" level=info msg="connecting to shim 
583031449a9490977e994d7482f3ba773dd1386303c6ac4b1ca81331d27dbe61" address="unix:///run/containerd/s/a6cbcf51e447a04cea8d5347274adcc1bfec3c303bb53dfa1c3cf7727ccf9034" namespace=k8s.io protocol=ttrpc version=3 May 27 17:40:26.181452 containerd[1591]: time="2025-05-27T17:40:26.180163308Z" level=info msg="connecting to shim 7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559" address="unix:///run/containerd/s/c7ba50c772e67b7f73ccddc43b6869c21e555c707c09fd9dc12aefa74480e427" namespace=k8s.io protocol=ttrpc version=3 May 27 17:40:26.231076 kubelet[2706]: E0527 17:40:26.231032 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:26.231319 systemd[1]: Started cri-containerd-7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559.scope - libcontainer container 7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559. May 27 17:40:26.237613 systemd[1]: Started cri-containerd-583031449a9490977e994d7482f3ba773dd1386303c6ac4b1ca81331d27dbe61.scope - libcontainer container 583031449a9490977e994d7482f3ba773dd1386303c6ac4b1ca81331d27dbe61. May 27 17:40:26.278345 containerd[1591]: time="2025-05-27T17:40:26.278286468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qkv4n,Uid:06bf7627-546b-4828-9257-60abefe87ce8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\"" May 27 17:40:26.279232 kubelet[2706]: E0527 17:40:26.279208 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:26.280095 containerd[1591]: time="2025-05-27T17:40:26.280068088Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 17:40:26.282808 containerd[1591]: time="2025-05-27T17:40:26.282769880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6rl56,Uid:0a50b008-e1c3-4f4f-ba0c-406455232151,Namespace:kube-system,Attempt:0,} returns sandbox id \"583031449a9490977e994d7482f3ba773dd1386303c6ac4b1ca81331d27dbe61\"" May 27 17:40:26.283504 kubelet[2706]: E0527 17:40:26.283475 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:26.288559 containerd[1591]: time="2025-05-27T17:40:26.288206996Z" level=info msg="CreateContainer within sandbox \"583031449a9490977e994d7482f3ba773dd1386303c6ac4b1ca81331d27dbe61\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:40:26.298388 containerd[1591]: time="2025-05-27T17:40:26.298337322Z" level=info msg="Container 972ac983864280c658cf67ebe630dde8002a0b8f4538df259a2af12aa78bf82a: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:26.308068 containerd[1591]: time="2025-05-27T17:40:26.308023959Z" level=info msg="CreateContainer within sandbox \"583031449a9490977e994d7482f3ba773dd1386303c6ac4b1ca81331d27dbe61\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"972ac983864280c658cf67ebe630dde8002a0b8f4538df259a2af12aa78bf82a\"" May 27 17:40:26.308665 containerd[1591]: time="2025-05-27T17:40:26.308629364Z" level=info msg="StartContainer for \"972ac983864280c658cf67ebe630dde8002a0b8f4538df259a2af12aa78bf82a\"" May 27 17:40:26.310129 
containerd[1591]: time="2025-05-27T17:40:26.310094786Z" level=info msg="connecting to shim 972ac983864280c658cf67ebe630dde8002a0b8f4538df259a2af12aa78bf82a" address="unix:///run/containerd/s/a6cbcf51e447a04cea8d5347274adcc1bfec3c303bb53dfa1c3cf7727ccf9034" protocol=ttrpc version=3 May 27 17:40:26.335051 systemd[1]: Started cri-containerd-972ac983864280c658cf67ebe630dde8002a0b8f4538df259a2af12aa78bf82a.scope - libcontainer container 972ac983864280c658cf67ebe630dde8002a0b8f4538df259a2af12aa78bf82a. May 27 17:40:26.349353 kubelet[2706]: E0527 17:40:26.349302 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:26.350235 containerd[1591]: time="2025-05-27T17:40:26.350187868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-psmf4,Uid:f4f44833-f5bd-4e9d-8a1c-34df019a161e,Namespace:kube-system,Attempt:0,}" May 27 17:40:26.384755 containerd[1591]: time="2025-05-27T17:40:26.384622615Z" level=info msg="connecting to shim 1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3" address="unix:///run/containerd/s/2f5387c36773898b5b5b4a1612f49e6b290523c9d489234b179443a3b1ada4fa" namespace=k8s.io protocol=ttrpc version=3 May 27 17:40:26.387346 containerd[1591]: time="2025-05-27T17:40:26.387303458Z" level=info msg="StartContainer for \"972ac983864280c658cf67ebe630dde8002a0b8f4538df259a2af12aa78bf82a\" returns successfully" May 27 17:40:26.411185 systemd[1]: Started cri-containerd-1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3.scope - libcontainer container 1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3. May 27 17:40:26.461533 containerd[1591]: time="2025-05-27T17:40:26.461474071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-psmf4,Uid:f4f44833-f5bd-4e9d-8a1c-34df019a161e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3\"" May 27 17:40:26.462378 kubelet[2706]: E0527 17:40:26.462339 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:26.819522 kubelet[2706]: E0527 17:40:26.819492 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:27.235930 kubelet[2706]: E0527 17:40:27.235110 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:27.235930 kubelet[2706]: E0527 17:40:27.235377 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:27.244376 kubelet[2706]: I0527 17:40:27.244281 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6rl56" podStartSLOduration=2.244256648 podStartE2EDuration="2.244256648s" podCreationTimestamp="2025-05-27 17:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:40:27.243636546 +0000 UTC m=+8.112679044" watchObservedRunningTime="2025-05-27 17:40:27.244256648 +0000 UTC 
m=+8.113299156" May 27 17:40:28.236679 kubelet[2706]: E0527 17:40:28.236646 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:40:34.112830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3541053814.mount: Deactivated successfully. May 27 17:40:36.584358 containerd[1591]: time="2025-05-27T17:40:36.584251626Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:36.585046 containerd[1591]: time="2025-05-27T17:40:36.584990117Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 27 17:40:36.586214 containerd[1591]: time="2025-05-27T17:40:36.586176904Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:36.587811 containerd[1591]: time="2025-05-27T17:40:36.587765527Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.307660458s" May 27 17:40:36.587811 containerd[1591]: time="2025-05-27T17:40:36.587797627Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 27 17:40:36.589862 containerd[1591]: time="2025-05-27T17:40:36.589835606Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 17:40:36.593678 containerd[1591]: time="2025-05-27T17:40:36.593598927Z" level=info msg="CreateContainer within sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 17:40:36.603746 containerd[1591]: time="2025-05-27T17:40:36.603699837Z" level=info msg="Container 71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:36.611048 containerd[1591]: time="2025-05-27T17:40:36.610999399Z" level=info msg="CreateContainer within sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\"" May 27 17:40:36.611609 containerd[1591]: time="2025-05-27T17:40:36.611573611Z" level=info msg="StartContainer for \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\"" May 27 17:40:36.612648 containerd[1591]: time="2025-05-27T17:40:36.612590217Z" level=info msg="connecting to shim 71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72" address="unix:///run/containerd/s/c7ba50c772e67b7f73ccddc43b6869c21e555c707c09fd9dc12aefa74480e427" protocol=ttrpc version=3 May 27 17:40:36.634046 systemd[1]: Started 
cri-containerd-71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72.scope - libcontainer container 71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72. May 27 17:40:36.666393 containerd[1591]: time="2025-05-27T17:40:36.666344462Z" level=info msg="StartContainer for \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\" returns successfully" May 27 17:40:36.679098 systemd[1]: cri-containerd-71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72.scope: Deactivated successfully. May 27 17:40:36.680572 containerd[1591]: time="2025-05-27T17:40:36.680534727Z" level=info msg="received exit event container_id:\"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\" id:\"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\" pid:3150 exited_at:{seconds:1748367636 nanos:680021400}" May 27 17:40:36.680905 containerd[1591]: time="2025-05-27T17:40:36.680799505Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\" id:\"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\" pid:3150 exited_at:{seconds:1748367636 nanos:680021400}" May 27 17:40:36.704018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72-rootfs.mount: Deactivated successfully. May 27 17:40:37.261076 containerd[1591]: time="2025-05-27T17:40:37.261026773Z" level=info msg="CreateContainer within sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 17:40:37.270188 containerd[1591]: time="2025-05-27T17:40:37.270136702Z" level=info msg="Container 931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:37.276880 containerd[1591]: time="2025-05-27T17:40:37.276818576Z" level=info msg="CreateContainer within sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\"" May 27 17:40:37.277658 containerd[1591]: time="2025-05-27T17:40:37.277393319Z" level=info msg="StartContainer for \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\"" May 27 17:40:37.278233 containerd[1591]: time="2025-05-27T17:40:37.278190230Z" level=info msg="connecting to shim 931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb" address="unix:///run/containerd/s/c7ba50c772e67b7f73ccddc43b6869c21e555c707c09fd9dc12aefa74480e427" protocol=ttrpc version=3 May 27 17:40:37.307054 systemd[1]: Started cri-containerd-931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb.scope - libcontainer container 931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb. May 27 17:40:37.338721 containerd[1591]: time="2025-05-27T17:40:37.338672888Z" level=info msg="StartContainer for \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\" returns successfully" May 27 17:40:37.353108 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 17:40:37.353367 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 17:40:37.353774 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 27 17:40:37.355293 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 27 17:40:37.356474 systemd[1]: cri-containerd-931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb.scope: Deactivated successfully. May 27 17:40:37.356896 containerd[1591]: time="2025-05-27T17:40:37.356703699Z" level=info msg="received exit event container_id:\"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\" id:\"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\" pid:3197 exited_at:{seconds:1748367637 nanos:356528129}" May 27 17:40:37.357283 containerd[1591]: time="2025-05-27T17:40:37.357257844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\" id:\"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\" pid:3197 exited_at:{seconds:1748367637 nanos:356528129}" May 27 17:40:37.387659 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:40:37.917895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184149086.mount: Deactivated successfully. May 27 17:40:38.264413 containerd[1591]: time="2025-05-27T17:40:38.264033585Z" level=info msg="CreateContainer within sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 17:40:38.279886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435728047.mount: Deactivated successfully. May 27 17:40:38.280676 containerd[1591]: time="2025-05-27T17:40:38.280644993Z" level=info msg="Container a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:38.295748 containerd[1591]: time="2025-05-27T17:40:38.295707096Z" level=info msg="CreateContainer within sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\"" May 27 17:40:38.296338 containerd[1591]: time="2025-05-27T17:40:38.296306794Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:38.296464 containerd[1591]: time="2025-05-27T17:40:38.296437090Z" level=info msg="StartContainer for \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\"" May 27 17:40:38.299545 containerd[1591]: time="2025-05-27T17:40:38.299505648Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 27 17:40:38.300881 containerd[1591]: time="2025-05-27T17:40:38.300826796Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:40:38.301097 containerd[1591]: time="2025-05-27T17:40:38.301061738Z" level=info msg="connecting to shim a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2" address="unix:///run/containerd/s/c7ba50c772e67b7f73ccddc43b6869c21e555c707c09fd9dc12aefa74480e427" protocol=ttrpc version=3 May 27 17:40:38.303599 containerd[1591]: time="2025-05-27T17:40:38.303569490Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.713676525s" May 27 17:40:38.303599 containerd[1591]: time="2025-05-27T17:40:38.303599647Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 27 17:40:38.308501 containerd[1591]: time="2025-05-27T17:40:38.308455180Z" level=info msg="CreateContainer within sandbox \"1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 17:40:38.316806 containerd[1591]: time="2025-05-27T17:40:38.316769215Z" level=info msg="Container 43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:38.324220 containerd[1591]: time="2025-05-27T17:40:38.324121930Z" level=info msg="CreateContainer within sandbox \"1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\"" May 27 17:40:38.324648 containerd[1591]: time="2025-05-27T17:40:38.324610189Z" level=info msg="StartContainer for \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\"" May 27 17:40:38.326050 containerd[1591]: time="2025-05-27T17:40:38.326022068Z" level=info msg="connecting to shim 43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549" address="unix:///run/containerd/s/2f5387c36773898b5b5b4a1612f49e6b290523c9d489234b179443a3b1ada4fa" protocol=ttrpc version=3 May 27 17:40:38.328225 systemd[1]: Started cri-containerd-a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2.scope - libcontainer container a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2. May 27 17:40:38.359000 systemd[1]: Started cri-containerd-43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549.scope - libcontainer container 43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549. May 27 17:40:38.381532 systemd[1]: cri-containerd-a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2.scope: Deactivated successfully. 
May 27 17:40:38.383460 containerd[1591]: time="2025-05-27T17:40:38.383423654Z" level=info msg="received exit event container_id:\"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\" id:\"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\" pid:3262 exited_at:{seconds:1748367638 nanos:383230210}" May 27 17:40:38.383646 containerd[1591]: time="2025-05-27T17:40:38.383619994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\" id:\"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\" pid:3262 exited_at:{seconds:1748367638 nanos:383230210}" May 27 17:40:38.385170 containerd[1591]: time="2025-05-27T17:40:38.385147150Z" level=info msg="StartContainer for \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\" returns successfully" May 27 17:40:38.395792 containerd[1591]: time="2025-05-27T17:40:38.395744544Z" level=info msg="StartContainer for \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" returns successfully" May 27 17:40:39.282679 kubelet[2706]: I0527 17:40:39.282076 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-psmf4" podStartSLOduration=2.440724967 podStartE2EDuration="14.281580015s" podCreationTimestamp="2025-05-27 17:40:25 +0000 UTC" firstStartedPulling="2025-05-27 17:40:26.46335004 +0000 UTC m=+7.332392558" lastFinishedPulling="2025-05-27 17:40:38.304205097 +0000 UTC m=+19.173247606" observedRunningTime="2025-05-27 17:40:39.280935491 +0000 UTC m=+20.149977999" watchObservedRunningTime="2025-05-27 17:40:39.281580015 +0000 UTC m=+20.150622523" May 27 17:40:39.287332 containerd[1591]: time="2025-05-27T17:40:39.287298680Z" level=info msg="CreateContainer within sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 17:40:39.303912 containerd[1591]: time="2025-05-27T17:40:39.303510469Z" level=info msg="Container 28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:39.314302 containerd[1591]: time="2025-05-27T17:40:39.314225129Z" level=info msg="CreateContainer within sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\"" May 27 17:40:39.315140 containerd[1591]: time="2025-05-27T17:40:39.315096219Z" level=info msg="StartContainer for \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\"" May 27 17:40:39.316370 containerd[1591]: time="2025-05-27T17:40:39.316339510Z" level=info msg="connecting to shim 28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5" address="unix:///run/containerd/s/c7ba50c772e67b7f73ccddc43b6869c21e555c707c09fd9dc12aefa74480e427" protocol=ttrpc version=3 May 27 17:40:39.344064 systemd[1]: Started cri-containerd-28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5.scope - libcontainer container 28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5. May 27 17:40:39.374035 systemd[1]: cri-containerd-28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5.scope: Deactivated successfully. 
May 27 17:40:39.375547 containerd[1591]: time="2025-05-27T17:40:39.375485001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\" id:\"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\" pid:3337 exited_at:{seconds:1748367639 nanos:375167894}" May 27 17:40:39.376514 containerd[1591]: time="2025-05-27T17:40:39.376474755Z" level=info msg="received exit event container_id:\"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\" id:\"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\" pid:3337 exited_at:{seconds:1748367639 nanos:375167894}" May 27 17:40:39.385215 containerd[1591]: time="2025-05-27T17:40:39.385158762Z" level=info msg="StartContainer for \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\" returns successfully" May 27 17:40:39.400700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5-rootfs.mount: Deactivated successfully. May 27 17:40:40.296312 containerd[1591]: time="2025-05-27T17:40:40.296250336Z" level=info msg="CreateContainer within sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 17:40:40.310052 containerd[1591]: time="2025-05-27T17:40:40.309997739Z" level=info msg="Container b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:40.320922 containerd[1591]: time="2025-05-27T17:40:40.320065548Z" level=info msg="CreateContainer within sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\"" May 27 17:40:40.323182 containerd[1591]: time="2025-05-27T17:40:40.323124995Z" level=info msg="StartContainer for \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\"" May 27 17:40:40.324396 containerd[1591]: time="2025-05-27T17:40:40.324352416Z" level=info msg="connecting to shim b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c" address="unix:///run/containerd/s/c7ba50c772e67b7f73ccddc43b6869c21e555c707c09fd9dc12aefa74480e427" protocol=ttrpc version=3 May 27 17:40:40.349060 systemd[1]: Started cri-containerd-b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c.scope - libcontainer container b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c. May 27 17:40:40.385737 containerd[1591]: time="2025-05-27T17:40:40.385675590Z" level=info msg="StartContainer for \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" returns successfully" May 27 17:40:40.465620 containerd[1591]: time="2025-05-27T17:40:40.465510875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" id:\"79605ee62b704156a8afcf9f089564f1292fa5ef9e2a457a5595e832b4a975ba\" pid:3406 exited_at:{seconds:1748367640 nanos:464029166}" May 27 17:40:40.510448 kubelet[2706]: I0527 17:40:40.510386 2706 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 17:40:40.563752 systemd[1]: Created slice kubepods-burstable-pod77a75067_0382_49ee_abe0_1df89e7d222b.slice - libcontainer container kubepods-burstable-pod77a75067_0382_49ee_abe0_1df89e7d222b.slice. 
May 27 17:40:40.569607 systemd[1]: Created slice kubepods-burstable-pod1e14ba4e_e6a1_4ea2_a249_c850df0a0199.slice - libcontainer container kubepods-burstable-pod1e14ba4e_e6a1_4ea2_a249_c850df0a0199.slice. May 27 17:40:40.648773 kubelet[2706]: I0527 17:40:40.648710 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxzhz\" (UniqueName: \"kubernetes.io/projected/1e14ba4e-e6a1-4ea2-a249-c850df0a0199-kube-api-access-hxzhz\") pod \"coredns-674b8bbfcf-d4rd7\" (UID: \"1e14ba4e-e6a1-4ea2-a249-c850df0a0199\") " pod="kube-system/coredns-674b8bbfcf-d4rd7" May 27 17:40:40.648773 kubelet[2706]: I0527 17:40:40.648757 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77a75067-0382-49ee-abe0-1df89e7d222b-config-volume\") pod \"coredns-674b8bbfcf-8qxr5\" (UID: \"77a75067-0382-49ee-abe0-1df89e7d222b\") " pod="kube-system/coredns-674b8bbfcf-8qxr5" May 27 17:40:40.648773 kubelet[2706]: I0527 17:40:40.648775 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvtj\" (UniqueName: \"kubernetes.io/projected/77a75067-0382-49ee-abe0-1df89e7d222b-kube-api-access-4zvtj\") pod \"coredns-674b8bbfcf-8qxr5\" (UID: \"77a75067-0382-49ee-abe0-1df89e7d222b\") " pod="kube-system/coredns-674b8bbfcf-8qxr5" May 27 17:40:40.648773 kubelet[2706]: I0527 17:40:40.648790 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e14ba4e-e6a1-4ea2-a249-c850df0a0199-config-volume\") pod \"coredns-674b8bbfcf-d4rd7\" (UID: \"1e14ba4e-e6a1-4ea2-a249-c850df0a0199\") " pod="kube-system/coredns-674b8bbfcf-d4rd7" May 27 17:40:40.868950 containerd[1591]: time="2025-05-27T17:40:40.868670555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8qxr5,Uid:77a75067-0382-49ee-abe0-1df89e7d222b,Namespace:kube-system,Attempt:0,}" May 27 17:40:40.875459 containerd[1591]: time="2025-05-27T17:40:40.875418266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d4rd7,Uid:1e14ba4e-e6a1-4ea2-a249-c850df0a0199,Namespace:kube-system,Attempt:0,}" May 27 17:40:42.649540 systemd-networkd[1500]: cilium_host: Link UP May 27 17:40:42.649706 systemd-networkd[1500]: cilium_net: Link UP May 27 17:40:42.650572 systemd-networkd[1500]: cilium_net: Gained carrier May 27 17:40:42.650749 systemd-networkd[1500]: cilium_host: Gained carrier May 27 17:40:42.769792 systemd-networkd[1500]: cilium_vxlan: Link UP May 27 17:40:42.770064 systemd-networkd[1500]: cilium_vxlan: Gained carrier May 27 17:40:42.930026 systemd-networkd[1500]: cilium_net: Gained IPv6LL May 27 17:40:43.051901 kernel: NET: Registered PF_ALG protocol family May 27 17:40:43.226093 systemd-networkd[1500]: cilium_host: Gained IPv6LL May 27 17:40:43.716845 systemd-networkd[1500]: lxc_health: Link UP May 27 17:40:43.718953 systemd-networkd[1500]: lxc_health: Gained carrier May 27 17:40:43.948183 systemd-networkd[1500]: lxcf0423a4e4e42: Link UP May 27 17:40:43.948887 kernel: eth0: renamed from tmpc6eac May 27 17:40:43.949975 systemd-networkd[1500]: lxcf0423a4e4e42: Gained carrier May 27 17:40:43.951022 systemd-networkd[1500]: lxcc5a54c82d0a4: Link UP May 27 17:40:43.962986 kernel: eth0: renamed from tmp5a9a8 May 27 17:40:43.966957 systemd-networkd[1500]: lxcc5a54c82d0a4: Gained carrier May 27 17:40:44.155328 kubelet[2706]: I0527 
17:40:44.155228 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qkv4n" podStartSLOduration=8.846110555 podStartE2EDuration="19.155206539s" podCreationTimestamp="2025-05-27 17:40:25 +0000 UTC" firstStartedPulling="2025-05-27 17:40:26.279811283 +0000 UTC m=+7.148853791" lastFinishedPulling="2025-05-27 17:40:36.588907267 +0000 UTC m=+17.457949775" observedRunningTime="2025-05-27 17:40:41.304336929 +0000 UTC m=+22.173379437" watchObservedRunningTime="2025-05-27 17:40:44.155206539 +0000 UTC m=+25.024249047" May 27 17:40:44.572962 systemd-networkd[1500]: cilium_vxlan: Gained IPv6LL May 27 17:40:44.826466 systemd-networkd[1500]: lxc_health: Gained IPv6LL May 27 17:40:45.466127 systemd-networkd[1500]: lxcc5a54c82d0a4: Gained IPv6LL May 27 17:40:45.466480 systemd-networkd[1500]: lxcf0423a4e4e42: Gained IPv6LL May 27 17:40:47.397074 containerd[1591]: time="2025-05-27T17:40:47.396921846Z" level=info msg="connecting to shim c6eac13770c3802072f889efc6e1dbda1997eec5192a0f1e56289cb16b6f7aee" address="unix:///run/containerd/s/c27b9ca63357e3c6732b1c1ee0e4a6413ad82fcada58155bf36a9b5a5dd52abf" namespace=k8s.io protocol=ttrpc version=3 May 27 17:40:47.398822 containerd[1591]: time="2025-05-27T17:40:47.398787162Z" level=info msg="connecting to shim 5a9a89742b7c2cfdb708adf5325d772ada75e53a2dd7b4623d5e0c4a7a7035a9" address="unix:///run/containerd/s/63e03a44122607421c37ce42f5c2d5fe2c841ea465788b4937e43be653815100" namespace=k8s.io protocol=ttrpc version=3 May 27 17:40:47.439083 systemd[1]: Started cri-containerd-5a9a89742b7c2cfdb708adf5325d772ada75e53a2dd7b4623d5e0c4a7a7035a9.scope - libcontainer container 5a9a89742b7c2cfdb708adf5325d772ada75e53a2dd7b4623d5e0c4a7a7035a9. May 27 17:40:47.442510 systemd[1]: Started cri-containerd-c6eac13770c3802072f889efc6e1dbda1997eec5192a0f1e56289cb16b6f7aee.scope - libcontainer container c6eac13770c3802072f889efc6e1dbda1997eec5192a0f1e56289cb16b6f7aee. 
May 27 17:40:47.456445 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:40:47.458756 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:40:47.493673 containerd[1591]: time="2025-05-27T17:40:47.493620293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d4rd7,Uid:1e14ba4e-e6a1-4ea2-a249-c850df0a0199,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6eac13770c3802072f889efc6e1dbda1997eec5192a0f1e56289cb16b6f7aee\"" May 27 17:40:47.503421 containerd[1591]: time="2025-05-27T17:40:47.503377292Z" level=info msg="CreateContainer within sandbox \"c6eac13770c3802072f889efc6e1dbda1997eec5192a0f1e56289cb16b6f7aee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:40:47.503673 containerd[1591]: time="2025-05-27T17:40:47.503619818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8qxr5,Uid:77a75067-0382-49ee-abe0-1df89e7d222b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a9a89742b7c2cfdb708adf5325d772ada75e53a2dd7b4623d5e0c4a7a7035a9\"" May 27 17:40:47.509610 containerd[1591]: time="2025-05-27T17:40:47.509569822Z" level=info msg="CreateContainer within sandbox \"5a9a89742b7c2cfdb708adf5325d772ada75e53a2dd7b4623d5e0c4a7a7035a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:40:47.522512 containerd[1591]: time="2025-05-27T17:40:47.522474168Z" level=info msg="Container d5e2405960caba9180cdbc8b7481f180b538517c07e36a55ff30dc18aa1dcff1: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:47.530227 containerd[1591]: time="2025-05-27T17:40:47.530174752Z" level=info msg="CreateContainer within sandbox \"5a9a89742b7c2cfdb708adf5325d772ada75e53a2dd7b4623d5e0c4a7a7035a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d5e2405960caba9180cdbc8b7481f180b538517c07e36a55ff30dc18aa1dcff1\"" May 27 17:40:47.530814 containerd[1591]: time="2025-05-27T17:40:47.530773787Z" level=info msg="StartContainer for \"d5e2405960caba9180cdbc8b7481f180b538517c07e36a55ff30dc18aa1dcff1\"" May 27 17:40:47.531635 containerd[1591]: time="2025-05-27T17:40:47.531612424Z" level=info msg="connecting to shim d5e2405960caba9180cdbc8b7481f180b538517c07e36a55ff30dc18aa1dcff1" address="unix:///run/containerd/s/63e03a44122607421c37ce42f5c2d5fe2c841ea465788b4937e43be653815100" protocol=ttrpc version=3 May 27 17:40:47.537922 containerd[1591]: time="2025-05-27T17:40:47.537894913Z" level=info msg="Container 34c1acd99a9b1c8bfbadaa30b3175ad62e819d63b0583b7919f5837493a334a7: CDI devices from CRI Config.CDIDevices: []" May 27 17:40:47.544606 containerd[1591]: time="2025-05-27T17:40:47.544577493Z" level=info msg="CreateContainer within sandbox \"c6eac13770c3802072f889efc6e1dbda1997eec5192a0f1e56289cb16b6f7aee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34c1acd99a9b1c8bfbadaa30b3175ad62e819d63b0583b7919f5837493a334a7\"" May 27 17:40:47.545015 containerd[1591]: time="2025-05-27T17:40:47.544998124Z" level=info msg="StartContainer for \"34c1acd99a9b1c8bfbadaa30b3175ad62e819d63b0583b7919f5837493a334a7\"" May 27 17:40:47.545916 containerd[1591]: time="2025-05-27T17:40:47.545746681Z" level=info msg="connecting to shim 34c1acd99a9b1c8bfbadaa30b3175ad62e819d63b0583b7919f5837493a334a7" address="unix:///run/containerd/s/c27b9ca63357e3c6732b1c1ee0e4a6413ad82fcada58155bf36a9b5a5dd52abf" protocol=ttrpc version=3 May 27 17:40:47.554311 
systemd[1]: Started cri-containerd-d5e2405960caba9180cdbc8b7481f180b538517c07e36a55ff30dc18aa1dcff1.scope - libcontainer container d5e2405960caba9180cdbc8b7481f180b538517c07e36a55ff30dc18aa1dcff1. May 27 17:40:47.570996 systemd[1]: Started cri-containerd-34c1acd99a9b1c8bfbadaa30b3175ad62e819d63b0583b7919f5837493a334a7.scope - libcontainer container 34c1acd99a9b1c8bfbadaa30b3175ad62e819d63b0583b7919f5837493a334a7. May 27 17:40:47.623889 containerd[1591]: time="2025-05-27T17:40:47.623275546Z" level=info msg="StartContainer for \"34c1acd99a9b1c8bfbadaa30b3175ad62e819d63b0583b7919f5837493a334a7\" returns successfully" May 27 17:40:47.624601 containerd[1591]: time="2025-05-27T17:40:47.624569648Z" level=info msg="StartContainer for \"d5e2405960caba9180cdbc8b7481f180b538517c07e36a55ff30dc18aa1dcff1\" returns successfully" May 27 17:40:48.345978 kubelet[2706]: I0527 17:40:48.345779 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d4rd7" podStartSLOduration=23.345754513 podStartE2EDuration="23.345754513s" podCreationTimestamp="2025-05-27 17:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:40:48.344499073 +0000 UTC m=+29.213541601" watchObservedRunningTime="2025-05-27 17:40:48.345754513 +0000 UTC m=+29.214797041" May 27 17:40:48.373778 kubelet[2706]: I0527 17:40:48.373665 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8qxr5" podStartSLOduration=23.373619942 podStartE2EDuration="23.373619942s" podCreationTimestamp="2025-05-27 17:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:40:48.359884669 +0000 UTC m=+29.228927187" watchObservedRunningTime="2025-05-27 17:40:48.373619942 +0000 UTC m=+29.242662450" May 27 17:40:48.741362 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:33924.service - OpenSSH per-connection server daemon (10.0.0.1:33924). May 27 17:40:48.810807 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 33924 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:40:48.812793 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:40:48.818585 systemd-logind[1577]: New session 10 of user core. May 27 17:40:48.833141 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 17:40:49.046819 sshd[4056]: Connection closed by 10.0.0.1 port 33924 May 27 17:40:49.047137 sshd-session[4054]: pam_unix(sshd:session): session closed for user core May 27 17:40:49.052533 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:33924.service: Deactivated successfully. May 27 17:40:49.055122 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:40:49.055962 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. May 27 17:40:49.057822 systemd-logind[1577]: Removed session 10. May 27 17:40:53.646726 kubelet[2706]: I0527 17:40:53.646667 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:40:54.061709 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:35608.service - OpenSSH per-connection server daemon (10.0.0.1:35608). 
May 27 17:40:54.115597 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 35608 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:40:54.117678 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:40:54.122723 systemd-logind[1577]: New session 11 of user core. May 27 17:40:54.132339 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:40:54.255010 sshd[4084]: Connection closed by 10.0.0.1 port 35608 May 27 17:40:54.255348 sshd-session[4082]: pam_unix(sshd:session): session closed for user core May 27 17:40:54.260006 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:35608.service: Deactivated successfully. May 27 17:40:54.262535 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:40:54.264032 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. May 27 17:40:54.265795 systemd-logind[1577]: Removed session 11. May 27 17:40:59.277970 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:35622.service - OpenSSH per-connection server daemon (10.0.0.1:35622). May 27 17:40:59.336514 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 35622 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:40:59.337939 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:40:59.342725 systemd-logind[1577]: New session 12 of user core. May 27 17:40:59.351013 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:40:59.465909 sshd[4104]: Connection closed by 10.0.0.1 port 35622 May 27 17:40:59.466205 sshd-session[4102]: pam_unix(sshd:session): session closed for user core May 27 17:40:59.470483 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:35622.service: Deactivated successfully. May 27 17:40:59.473126 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:40:59.474063 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. May 27 17:40:59.475639 systemd-logind[1577]: Removed session 12. May 27 17:41:04.479609 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:55742.service - OpenSSH per-connection server daemon (10.0.0.1:55742). May 27 17:41:04.544695 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 55742 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:04.546416 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:04.551390 systemd-logind[1577]: New session 13 of user core. May 27 17:41:04.560993 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 17:41:04.674183 sshd[4120]: Connection closed by 10.0.0.1 port 55742 May 27 17:41:04.674498 sshd-session[4118]: pam_unix(sshd:session): session closed for user core May 27 17:41:04.678435 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:55742.service: Deactivated successfully. May 27 17:41:04.680235 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:41:04.681189 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. May 27 17:41:04.682488 systemd-logind[1577]: Removed session 13. May 27 17:41:09.695148 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:55744.service - OpenSSH per-connection server daemon (10.0.0.1:55744). 
May 27 17:41:09.746659 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 55744 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:09.748462 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:09.753522 systemd-logind[1577]: New session 14 of user core. May 27 17:41:09.763026 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:41:09.883069 sshd[4136]: Connection closed by 10.0.0.1 port 55744 May 27 17:41:09.883547 sshd-session[4134]: pam_unix(sshd:session): session closed for user core May 27 17:41:09.896070 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:55744.service: Deactivated successfully. May 27 17:41:09.898048 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:41:09.899011 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. May 27 17:41:09.901958 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:55754.service - OpenSSH per-connection server daemon (10.0.0.1:55754). May 27 17:41:09.902602 systemd-logind[1577]: Removed session 14. May 27 17:41:09.955722 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 55754 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:09.957446 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:09.963246 systemd-logind[1577]: New session 15 of user core. May 27 17:41:09.973082 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 17:41:10.129945 sshd[4153]: Connection closed by 10.0.0.1 port 55754 May 27 17:41:10.130363 sshd-session[4151]: pam_unix(sshd:session): session closed for user core May 27 17:41:10.152973 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:55754.service: Deactivated successfully. May 27 17:41:10.155672 systemd[1]: session-15.scope: Deactivated successfully. May 27 17:41:10.157438 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. May 27 17:41:10.161422 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:55760.service - OpenSSH per-connection server daemon (10.0.0.1:55760). May 27 17:41:10.162826 systemd-logind[1577]: Removed session 15. May 27 17:41:10.214668 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 55760 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:10.216527 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:10.221378 systemd-logind[1577]: New session 16 of user core. May 27 17:41:10.236121 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 17:41:10.368761 sshd[4167]: Connection closed by 10.0.0.1 port 55760 May 27 17:41:10.369421 sshd-session[4165]: pam_unix(sshd:session): session closed for user core May 27 17:41:10.374769 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:55760.service: Deactivated successfully. May 27 17:41:10.377148 systemd[1]: session-16.scope: Deactivated successfully. May 27 17:41:10.378121 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. May 27 17:41:10.380097 systemd-logind[1577]: Removed session 16. May 27 17:41:15.385637 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:35048.service - OpenSSH per-connection server daemon (10.0.0.1:35048). 
May 27 17:41:15.423310 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 35048 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:15.424620 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:15.428642 systemd-logind[1577]: New session 17 of user core. May 27 17:41:15.439048 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 17:41:15.572070 sshd[4183]: Connection closed by 10.0.0.1 port 35048 May 27 17:41:15.572401 sshd-session[4181]: pam_unix(sshd:session): session closed for user core May 27 17:41:15.576822 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:35048.service: Deactivated successfully. May 27 17:41:15.578929 systemd[1]: session-17.scope: Deactivated successfully. May 27 17:41:15.579783 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. May 27 17:41:15.581071 systemd-logind[1577]: Removed session 17. May 27 17:41:20.586359 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:35058.service - OpenSSH per-connection server daemon (10.0.0.1:35058). May 27 17:41:20.629734 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 35058 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:20.631597 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:20.636214 systemd-logind[1577]: New session 18 of user core. May 27 17:41:20.644997 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 17:41:20.761342 sshd[4200]: Connection closed by 10.0.0.1 port 35058 May 27 17:41:20.761672 sshd-session[4198]: pam_unix(sshd:session): session closed for user core May 27 17:41:20.770548 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:35058.service: Deactivated successfully. May 27 17:41:20.772476 systemd[1]: session-18.scope: Deactivated successfully. May 27 17:41:20.773383 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. May 27 17:41:20.776659 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:35064.service - OpenSSH per-connection server daemon (10.0.0.1:35064). May 27 17:41:20.777275 systemd-logind[1577]: Removed session 18. May 27 17:41:20.824211 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 35064 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:20.825809 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:20.830352 systemd-logind[1577]: New session 19 of user core. May 27 17:41:20.844006 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 17:41:21.129531 sshd[4216]: Connection closed by 10.0.0.1 port 35064 May 27 17:41:21.130007 sshd-session[4214]: pam_unix(sshd:session): session closed for user core May 27 17:41:21.138585 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:35064.service: Deactivated successfully. May 27 17:41:21.140816 systemd[1]: session-19.scope: Deactivated successfully. May 27 17:41:21.141874 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. May 27 17:41:21.145129 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:35070.service - OpenSSH per-connection server daemon (10.0.0.1:35070). May 27 17:41:21.146327 systemd-logind[1577]: Removed session 19. 
May 27 17:41:21.195170 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 35070 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:21.196758 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:21.201652 systemd-logind[1577]: New session 20 of user core. May 27 17:41:21.210993 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 17:41:22.052193 sshd[4230]: Connection closed by 10.0.0.1 port 35070 May 27 17:41:22.052582 sshd-session[4228]: pam_unix(sshd:session): session closed for user core May 27 17:41:22.065230 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:35070.service: Deactivated successfully. May 27 17:41:22.067343 systemd[1]: session-20.scope: Deactivated successfully. May 27 17:41:22.068213 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. May 27 17:41:22.071695 systemd[1]: Started sshd@20-10.0.0.45:22-10.0.0.1:35078.service - OpenSSH per-connection server daemon (10.0.0.1:35078). May 27 17:41:22.072706 systemd-logind[1577]: Removed session 20. May 27 17:41:22.121460 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 35078 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:22.123373 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:22.129888 systemd-logind[1577]: New session 21 of user core. May 27 17:41:22.141017 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 17:41:22.409448 sshd[4251]: Connection closed by 10.0.0.1 port 35078 May 27 17:41:22.410561 sshd-session[4249]: pam_unix(sshd:session): session closed for user core May 27 17:41:22.422328 systemd[1]: sshd@20-10.0.0.45:22-10.0.0.1:35078.service: Deactivated successfully. May 27 17:41:22.424351 systemd[1]: session-21.scope: Deactivated successfully. May 27 17:41:22.426114 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. May 27 17:41:22.430836 systemd[1]: Started sshd@21-10.0.0.45:22-10.0.0.1:35094.service - OpenSSH per-connection server daemon (10.0.0.1:35094). May 27 17:41:22.431645 systemd-logind[1577]: Removed session 21. May 27 17:41:22.495415 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 35094 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:22.496843 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:22.501834 systemd-logind[1577]: New session 22 of user core. May 27 17:41:22.513025 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 17:41:22.628258 sshd[4264]: Connection closed by 10.0.0.1 port 35094 May 27 17:41:22.628598 sshd-session[4262]: pam_unix(sshd:session): session closed for user core May 27 17:41:22.634229 systemd[1]: sshd@21-10.0.0.45:22-10.0.0.1:35094.service: Deactivated successfully. May 27 17:41:22.636764 systemd[1]: session-22.scope: Deactivated successfully. May 27 17:41:22.637759 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. May 27 17:41:22.639549 systemd-logind[1577]: Removed session 22. May 27 17:41:27.647011 systemd[1]: Started sshd@22-10.0.0.45:22-10.0.0.1:38076.service - OpenSSH per-connection server daemon (10.0.0.1:38076). 
May 27 17:41:27.690623 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 38076 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:27.692360 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:27.697138 systemd-logind[1577]: New session 23 of user core. May 27 17:41:27.708067 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 17:41:27.817683 sshd[4283]: Connection closed by 10.0.0.1 port 38076 May 27 17:41:27.818035 sshd-session[4281]: pam_unix(sshd:session): session closed for user core May 27 17:41:27.822062 systemd[1]: sshd@22-10.0.0.45:22-10.0.0.1:38076.service: Deactivated successfully. May 27 17:41:27.823938 systemd[1]: session-23.scope: Deactivated successfully. May 27 17:41:27.824731 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit. May 27 17:41:27.825975 systemd-logind[1577]: Removed session 23. May 27 17:41:32.836700 systemd[1]: Started sshd@23-10.0.0.45:22-10.0.0.1:38078.service - OpenSSH per-connection server daemon (10.0.0.1:38078). May 27 17:41:32.885128 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 38078 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:32.887157 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:32.892799 systemd-logind[1577]: New session 24 of user core. May 27 17:41:32.910139 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 17:41:33.029022 sshd[4298]: Connection closed by 10.0.0.1 port 38078 May 27 17:41:33.029415 sshd-session[4296]: pam_unix(sshd:session): session closed for user core May 27 17:41:33.034148 systemd[1]: sshd@23-10.0.0.45:22-10.0.0.1:38078.service: Deactivated successfully. May 27 17:41:33.036182 systemd[1]: session-24.scope: Deactivated successfully. May 27 17:41:33.036967 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit. May 27 17:41:33.038406 systemd-logind[1577]: Removed session 24. May 27 17:41:38.046544 systemd[1]: Started sshd@24-10.0.0.45:22-10.0.0.1:59778.service - OpenSSH per-connection server daemon (10.0.0.1:59778). May 27 17:41:38.104752 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 59778 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:38.106526 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:38.111978 systemd-logind[1577]: New session 25 of user core. May 27 17:41:38.122206 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 17:41:38.239473 sshd[4313]: Connection closed by 10.0.0.1 port 59778 May 27 17:41:38.239807 sshd-session[4311]: pam_unix(sshd:session): session closed for user core May 27 17:41:38.256533 systemd[1]: sshd@24-10.0.0.45:22-10.0.0.1:59778.service: Deactivated successfully. May 27 17:41:38.258221 systemd[1]: session-25.scope: Deactivated successfully. May 27 17:41:38.259067 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit. May 27 17:41:38.261495 systemd[1]: Started sshd@25-10.0.0.45:22-10.0.0.1:59786.service - OpenSSH per-connection server daemon (10.0.0.1:59786). May 27 17:41:38.262317 systemd-logind[1577]: Removed session 25. 
May 27 17:41:38.307635 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 59786 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:38.308802 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:38.313090 systemd-logind[1577]: New session 26 of user core. May 27 17:41:38.319990 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 17:41:39.808904 containerd[1591]: time="2025-05-27T17:41:39.808276111Z" level=info msg="StopContainer for \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" with timeout 30 (s)" May 27 17:41:39.816908 containerd[1591]: time="2025-05-27T17:41:39.816849423Z" level=info msg="Stop container \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" with signal terminated" May 27 17:41:39.825939 containerd[1591]: time="2025-05-27T17:41:39.825837094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" id:\"d246b56bf8ae1a1898a2a0549f065ed17c3a4bc3e78bc99aba08df8ea06258dd\" pid:4348 exited_at:{seconds:1748367699 nanos:825496034}" May 27 17:41:39.826996 containerd[1591]: time="2025-05-27T17:41:39.826931517Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:41:39.828088 containerd[1591]: time="2025-05-27T17:41:39.828041460Z" level=info msg="StopContainer for \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" with timeout 2 (s)" May 27 17:41:39.828399 containerd[1591]: time="2025-05-27T17:41:39.828377982Z" level=info msg="Stop container \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" with signal terminated" May 27 17:41:39.830420 systemd[1]: cri-containerd-43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549.scope: Deactivated successfully. May 27 17:41:39.832521 containerd[1591]: time="2025-05-27T17:41:39.832491613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" id:\"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" pid:3280 exited_at:{seconds:1748367699 nanos:832027620}" May 27 17:41:39.832679 containerd[1591]: time="2025-05-27T17:41:39.832551506Z" level=info msg="received exit event container_id:\"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" id:\"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" pid:3280 exited_at:{seconds:1748367699 nanos:832027620}" May 27 17:41:39.837294 systemd-networkd[1500]: lxc_health: Link DOWN May 27 17:41:39.837303 systemd-networkd[1500]: lxc_health: Lost carrier May 27 17:41:39.857243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549-rootfs.mount: Deactivated successfully. May 27 17:41:39.859222 systemd[1]: cri-containerd-b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c.scope: Deactivated successfully. May 27 17:41:39.859930 systemd[1]: cri-containerd-b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c.scope: Consumed 6.730s CPU time, 125.4M memory peak, 212K read from disk, 13.3M written to disk. 
May 27 17:41:39.860696 containerd[1591]: time="2025-05-27T17:41:39.860656994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" id:\"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" pid:3374 exited_at:{seconds:1748367699 nanos:860221695}" May 27 17:41:39.860793 containerd[1591]: time="2025-05-27T17:41:39.860712570Z" level=info msg="received exit event container_id:\"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" id:\"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" pid:3374 exited_at:{seconds:1748367699 nanos:860221695}" May 27 17:41:39.883379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c-rootfs.mount: Deactivated successfully. May 27 17:41:39.933378 containerd[1591]: time="2025-05-27T17:41:39.933331861Z" level=info msg="StopContainer for \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" returns successfully" May 27 17:41:39.933950 containerd[1591]: time="2025-05-27T17:41:39.933921203Z" level=info msg="StopPodSandbox for \"1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3\"" May 27 17:41:39.934001 containerd[1591]: time="2025-05-27T17:41:39.933993000Z" level=info msg="Container to stop \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:41:39.940848 systemd[1]: cri-containerd-1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3.scope: Deactivated successfully. May 27 17:41:39.942255 containerd[1591]: time="2025-05-27T17:41:39.942222767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3\" id:\"1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3\" pid:2965 exit_status:137 exited_at:{seconds:1748367699 nanos:941443985}" May 27 17:41:39.968348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3-rootfs.mount: Deactivated successfully. 
May 27 17:41:40.039767 containerd[1591]: time="2025-05-27T17:41:40.039725786Z" level=info msg="StopContainer for \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" returns successfully" May 27 17:41:40.040471 containerd[1591]: time="2025-05-27T17:41:40.040318904Z" level=info msg="StopPodSandbox for \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\"" May 27 17:41:40.040471 containerd[1591]: time="2025-05-27T17:41:40.040371384Z" level=info msg="Container to stop \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:41:40.040471 containerd[1591]: time="2025-05-27T17:41:40.040380592Z" level=info msg="Container to stop \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:41:40.040471 containerd[1591]: time="2025-05-27T17:41:40.040388086Z" level=info msg="Container to stop \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:41:40.040471 containerd[1591]: time="2025-05-27T17:41:40.040395500Z" level=info msg="Container to stop \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:41:40.040471 containerd[1591]: time="2025-05-27T17:41:40.040402854Z" level=info msg="Container to stop \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:41:40.047345 systemd[1]: cri-containerd-7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559.scope: Deactivated successfully. May 27 17:41:40.053234 containerd[1591]: time="2025-05-27T17:41:40.053192885Z" level=info msg="shim disconnected" id=1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3 namespace=k8s.io May 27 17:41:40.053234 containerd[1591]: time="2025-05-27T17:41:40.053228112Z" level=warning msg="cleaning up after shim disconnected" id=1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3 namespace=k8s.io May 27 17:41:40.064623 containerd[1591]: time="2025-05-27T17:41:40.053237019Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:41:40.066653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559-rootfs.mount: Deactivated successfully. 
May 27 17:41:40.074768 containerd[1591]: time="2025-05-27T17:41:40.074716767Z" level=info msg="shim disconnected" id=7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559 namespace=k8s.io May 27 17:41:40.074768 containerd[1591]: time="2025-05-27T17:41:40.074752595Z" level=warning msg="cleaning up after shim disconnected" id=7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559 namespace=k8s.io May 27 17:41:40.074768 containerd[1591]: time="2025-05-27T17:41:40.074760369Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:41:40.095152 containerd[1591]: time="2025-05-27T17:41:40.092671155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" id:\"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" pid:2871 exit_status:137 exited_at:{seconds:1748367700 nanos:47333293}" May 27 17:41:40.094552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3-shm.mount: Deactivated successfully. May 27 17:41:40.094667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559-shm.mount: Deactivated successfully. May 27 17:41:40.100994 containerd[1591]: time="2025-05-27T17:41:40.100931856Z" level=info msg="received exit event sandbox_id:\"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" exit_status:137 exited_at:{seconds:1748367700 nanos:47333293}" May 27 17:41:40.101164 containerd[1591]: time="2025-05-27T17:41:40.101134452Z" level=info msg="received exit event sandbox_id:\"1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3\" exit_status:137 exited_at:{seconds:1748367699 nanos:941443985}" May 27 17:41:40.106590 containerd[1591]: time="2025-05-27T17:41:40.106547222Z" level=info msg="TearDown network for sandbox \"1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3\" successfully" May 27 17:41:40.106590 containerd[1591]: time="2025-05-27T17:41:40.106574515Z" level=info msg="StopPodSandbox for \"1891f49db00705d3302f82ed21b34f58ce64fb2fb9a48ce434c8893564c7b3b3\" returns successfully" May 27 17:41:40.107598 containerd[1591]: time="2025-05-27T17:41:40.107561503Z" level=info msg="TearDown network for sandbox \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" successfully" May 27 17:41:40.107598 containerd[1591]: time="2025-05-27T17:41:40.107589716Z" level=info msg="StopPodSandbox for \"7e0c3562144beb07b3ad062ce43238308f873187bb25e970373ba7a074edf559\" returns successfully" May 27 17:41:40.233345 kubelet[2706]: I0527 17:41:40.233285 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cilium-cgroup\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.233345 kubelet[2706]: I0527 17:41:40.233332 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-hostproc\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.233345 kubelet[2706]: I0527 17:41:40.233356 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/06bf7627-546b-4828-9257-60abefe87ce8-hubble-tls\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.233987 kubelet[2706]: I0527 17:41:40.233371 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-bpf-maps\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.233987 kubelet[2706]: I0527 17:41:40.233387 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-xtables-lock\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.233987 kubelet[2706]: I0527 17:41:40.233404 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06bf7627-546b-4828-9257-60abefe87ce8-clustermesh-secrets\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.233987 kubelet[2706]: I0527 17:41:40.233424 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hplqz\" (UniqueName: \"kubernetes.io/projected/f4f44833-f5bd-4e9d-8a1c-34df019a161e-kube-api-access-hplqz\") pod \"f4f44833-f5bd-4e9d-8a1c-34df019a161e\" (UID: \"f4f44833-f5bd-4e9d-8a1c-34df019a161e\") " May 27 17:41:40.233987 kubelet[2706]: I0527 17:41:40.233444 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z2gd\" (UniqueName: \"kubernetes.io/projected/06bf7627-546b-4828-9257-60abefe87ce8-kube-api-access-8z2gd\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.233987 kubelet[2706]: I0527 17:41:40.233459 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-etc-cni-netd\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.234198 kubelet[2706]: I0527 17:41:40.233479 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4f44833-f5bd-4e9d-8a1c-34df019a161e-cilium-config-path\") pod \"f4f44833-f5bd-4e9d-8a1c-34df019a161e\" (UID: \"f4f44833-f5bd-4e9d-8a1c-34df019a161e\") " May 27 17:41:40.234198 kubelet[2706]: I0527 17:41:40.233463 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-hostproc" (OuterVolumeSpecName: "hostproc") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:41:40.234198 kubelet[2706]: I0527 17:41:40.233463 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:41:40.234198 kubelet[2706]: I0527 17:41:40.233496 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cilium-run\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.234198 kubelet[2706]: I0527 17:41:40.233543 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:41:40.234441 kubelet[2706]: I0527 17:41:40.233604 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06bf7627-546b-4828-9257-60abefe87ce8-cilium-config-path\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.234441 kubelet[2706]: I0527 17:41:40.233632 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cni-path\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.234441 kubelet[2706]: I0527 17:41:40.233655 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-lib-modules\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.234441 kubelet[2706]: I0527 17:41:40.233679 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-host-proc-sys-kernel\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.234441 kubelet[2706]: I0527 17:41:40.233697 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-host-proc-sys-net\") pod \"06bf7627-546b-4828-9257-60abefe87ce8\" (UID: \"06bf7627-546b-4828-9257-60abefe87ce8\") " May 27 17:41:40.234441 kubelet[2706]: I0527 17:41:40.233796 2706 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cilium-run\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.234441 kubelet[2706]: I0527 17:41:40.233884 2706 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-hostproc\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.234650 kubelet[2706]: I0527 17:41:40.233900 2706 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.234650 kubelet[2706]: I0527 17:41:40.233932 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:41:40.234650 kubelet[2706]: I0527 17:41:40.234070 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:41:40.234650 kubelet[2706]: I0527 17:41:40.234102 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:41:40.234650 kubelet[2706]: I0527 17:41:40.234118 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:41:40.234804 kubelet[2706]: I0527 17:41:40.234131 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cni-path" (OuterVolumeSpecName: "cni-path") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:41:40.234804 kubelet[2706]: I0527 17:41:40.234147 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:41:40.237451 kubelet[2706]: I0527 17:41:40.237426 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06bf7627-546b-4828-9257-60abefe87ce8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:41:40.237567 kubelet[2706]: I0527 17:41:40.237548 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:41:40.237639 kubelet[2706]: I0527 17:41:40.237556 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06bf7627-546b-4828-9257-60abefe87ce8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:41:40.238255 kubelet[2706]: I0527 17:41:40.238208 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06bf7627-546b-4828-9257-60abefe87ce8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:41:40.239523 kubelet[2706]: I0527 17:41:40.239491 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06bf7627-546b-4828-9257-60abefe87ce8-kube-api-access-8z2gd" (OuterVolumeSpecName: "kube-api-access-8z2gd") pod "06bf7627-546b-4828-9257-60abefe87ce8" (UID: "06bf7627-546b-4828-9257-60abefe87ce8"). InnerVolumeSpecName "kube-api-access-8z2gd". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:41:40.240657 kubelet[2706]: I0527 17:41:40.240633 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f44833-f5bd-4e9d-8a1c-34df019a161e-kube-api-access-hplqz" (OuterVolumeSpecName: "kube-api-access-hplqz") pod "f4f44833-f5bd-4e9d-8a1c-34df019a161e" (UID: "f4f44833-f5bd-4e9d-8a1c-34df019a161e"). InnerVolumeSpecName "kube-api-access-hplqz". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:41:40.241318 kubelet[2706]: I0527 17:41:40.241292 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f44833-f5bd-4e9d-8a1c-34df019a161e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4f44833-f5bd-4e9d-8a1c-34df019a161e" (UID: "f4f44833-f5bd-4e9d-8a1c-34df019a161e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:41:40.334612 kubelet[2706]: I0527 17:41:40.334530 2706 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334612 kubelet[2706]: I0527 17:41:40.334549 2706 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06bf7627-546b-4828-9257-60abefe87ce8-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334612 kubelet[2706]: I0527 17:41:40.334557 2706 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334612 kubelet[2706]: I0527 17:41:40.334566 2706 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06bf7627-546b-4828-9257-60abefe87ce8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334612 kubelet[2706]: I0527 17:41:40.334576 2706 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hplqz\" (UniqueName: \"kubernetes.io/projected/f4f44833-f5bd-4e9d-8a1c-34df019a161e-kube-api-access-hplqz\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334612 kubelet[2706]: I0527 17:41:40.334585 2706 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8z2gd\" (UniqueName: \"kubernetes.io/projected/06bf7627-546b-4828-9257-60abefe87ce8-kube-api-access-8z2gd\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334612 kubelet[2706]: I0527 17:41:40.334592 2706 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334612 kubelet[2706]: I0527 17:41:40.334600 2706 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4f44833-f5bd-4e9d-8a1c-34df019a161e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334915 kubelet[2706]: I0527 17:41:40.334607 2706 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06bf7627-546b-4828-9257-60abefe87ce8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334915 kubelet[2706]: I0527 17:41:40.334614 2706 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-cni-path\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334915 kubelet[2706]: I0527 17:41:40.334621 2706 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-lib-modules\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334915 kubelet[2706]: I0527 17:41:40.334628 2706 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.334915 kubelet[2706]: I0527 17:41:40.334635 2706 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/06bf7627-546b-4828-9257-60abefe87ce8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 27 17:41:40.424042 kubelet[2706]: I0527 17:41:40.423955 2706 scope.go:117] "RemoveContainer" containerID="43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549" May 27 17:41:40.426597 containerd[1591]: time="2025-05-27T17:41:40.426530879Z" level=info msg="RemoveContainer for \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\"" May 27 17:41:40.432306 containerd[1591]: time="2025-05-27T17:41:40.432174169Z" level=info msg="RemoveContainer for \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" returns successfully" May 27 17:41:40.433143 kubelet[2706]: I0527 17:41:40.433119 2706 scope.go:117] "RemoveContainer" containerID="43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549" May 27 17:41:40.436159 containerd[1591]: time="2025-05-27T17:41:40.436062740Z" level=error msg="ContainerStatus for \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\": not found" May 27 17:41:40.436556 kubelet[2706]: E0527 17:41:40.436529 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\": not found" containerID="43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549" May 27 17:41:40.436670 kubelet[2706]: I0527 17:41:40.436630 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549"} err="failed to get container status \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\": rpc error: code = NotFound desc = an error occurred when try to find container \"43a5ea33f2d07b9b5a9441d9e3bee9679903dd0775c5c7e89bd1e08a0492a549\": not found" May 27 17:41:40.436736 kubelet[2706]: I0527 17:41:40.436721 2706 scope.go:117] "RemoveContainer" containerID="b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c" May 27 17:41:40.439933 containerd[1591]: time="2025-05-27T17:41:40.439312223Z" level=info msg="RemoveContainer for \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\"" May 27 17:41:40.440109 systemd[1]: Removed slice kubepods-besteffort-podf4f44833_f5bd_4e9d_8a1c_34df019a161e.slice - libcontainer container kubepods-besteffort-podf4f44833_f5bd_4e9d_8a1c_34df019a161e.slice. May 27 17:41:40.441396 systemd[1]: Removed slice kubepods-burstable-pod06bf7627_546b_4828_9257_60abefe87ce8.slice - libcontainer container kubepods-burstable-pod06bf7627_546b_4828_9257_60abefe87ce8.slice. May 27 17:41:40.441479 systemd[1]: kubepods-burstable-pod06bf7627_546b_4828_9257_60abefe87ce8.slice: Consumed 6.840s CPU time, 125.8M memory peak, 220K read from disk, 13.3M written to disk. 
May 27 17:41:40.531410 containerd[1591]: time="2025-05-27T17:41:40.531361271Z" level=info msg="RemoveContainer for \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" returns successfully" May 27 17:41:40.531664 kubelet[2706]: I0527 17:41:40.531631 2706 scope.go:117] "RemoveContainer" containerID="28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5" May 27 17:41:40.533243 containerd[1591]: time="2025-05-27T17:41:40.533221982Z" level=info msg="RemoveContainer for \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\"" May 27 17:41:40.592898 containerd[1591]: time="2025-05-27T17:41:40.592774526Z" level=info msg="RemoveContainer for \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\" returns successfully" May 27 17:41:40.593286 kubelet[2706]: I0527 17:41:40.593256 2706 scope.go:117] "RemoveContainer" containerID="a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2" May 27 17:41:40.597652 containerd[1591]: time="2025-05-27T17:41:40.597602494Z" level=info msg="RemoveContainer for \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\"" May 27 17:41:40.613023 containerd[1591]: time="2025-05-27T17:41:40.612954670Z" level=info msg="RemoveContainer for \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\" returns successfully" May 27 17:41:40.613251 kubelet[2706]: I0527 17:41:40.613217 2706 scope.go:117] "RemoveContainer" containerID="931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb" May 27 17:41:40.614775 containerd[1591]: time="2025-05-27T17:41:40.614750477Z" level=info msg="RemoveContainer for \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\"" May 27 17:41:40.619311 containerd[1591]: time="2025-05-27T17:41:40.619271051Z" level=info msg="RemoveContainer for \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\" returns successfully" May 27 17:41:40.619482 kubelet[2706]: I0527 17:41:40.619436 2706 scope.go:117] "RemoveContainer" containerID="71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72" May 27 17:41:40.620933 containerd[1591]: time="2025-05-27T17:41:40.620898127Z" level=info msg="RemoveContainer for \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\"" May 27 17:41:40.634362 containerd[1591]: time="2025-05-27T17:41:40.634306495Z" level=info msg="RemoveContainer for \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\" returns successfully" May 27 17:41:40.634642 kubelet[2706]: I0527 17:41:40.634612 2706 scope.go:117] "RemoveContainer" containerID="b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c" May 27 17:41:40.634928 containerd[1591]: time="2025-05-27T17:41:40.634886789Z" level=error msg="ContainerStatus for \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\": not found" May 27 17:41:40.635089 kubelet[2706]: E0527 17:41:40.635050 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\": not found" containerID="b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c" May 27 17:41:40.635148 kubelet[2706]: I0527 17:41:40.635086 2706 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c"} err="failed to get container status \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b491dd20905ac27ad46e5c01caf2e8a9f4ecfb24ed978d91d2c79c2f087a4e4c\": not found" May 27 17:41:40.635148 kubelet[2706]: I0527 17:41:40.635112 2706 scope.go:117] "RemoveContainer" containerID="28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5" May 27 17:41:40.635286 containerd[1591]: time="2025-05-27T17:41:40.635255330Z" level=error msg="ContainerStatus for \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\": not found" May 27 17:41:40.635432 kubelet[2706]: E0527 17:41:40.635399 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\": not found" containerID="28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5" May 27 17:41:40.635477 kubelet[2706]: I0527 17:41:40.635435 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5"} err="failed to get container status \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"28d74ff06dc87a5c7707b575e80159125215f832ce3fc0da8f11a65e1816f3a5\": not found" May 27 17:41:40.635477 kubelet[2706]: I0527 17:41:40.635454 2706 scope.go:117] "RemoveContainer" containerID="a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2" May 27 17:41:40.635611 containerd[1591]: time="2025-05-27T17:41:40.635584286Z" level=error msg="ContainerStatus for \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\": not found" May 27 17:41:40.635698 kubelet[2706]: E0527 17:41:40.635672 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\": not found" containerID="a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2" May 27 17:41:40.635749 kubelet[2706]: I0527 17:41:40.635697 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2"} err="failed to get container status \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a582fe6d50a3dcd2998f4d26ea6fb1079c3a166bcc0bd27caa63986c92d69ba2\": not found" May 27 17:41:40.635749 kubelet[2706]: I0527 17:41:40.635712 2706 scope.go:117] "RemoveContainer" containerID="931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb" May 27 17:41:40.635905 containerd[1591]: time="2025-05-27T17:41:40.635872635Z" level=error msg="ContainerStatus for \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\": not found" May 27 17:41:40.636025 kubelet[2706]: E0527 17:41:40.636002 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\": not found" containerID="931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb" May 27 17:41:40.636094 kubelet[2706]: I0527 17:41:40.636039 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb"} err="failed to get container status \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"931f3bb73250bc6096ac4eb19f0c2a531dd4b0e9805c834eb387b7bad58e02cb\": not found" May 27 17:41:40.636094 kubelet[2706]: I0527 17:41:40.636056 2706 scope.go:117] "RemoveContainer" containerID="71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72" May 27 17:41:40.636251 containerd[1591]: time="2025-05-27T17:41:40.636213935Z" level=error msg="ContainerStatus for \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\": not found" May 27 17:41:40.636357 kubelet[2706]: E0527 17:41:40.636330 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\": not found" containerID="71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72" May 27 17:41:40.636390 kubelet[2706]: I0527 17:41:40.636361 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72"} err="failed to get container status \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\": rpc error: code = NotFound desc = an error occurred when try to find container \"71332a5c8a9f8127a0b5f4aac6aa72bdea4859aa40b85ac18d0baaee0cf0ea72\": not found" May 27 17:41:40.857055 systemd[1]: var-lib-kubelet-pods-f4f44833\x2df5bd\x2d4e9d\x2d8a1c\x2d34df019a161e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhplqz.mount: Deactivated successfully. May 27 17:41:40.857186 systemd[1]: var-lib-kubelet-pods-06bf7627\x2d546b\x2d4828\x2d9257\x2d60abefe87ce8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8z2gd.mount: Deactivated successfully. May 27 17:41:40.857283 systemd[1]: var-lib-kubelet-pods-06bf7627\x2d546b\x2d4828\x2d9257\x2d60abefe87ce8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 17:41:40.857376 systemd[1]: var-lib-kubelet-pods-06bf7627\x2d546b\x2d4828\x2d9257\x2d60abefe87ce8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 27 17:41:41.215049 kubelet[2706]: I0527 17:41:41.214913 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06bf7627-546b-4828-9257-60abefe87ce8" path="/var/lib/kubelet/pods/06bf7627-546b-4828-9257-60abefe87ce8/volumes" May 27 17:41:41.215726 kubelet[2706]: I0527 17:41:41.215695 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f44833-f5bd-4e9d-8a1c-34df019a161e" path="/var/lib/kubelet/pods/f4f44833-f5bd-4e9d-8a1c-34df019a161e/volumes" May 27 17:41:41.632235 sshd[4328]: Connection closed by 10.0.0.1 port 59786 May 27 17:41:41.632747 sshd-session[4326]: pam_unix(sshd:session): session closed for user core May 27 17:41:41.650208 systemd[1]: sshd@25-10.0.0.45:22-10.0.0.1:59786.service: Deactivated successfully. May 27 17:41:41.652423 systemd[1]: session-26.scope: Deactivated successfully. May 27 17:41:41.653216 systemd-logind[1577]: Session 26 logged out. Waiting for processes to exit. May 27 17:41:41.656372 systemd[1]: Started sshd@26-10.0.0.45:22-10.0.0.1:59796.service - OpenSSH per-connection server daemon (10.0.0.1:59796). May 27 17:41:41.657334 systemd-logind[1577]: Removed session 26. May 27 17:41:41.708948 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 59796 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:41.710454 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:41.715440 systemd-logind[1577]: New session 27 of user core. May 27 17:41:41.725028 systemd[1]: Started session-27.scope - Session 27 of User core. May 27 17:41:42.396528 sshd[4480]: Connection closed by 10.0.0.1 port 59796 May 27 17:41:42.397127 sshd-session[4478]: pam_unix(sshd:session): session closed for user core May 27 17:41:42.408909 systemd[1]: sshd@26-10.0.0.45:22-10.0.0.1:59796.service: Deactivated successfully. May 27 17:41:42.412461 systemd[1]: session-27.scope: Deactivated successfully. May 27 17:41:42.413706 systemd-logind[1577]: Session 27 logged out. Waiting for processes to exit. May 27 17:41:42.419058 systemd[1]: Started sshd@27-10.0.0.45:22-10.0.0.1:59804.service - OpenSSH per-connection server daemon (10.0.0.1:59804). May 27 17:41:42.421060 systemd-logind[1577]: Removed session 27. May 27 17:41:42.442001 systemd[1]: Created slice kubepods-burstable-pod8b614bd8_e4eb_4a8d_bf16_c262486da690.slice - libcontainer container kubepods-burstable-pod8b614bd8_e4eb_4a8d_bf16_c262486da690.slice. May 27 17:41:42.467044 sshd[4492]: Accepted publickey for core from 10.0.0.1 port 59804 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY May 27 17:41:42.468570 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:41:42.473337 systemd-logind[1577]: New session 28 of user core. May 27 17:41:42.480021 systemd[1]: Started session-28.scope - Session 28 of User core. May 27 17:41:42.530421 sshd[4496]: Connection closed by 10.0.0.1 port 59804 May 27 17:41:42.530785 sshd-session[4492]: pam_unix(sshd:session): session closed for user core May 27 17:41:42.545677 systemd[1]: sshd@27-10.0.0.45:22-10.0.0.1:59804.service: Deactivated successfully. May 27 17:41:42.547659 systemd[1]: session-28.scope: Deactivated successfully. May 27 17:41:42.548513 systemd-logind[1577]: Session 28 logged out. Waiting for processes to exit. May 27 17:41:42.551913 systemd[1]: Started sshd@28-10.0.0.45:22-10.0.0.1:59806.service - OpenSSH per-connection server daemon (10.0.0.1:59806). May 27 17:41:42.552572 systemd-logind[1577]: Removed session 28. 
May 27 17:41:42.554793 kubelet[2706]: I0527 17:41:42.554756 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b614bd8-e4eb-4a8d-bf16-c262486da690-cilium-run\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.554793 kubelet[2706]: I0527 17:41:42.554790 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b614bd8-e4eb-4a8d-bf16-c262486da690-bpf-maps\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555163 kubelet[2706]: I0527 17:41:42.554806 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b614bd8-e4eb-4a8d-bf16-c262486da690-etc-cni-netd\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555163 kubelet[2706]: I0527 17:41:42.554821 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8b614bd8-e4eb-4a8d-bf16-c262486da690-cilium-ipsec-secrets\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555163 kubelet[2706]: I0527 17:41:42.554835 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b614bd8-e4eb-4a8d-bf16-c262486da690-host-proc-sys-kernel\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555163 kubelet[2706]: I0527 17:41:42.554864 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b614bd8-e4eb-4a8d-bf16-c262486da690-host-proc-sys-net\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555163 kubelet[2706]: I0527 17:41:42.554912 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b614bd8-e4eb-4a8d-bf16-c262486da690-hubble-tls\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555163 kubelet[2706]: I0527 17:41:42.554947 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b614bd8-e4eb-4a8d-bf16-c262486da690-lib-modules\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555313 kubelet[2706]: I0527 17:41:42.554976 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b614bd8-e4eb-4a8d-bf16-c262486da690-cni-path\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555313 kubelet[2706]: I0527 17:41:42.554989 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b614bd8-e4eb-4a8d-bf16-c262486da690-hostproc\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555313 kubelet[2706]: I0527 17:41:42.555003 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m8mw\" (UniqueName: \"kubernetes.io/projected/8b614bd8-e4eb-4a8d-bf16-c262486da690-kube-api-access-7m8mw\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555313 kubelet[2706]: I0527 17:41:42.555018 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b614bd8-e4eb-4a8d-bf16-c262486da690-cilium-cgroup\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555313 kubelet[2706]: I0527 17:41:42.555030 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b614bd8-e4eb-4a8d-bf16-c262486da690-xtables-lock\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555313 kubelet[2706]: I0527 17:41:42.555042 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b614bd8-e4eb-4a8d-bf16-c262486da690-clustermesh-secrets\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.555438 kubelet[2706]: I0527 17:41:42.555055 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b614bd8-e4eb-4a8d-bf16-c262486da690-cilium-config-path\") pod \"cilium-gffdh\" (UID: \"8b614bd8-e4eb-4a8d-bf16-c262486da690\") " pod="kube-system/cilium-gffdh"
May 27 17:41:42.598056 sshd[4503]: Accepted publickey for core from 10.0.0.1 port 59806 ssh2: RSA SHA256:Sdu3hc/K/GsFAoVLDVpDFh1tw++0J1r4WpeL8cs/qlY
May 27 17:41:42.599666 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:41:42.604770 systemd-logind[1577]: New session 29 of user core.
May 27 17:41:42.621037 systemd[1]: Started session-29.scope - Session 29 of User core.
May 27 17:41:42.747525 containerd[1591]: time="2025-05-27T17:41:42.747477537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gffdh,Uid:8b614bd8-e4eb-4a8d-bf16-c262486da690,Namespace:kube-system,Attempt:0,}"
May 27 17:41:42.766135 containerd[1591]: time="2025-05-27T17:41:42.766095904Z" level=info msg="connecting to shim d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8" address="unix:///run/containerd/s/529f809b6d9c30270a8cfb8223428b501f549f616b3662e10929c0bf4361db0f" namespace=k8s.io protocol=ttrpc version=3
May 27 17:41:42.789009 systemd[1]: Started cri-containerd-d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8.scope - libcontainer container d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8.
May 27 17:41:42.813838 containerd[1591]: time="2025-05-27T17:41:42.813796219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gffdh,Uid:8b614bd8-e4eb-4a8d-bf16-c262486da690,Namespace:kube-system,Attempt:0,} returns sandbox id \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\""
May 27 17:41:42.826569 containerd[1591]: time="2025-05-27T17:41:42.826490550Z" level=info msg="CreateContainer within sandbox \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 17:41:42.835505 containerd[1591]: time="2025-05-27T17:41:42.835453445Z" level=info msg="Container f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d: CDI devices from CRI Config.CDIDevices: []"
May 27 17:41:42.843492 containerd[1591]: time="2025-05-27T17:41:42.843435895Z" level=info msg="CreateContainer within sandbox \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d\""
May 27 17:41:42.844155 containerd[1591]: time="2025-05-27T17:41:42.844098837Z" level=info msg="StartContainer for \"f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d\""
May 27 17:41:42.845020 containerd[1591]: time="2025-05-27T17:41:42.844989010Z" level=info msg="connecting to shim f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d" address="unix:///run/containerd/s/529f809b6d9c30270a8cfb8223428b501f549f616b3662e10929c0bf4361db0f" protocol=ttrpc version=3
May 27 17:41:42.872013 systemd[1]: Started cri-containerd-f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d.scope - libcontainer container f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d.
May 27 17:41:42.903589 containerd[1591]: time="2025-05-27T17:41:42.903545900Z" level=info msg="StartContainer for \"f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d\" returns successfully"
May 27 17:41:42.913735 systemd[1]: cri-containerd-f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d.scope: Deactivated successfully.
May 27 17:41:42.915001 containerd[1591]: time="2025-05-27T17:41:42.914850158Z" level=info msg="received exit event container_id:\"f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d\" id:\"f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d\" pid:4576 exited_at:{seconds:1748367702 nanos:914509450}"
May 27 17:41:42.915069 containerd[1591]: time="2025-05-27T17:41:42.914930751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d\" id:\"f044684687d2fcd142a4d8ce77e90625a1601a6de460b23b48dc6ee4b04e802d\" pid:4576 exited_at:{seconds:1748367702 nanos:914509450}"
May 27 17:41:43.449183 containerd[1591]: time="2025-05-27T17:41:43.449127105Z" level=info msg="CreateContainer within sandbox \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 17:41:43.456787 containerd[1591]: time="2025-05-27T17:41:43.456735078Z" level=info msg="Container de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032: CDI devices from CRI Config.CDIDevices: []"
May 27 17:41:43.463385 containerd[1591]: time="2025-05-27T17:41:43.463334404Z" level=info msg="CreateContainer within sandbox \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032\""
May 27 17:41:43.464827 containerd[1591]: time="2025-05-27T17:41:43.463890773Z" level=info msg="StartContainer for \"de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032\""
May 27 17:41:43.465030 containerd[1591]: time="2025-05-27T17:41:43.464999820Z" level=info msg="connecting to shim de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032" address="unix:///run/containerd/s/529f809b6d9c30270a8cfb8223428b501f549f616b3662e10929c0bf4361db0f" protocol=ttrpc version=3
May 27 17:41:43.487997 systemd[1]: Started cri-containerd-de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032.scope - libcontainer container de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032.
May 27 17:41:43.519347 containerd[1591]: time="2025-05-27T17:41:43.519296695Z" level=info msg="StartContainer for \"de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032\" returns successfully"
May 27 17:41:43.525359 systemd[1]: cri-containerd-de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032.scope: Deactivated successfully.
May 27 17:41:43.525615 containerd[1591]: time="2025-05-27T17:41:43.525585291Z" level=info msg="received exit event container_id:\"de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032\" id:\"de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032\" pid:4621 exited_at:{seconds:1748367703 nanos:525399648}"
May 27 17:41:43.525775 containerd[1591]: time="2025-05-27T17:41:43.525699077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032\" id:\"de26059a260b772ceb7091e408b21c0a0c995ddd2b75d2ed41709c18ffe0d032\" pid:4621 exited_at:{seconds:1748367703 nanos:525399648}"
May 27 17:41:44.261219 kubelet[2706]: E0527 17:41:44.261179 2706 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 17:41:44.456125 containerd[1591]: time="2025-05-27T17:41:44.456056000Z" level=info msg="CreateContainer within sandbox \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 17:41:44.469887 containerd[1591]: time="2025-05-27T17:41:44.468641248Z" level=info msg="Container f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf: CDI devices from CRI Config.CDIDevices: []"
May 27 17:41:44.478990 containerd[1591]: time="2025-05-27T17:41:44.478925073Z" level=info msg="CreateContainer within sandbox \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf\""
May 27 17:41:44.479512 containerd[1591]: time="2025-05-27T17:41:44.479477903Z" level=info msg="StartContainer for \"f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf\""
May 27 17:41:44.481195 containerd[1591]: time="2025-05-27T17:41:44.481166974Z" level=info msg="connecting to shim f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf" address="unix:///run/containerd/s/529f809b6d9c30270a8cfb8223428b501f549f616b3662e10929c0bf4361db0f" protocol=ttrpc version=3
May 27 17:41:44.506141 systemd[1]: Started cri-containerd-f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf.scope - libcontainer container f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf.
May 27 17:41:44.550244 systemd[1]: cri-containerd-f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf.scope: Deactivated successfully.
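The mount-cgroup and apply-sysctl-overwrites init containers above each follow the same CreateContainer, StartContainer, TaskExit sequence reported by containerd. A minimal sketch, assuming the default containerd socket and the k8s.io namespace used by the CRI plugin, that subscribes to the same /tasks/exit event stream with the containerd Go client:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Filter on the exit topic only, mirroring the TaskExit lines in the log.
	ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case e := <-ch:
			fmt.Printf("%s %s %s\n", e.Timestamp.Format("15:04:05.000"), e.Namespace, e.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}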
May 27 17:41:44.551822 containerd[1591]: time="2025-05-27T17:41:44.551781306Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf\" id:\"f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf\" pid:4666 exited_at:{seconds:1748367704 nanos:550925019}"
May 27 17:41:44.552058 containerd[1591]: time="2025-05-27T17:41:44.551908358Z" level=info msg="received exit event container_id:\"f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf\" id:\"f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf\" pid:4666 exited_at:{seconds:1748367704 nanos:550925019}"
May 27 17:41:44.554828 containerd[1591]: time="2025-05-27T17:41:44.554091096Z" level=info msg="StartContainer for \"f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf\" returns successfully"
May 27 17:41:44.577647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2f6591121194d1808a245658208b7ffe0ac535ba00f72d56500beefb69cf3bf-rootfs.mount: Deactivated successfully.
May 27 17:41:45.464575 containerd[1591]: time="2025-05-27T17:41:45.464508794Z" level=info msg="CreateContainer within sandbox \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 17:41:45.478130 containerd[1591]: time="2025-05-27T17:41:45.478065831Z" level=info msg="Container 8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b: CDI devices from CRI Config.CDIDevices: []"
May 27 17:41:45.479424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405203467.mount: Deactivated successfully.
May 27 17:41:45.486555 containerd[1591]: time="2025-05-27T17:41:45.486514973Z" level=info msg="CreateContainer within sandbox \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b\""
May 27 17:41:45.487154 containerd[1591]: time="2025-05-27T17:41:45.487130533Z" level=info msg="StartContainer for \"8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b\""
May 27 17:41:45.488082 containerd[1591]: time="2025-05-27T17:41:45.488045331Z" level=info msg="connecting to shim 8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b" address="unix:///run/containerd/s/529f809b6d9c30270a8cfb8223428b501f549f616b3662e10929c0bf4361db0f" protocol=ttrpc version=3
May 27 17:41:45.509074 systemd[1]: Started cri-containerd-8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b.scope - libcontainer container 8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b.
May 27 17:41:45.538797 systemd[1]: cri-containerd-8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b.scope: Deactivated successfully.
May 27 17:41:45.539529 containerd[1591]: time="2025-05-27T17:41:45.539408005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b\" id:\"8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b\" pid:4704 exited_at:{seconds:1748367705 nanos:539041779}"
May 27 17:41:45.542761 containerd[1591]: time="2025-05-27T17:41:45.542702344Z" level=info msg="received exit event container_id:\"8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b\" id:\"8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b\" pid:4704 exited_at:{seconds:1748367705 nanos:539041779}"
May 27 17:41:45.551233 containerd[1591]: time="2025-05-27T17:41:45.551185320Z" level=info msg="StartContainer for \"8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b\" returns successfully"
May 27 17:41:45.563820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cede26017d81d5ca09e873ef9b6a77e718658edfbd3960bb521b364d77cb88b-rootfs.mount: Deactivated successfully.
May 27 17:41:46.467594 containerd[1591]: time="2025-05-27T17:41:46.467524663Z" level=info msg="CreateContainer within sandbox \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 17:41:46.476624 containerd[1591]: time="2025-05-27T17:41:46.476569412Z" level=info msg="Container 12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9: CDI devices from CRI Config.CDIDevices: []"
May 27 17:41:46.485324 containerd[1591]: time="2025-05-27T17:41:46.485271730Z" level=info msg="CreateContainer within sandbox \"d43457c1798748bc3a20cade47433c7e445fb7e494421c0df31532c65ba094e8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9\""
May 27 17:41:46.487236 containerd[1591]: time="2025-05-27T17:41:46.487195815Z" level=info msg="StartContainer for \"12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9\""
May 27 17:41:46.489092 containerd[1591]: time="2025-05-27T17:41:46.489048925Z" level=info msg="connecting to shim 12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9" address="unix:///run/containerd/s/529f809b6d9c30270a8cfb8223428b501f549f616b3662e10929c0bf4361db0f" protocol=ttrpc version=3
May 27 17:41:46.510013 systemd[1]: Started cri-containerd-12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9.scope - libcontainer container 12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9.
May 27 17:41:46.548360 containerd[1591]: time="2025-05-27T17:41:46.548318305Z" level=info msg="StartContainer for \"12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9\" returns successfully"
May 27 17:41:46.610548 containerd[1591]: time="2025-05-27T17:41:46.610452919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9\" id:\"399ac22b42ab119b7d62068fce136ab59fa1fea348843f40d34a486826c6d129\" pid:4770 exited_at:{seconds:1748367706 nanos:610086673}"
May 27 17:41:46.975918 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 27 17:41:47.487451 kubelet[2706]: I0527 17:41:47.487388 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gffdh" podStartSLOduration=5.487371955 podStartE2EDuration="5.487371955s" podCreationTimestamp="2025-05-27 17:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:41:47.487003206 +0000 UTC m=+88.356045714" watchObservedRunningTime="2025-05-27 17:41:47.487371955 +0000 UTC m=+88.356414463"
May 27 17:41:48.977545 containerd[1591]: time="2025-05-27T17:41:48.977363529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9\" id:\"7be68f371f5f67b4aaf9d2e4a7b8b4f3cf8e8ca22f7654f8006e99302e147cf0\" pid:4959 exit_status:1 exited_at:{seconds:1748367708 nanos:976805910}"
May 27 17:41:50.076884 systemd-networkd[1500]: lxc_health: Link UP
May 27 17:41:50.077770 systemd-networkd[1500]: lxc_health: Gained carrier
May 27 17:41:51.166810 containerd[1591]: time="2025-05-27T17:41:51.166758624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9\" id:\"ac119fe7bd46a0f08bdf4a758b3fbcb7a9211fb05d934d0c86c61903e01d695a\" pid:5308 exited_at:{seconds:1748367711 nanos:166113620}"
May 27 17:41:51.770180 systemd-networkd[1500]: lxc_health: Gained IPv6LL
May 27 17:41:53.354111 containerd[1591]: time="2025-05-27T17:41:53.354052887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9\" id:\"43db165e569659cf54e12c694bce27be4965b7b49597a84e4530b2ae423bc25d\" pid:5338 exited_at:{seconds:1748367713 nanos:353492114}"
May 27 17:41:55.463286 containerd[1591]: time="2025-05-27T17:41:55.463229306Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12d0f92355a0891292d425b45d24162fb45ced1a7c767dfb99c5c99bf57832f9\" id:\"9bb50bb75aa1e3494cef160a341c961f6a4f66e6ee79cef0eabc28369c21338b\" pid:5368 exited_at:{seconds:1748367715 nanos:462443237}"
May 27 17:41:55.470194 sshd[4506]: Connection closed by 10.0.0.1 port 59806
May 27 17:41:55.470658 sshd-session[4503]: pam_unix(sshd:session): session closed for user core
May 27 17:41:55.475334 systemd[1]: sshd@28-10.0.0.45:22-10.0.0.1:59806.service: Deactivated successfully.
May 27 17:41:55.477498 systemd[1]: session-29.scope: Deactivated successfully.
May 27 17:41:55.478682 systemd-logind[1577]: Session 29 logged out. Waiting for processes to exit.
May 27 17:41:55.480598 systemd-logind[1577]: Removed session 29.
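systemd-networkd reports the lxc_health interface coming up and gaining carrier once the cilium-agent container is running; the interface name comes straight from the log. A minimal sketch, assuming the vishvananda/netlink package and that it runs on the node itself, for reading that link's operational state:

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Interface name taken from the systemd-networkd lines above.
	link, err := netlink.LinkByName("lxc_health")
	if err != nil {
		log.Fatalf("lxc_health not found: %v", err)
	}
	attrs := link.Attrs()
	fmt.Printf("%s: index=%d operstate=%v mtu=%d\n",
		attrs.Name, attrs.Index, attrs.OperState, attrs.MTU)
}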