May 8 00:13:26.959137 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025 May 8 00:13:26.959167 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:13:26.959176 kernel: BIOS-provided physical RAM map: May 8 00:13:26.959183 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable May 8 00:13:26.959193 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved May 8 00:13:26.959199 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable May 8 00:13:26.959207 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved May 8 00:13:26.959214 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable May 8 00:13:26.959221 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved May 8 00:13:26.959227 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data May 8 00:13:26.959234 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS May 8 00:13:26.959241 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable May 8 00:13:26.959252 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved May 8 00:13:26.959262 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS May 8 00:13:26.959272 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable May 8 00:13:26.959279 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved May 8 00:13:26.959287 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 8 00:13:26.959294 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 8 00:13:26.959303 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 8 00:13:26.959310 kernel: NX (Execute Disable) protection: active May 8 00:13:26.959317 kernel: APIC: Static calls initialized May 8 00:13:26.959324 kernel: e820: update [mem 0x9a185018-0x9a18ec57] usable ==> usable May 8 00:13:26.959332 kernel: e820: update [mem 0x9a185018-0x9a18ec57] usable ==> usable May 8 00:13:26.959339 kernel: e820: update [mem 0x9a148018-0x9a184e57] usable ==> usable May 8 00:13:26.959346 kernel: e820: update [mem 0x9a148018-0x9a184e57] usable ==> usable May 8 00:13:26.959352 kernel: extended physical RAM map: May 8 00:13:26.959360 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable May 8 00:13:26.959367 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved May 8 00:13:26.959374 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable May 8 00:13:26.959381 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved May 8 00:13:26.959390 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a148017] usable May 8 00:13:26.959397 kernel: reserve setup_data: [mem 0x000000009a148018-0x000000009a184e57] usable May 8 00:13:26.959404 kernel: reserve setup_data: [mem 0x000000009a184e58-0x000000009a185017] usable May 8 00:13:26.959411 kernel: reserve setup_data: [mem 0x000000009a185018-0x000000009a18ec57] usable May 8 00:13:26.959418 kernel: 
reserve setup_data: [mem 0x000000009a18ec58-0x000000009b8ecfff] usable May 8 00:13:26.959425 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved May 8 00:13:26.959432 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data May 8 00:13:26.959440 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS May 8 00:13:26.959447 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable May 8 00:13:26.959454 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved May 8 00:13:26.959467 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS May 8 00:13:26.959474 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable May 8 00:13:26.959482 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved May 8 00:13:26.959489 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 8 00:13:26.959496 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 8 00:13:26.959506 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 8 00:13:26.959516 kernel: efi: EFI v2.7 by EDK II May 8 00:13:26.959524 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1f7018 RNG=0x9bb73018 May 8 00:13:26.959531 kernel: random: crng init done May 8 00:13:26.959538 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 May 8 00:13:26.959546 kernel: secureboot: Secure boot enabled May 8 00:13:26.959562 kernel: SMBIOS 2.8 present. May 8 00:13:26.959569 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 May 8 00:13:26.959577 kernel: Hypervisor detected: KVM May 8 00:13:26.959584 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 00:13:26.959591 kernel: kvm-clock: using sched offset of 6198061983 cycles May 8 00:13:26.959614 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 00:13:26.959625 kernel: tsc: Detected 2794.748 MHz processor May 8 00:13:26.959636 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:13:26.959644 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:13:26.959652 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 May 8 00:13:26.959660 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 8 00:13:26.959668 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:13:26.959676 kernel: Using GB pages for direct mapping May 8 00:13:26.959683 kernel: ACPI: Early table checksum verification disabled May 8 00:13:26.959691 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) May 8 00:13:26.959702 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 8 00:13:26.959710 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:13:26.959720 kernel: ACPI: DSDT 0x000000009BB7A000 002225 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:13:26.959728 kernel: ACPI: FACS 0x000000009BBDD000 000040 May 8 00:13:26.959736 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:13:26.959744 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:13:26.959752 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:13:26.959760 kernel: ACPI: 
WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:13:26.959770 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 8 00:13:26.959778 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] May 8 00:13:26.959786 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c224] May 8 00:13:26.959794 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] May 8 00:13:26.959801 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] May 8 00:13:26.959809 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] May 8 00:13:26.959817 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] May 8 00:13:26.959825 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] May 8 00:13:26.959833 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] May 8 00:13:26.959843 kernel: No NUMA configuration found May 8 00:13:26.959850 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] May 8 00:13:26.959858 kernel: NODE_DATA(0) allocated [mem 0x9bf59000-0x9bf5efff] May 8 00:13:26.959865 kernel: Zone ranges: May 8 00:13:26.959873 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:13:26.959880 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] May 8 00:13:26.959888 kernel: Normal empty May 8 00:13:26.959895 kernel: Movable zone start for each node May 8 00:13:26.959903 kernel: Early memory node ranges May 8 00:13:26.959910 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] May 8 00:13:26.959920 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] May 8 00:13:26.959927 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] May 8 00:13:26.959935 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] May 8 00:13:26.959942 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] May 8 00:13:26.959950 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] May 8 00:13:26.959957 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:13:26.959965 kernel: On node 0, zone DMA: 32 pages in unavailable ranges May 8 00:13:26.959972 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 8 00:13:26.959980 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 8 00:13:26.959990 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges May 8 00:13:26.959997 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges May 8 00:13:26.960005 kernel: ACPI: PM-Timer IO Port: 0x608 May 8 00:13:26.960014 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 00:13:26.960022 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 8 00:13:26.960029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 8 00:13:26.960037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 00:13:26.960044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 00:13:26.960052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 00:13:26.960062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 00:13:26.960069 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:13:26.960077 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 8 00:13:26.960084 kernel: TSC deadline timer available May 8 00:13:26.960091 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 8 00:13:26.960099 kernel: kvm-guest: APIC: 
eoi() replaced with kvm_guest_apic_eoi_write() May 8 00:13:26.960107 kernel: kvm-guest: KVM setup pv remote TLB flush May 8 00:13:26.960124 kernel: kvm-guest: setup PV sched yield May 8 00:13:26.960132 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices May 8 00:13:26.960140 kernel: Booting paravirtualized kernel on KVM May 8 00:13:26.960148 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:13:26.960156 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 8 00:13:26.960166 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 May 8 00:13:26.960174 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 May 8 00:13:26.960181 kernel: pcpu-alloc: [0] 0 1 2 3 May 8 00:13:26.960189 kernel: kvm-guest: PV spinlocks enabled May 8 00:13:26.960199 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 8 00:13:26.960208 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:13:26.960217 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:13:26.960225 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:13:26.960235 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:13:26.960243 kernel: Fallback order for Node 0: 0 May 8 00:13:26.960251 kernel: Built 1 zonelists, mobility grouping on. Total pages: 625927 May 8 00:13:26.960258 kernel: Policy zone: DMA32 May 8 00:13:26.960266 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:13:26.960277 kernel: Memory: 2370352K/2552216K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 181608K reserved, 0K cma-reserved) May 8 00:13:26.960285 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:13:26.960293 kernel: ftrace: allocating 37918 entries in 149 pages May 8 00:13:26.960300 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:13:26.960308 kernel: Dynamic Preempt: voluntary May 8 00:13:26.960316 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:13:26.960324 kernel: rcu: RCU event tracing is enabled. May 8 00:13:26.960332 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:13:26.960340 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:13:26.960350 kernel: Rude variant of Tasks RCU enabled. May 8 00:13:26.960358 kernel: Tracing variant of Tasks RCU enabled. May 8 00:13:26.960366 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:13:26.960374 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:13:26.960382 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 8 00:13:26.960390 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 8 00:13:26.960398 kernel: Console: colour dummy device 80x25 May 8 00:13:26.960408 kernel: printk: console [ttyS0] enabled May 8 00:13:26.960416 kernel: ACPI: Core revision 20230628 May 8 00:13:26.960424 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 8 00:13:26.960435 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:13:26.960442 kernel: x2apic enabled May 8 00:13:26.960450 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:13:26.960458 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 8 00:13:26.960466 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 8 00:13:26.960474 kernel: kvm-guest: setup PV IPIs May 8 00:13:26.960482 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:13:26.960490 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 00:13:26.960498 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 8 00:13:26.960508 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 8 00:13:26.960516 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 8 00:13:26.960524 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 8 00:13:26.960532 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:13:26.960539 kernel: Spectre V2 : Mitigation: Retpolines May 8 00:13:26.960547 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:13:26.960564 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 00:13:26.960572 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 8 00:13:26.960582 kernel: RETBleed: Mitigation: untrained return thunk May 8 00:13:26.960590 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:13:26.960609 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 00:13:26.960617 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 8 00:13:26.960626 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 8 00:13:26.960634 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 8 00:13:26.960642 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:13:26.960652 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:13:26.960660 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:13:26.960672 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:13:26.960680 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 8 00:13:26.960688 kernel: Freeing SMP alternatives memory: 32K May 8 00:13:26.960696 kernel: pid_max: default: 32768 minimum: 301 May 8 00:13:26.960704 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:13:26.960712 kernel: landlock: Up and running. May 8 00:13:26.960719 kernel: SELinux: Initializing. 
May 8 00:13:26.960727 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:13:26.960735 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:13:26.960746 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 8 00:13:26.960754 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:13:26.960762 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:13:26.960770 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:13:26.960778 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 8 00:13:26.960785 kernel: ... version: 0 May 8 00:13:26.960793 kernel: ... bit width: 48 May 8 00:13:26.960801 kernel: ... generic registers: 6 May 8 00:13:26.960813 kernel: ... value mask: 0000ffffffffffff May 8 00:13:26.960821 kernel: ... max period: 00007fffffffffff May 8 00:13:26.960829 kernel: ... fixed-purpose events: 0 May 8 00:13:26.960837 kernel: ... event mask: 000000000000003f May 8 00:13:26.960845 kernel: signal: max sigframe size: 1776 May 8 00:13:26.960852 kernel: rcu: Hierarchical SRCU implementation. May 8 00:13:26.960860 kernel: rcu: Max phase no-delay instances is 400. May 8 00:13:26.960868 kernel: smp: Bringing up secondary CPUs ... May 8 00:13:26.960876 kernel: smpboot: x86: Booting SMP configuration: May 8 00:13:26.960884 kernel: .... node #0, CPUs: #1 #2 #3 May 8 00:13:26.960894 kernel: smp: Brought up 1 node, 4 CPUs May 8 00:13:26.960902 kernel: smpboot: Max logical packages: 1 May 8 00:13:26.960909 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 8 00:13:26.960917 kernel: devtmpfs: initialized May 8 00:13:26.960925 kernel: x86/mm: Memory block size: 128MB May 8 00:13:26.960933 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) May 8 00:13:26.960941 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) May 8 00:13:26.960949 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:13:26.960957 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 00:13:26.960967 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:13:26.960975 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:13:26.960983 kernel: audit: initializing netlink subsys (disabled) May 8 00:13:26.960991 kernel: audit: type=2000 audit(1746663205.745:1): state=initialized audit_enabled=0 res=1 May 8 00:13:26.960999 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:13:26.961007 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:13:26.961014 kernel: cpuidle: using governor menu May 8 00:13:26.961022 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:13:26.961033 kernel: dca service started, version 1.12.1 May 8 00:13:26.961041 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) May 8 00:13:26.961049 kernel: PCI: Using configuration type 1 for base access May 8 00:13:26.961057 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 8 00:13:26.961064 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:13:26.961072 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:13:26.961080 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:13:26.961088 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:13:26.961096 kernel: ACPI: Added _OSI(Module Device) May 8 00:13:26.961106 kernel: ACPI: Added _OSI(Processor Device) May 8 00:13:26.961113 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:13:26.961121 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:13:26.961129 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:13:26.961137 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:13:26.961145 kernel: ACPI: Interpreter enabled May 8 00:13:26.961152 kernel: ACPI: PM: (supports S0 S5) May 8 00:13:26.961160 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:13:26.961168 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:13:26.961176 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:13:26.961186 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 8 00:13:26.961194 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:13:26.961390 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:13:26.961531 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 8 00:13:26.961744 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 8 00:13:26.961757 kernel: PCI host bridge to bus 0000:00 May 8 00:13:26.961892 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:13:26.962021 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 8 00:13:26.962143 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:13:26.962265 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] May 8 00:13:26.962387 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] May 8 00:13:26.962513 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] May 8 00:13:26.962687 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:13:26.962844 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 8 00:13:26.962986 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 8 00:13:26.963119 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 8 00:13:26.963252 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 8 00:13:26.963383 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 8 00:13:26.963515 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 8 00:13:26.963687 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:13:26.963838 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 8 00:13:26.963974 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 8 00:13:26.964109 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 8 00:13:26.964245 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] May 8 00:13:26.964392 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 8 00:13:26.964527 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 8 00:13:26.964693 kernel: pci 0000:00:03.0: 
reg 0x14: [mem 0xc1042000-0xc1042fff] May 8 00:13:26.964836 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] May 8 00:13:26.964980 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 8 00:13:26.965115 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 8 00:13:26.965249 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 8 00:13:26.965382 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] May 8 00:13:26.965516 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 8 00:13:26.965690 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 8 00:13:26.965834 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 8 00:13:26.965976 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 8 00:13:26.966109 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 8 00:13:26.966241 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 8 00:13:26.966383 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 8 00:13:26.966517 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 8 00:13:26.966536 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 00:13:26.966546 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 00:13:26.966568 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:13:26.966578 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 00:13:26.966587 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 8 00:13:26.966611 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 8 00:13:26.966621 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 8 00:13:26.966631 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 8 00:13:26.966641 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 8 00:13:26.966655 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 8 00:13:26.966664 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 8 00:13:26.966672 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 8 00:13:26.966679 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 8 00:13:26.966687 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 8 00:13:26.966695 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 8 00:13:26.966703 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 8 00:13:26.966711 kernel: iommu: Default domain type: Translated May 8 00:13:26.966719 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:13:26.966729 kernel: efivars: Registered efivars operations May 8 00:13:26.966737 kernel: PCI: Using ACPI for IRQ routing May 8 00:13:26.966745 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:13:26.966753 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] May 8 00:13:26.966761 kernel: e820: reserve RAM buffer [mem 0x9a148018-0x9bffffff] May 8 00:13:26.966768 kernel: e820: reserve RAM buffer [mem 0x9a185018-0x9bffffff] May 8 00:13:26.966776 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] May 8 00:13:26.966783 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] May 8 00:13:26.966921 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 8 00:13:26.967057 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 8 00:13:26.967189 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 
8 00:13:26.967200 kernel: vgaarb: loaded May 8 00:13:26.967208 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 8 00:13:26.967216 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 8 00:13:26.967224 kernel: clocksource: Switched to clocksource kvm-clock May 8 00:13:26.967232 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:13:26.967240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:13:26.967251 kernel: pnp: PnP ACPI init May 8 00:13:26.967399 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved May 8 00:13:26.967411 kernel: pnp: PnP ACPI: found 6 devices May 8 00:13:26.967419 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:13:26.967427 kernel: NET: Registered PF_INET protocol family May 8 00:13:26.967435 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:13:26.967443 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:13:26.967451 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:13:26.967463 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:13:26.967471 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 8 00:13:26.967479 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:13:26.967487 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:13:26.967495 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:13:26.967503 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:13:26.967510 kernel: NET: Registered PF_XDP protocol family May 8 00:13:26.967712 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 8 00:13:26.967853 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 8 00:13:26.967983 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 00:13:26.968109 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 00:13:26.968233 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 00:13:26.968356 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] May 8 00:13:26.968478 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] May 8 00:13:26.968627 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] May 8 00:13:26.968639 kernel: PCI: CLS 0 bytes, default 64 May 8 00:13:26.968647 kernel: Initialise system trusted keyrings May 8 00:13:26.968659 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 00:13:26.968667 kernel: Key type asymmetric registered May 8 00:13:26.968675 kernel: Asymmetric key parser 'x509' registered May 8 00:13:26.968683 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 8 00:13:26.968691 kernel: io scheduler mq-deadline registered May 8 00:13:26.968699 kernel: io scheduler kyber registered May 8 00:13:26.968707 kernel: io scheduler bfq registered May 8 00:13:26.968715 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:13:26.968740 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 8 00:13:26.968753 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 8 00:13:26.968761 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 8 00:13:26.968769 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:13:26.968777 kernel: 
00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:13:26.968786 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 00:13:26.968794 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:13:26.968802 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:13:26.968947 kernel: rtc_cmos 00:04: RTC can wake from S4 May 8 00:13:26.968963 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:13:26.969088 kernel: rtc_cmos 00:04: registered as rtc0 May 8 00:13:26.969214 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:13:26 UTC (1746663206) May 8 00:13:26.969340 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 8 00:13:26.969351 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 8 00:13:26.969359 kernel: efifb: probing for efifb May 8 00:13:26.969367 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 8 00:13:26.969375 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 8 00:13:26.969383 kernel: efifb: scrolling: redraw May 8 00:13:26.969395 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 8 00:13:26.969403 kernel: Console: switching to colour frame buffer device 160x50 May 8 00:13:26.969411 kernel: fb0: EFI VGA frame buffer device May 8 00:13:26.969420 kernel: pstore: Using crash dump compression: deflate May 8 00:13:26.969428 kernel: pstore: Registered efi_pstore as persistent store backend May 8 00:13:26.969436 kernel: NET: Registered PF_INET6 protocol family May 8 00:13:26.969444 kernel: Segment Routing with IPv6 May 8 00:13:26.969452 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:13:26.969461 kernel: NET: Registered PF_PACKET protocol family May 8 00:13:26.969471 kernel: Key type dns_resolver registered May 8 00:13:26.969482 kernel: IPI shorthand broadcast: enabled May 8 00:13:26.969490 kernel: sched_clock: Marking stable (692023709, 144642107)->(865220907, -28555091) May 8 00:13:26.969498 kernel: registered taskstats version 1 May 8 00:13:26.969507 kernel: Loading compiled-in X.509 certificates May 8 00:13:26.969515 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6' May 8 00:13:26.969526 kernel: Key type .fscrypt registered May 8 00:13:26.969534 kernel: Key type fscrypt-provisioning registered May 8 00:13:26.969542 kernel: ima: No TPM chip found, activating TPM-bypass! May 8 00:13:26.969559 kernel: ima: Allocated hash algorithm: sha1 May 8 00:13:26.969568 kernel: ima: No architecture policies found May 8 00:13:26.969576 kernel: clk: Disabling unused clocks May 8 00:13:26.969584 kernel: Freeing unused kernel image (initmem) memory: 43484K May 8 00:13:26.969592 kernel: Write protecting the kernel read-only data: 38912k May 8 00:13:26.969616 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 8 00:13:26.969624 kernel: Run /init as init process May 8 00:13:26.969633 kernel: with arguments: May 8 00:13:26.969641 kernel: /init May 8 00:13:26.969649 kernel: with environment: May 8 00:13:26.969657 kernel: HOME=/ May 8 00:13:26.969665 kernel: TERM=linux May 8 00:13:26.969673 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:13:26.969682 systemd[1]: Successfully made /usr/ read-only. 
May 8 00:13:26.969697 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:13:26.969706 systemd[1]: Detected virtualization kvm. May 8 00:13:26.969715 systemd[1]: Detected architecture x86-64. May 8 00:13:26.969723 systemd[1]: Running in initrd. May 8 00:13:26.969732 systemd[1]: No hostname configured, using default hostname. May 8 00:13:26.969741 systemd[1]: Hostname set to . May 8 00:13:26.969750 systemd[1]: Initializing machine ID from VM UUID. May 8 00:13:26.969761 systemd[1]: Queued start job for default target initrd.target. May 8 00:13:26.969770 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:13:26.969779 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:13:26.969788 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:13:26.969797 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:13:26.969806 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:13:26.969815 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:13:26.969828 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:13:26.969837 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:13:26.969846 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:13:26.969854 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:13:26.969863 systemd[1]: Reached target paths.target - Path Units. May 8 00:13:26.969872 systemd[1]: Reached target slices.target - Slice Units. May 8 00:13:26.969880 systemd[1]: Reached target swap.target - Swaps. May 8 00:13:26.969889 systemd[1]: Reached target timers.target - Timer Units. May 8 00:13:26.969897 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:13:26.969908 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:13:26.969917 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:13:26.969926 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 8 00:13:26.969935 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:13:26.969943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:13:26.969952 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:13:26.969960 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:13:26.969971 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:13:26.969982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:13:26.969991 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:13:26.970002 systemd[1]: Starting systemd-fsck-usr.service... 
May 8 00:13:26.970011 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:13:26.970020 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:13:26.970028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:13:26.970037 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:13:26.970046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:13:26.970057 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:13:26.970066 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:13:26.970102 systemd-journald[192]: Collecting audit messages is disabled. May 8 00:13:26.970125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:13:26.970134 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:13:26.970143 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:13:26.970152 systemd-journald[192]: Journal started May 8 00:13:26.970173 systemd-journald[192]: Runtime Journal (/run/log/journal/9cfab0b7566f44a797761ace22226396) is 6M, max 48M, 42M free. May 8 00:13:26.956980 systemd-modules-load[195]: Inserted module 'overlay' May 8 00:13:26.972839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:13:26.977127 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:13:26.978884 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:13:26.987190 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:13:26.988816 systemd-modules-load[195]: Inserted module 'br_netfilter' May 8 00:13:26.989670 kernel: Bridge firewalling registered May 8 00:13:26.990429 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:13:27.002736 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:13:27.003049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:13:27.003531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:13:27.010722 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:13:27.013450 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:13:27.016495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:13:27.019171 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:13:27.037185 dracut-cmdline[226]: dracut-dracut-053 May 8 00:13:27.041085 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:13:27.059383 systemd-resolved[229]: Positive Trust Anchors: May 8 00:13:27.059403 systemd-resolved[229]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:13:27.059434 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:13:27.062014 systemd-resolved[229]: Defaulting to hostname 'linux'. May 8 00:13:27.063247 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:13:27.069677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:13:27.147650 kernel: SCSI subsystem initialized May 8 00:13:27.157623 kernel: Loading iSCSI transport class v2.0-870. May 8 00:13:27.168636 kernel: iscsi: registered transport (tcp) May 8 00:13:27.189896 kernel: iscsi: registered transport (qla4xxx) May 8 00:13:27.189965 kernel: QLogic iSCSI HBA Driver May 8 00:13:27.243698 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:13:27.258857 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:13:27.284394 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:13:27.284437 kernel: device-mapper: uevent: version 1.0.3 May 8 00:13:27.284463 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:13:27.325637 kernel: raid6: avx2x4 gen() 29957 MB/s May 8 00:13:27.342632 kernel: raid6: avx2x2 gen() 30183 MB/s May 8 00:13:27.359723 kernel: raid6: avx2x1 gen() 25267 MB/s May 8 00:13:27.359747 kernel: raid6: using algorithm avx2x2 gen() 30183 MB/s May 8 00:13:27.377736 kernel: raid6: .... xor() 19619 MB/s, rmw enabled May 8 00:13:27.377768 kernel: raid6: using avx2x2 recovery algorithm May 8 00:13:27.398632 kernel: xor: automatically using best checksumming function avx May 8 00:13:27.545636 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:13:27.559944 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:13:27.572747 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:13:27.588388 systemd-udevd[413]: Using default interface naming scheme 'v255'. May 8 00:13:27.594344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:13:27.600789 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:13:27.616834 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation May 8 00:13:27.651846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:13:27.661867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:13:27.734923 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:13:27.744815 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:13:27.758318 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:13:27.759394 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 8 00:13:27.759946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:13:27.760390 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:13:27.766895 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:13:27.776173 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:13:27.780852 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 8 00:13:27.803000 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 00:13:27.803166 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:13:27.803198 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:13:27.803211 kernel: GPT:9289727 != 19775487 May 8 00:13:27.803222 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:13:27.803260 kernel: GPT:9289727 != 19775487 May 8 00:13:27.803272 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:13:27.803290 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:13:27.809377 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:13:27.809400 kernel: AES CTR mode by8 optimization enabled May 8 00:13:27.820684 kernel: libata version 3.00 loaded. May 8 00:13:27.833524 kernel: ahci 0000:00:1f.2: version 3.0 May 8 00:13:27.863705 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 8 00:13:27.863724 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (459) May 8 00:13:27.863745 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (461) May 8 00:13:27.863763 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 8 00:13:27.864125 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 8 00:13:27.864364 kernel: scsi host0: ahci May 8 00:13:27.864640 kernel: scsi host1: ahci May 8 00:13:27.866689 kernel: scsi host2: ahci May 8 00:13:27.866909 kernel: scsi host3: ahci May 8 00:13:27.867143 kernel: scsi host4: ahci May 8 00:13:27.867357 kernel: scsi host5: ahci May 8 00:13:27.867579 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 8 00:13:27.867596 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 8 00:13:27.867632 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 8 00:13:27.867649 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 8 00:13:27.867665 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 8 00:13:27.867687 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 8 00:13:27.857101 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 8 00:13:27.874892 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 8 00:13:27.896728 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 8 00:13:27.899323 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 8 00:13:27.911389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:13:27.922741 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:13:27.925055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 8 00:13:27.925114 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:13:27.929085 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:13:27.931772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:13:27.931830 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:13:27.933993 disk-uuid[554]: Primary Header is updated. May 8 00:13:27.933993 disk-uuid[554]: Secondary Entries is updated. May 8 00:13:27.933993 disk-uuid[554]: Secondary Header is updated. May 8 00:13:27.936267 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:13:27.937923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:13:27.941586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:13:27.944587 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:13:27.943232 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 8 00:13:27.961757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:13:27.972772 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:13:28.006488 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:13:28.172935 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 8 00:13:28.173007 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 8 00:13:28.173019 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 8 00:13:28.174640 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 8 00:13:28.175624 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 8 00:13:28.175651 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 8 00:13:28.176679 kernel: ata3.00: applying bridge limits May 8 00:13:28.177630 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 8 00:13:28.177645 kernel: ata3.00: configured for UDMA/100 May 8 00:13:28.178648 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 8 00:13:28.228631 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 8 00:13:28.242370 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:13:28.242389 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:13:28.944631 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:13:28.945368 disk-uuid[555]: The operation has completed successfully. May 8 00:13:28.976722 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:13:28.976899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:13:29.034751 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:13:29.040318 sh[591]: Success May 8 00:13:29.053627 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 8 00:13:29.092539 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:13:29.111787 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:13:29.116410 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 8 00:13:29.129110 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a May 8 00:13:29.129138 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:13:29.129149 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:13:29.130147 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:13:29.130901 kernel: BTRFS info (device dm-0): using free space tree May 8 00:13:29.136081 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:13:29.138639 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 00:13:29.153755 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:13:29.156526 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:13:29.174897 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:13:29.174949 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:13:29.174965 kernel: BTRFS info (device vda6): using free space tree May 8 00:13:29.178633 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:13:29.183659 kernel: BTRFS info (device vda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:13:29.189314 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:13:29.196798 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:13:29.254922 ignition[676]: Ignition 2.20.0 May 8 00:13:29.254937 ignition[676]: Stage: fetch-offline May 8 00:13:29.255001 ignition[676]: no configs at "/usr/lib/ignition/base.d" May 8 00:13:29.255017 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:13:29.255141 ignition[676]: parsed url from cmdline: "" May 8 00:13:29.255146 ignition[676]: no config URL provided May 8 00:13:29.255154 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:13:29.255166 ignition[676]: no config at "/usr/lib/ignition/user.ign" May 8 00:13:29.255198 ignition[676]: op(1): [started] loading QEMU firmware config module May 8 00:13:29.255206 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 00:13:29.265218 ignition[676]: op(1): [finished] loading QEMU firmware config module May 8 00:13:29.288994 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:13:29.298750 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:13:29.309651 ignition[676]: parsing config with SHA512: 69f64beba71118a4abad21d5820238c2d8cabd483ef855c412d870b812ea08ae331eb1f7555e815e35e42d3e35c5f46bacd3c300ef2bb73644c06b26d7a70d2f May 8 00:13:29.314321 unknown[676]: fetched base config from "system" May 8 00:13:29.314333 unknown[676]: fetched user config from "qemu" May 8 00:13:29.314843 ignition[676]: fetch-offline: fetch-offline passed May 8 00:13:29.314919 ignition[676]: Ignition finished successfully May 8 00:13:29.316832 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 8 00:13:29.331833 systemd-networkd[779]: lo: Link UP May 8 00:13:29.331844 systemd-networkd[779]: lo: Gained carrier May 8 00:13:29.335001 systemd-networkd[779]: Enumeration completed May 8 00:13:29.335088 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:13:29.336987 systemd[1]: Reached target network.target - Network. May 8 00:13:29.338884 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:13:29.342279 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:13:29.342292 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:13:29.342724 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:13:29.349007 systemd-networkd[779]: eth0: Link UP May 8 00:13:29.349019 systemd-networkd[779]: eth0: Gained carrier May 8 00:13:29.349028 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:13:29.368193 ignition[783]: Ignition 2.20.0 May 8 00:13:29.368205 ignition[783]: Stage: kargs May 8 00:13:29.368374 ignition[783]: no configs at "/usr/lib/ignition/base.d" May 8 00:13:29.368385 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:13:29.369993 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:13:29.369290 ignition[783]: kargs: kargs passed May 8 00:13:29.372411 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:13:29.369332 ignition[783]: Ignition finished successfully May 8 00:13:29.377795 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:13:29.394279 ignition[791]: Ignition 2.20.0 May 8 00:13:29.394305 ignition[791]: Stage: disks May 8 00:13:29.394536 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 8 00:13:29.394554 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:13:29.395862 ignition[791]: disks: disks passed May 8 00:13:29.398239 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:13:29.395924 ignition[791]: Ignition finished successfully May 8 00:13:29.400194 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:13:29.402334 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:13:29.403759 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:13:29.405622 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:13:29.405706 systemd[1]: Reached target basic.target - Basic System. May 8 00:13:29.415764 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:13:29.430338 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 00:13:29.437480 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:13:29.447673 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:13:29.536633 kernel: EXT4-fs (vda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none. May 8 00:13:29.537154 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:13:29.539296 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
May 8 00:13:29.553681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:13:29.556848 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:13:29.559914 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:13:29.559977 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:13:29.569935 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) May 8 00:13:29.569963 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:13:29.569976 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:13:29.569987 kernel: BTRFS info (device vda6): using free space tree May 8 00:13:29.560007 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:13:29.572014 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:13:29.574007 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:13:29.575900 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:13:29.596751 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:13:29.628593 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:13:29.634311 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory May 8 00:13:29.639175 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:13:29.644267 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:13:29.731945 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:13:29.745692 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:13:29.748305 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:13:29.756626 kernel: BTRFS info (device vda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:13:29.775463 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:13:29.780963 ignition[925]: INFO : Ignition 2.20.0 May 8 00:13:29.780963 ignition[925]: INFO : Stage: mount May 8 00:13:29.782990 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:13:29.782990 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:13:29.786822 ignition[925]: INFO : mount: mount passed May 8 00:13:29.787761 ignition[925]: INFO : Ignition finished successfully May 8 00:13:29.790039 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:13:29.797743 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:13:30.128444 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:13:30.142781 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:13:30.149630 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (938) May 8 00:13:30.151702 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:13:30.151725 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:13:30.151737 kernel: BTRFS info (device vda6): using free space tree May 8 00:13:30.154626 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:13:30.156485 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:13:30.177706 ignition[955]: INFO : Ignition 2.20.0 May 8 00:13:30.177706 ignition[955]: INFO : Stage: files May 8 00:13:30.179759 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:13:30.179759 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:13:30.179759 ignition[955]: DEBUG : files: compiled without relabeling support, skipping May 8 00:13:30.183918 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:13:30.183918 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:13:30.183918 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:13:30.183918 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:13:30.183918 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:13:30.182660 unknown[955]: wrote ssh authorized keys file for user: core May 8 00:13:30.193083 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:13:30.193083 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 8 00:13:30.227716 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:13:30.350122 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:13:30.350122 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:13:30.354213 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 8 00:13:30.828377 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 00:13:30.919169 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:13:30.921290 ignition[955]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:13:30.921290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 8 00:13:31.029878 systemd-networkd[779]: eth0: Gained IPv6LL May 8 00:13:31.323677 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 00:13:31.850912 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:13:31.850912 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 00:13:31.855456 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:13:31.855456 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:13:31.855456 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 00:13:31.855456 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 8 00:13:31.855456 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:13:31.855456 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:13:31.855456 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 8 00:13:31.855456 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:13:31.876184 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:13:31.881556 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:13:31.883321 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:13:31.883321 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 8 00:13:31.886241 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:13:31.887862 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:13:31.890247 ignition[955]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:13:31.892026 ignition[955]: INFO : files: files passed May 8 00:13:31.892787 ignition[955]: INFO : Ignition finished successfully May 8 00:13:31.896324 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:13:31.909869 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:13:31.913854 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:13:31.919792 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:13:31.920829 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:13:31.938078 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory May 8 00:13:31.941108 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:13:31.941108 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:13:31.944223 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:13:31.947458 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:13:31.950411 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:13:31.964753 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:13:31.999547 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:13:31.999707 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:13:32.001138 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:13:32.003724 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:13:32.004088 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:13:32.005062 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:13:32.026801 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:13:32.046982 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:13:32.059538 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:13:32.059762 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:13:32.062020 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:13:32.062337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:13:32.062482 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:13:32.068960 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:13:32.069095 systemd[1]: Stopped target basic.target - Basic System. May 8 00:13:32.069450 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:13:32.069956 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:13:32.070276 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:13:32.070641 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:13:32.071122 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
May 8 00:13:32.071482 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:13:32.071985 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:13:32.072316 systemd[1]: Stopped target swap.target - Swaps. May 8 00:13:32.072636 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:13:32.072766 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:13:32.073495 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:13:32.074024 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:13:32.074314 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:13:32.074449 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:13:32.093921 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:13:32.094087 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:13:32.097071 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:13:32.097192 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:13:32.099849 systemd[1]: Stopped target paths.target - Path Units. May 8 00:13:32.100080 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:13:32.103713 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:13:32.107037 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:13:32.108172 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:13:32.109113 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:13:32.109250 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:13:32.112653 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:13:32.112769 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:13:32.114854 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:13:32.115024 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:13:32.116454 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:13:32.116624 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:13:32.122786 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:13:32.125827 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:13:32.126010 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:13:32.127647 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:13:32.130010 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:13:32.130171 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:13:32.133720 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:13:32.133874 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:13:32.142303 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:13:32.142484 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 8 00:13:32.147426 ignition[1010]: INFO : Ignition 2.20.0 May 8 00:13:32.148478 ignition[1010]: INFO : Stage: umount May 8 00:13:32.148478 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:13:32.148478 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:13:32.151799 ignition[1010]: INFO : umount: umount passed May 8 00:13:32.151799 ignition[1010]: INFO : Ignition finished successfully May 8 00:13:32.152520 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:13:32.152709 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:13:32.154184 systemd[1]: Stopped target network.target - Network. May 8 00:13:32.155935 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:13:32.156014 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:13:32.158199 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:13:32.158275 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:13:32.160424 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:13:32.160495 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:13:32.162800 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:13:32.162867 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:13:32.165203 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:13:32.167389 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:13:32.170912 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:13:32.176801 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:13:32.176975 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:13:32.182163 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 8 00:13:32.182425 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:13:32.182549 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:13:32.184764 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 8 00:13:32.185586 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:13:32.185689 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:13:32.199825 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:13:32.200904 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:13:32.201006 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:13:32.203429 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:13:32.203505 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:13:32.207396 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:13:32.207472 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:13:32.208715 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:13:32.208768 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:13:32.211168 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:13:32.215676 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
May 8 00:13:32.215754 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 8 00:13:32.226163 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:13:32.226311 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:13:32.239000 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:13:32.239186 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:13:32.241881 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:13:32.241980 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:13:32.243550 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:13:32.243615 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:13:32.246330 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:13:32.246387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:13:32.249026 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:13:32.249079 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:13:32.251108 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:13:32.251172 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:13:32.261986 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:13:32.263309 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:13:32.263393 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:13:32.267440 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:13:32.267513 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:13:32.271230 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 8 00:13:32.271305 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 8 00:13:32.271818 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:13:32.271942 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:13:32.405947 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:13:32.406117 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:13:32.408320 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:13:32.410082 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:13:32.410154 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:13:32.424745 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:13:32.434810 systemd[1]: Switching root. May 8 00:13:32.469027 systemd-journald[192]: Journal stopped May 8 00:13:33.903514 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
May 8 00:13:33.903587 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:13:33.903743 kernel: SELinux: policy capability open_perms=1 May 8 00:13:33.903756 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:13:33.903776 kernel: SELinux: policy capability always_check_network=0 May 8 00:13:33.903788 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:13:33.903800 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:13:33.903818 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:13:33.903830 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:13:33.903843 kernel: audit: type=1403 audit(1746663212.990:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:13:33.903856 systemd[1]: Successfully loaded SELinux policy in 41.415ms. May 8 00:13:33.903887 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.219ms. May 8 00:13:33.903903 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:13:33.903916 systemd[1]: Detected virtualization kvm. May 8 00:13:33.903932 systemd[1]: Detected architecture x86-64. May 8 00:13:33.903945 systemd[1]: Detected first boot. May 8 00:13:33.903966 systemd[1]: Initializing machine ID from VM UUID. May 8 00:13:33.903981 zram_generator::config[1058]: No configuration found. May 8 00:13:33.903996 kernel: Guest personality initialized and is inactive May 8 00:13:33.904008 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 8 00:13:33.904019 kernel: Initialized host personality May 8 00:13:33.904031 kernel: NET: Registered PF_VSOCK protocol family May 8 00:13:33.904044 systemd[1]: Populated /etc with preset unit settings. May 8 00:13:33.904059 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 8 00:13:33.904079 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:13:33.904092 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:13:33.904105 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:13:33.904118 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:13:33.904137 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:13:33.904152 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:13:33.904165 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:13:33.904178 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:13:33.904194 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:13:33.904214 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:13:33.904228 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:13:33.904241 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:13:33.904256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 8 00:13:33.904269 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:13:33.904286 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:13:33.904300 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:13:33.904313 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:13:33.904332 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:13:33.904345 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:13:33.904361 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:13:33.904391 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:13:33.904408 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:13:33.904425 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:13:33.904441 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:13:33.904458 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:13:33.904482 systemd[1]: Reached target slices.target - Slice Units. May 8 00:13:33.904501 systemd[1]: Reached target swap.target - Swaps. May 8 00:13:33.904518 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:13:33.904532 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:13:33.904545 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 8 00:13:33.904558 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:13:33.904573 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:13:33.904586 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:13:33.904612 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:13:33.904633 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:13:33.904646 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:13:33.904659 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:13:33.904671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:33.904684 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:13:33.904697 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:13:33.904709 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:13:33.904722 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:13:33.904735 systemd[1]: Reached target machines.target - Containers. May 8 00:13:33.904753 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:13:33.904766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:13:33.904779 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:13:33.904792 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
May 8 00:13:33.904806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:13:33.904819 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:13:33.904832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:13:33.904844 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:13:33.904863 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:13:33.904876 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:13:33.904889 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:13:33.904901 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:13:33.904916 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:13:33.904929 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:13:33.904942 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:13:33.904954 kernel: loop: module loaded May 8 00:13:33.904975 kernel: ACPI: bus type drm_connector registered May 8 00:13:33.904987 kernel: fuse: init (API version 7.39) May 8 00:13:33.904999 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:13:33.905012 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:13:33.905024 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:13:33.905037 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:13:33.905050 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 8 00:13:33.905088 systemd-journald[1136]: Collecting audit messages is disabled. May 8 00:13:33.905111 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:13:33.905126 systemd-journald[1136]: Journal started May 8 00:13:33.905148 systemd-journald[1136]: Runtime Journal (/run/log/journal/9cfab0b7566f44a797761ace22226396) is 6M, max 48M, 42M free. May 8 00:13:33.637964 systemd[1]: Queued start job for default target multi-user.target. May 8 00:13:33.653953 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:13:33.654525 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:13:33.907000 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:13:33.907025 systemd[1]: Stopped verity-setup.service. May 8 00:13:33.909644 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:33.915425 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:13:33.916291 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:13:33.917646 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:13:33.918964 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:13:33.920110 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:13:33.921377 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
May 8 00:13:33.922707 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:13:33.924080 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:13:33.925665 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:13:33.927302 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:13:33.927544 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:13:33.929114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:13:33.929340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:13:33.931050 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:13:33.931273 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:13:33.932746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:13:33.932970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:13:33.934727 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:13:33.934949 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:13:33.936400 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:13:33.936633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:13:33.938204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:13:33.939721 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:13:33.941537 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:13:33.943178 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 8 00:13:33.957764 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:13:33.968688 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:13:33.971068 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:13:33.972292 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:13:33.972325 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:13:33.974421 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 8 00:13:33.976856 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:13:33.979282 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:13:33.980576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:13:33.984052 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:13:33.986868 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:13:33.988458 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:13:33.990407 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:13:33.991658 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 8 00:13:33.993050 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:13:33.998527 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:13:34.004756 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:13:34.016384 systemd-journald[1136]: Time spent on flushing to /var/log/journal/9cfab0b7566f44a797761ace22226396 is 21.368ms for 1029 entries. May 8 00:13:34.016384 systemd-journald[1136]: System Journal (/var/log/journal/9cfab0b7566f44a797761ace22226396) is 8M, max 195.6M, 187.6M free. May 8 00:13:34.061088 systemd-journald[1136]: Received client request to flush runtime journal. May 8 00:13:34.061171 kernel: loop0: detected capacity change from 0 to 147912 May 8 00:13:34.008024 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:13:34.009535 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:13:34.011177 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:13:34.020495 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:13:34.068878 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:13:34.027839 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:13:34.030192 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:13:34.035573 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:13:34.040754 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 8 00:13:34.056504 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:13:34.063379 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:13:34.065381 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:13:34.082415 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 8 00:13:34.089738 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:13:34.096726 kernel: loop1: detected capacity change from 0 to 205544 May 8 00:13:34.096909 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:13:34.125132 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. May 8 00:13:34.125163 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. May 8 00:13:34.129651 kernel: loop2: detected capacity change from 0 to 138176 May 8 00:13:34.131845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:13:34.176652 kernel: loop3: detected capacity change from 0 to 147912 May 8 00:13:34.188630 kernel: loop4: detected capacity change from 0 to 205544 May 8 00:13:34.197650 kernel: loop5: detected capacity change from 0 to 138176 May 8 00:13:34.210312 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:13:34.211168 (sd-merge)[1204]: Merged extensions into '/usr'. May 8 00:13:34.216347 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:13:34.216375 systemd[1]: Reloading... May 8 00:13:34.290654 zram_generator::config[1231]: No configuration found. 
May 8 00:13:34.360023 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:13:34.428445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:13:34.499370 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:13:34.499899 systemd[1]: Reloading finished in 282 ms. May 8 00:13:34.528644 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:13:34.530330 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:13:34.547159 systemd[1]: Starting ensure-sysext.service... May 8 00:13:34.549256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:13:34.572237 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:13:34.572543 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:13:34.573544 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:13:34.573854 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 8 00:13:34.573944 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 8 00:13:34.584097 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:13:34.584115 systemd-tmpfiles[1270]: Skipping /boot May 8 00:13:34.590678 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... May 8 00:13:34.590801 systemd[1]: Reloading... May 8 00:13:34.597946 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:13:34.597962 systemd-tmpfiles[1270]: Skipping /boot May 8 00:13:34.641732 zram_generator::config[1299]: No configuration found. May 8 00:13:34.759133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:13:34.828951 systemd[1]: Reloading finished in 237 ms. May 8 00:13:34.844990 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:13:34.868984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:13:34.890976 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:13:34.893973 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:13:34.896524 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:13:34.902769 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:13:34.906377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:13:34.912861 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:13:34.917714 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:34.917895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 8 00:13:34.925985 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:13:34.928991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:13:34.935003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:13:34.937841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:13:34.938019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:13:34.940468 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:13:34.941587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:34.943175 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:13:34.943418 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:13:34.945176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:13:34.945445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:13:34.948724 systemd-udevd[1348]: Using default interface naming scheme 'v255'. May 8 00:13:34.949413 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:13:34.955478 augenrules[1367]: No rules May 8 00:13:34.959328 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:13:34.959645 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:13:34.961409 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:13:34.961776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:13:34.973505 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:34.974097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:13:34.985178 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:13:34.988163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:13:34.991638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:13:34.992973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:13:34.993148 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:13:35.000044 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:13:35.001174 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:35.004523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:13:35.007303 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
May 8 00:13:35.009170 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:13:35.011425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:13:35.016218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:13:35.018244 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:13:35.020688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:13:35.021400 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:13:35.031658 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:13:35.031898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:13:35.033740 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:13:35.049642 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1380) May 8 00:13:35.083995 systemd[1]: Finished ensure-sysext.service. May 8 00:13:35.096053 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:13:35.109876 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:35.115975 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:13:35.117831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:13:35.119301 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:13:35.119939 systemd-resolved[1347]: Positive Trust Anchors: May 8 00:13:35.119947 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:13:35.120153 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:13:35.122839 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:13:35.125069 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:13:35.127889 systemd-resolved[1347]: Defaulting to hostname 'linux'. May 8 00:13:35.135834 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:13:35.131267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:13:35.132593 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:13:35.132646 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:13:35.136342 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 8 00:13:35.149694 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 8 00:13:35.149995 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:13:35.150181 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:13:35.150995 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 00:13:35.152439 kernel: ACPI: button: Power Button [PWRF] May 8 00:13:35.146787 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:13:35.150409 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:13:35.150441 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:35.150934 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:13:35.153808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:13:35.154060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:13:35.156250 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:13:35.156928 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:13:35.160588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:13:35.161959 augenrules[1419]: /sbin/augenrules: No change May 8 00:13:35.162130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:13:35.165003 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:13:35.166715 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:13:35.174109 augenrules[1445]: No rules May 8 00:13:35.177550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:13:35.185191 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:13:35.180842 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:13:35.181133 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:13:35.183885 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:13:35.196541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:13:35.198043 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:13:35.198119 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:13:35.226652 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:13:35.275683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:13:35.293509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:13:35.293846 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:13:35.303873 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 8 00:13:35.310119 kernel: kvm_amd: TSC scaling supported May 8 00:13:35.310155 kernel: kvm_amd: Nested Virtualization enabled May 8 00:13:35.310169 kernel: kvm_amd: Nested Paging enabled May 8 00:13:35.311215 kernel: kvm_amd: LBR virtualization supported May 8 00:13:35.313636 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 8 00:13:35.313660 kernel: kvm_amd: Virtual GIF supported May 8 00:13:35.323637 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:13:35.343364 systemd-networkd[1425]: lo: Link UP May 8 00:13:35.343659 systemd-networkd[1425]: lo: Gained carrier May 8 00:13:35.345931 systemd-networkd[1425]: Enumeration completed May 8 00:13:35.346269 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:13:35.346517 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:13:35.346589 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:13:35.347557 systemd-networkd[1425]: eth0: Link UP May 8 00:13:35.348131 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:13:35.348363 systemd-networkd[1425]: eth0: Gained carrier May 8 00:13:35.348424 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:13:35.349731 systemd[1]: Reached target network.target - Network. May 8 00:13:35.350832 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:13:35.351625 kernel: EDAC MC: Ver: 3.0.0 May 8 00:13:35.360666 systemd-networkd[1425]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:13:35.360765 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 8 00:13:35.362160 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection. May 8 00:13:35.363003 systemd-timesyncd[1427]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:13:35.363052 systemd-timesyncd[1427]: Initial clock synchronization to Thu 2025-05-08 00:13:35.317904 UTC. May 8 00:13:35.363432 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:13:35.376975 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:13:35.377502 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 8 00:13:35.389749 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:13:35.391449 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:13:35.398288 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:13:35.436987 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:13:35.438661 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:13:35.439857 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:13:35.441109 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:13:35.442438 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
May 8 00:13:35.444157 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:13:35.445443 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:13:35.446775 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:13:35.448081 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:13:35.448109 systemd[1]: Reached target paths.target - Path Units. May 8 00:13:35.449088 systemd[1]: Reached target timers.target - Timer Units. May 8 00:13:35.451046 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:13:35.453937 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:13:35.457848 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 8 00:13:35.459406 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 8 00:13:35.460827 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 8 00:13:35.465234 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:13:35.466785 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 8 00:13:35.469440 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:13:35.471173 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:13:35.472484 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:13:35.473513 systemd[1]: Reached target basic.target - Basic System. May 8 00:13:35.474680 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:13:35.474733 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:13:35.476122 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:13:35.478862 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:13:35.481727 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:13:35.482861 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:13:35.486106 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:13:35.487353 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:13:35.491810 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:13:35.495826 jq[1483]: false May 8 00:13:35.495539 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:13:35.498176 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:13:35.502840 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:13:35.509452 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:13:35.511970 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:13:35.512912 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 8 00:13:35.513937 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:13:35.516891 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:13:35.519903 dbus-daemon[1482]: [system] SELinux support is enabled May 8 00:13:35.520158 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:13:35.521357 extend-filesystems[1484]: Found loop3 May 8 00:13:35.522803 extend-filesystems[1484]: Found loop4 May 8 00:13:35.522803 extend-filesystems[1484]: Found loop5 May 8 00:13:35.522803 extend-filesystems[1484]: Found sr0 May 8 00:13:35.522803 extend-filesystems[1484]: Found vda May 8 00:13:35.522803 extend-filesystems[1484]: Found vda1 May 8 00:13:35.522803 extend-filesystems[1484]: Found vda2 May 8 00:13:35.524598 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:13:35.529354 extend-filesystems[1484]: Found vda3 May 8 00:13:35.529354 extend-filesystems[1484]: Found usr May 8 00:13:35.529354 extend-filesystems[1484]: Found vda4 May 8 00:13:35.529354 extend-filesystems[1484]: Found vda6 May 8 00:13:35.529354 extend-filesystems[1484]: Found vda7 May 8 00:13:35.529354 extend-filesystems[1484]: Found vda9 May 8 00:13:35.529354 extend-filesystems[1484]: Checking size of /dev/vda9 May 8 00:13:35.539429 update_engine[1493]: I20250508 00:13:35.537995 1493 main.cc:92] Flatcar Update Engine starting May 8 00:13:35.541221 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:13:35.544415 extend-filesystems[1484]: Resized partition /dev/vda9 May 8 00:13:35.545374 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:13:35.545559 jq[1494]: true May 8 00:13:35.545832 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:13:35.546307 update_engine[1493]: I20250508 00:13:35.546087 1493 update_check_scheduler.cc:74] Next update check in 11m19s May 8 00:13:35.546119 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:13:35.550159 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024) May 8 00:13:35.551889 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:13:35.552226 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:13:35.553619 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:13:35.559901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1381) May 8 00:13:35.574100 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:13:35.574175 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:13:35.575976 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:13:35.575998 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 8 00:13:35.582061 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:13:35.584706 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:13:35.586491 tar[1505]: linux-amd64/helm May 8 00:13:35.612739 jq[1507]: true May 8 00:13:35.613182 systemd-logind[1492]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:13:35.613216 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:13:35.614024 extend-filesystems[1504]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:13:35.614024 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:13:35.614024 extend-filesystems[1504]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:13:35.620539 extend-filesystems[1484]: Resized filesystem in /dev/vda9 May 8 00:13:35.616537 systemd-logind[1492]: New seat seat0. May 8 00:13:35.618361 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:13:35.618651 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:13:35.620980 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:13:35.625347 systemd[1]: Started update-engine.service - Update Engine. May 8 00:13:35.639863 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:13:35.655999 bash[1536]: Updated "/home/core/.ssh/authorized_keys" May 8 00:13:35.659447 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:13:35.662573 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:13:35.677353 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:13:35.772981 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:13:35.799680 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:13:35.826947 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:13:35.834954 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:13:35.835240 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:13:35.838097 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:13:35.854150 containerd[1508]: time="2025-05-08T00:13:35.854040646Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 00:13:35.862749 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:13:35.869936 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:13:35.872641 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:13:35.875943 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:13:35.879033 containerd[1508]: time="2025-05-08T00:13:35.878995023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:13:35.880960 containerd[1508]: time="2025-05-08T00:13:35.880935002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:13:35.880960 containerd[1508]: time="2025-05-08T00:13:35.880958456Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:13:35.881030 containerd[1508]: time="2025-05-08T00:13:35.880972503Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:13:35.881166 containerd[1508]: time="2025-05-08T00:13:35.881148332Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:13:35.881166 containerd[1508]: time="2025-05-08T00:13:35.881165905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:13:35.881288 containerd[1508]: time="2025-05-08T00:13:35.881232400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:13:35.881288 containerd[1508]: time="2025-05-08T00:13:35.881247649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:13:35.881526 containerd[1508]: time="2025-05-08T00:13:35.881507847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:13:35.881526 containerd[1508]: time="2025-05-08T00:13:35.881524558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:13:35.881569 containerd[1508]: time="2025-05-08T00:13:35.881536310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:13:35.881569 containerd[1508]: time="2025-05-08T00:13:35.881544936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:13:35.881671 containerd[1508]: time="2025-05-08T00:13:35.881655674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:13:35.881924 containerd[1508]: time="2025-05-08T00:13:35.881886968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:13:35.882086 containerd[1508]: time="2025-05-08T00:13:35.882052609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:13:35.882086 containerd[1508]: time="2025-05-08T00:13:35.882066935Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:13:35.882186 containerd[1508]: time="2025-05-08T00:13:35.882171021Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 8 00:13:35.882244 containerd[1508]: time="2025-05-08T00:13:35.882229460Z" level=info msg="metadata content store policy set" policy=shared May 8 00:13:35.888659 containerd[1508]: time="2025-05-08T00:13:35.888621780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:13:35.888706 containerd[1508]: time="2025-05-08T00:13:35.888671864Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:13:35.888706 containerd[1508]: time="2025-05-08T00:13:35.888696851Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:13:35.888756 containerd[1508]: time="2025-05-08T00:13:35.888714585Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:13:35.888756 containerd[1508]: time="2025-05-08T00:13:35.888731867Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:13:35.888914 containerd[1508]: time="2025-05-08T00:13:35.888885345Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:13:35.889170 containerd[1508]: time="2025-05-08T00:13:35.889118051Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:13:35.889262 containerd[1508]: time="2025-05-08T00:13:35.889241753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:13:35.889284 containerd[1508]: time="2025-05-08T00:13:35.889262392Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:13:35.889284 containerd[1508]: time="2025-05-08T00:13:35.889278763Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:13:35.889337 containerd[1508]: time="2025-05-08T00:13:35.889292489Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:13:35.889337 containerd[1508]: time="2025-05-08T00:13:35.889306425Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:13:35.889337 containerd[1508]: time="2025-05-08T00:13:35.889330069Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:13:35.889404 containerd[1508]: time="2025-05-08T00:13:35.889350527Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:13:35.889404 containerd[1508]: time="2025-05-08T00:13:35.889371757Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:13:35.889404 containerd[1508]: time="2025-05-08T00:13:35.889390302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:13:35.889475 containerd[1508]: time="2025-05-08T00:13:35.889407975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:13:35.889475 containerd[1508]: time="2025-05-08T00:13:35.889423835Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 8 00:13:35.889475 containerd[1508]: time="2025-05-08T00:13:35.889449844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889475 containerd[1508]: time="2025-05-08T00:13:35.889470412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889567 containerd[1508]: time="2025-05-08T00:13:35.889493395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889567 containerd[1508]: time="2025-05-08T00:13:35.889510137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889567 containerd[1508]: time="2025-05-08T00:13:35.889526087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889567 containerd[1508]: time="2025-05-08T00:13:35.889544110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889567 containerd[1508]: time="2025-05-08T00:13:35.889559870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889679 containerd[1508]: time="2025-05-08T00:13:35.889575219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889679 containerd[1508]: time="2025-05-08T00:13:35.889591329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889717 containerd[1508]: time="2025-05-08T00:13:35.889676228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889717 containerd[1508]: time="2025-05-08T00:13:35.889691657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889717 containerd[1508]: time="2025-05-08T00:13:35.889704531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889782 containerd[1508]: time="2025-05-08T00:13:35.889717696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889782 containerd[1508]: time="2025-05-08T00:13:35.889736251Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:13:35.889782 containerd[1508]: time="2025-05-08T00:13:35.889759094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889782 containerd[1508]: time="2025-05-08T00:13:35.889775424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:13:35.889851 containerd[1508]: time="2025-05-08T00:13:35.889788840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:13:35.890590 containerd[1508]: time="2025-05-08T00:13:35.890556369Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:13:35.890749 containerd[1508]: time="2025-05-08T00:13:35.890586466Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:13:35.890749 containerd[1508]: time="2025-05-08T00:13:35.890741717Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:13:35.890793 containerd[1508]: time="2025-05-08T00:13:35.890761504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:13:35.890793 containerd[1508]: time="2025-05-08T00:13:35.890774829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:13:35.890830 containerd[1508]: time="2025-05-08T00:13:35.890790839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:13:35.890830 containerd[1508]: time="2025-05-08T00:13:35.890803002Z" level=info msg="NRI interface is disabled by configuration." May 8 00:13:35.890830 containerd[1508]: time="2025-05-08T00:13:35.890816076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:13:35.891189 containerd[1508]: time="2025-05-08T00:13:35.891123383Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:13:35.891318 containerd[1508]: time="2025-05-08T00:13:35.891186992Z" level=info msg="Connect containerd service" May 8 00:13:35.891318 containerd[1508]: time="2025-05-08T00:13:35.891217830Z" level=info msg="using legacy CRI server" May 8 00:13:35.891318 containerd[1508]: time="2025-05-08T00:13:35.891226466Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:13:35.891374 containerd[1508]: time="2025-05-08T00:13:35.891359967Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:13:35.892166 containerd[1508]: time="2025-05-08T00:13:35.892133067Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:13:35.892320 containerd[1508]: time="2025-05-08T00:13:35.892271296Z" level=info msg="Start subscribing containerd event" May 8 00:13:35.892355 containerd[1508]: time="2025-05-08T00:13:35.892338732Z" level=info msg="Start recovering state" May 8 00:13:35.892551 containerd[1508]: time="2025-05-08T00:13:35.892515183Z" level=info msg="Start event monitor" May 8 00:13:35.892655 containerd[1508]: time="2025-05-08T00:13:35.892639036Z" level=info msg="Start snapshots syncer" May 8 00:13:35.892682 containerd[1508]: time="2025-05-08T00:13:35.892669373Z" level=info msg="Start cni network conf syncer for default" May 8 00:13:35.892714 containerd[1508]: time="2025-05-08T00:13:35.892680123Z" level=info msg="Start streaming server" May 8 00:13:35.893123 containerd[1508]: time="2025-05-08T00:13:35.893099740Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:13:35.893178 containerd[1508]: time="2025-05-08T00:13:35.893163950Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:13:35.893242 containerd[1508]: time="2025-05-08T00:13:35.893229343Z" level=info msg="containerd successfully booted in 0.043394s" May 8 00:13:35.893294 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:13:36.062657 tar[1505]: linux-amd64/LICENSE May 8 00:13:36.062769 tar[1505]: linux-amd64/README.md May 8 00:13:36.082194 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:13:36.725855 systemd-networkd[1425]: eth0: Gained IPv6LL May 8 00:13:36.729339 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:13:36.731357 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:13:36.744932 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:13:36.747439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:36.749831 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:13:36.769720 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:13:36.770010 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:13:36.771722 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
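
The CRI plugin's warning above ("no network config found in /etc/cni/net.d") is what containerd reports before any pod network add-on has dropped a configuration into that directory. A minimal Go sketch of the same check, assuming the directory named in the log and the common CNI file extensions (.conf, .conflist, .json); this approximates the convention, it is not containerd's actual loader code:

    // Minimal sketch: list candidate CNI config files in the directory the
    // containerd log names. The extension filter follows common CNI convention
    // and is an assumption here, not containerd's exact loading logic.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", dir, err)
            return
        }
        found := 0
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("candidate CNI config:", filepath.Join(dir, e.Name()))
                found++
            }
        }
        if found == 0 {
            fmt.Println("no network config found in", dir, "- matches the containerd warning above")
        }
    }

Once a pod network add-on installs its config file there, the warning typically clears on the CRI plugin's next CNI config sync.
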
May 8 00:13:36.775939 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:13:37.372099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:37.373887 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:13:37.375272 systemd[1]: Startup finished in 845ms (kernel) + 6.256s (initrd) + 4.425s (userspace) = 11.527s. May 8 00:13:37.378568 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:13:38.021706 kubelet[1595]: E0508 00:13:38.021578 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:13:38.027144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:13:38.027391 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:13:38.027865 systemd[1]: kubelet.service: Consumed 1.153s CPU time, 236.1M memory peak. May 8 00:13:39.650765 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:13:39.652415 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:37578.service - OpenSSH per-connection server daemon (10.0.0.1:37578). May 8 00:13:39.711337 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 37578 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:13:39.713152 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:39.724371 systemd-logind[1492]: New session 1 of user core. May 8 00:13:39.725745 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:13:39.737827 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:13:39.750116 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:13:39.753212 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:13:39.761280 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:13:39.763670 systemd-logind[1492]: New session c1 of user core. May 8 00:13:39.913223 systemd[1612]: Queued start job for default target default.target. May 8 00:13:39.923180 systemd[1612]: Created slice app.slice - User Application Slice. May 8 00:13:39.923212 systemd[1612]: Reached target paths.target - Paths. May 8 00:13:39.923272 systemd[1612]: Reached target timers.target - Timers. May 8 00:13:39.925065 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:13:39.936698 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:13:39.936873 systemd[1612]: Reached target sockets.target - Sockets. May 8 00:13:39.936937 systemd[1612]: Reached target basic.target - Basic System. May 8 00:13:39.937003 systemd[1612]: Reached target default.target - Main User Target. May 8 00:13:39.937049 systemd[1612]: Startup finished in 166ms. May 8 00:13:39.937658 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:13:39.940013 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:13:40.006749 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:37580.service - OpenSSH per-connection server daemon (10.0.0.1:37580). 
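
The kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory) is the expected state on a node where kubeadm has not yet written that file; systemd keeps scheduling restarts, and the same failure repeats further down in the log until the config exists. A minimal standard-library sketch of checking for that file on the node, with the path taken from the log line:

    // Minimal sketch: report whether the kubelet config file named in the log
    // exists yet. Only a plain stat check; not kubeadm or kubelet logic.
    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml"
        info, err := os.Stat(path)
        switch {
        case err == nil:
            fmt.Printf("%s exists (%d bytes); kubelet should be able to load it\n", path, info.Size())
        case errors.Is(err, fs.ErrNotExist):
            fmt.Printf("%s not written yet; expect kubelet.service to keep restarting and failing\n", path)
        default:
            fmt.Printf("stat %s: %v\n", path, err)
        }
    }
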
May 8 00:13:40.050558 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 37580 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:13:40.052192 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:40.056580 systemd-logind[1492]: New session 2 of user core. May 8 00:13:40.071744 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:13:40.125178 sshd[1625]: Connection closed by 10.0.0.1 port 37580 May 8 00:13:40.125582 sshd-session[1623]: pam_unix(sshd:session): session closed for user core May 8 00:13:40.144484 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:37580.service: Deactivated successfully. May 8 00:13:40.146903 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:13:40.148741 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit. May 8 00:13:40.162965 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:37592.service - OpenSSH per-connection server daemon (10.0.0.1:37592). May 8 00:13:40.164151 systemd-logind[1492]: Removed session 2. May 8 00:13:40.203595 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 37592 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:13:40.205314 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:40.209853 systemd-logind[1492]: New session 3 of user core. May 8 00:13:40.219720 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:13:40.269265 sshd[1633]: Connection closed by 10.0.0.1 port 37592 May 8 00:13:40.269695 sshd-session[1630]: pam_unix(sshd:session): session closed for user core May 8 00:13:40.282206 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:37592.service: Deactivated successfully. May 8 00:13:40.284001 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:13:40.285395 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit. May 8 00:13:40.286691 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:37600.service - OpenSSH per-connection server daemon (10.0.0.1:37600). May 8 00:13:40.287387 systemd-logind[1492]: Removed session 3. May 8 00:13:40.329881 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 37600 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:13:40.331241 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:40.335366 systemd-logind[1492]: New session 4 of user core. May 8 00:13:40.343724 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:13:40.396579 sshd[1641]: Connection closed by 10.0.0.1 port 37600 May 8 00:13:40.396997 sshd-session[1638]: pam_unix(sshd:session): session closed for user core May 8 00:13:40.417359 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:37600.service: Deactivated successfully. May 8 00:13:40.419346 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:13:40.420805 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit. May 8 00:13:40.431851 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:37602.service - OpenSSH per-connection server daemon (10.0.0.1:37602). May 8 00:13:40.432859 systemd-logind[1492]: Removed session 4. 
May 8 00:13:40.474107 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 37602 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:13:40.475815 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:40.480650 systemd-logind[1492]: New session 5 of user core. May 8 00:13:40.490768 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:13:40.551538 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:13:40.551985 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:13:40.567863 sudo[1650]: pam_unix(sudo:session): session closed for user root May 8 00:13:40.569395 sshd[1649]: Connection closed by 10.0.0.1 port 37602 May 8 00:13:40.569850 sshd-session[1646]: pam_unix(sshd:session): session closed for user core May 8 00:13:40.582415 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:37602.service: Deactivated successfully. May 8 00:13:40.584516 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:13:40.585942 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit. May 8 00:13:40.602014 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:37610.service - OpenSSH per-connection server daemon (10.0.0.1:37610). May 8 00:13:40.602954 systemd-logind[1492]: Removed session 5. May 8 00:13:40.641857 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 37610 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:13:40.643489 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:40.648227 systemd-logind[1492]: New session 6 of user core. May 8 00:13:40.654732 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:13:40.712511 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:13:40.712986 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:13:40.719755 sudo[1660]: pam_unix(sudo:session): session closed for user root May 8 00:13:40.727637 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 00:13:40.727993 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:13:40.749997 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:13:40.784369 augenrules[1682]: No rules May 8 00:13:40.786401 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:13:40.786727 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:13:40.787884 sudo[1659]: pam_unix(sudo:session): session closed for user root May 8 00:13:40.789486 sshd[1658]: Connection closed by 10.0.0.1 port 37610 May 8 00:13:40.789811 sshd-session[1655]: pam_unix(sshd:session): session closed for user core May 8 00:13:40.798429 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:37610.service: Deactivated successfully. May 8 00:13:40.800470 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:13:40.801957 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit. May 8 00:13:40.811843 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:37626.service - OpenSSH per-connection server daemon (10.0.0.1:37626). May 8 00:13:40.812785 systemd-logind[1492]: Removed session 6. 
May 8 00:13:40.853548 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 37626 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:13:40.854980 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:40.859528 systemd-logind[1492]: New session 7 of user core. May 8 00:13:40.869777 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:13:40.924052 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:13:40.924408 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:13:41.401952 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:13:41.402152 (dockerd)[1713]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:13:42.268589 dockerd[1713]: time="2025-05-08T00:13:42.268487253Z" level=info msg="Starting up" May 8 00:13:42.765029 dockerd[1713]: time="2025-05-08T00:13:42.764909608Z" level=info msg="Loading containers: start." May 8 00:13:42.978630 kernel: Initializing XFRM netlink socket May 8 00:13:43.070670 systemd-networkd[1425]: docker0: Link UP May 8 00:13:43.110282 dockerd[1713]: time="2025-05-08T00:13:43.110225959Z" level=info msg="Loading containers: done." May 8 00:13:43.161420 dockerd[1713]: time="2025-05-08T00:13:43.161346852Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:13:43.161639 dockerd[1713]: time="2025-05-08T00:13:43.161479724Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 8 00:13:43.161639 dockerd[1713]: time="2025-05-08T00:13:43.161631147Z" level=info msg="Daemon has completed initialization" May 8 00:13:43.199663 dockerd[1713]: time="2025-05-08T00:13:43.199582111Z" level=info msg="API listen on /run/docker.sock" May 8 00:13:43.199829 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:13:44.022885 containerd[1508]: time="2025-05-08T00:13:44.022808541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 8 00:13:44.903734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384981202.mount: Deactivated successfully. 
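
dockerd's "API listen on /run/docker.sock" above means the Engine API is reachable over that unix socket. A minimal Go sketch that pings it using only the standard library; the /_ping endpoint is part of the Engine API, and the http://docker host name is just a placeholder that the custom dialer ignores:

    // Minimal sketch: ping the Docker Engine API over the unix socket the
    // daemon log says it is listening on. Standard library only.
    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Route every request to the daemon's unix socket instead of TCP.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }
        // The host part of the URL is ignored once DialContext pins the socket.
        resp, err := client.Get("http://docker/_ping")
        if err != nil {
            fmt.Println("ping failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("HTTP %s, body %q\n", resp.Status, body)
    }

Run it as root (or a member of the docker group) on the node; a healthy daemon typically answers HTTP 200 with body "OK".
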
May 8 00:13:46.238996 containerd[1508]: time="2025-05-08T00:13:46.238936394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:46.239832 containerd[1508]: time="2025-05-08T00:13:46.239809764Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 8 00:13:46.241184 containerd[1508]: time="2025-05-08T00:13:46.241143608Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:46.244312 containerd[1508]: time="2025-05-08T00:13:46.244266859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:46.245420 containerd[1508]: time="2025-05-08T00:13:46.245382206Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.222507351s" May 8 00:13:46.245489 containerd[1508]: time="2025-05-08T00:13:46.245427085Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 8 00:13:46.249169 containerd[1508]: time="2025-05-08T00:13:46.249127217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 8 00:13:47.865656 containerd[1508]: time="2025-05-08T00:13:47.865213319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:47.866504 containerd[1508]: time="2025-05-08T00:13:47.866459144Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 8 00:13:47.867800 containerd[1508]: time="2025-05-08T00:13:47.867738034Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:47.874543 containerd[1508]: time="2025-05-08T00:13:47.874438917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:47.876072 containerd[1508]: time="2025-05-08T00:13:47.875997695Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.626824279s" May 8 00:13:47.876072 containerd[1508]: time="2025-05-08T00:13:47.876060287Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 8 00:13:47.876806 containerd[1508]: 
time="2025-05-08T00:13:47.876770677Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 8 00:13:48.278275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:13:48.296008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:48.457495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:48.462936 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:13:48.998383 kubelet[1976]: E0508 00:13:48.998240 1976 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:13:49.005776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:13:49.006040 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:13:49.006531 systemd[1]: kubelet.service: Consumed 219ms CPU time, 98.6M memory peak. May 8 00:13:50.711892 containerd[1508]: time="2025-05-08T00:13:50.711809407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:50.712721 containerd[1508]: time="2025-05-08T00:13:50.712651517Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 8 00:13:50.713995 containerd[1508]: time="2025-05-08T00:13:50.713957029Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:50.717163 containerd[1508]: time="2025-05-08T00:13:50.717098184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:50.718324 containerd[1508]: time="2025-05-08T00:13:50.718282026Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.841470547s" May 8 00:13:50.718324 containerd[1508]: time="2025-05-08T00:13:50.718319834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 8 00:13:50.718909 containerd[1508]: time="2025-05-08T00:13:50.718870920Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 8 00:13:52.444372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531276237.mount: Deactivated successfully. 
May 8 00:13:53.180802 containerd[1508]: time="2025-05-08T00:13:53.180717157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:53.181671 containerd[1508]: time="2025-05-08T00:13:53.181588163Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 8 00:13:53.211172 containerd[1508]: time="2025-05-08T00:13:53.211102829Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:53.214716 containerd[1508]: time="2025-05-08T00:13:53.214518581Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.495604616s" May 8 00:13:53.214783 containerd[1508]: time="2025-05-08T00:13:53.214722242Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 8 00:13:53.214783 containerd[1508]: time="2025-05-08T00:13:53.214766984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:53.215477 containerd[1508]: time="2025-05-08T00:13:53.215452136Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:13:53.754765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3169789312.mount: Deactivated successfully. 
May 8 00:13:55.124626 containerd[1508]: time="2025-05-08T00:13:55.124530333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:55.125483 containerd[1508]: time="2025-05-08T00:13:55.125414822Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 00:13:55.127129 containerd[1508]: time="2025-05-08T00:13:55.127070443Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:55.130422 containerd[1508]: time="2025-05-08T00:13:55.130371053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:55.131579 containerd[1508]: time="2025-05-08T00:13:55.131540281Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.916061679s" May 8 00:13:55.131579 containerd[1508]: time="2025-05-08T00:13:55.131576869Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:13:55.132199 containerd[1508]: time="2025-05-08T00:13:55.132140583Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:13:56.137141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454380851.mount: Deactivated successfully. 
May 8 00:13:56.142098 containerd[1508]: time="2025-05-08T00:13:56.142029195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:56.142947 containerd[1508]: time="2025-05-08T00:13:56.142895200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 8 00:13:56.144160 containerd[1508]: time="2025-05-08T00:13:56.144124716Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:56.146467 containerd[1508]: time="2025-05-08T00:13:56.146425092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:56.147307 containerd[1508]: time="2025-05-08T00:13:56.147266900Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.014993488s" May 8 00:13:56.147307 containerd[1508]: time="2025-05-08T00:13:56.147301729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 8 00:13:56.147840 containerd[1508]: time="2025-05-08T00:13:56.147801941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 8 00:13:56.704332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131751941.mount: Deactivated successfully. May 8 00:13:59.256410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:13:59.263836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:59.417927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:59.422260 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:13:59.491261 kubelet[2104]: E0508 00:13:59.491172 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:13:59.496191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:13:59.496490 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:13:59.497254 systemd[1]: kubelet.service: Consumed 226ms CPU time, 99.7M memory peak. 
May 8 00:14:00.709009 containerd[1508]: time="2025-05-08T00:14:00.708930601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:00.710134 containerd[1508]: time="2025-05-08T00:14:00.710071369Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 8 00:14:00.711589 containerd[1508]: time="2025-05-08T00:14:00.711536373Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:00.714856 containerd[1508]: time="2025-05-08T00:14:00.714804862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:00.716161 containerd[1508]: time="2025-05-08T00:14:00.716105265Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.568265921s" May 8 00:14:00.716161 containerd[1508]: time="2025-05-08T00:14:00.716140961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 8 00:14:03.182385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:14:03.182650 systemd[1]: kubelet.service: Consumed 226ms CPU time, 99.7M memory peak. May 8 00:14:03.202907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:14:03.229633 systemd[1]: Reload requested from client PID 2142 ('systemctl') (unit session-7.scope)... May 8 00:14:03.229651 systemd[1]: Reloading... May 8 00:14:03.331765 zram_generator::config[2192]: No configuration found. May 8 00:14:03.644568 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:14:03.750237 systemd[1]: Reloading finished in 520 ms. May 8 00:14:03.814519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:14:03.819410 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:14:03.821917 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:14:03.823083 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:14:03.823348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:14:03.823385 systemd[1]: kubelet.service: Consumed 151ms CPU time, 84.7M memory peak. May 8 00:14:03.826122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:14:03.986962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:14:03.992550 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:14:04.243672 kubelet[2237]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:14:04.243672 kubelet[2237]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:14:04.243672 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:14:04.244138 kubelet[2237]: I0508 00:14:04.243666 2237 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:14:04.813001 kubelet[2237]: I0508 00:14:04.812937 2237 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:14:04.813001 kubelet[2237]: I0508 00:14:04.812975 2237 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:14:04.813301 kubelet[2237]: I0508 00:14:04.813243 2237 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:14:04.841544 kubelet[2237]: I0508 00:14:04.841498 2237 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:14:04.842061 kubelet[2237]: E0508 00:14:04.842027 2237 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" May 8 00:14:04.847966 kubelet[2237]: E0508 00:14:04.847921 2237 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:14:04.847966 kubelet[2237]: I0508 00:14:04.847957 2237 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:14:04.856333 kubelet[2237]: I0508 00:14:04.856292 2237 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:14:04.858282 kubelet[2237]: I0508 00:14:04.858234 2237 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:14:04.858540 kubelet[2237]: I0508 00:14:04.858478 2237 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:14:04.858781 kubelet[2237]: I0508 00:14:04.858525 2237 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:14:04.858781 kubelet[2237]: I0508 00:14:04.858783 2237 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:14:04.858932 kubelet[2237]: I0508 00:14:04.858793 2237 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:14:04.859038 kubelet[2237]: I0508 00:14:04.858968 2237 state_mem.go:36] "Initialized new in-memory state store" May 8 00:14:04.861296 kubelet[2237]: I0508 00:14:04.861249 2237 kubelet.go:408] "Attempting to sync node with API server" May 8 00:14:04.861296 kubelet[2237]: I0508 00:14:04.861281 2237 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:14:04.861434 kubelet[2237]: I0508 00:14:04.861338 2237 kubelet.go:314] "Adding apiserver pod source" May 8 00:14:04.861434 kubelet[2237]: I0508 00:14:04.861383 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:14:04.878275 kubelet[2237]: W0508 00:14:04.878040 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused May 8 00:14:04.878275 kubelet[2237]: E0508 00:14:04.878141 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" May 8 00:14:04.879675 kubelet[2237]: W0508 00:14:04.879640 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused May 8 00:14:04.879724 kubelet[2237]: E0508 00:14:04.879679 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" May 8 00:14:04.881959 kubelet[2237]: I0508 00:14:04.881934 2237 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:14:04.883807 kubelet[2237]: I0508 00:14:04.883765 2237 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:14:04.884639 kubelet[2237]: W0508 00:14:04.884608 2237 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:14:04.885428 kubelet[2237]: I0508 00:14:04.885383 2237 server.go:1269] "Started kubelet" May 8 00:14:04.886503 kubelet[2237]: I0508 00:14:04.886388 2237 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:14:04.886983 kubelet[2237]: I0508 00:14:04.886961 2237 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:14:04.887268 kubelet[2237]: I0508 00:14:04.887006 2237 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:14:04.887351 kubelet[2237]: I0508 00:14:04.887319 2237 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:14:04.888924 kubelet[2237]: I0508 00:14:04.888854 2237 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:14:04.890199 kubelet[2237]: E0508 00:14:04.889439 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:14:04.890199 kubelet[2237]: E0508 00:14:04.889547 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="200ms" May 8 00:14:04.890199 kubelet[2237]: I0508 00:14:04.889611 2237 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:14:04.890199 kubelet[2237]: I0508 00:14:04.889699 2237 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:14:04.890199 kubelet[2237]: I0508 00:14:04.889750 2237 reconciler.go:26] "Reconciler: start to sync state" May 8 00:14:04.890199 kubelet[2237]: W0508 00:14:04.889985 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused May 8 00:14:04.890199 kubelet[2237]: E0508 00:14:04.890023 2237 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" May 8 00:14:04.892326 kubelet[2237]: I0508 00:14:04.891306 2237 server.go:460] "Adding debug handlers to kubelet server" May 8 00:14:04.892326 kubelet[2237]: I0508 00:14:04.891648 2237 factory.go:221] Registration of the systemd container factory successfully May 8 00:14:04.892326 kubelet[2237]: I0508 00:14:04.891786 2237 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:14:04.893177 kubelet[2237]: E0508 00:14:04.890373 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d64f78d9f760d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:14:04.885358093 +0000 UTC m=+0.888860005,LastTimestamp:2025-05-08 00:14:04.885358093 +0000 UTC m=+0.888860005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:14:04.893834 kubelet[2237]: I0508 00:14:04.893813 2237 factory.go:221] Registration of the containerd container factory successfully May 8 00:14:04.900249 kubelet[2237]: E0508 00:14:04.900203 2237 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:14:04.906015 kubelet[2237]: I0508 00:14:04.905985 2237 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:14:04.906015 kubelet[2237]: I0508 00:14:04.906007 2237 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:14:04.906130 kubelet[2237]: I0508 00:14:04.906030 2237 state_mem.go:36] "Initialized new in-memory state store" May 8 00:14:04.912536 kubelet[2237]: I0508 00:14:04.912476 2237 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:14:04.914003 kubelet[2237]: I0508 00:14:04.913978 2237 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:14:04.914268 kubelet[2237]: I0508 00:14:04.914127 2237 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:14:04.914268 kubelet[2237]: I0508 00:14:04.914151 2237 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:14:04.914268 kubelet[2237]: E0508 00:14:04.914200 2237 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:14:04.990098 kubelet[2237]: E0508 00:14:04.990014 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:14:05.014892 kubelet[2237]: E0508 00:14:05.014839 2237 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:14:05.090385 kubelet[2237]: E0508 00:14:05.090174 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:14:05.091812 kubelet[2237]: E0508 00:14:05.091769 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="400ms" May 8 00:14:05.191018 kubelet[2237]: E0508 00:14:05.190950 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:14:05.215196 kubelet[2237]: E0508 00:14:05.215137 2237 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:14:05.291546 kubelet[2237]: E0508 00:14:05.291446 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:14:05.298018 kubelet[2237]: W0508 00:14:05.297923 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused May 8 00:14:05.298195 kubelet[2237]: E0508 00:14:05.298026 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" May 8 00:14:05.352473 kubelet[2237]: I0508 00:14:05.352324 2237 policy_none.go:49] "None policy: Start" May 8 00:14:05.353477 kubelet[2237]: I0508 00:14:05.353451 2237 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:14:05.353547 kubelet[2237]: I0508 00:14:05.353487 2237 state_mem.go:35] "Initializing new in-memory state store" May 8 00:14:05.392344 kubelet[2237]: E0508 00:14:05.392300 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:14:05.492424 kubelet[2237]: E0508 00:14:05.492380 2237 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:14:05.493047 kubelet[2237]: E0508 00:14:05.492704 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" 
interval="800ms" May 8 00:14:05.493545 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:14:05.512217 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:14:05.515122 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:14:05.528588 kubelet[2237]: I0508 00:14:05.528521 2237 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:14:05.528588 kubelet[2237]: I0508 00:14:05.528776 2237 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:14:05.528588 kubelet[2237]: I0508 00:14:05.528790 2237 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:14:05.528588 kubelet[2237]: I0508 00:14:05.529060 2237 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:14:05.530594 kubelet[2237]: E0508 00:14:05.530566 2237 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:14:05.624227 systemd[1]: Created slice kubepods-burstable-pod2f68bbfb5adcf0cba28fbf0460fd1c04.slice - libcontainer container kubepods-burstable-pod2f68bbfb5adcf0cba28fbf0460fd1c04.slice. May 8 00:14:05.630709 kubelet[2237]: I0508 00:14:05.630662 2237 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:14:05.631164 kubelet[2237]: E0508 00:14:05.631121 2237 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" May 8 00:14:05.640884 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 8 00:14:05.654426 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
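The repeated "Failed to ensure lease exists, will retry" entries show the kubelet backing off while the API server at 10.0.0.113:6443 still refuses connections: the logged retry interval doubles from 200ms to 400ms and 800ms here, and to 1.6s and 3.2s further down. A small sketch of that doubling schedule; the cap used below is an illustrative assumption, not a value taken from the log:

```python
"""Reproduce the doubling retry interval visible in the lease-controller errors above."""

def backoff_intervals(initial_s: float = 0.2, factor: float = 2.0,
                      cap_s: float = 7.0, steps: int = 8):
    """Yield retry intervals 0.2, 0.4, 0.8, 1.6, 3.2, ... capped at cap_s (cap is assumed)."""
    interval = initial_s
    for _ in range(steps):
        yield interval
        interval = min(interval * factor, cap_s)

if __name__ == "__main__":
    # Matches the sequence seen in the log: 200ms, 400ms, 800ms, 1.6s, 3.2s, ...
    print([f"{i:g}s" for i in backoff_intervals()])
```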
May 8 00:14:05.693788 kubelet[2237]: I0508 00:14:05.693728 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f68bbfb5adcf0cba28fbf0460fd1c04-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f68bbfb5adcf0cba28fbf0460fd1c04\") " pod="kube-system/kube-apiserver-localhost" May 8 00:14:05.693788 kubelet[2237]: I0508 00:14:05.693782 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f68bbfb5adcf0cba28fbf0460fd1c04-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f68bbfb5adcf0cba28fbf0460fd1c04\") " pod="kube-system/kube-apiserver-localhost" May 8 00:14:05.693952 kubelet[2237]: I0508 00:14:05.693812 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:14:05.693952 kubelet[2237]: I0508 00:14:05.693834 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:14:05.693952 kubelet[2237]: I0508 00:14:05.693856 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:14:05.693952 kubelet[2237]: I0508 00:14:05.693875 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:14:05.693952 kubelet[2237]: I0508 00:14:05.693896 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f68bbfb5adcf0cba28fbf0460fd1c04-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f68bbfb5adcf0cba28fbf0460fd1c04\") " pod="kube-system/kube-apiserver-localhost" May 8 00:14:05.694081 kubelet[2237]: I0508 00:14:05.693956 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:14:05.694081 kubelet[2237]: I0508 00:14:05.693990 2237 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 8 00:14:05.832845 kubelet[2237]: I0508 00:14:05.832794 2237 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:14:05.833192 kubelet[2237]: E0508 00:14:05.833154 2237 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" May 8 00:14:05.851804 kubelet[2237]: W0508 00:14:05.851731 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused May 8 00:14:05.851901 kubelet[2237]: E0508 00:14:05.851811 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" May 8 00:14:05.938583 containerd[1508]: time="2025-05-08T00:14:05.938449816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f68bbfb5adcf0cba28fbf0460fd1c04,Namespace:kube-system,Attempt:0,}" May 8 00:14:05.953256 containerd[1508]: time="2025-05-08T00:14:05.953209140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 8 00:14:05.957826 containerd[1508]: time="2025-05-08T00:14:05.957777263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 8 00:14:06.234718 kubelet[2237]: I0508 00:14:06.234590 2237 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:14:06.234989 kubelet[2237]: E0508 00:14:06.234960 2237 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" May 8 00:14:06.293785 kubelet[2237]: E0508 00:14:06.293723 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="1.6s" May 8 00:14:06.352571 kubelet[2237]: W0508 00:14:06.352491 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused May 8 00:14:06.352571 kubelet[2237]: E0508 00:14:06.352561 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" May 8 00:14:06.356120 kubelet[2237]: W0508 00:14:06.356076 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.113:6443: connect: connection refused May 8 00:14:06.356120 kubelet[2237]: E0508 00:14:06.356113 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" May 8 00:14:06.790282 kubelet[2237]: W0508 00:14:06.790204 2237 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused May 8 00:14:06.790282 kubelet[2237]: E0508 00:14:06.790266 2237 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" May 8 00:14:06.862728 kubelet[2237]: E0508 00:14:06.862668 2237 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" May 8 00:14:07.014655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3844457271.mount: Deactivated successfully. May 8 00:14:07.021236 containerd[1508]: time="2025-05-08T00:14:07.021164334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:14:07.024127 containerd[1508]: time="2025-05-08T00:14:07.024090371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:14:07.024948 containerd[1508]: time="2025-05-08T00:14:07.024903688Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:14:07.026692 containerd[1508]: time="2025-05-08T00:14:07.026658082Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:14:07.027648 containerd[1508]: time="2025-05-08T00:14:07.027563586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:14:07.028651 containerd[1508]: time="2025-05-08T00:14:07.028617532Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:14:07.029423 containerd[1508]: time="2025-05-08T00:14:07.029378410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:14:07.031309 containerd[1508]: time="2025-05-08T00:14:07.031273325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:14:07.033114 containerd[1508]: time="2025-05-08T00:14:07.033073518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.075233674s" May 8 00:14:07.033700 containerd[1508]: time="2025-05-08T00:14:07.033670280Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.09509974s" May 8 00:14:07.036587 containerd[1508]: time="2025-05-08T00:14:07.036536367Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.083237167s" May 8 00:14:07.036799 kubelet[2237]: I0508 00:14:07.036763 2237 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:14:07.037113 kubelet[2237]: E0508 00:14:07.037068 2237 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" May 8 00:14:07.367344 containerd[1508]: time="2025-05-08T00:14:07.364533072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:07.367582 containerd[1508]: time="2025-05-08T00:14:07.367343485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:07.367582 containerd[1508]: time="2025-05-08T00:14:07.367364637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:07.367582 containerd[1508]: time="2025-05-08T00:14:07.367508502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:07.372933 containerd[1508]: time="2025-05-08T00:14:07.372344556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:07.372933 containerd[1508]: time="2025-05-08T00:14:07.372412136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:07.372933 containerd[1508]: time="2025-05-08T00:14:07.372431385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:07.372933 containerd[1508]: time="2025-05-08T00:14:07.372548881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:07.375014 containerd[1508]: time="2025-05-08T00:14:07.373971079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:07.375014 containerd[1508]: time="2025-05-08T00:14:07.374828411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:07.375014 containerd[1508]: time="2025-05-08T00:14:07.374844165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:07.375014 containerd[1508]: time="2025-05-08T00:14:07.374925687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:07.401902 systemd[1]: Started cri-containerd-db18bf7e5ae4b7c82dae131857c5e24880ead55e014c602bb62cd0b48b105b1a.scope - libcontainer container db18bf7e5ae4b7c82dae131857c5e24880ead55e014c602bb62cd0b48b105b1a. May 8 00:14:07.455864 systemd[1]: Started cri-containerd-9b8b16edf814e0cf20855b1ed0528cd851ac0d02aa696a731de01ca2a21d0c54.scope - libcontainer container 9b8b16edf814e0cf20855b1ed0528cd851ac0d02aa696a731de01ca2a21d0c54. May 8 00:14:07.458559 systemd[1]: Started cri-containerd-ebf825c24b26ab6b6482b5e8e15b2ff6c6f918a857db6f06d29925af2f7e077a.scope - libcontainer container ebf825c24b26ab6b6482b5e8e15b2ff6c6f918a857db6f06d29925af2f7e077a. May 8 00:14:07.564035 containerd[1508]: time="2025-05-08T00:14:07.563900488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"db18bf7e5ae4b7c82dae131857c5e24880ead55e014c602bb62cd0b48b105b1a\"" May 8 00:14:07.568109 containerd[1508]: time="2025-05-08T00:14:07.568074791Z" level=info msg="CreateContainer within sandbox \"db18bf7e5ae4b7c82dae131857c5e24880ead55e014c602bb62cd0b48b105b1a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:14:07.573102 containerd[1508]: time="2025-05-08T00:14:07.573061450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b8b16edf814e0cf20855b1ed0528cd851ac0d02aa696a731de01ca2a21d0c54\"" May 8 00:14:07.573433 containerd[1508]: time="2025-05-08T00:14:07.573396431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2f68bbfb5adcf0cba28fbf0460fd1c04,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebf825c24b26ab6b6482b5e8e15b2ff6c6f918a857db6f06d29925af2f7e077a\"" May 8 00:14:07.576526 containerd[1508]: time="2025-05-08T00:14:07.576493164Z" level=info msg="CreateContainer within sandbox \"9b8b16edf814e0cf20855b1ed0528cd851ac0d02aa696a731de01ca2a21d0c54\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:14:07.576629 containerd[1508]: time="2025-05-08T00:14:07.576494746Z" level=info msg="CreateContainer within sandbox \"ebf825c24b26ab6b6482b5e8e15b2ff6c6f918a857db6f06d29925af2f7e077a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:14:07.851995 containerd[1508]: time="2025-05-08T00:14:07.851852845Z" level=info msg="CreateContainer within sandbox \"db18bf7e5ae4b7c82dae131857c5e24880ead55e014c602bb62cd0b48b105b1a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a075844bef6bdd65172dcb72e30c4073b809a038fe38cf6761de55d914bfbf5d\"" May 8 00:14:07.852705 containerd[1508]: time="2025-05-08T00:14:07.852668185Z" level=info 
msg="StartContainer for \"a075844bef6bdd65172dcb72e30c4073b809a038fe38cf6761de55d914bfbf5d\"" May 8 00:14:07.865641 containerd[1508]: time="2025-05-08T00:14:07.865567550Z" level=info msg="CreateContainer within sandbox \"9b8b16edf814e0cf20855b1ed0528cd851ac0d02aa696a731de01ca2a21d0c54\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"712abb5fb217c9f631f8634b30db021f18351358fd22fd923fe8240aec005ce4\"" May 8 00:14:07.866395 containerd[1508]: time="2025-05-08T00:14:07.866303680Z" level=info msg="StartContainer for \"712abb5fb217c9f631f8634b30db021f18351358fd22fd923fe8240aec005ce4\"" May 8 00:14:07.870546 containerd[1508]: time="2025-05-08T00:14:07.870489532Z" level=info msg="CreateContainer within sandbox \"ebf825c24b26ab6b6482b5e8e15b2ff6c6f918a857db6f06d29925af2f7e077a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f65eb30222047f1ed418724a2017b933432a7081a289e06bcdcef117d790da3b\"" May 8 00:14:07.871146 containerd[1508]: time="2025-05-08T00:14:07.871002269Z" level=info msg="StartContainer for \"f65eb30222047f1ed418724a2017b933432a7081a289e06bcdcef117d790da3b\"" May 8 00:14:07.895866 kubelet[2237]: E0508 00:14:07.895069 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="3.2s" May 8 00:14:07.939973 systemd[1]: Started cri-containerd-f65eb30222047f1ed418724a2017b933432a7081a289e06bcdcef117d790da3b.scope - libcontainer container f65eb30222047f1ed418724a2017b933432a7081a289e06bcdcef117d790da3b. May 8 00:14:07.943711 systemd[1]: Started cri-containerd-712abb5fb217c9f631f8634b30db021f18351358fd22fd923fe8240aec005ce4.scope - libcontainer container 712abb5fb217c9f631f8634b30db021f18351358fd22fd923fe8240aec005ce4. May 8 00:14:07.955783 systemd[1]: Started cri-containerd-a075844bef6bdd65172dcb72e30c4073b809a038fe38cf6761de55d914bfbf5d.scope - libcontainer container a075844bef6bdd65172dcb72e30c4073b809a038fe38cf6761de55d914bfbf5d. 
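At this point containerd has created the three static-pod sandboxes and their kube-apiserver, kube-controller-manager and kube-scheduler containers, and systemd has started the matching cri-containerd-*.scope units. A sketch of inspecting that state from the node with crictl, assuming crictl is installed and containerd listens on its default socket:

```python
"""List the pod sandboxes and containers that containerd just reported creating."""
import subprocess

# Assumed default containerd CRI socket; not taken from the log itself.
ENDPOINT = "unix:///run/containerd/containerd.sock"

def crictl(*args: str) -> str:
    """Run a crictl subcommand against the assumed containerd endpoint."""
    cmd = ["crictl", "--runtime-endpoint", ENDPOINT, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(crictl("pods"))      # sandboxes such as kube-apiserver-localhost
    print(crictl("ps", "-a"))  # containers such as kube-controller-manager
```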
May 8 00:14:08.132890 containerd[1508]: time="2025-05-08T00:14:08.132676929Z" level=info msg="StartContainer for \"f65eb30222047f1ed418724a2017b933432a7081a289e06bcdcef117d790da3b\" returns successfully" May 8 00:14:08.133380 containerd[1508]: time="2025-05-08T00:14:08.133117779Z" level=info msg="StartContainer for \"712abb5fb217c9f631f8634b30db021f18351358fd22fd923fe8240aec005ce4\" returns successfully" May 8 00:14:08.133380 containerd[1508]: time="2025-05-08T00:14:08.133161185Z" level=info msg="StartContainer for \"a075844bef6bdd65172dcb72e30c4073b809a038fe38cf6761de55d914bfbf5d\" returns successfully" May 8 00:14:08.640426 kubelet[2237]: I0508 00:14:08.640352 2237 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:14:09.882030 kubelet[2237]: I0508 00:14:09.881957 2237 apiserver.go:52] "Watching apiserver" May 8 00:14:09.890275 kubelet[2237]: I0508 00:14:09.890238 2237 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:14:09.909144 kubelet[2237]: I0508 00:14:09.908091 2237 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:14:09.943921 kubelet[2237]: E0508 00:14:09.943566 2237 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 8 00:14:09.943921 kubelet[2237]: E0508 00:14:09.943810 2237 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 8 00:14:09.943921 kubelet[2237]: E0508 00:14:09.943849 2237 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 8 00:14:15.353098 kubelet[2237]: I0508 00:14:15.352966 2237 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.352943073 podStartE2EDuration="5.352943073s" podCreationTimestamp="2025-05-08 00:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:14:15.146533519 +0000 UTC m=+11.150035431" watchObservedRunningTime="2025-05-08 00:14:15.352943073 +0000 UTC m=+11.356444985" May 8 00:14:18.225483 kubelet[2237]: I0508 00:14:18.225403 2237 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.225385048 podStartE2EDuration="4.225385048s" podCreationTimestamp="2025-05-08 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:14:15.353877184 +0000 UTC m=+11.357379106" watchObservedRunningTime="2025-05-08 00:14:18.225385048 +0000 UTC m=+14.228886960" May 8 00:14:20.861075 update_engine[1493]: I20250508 00:14:20.860960 1493 update_attempter.cc:509] Updating boot flags... 
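The pod_startup_latency_tracker line above reports podStartSLOduration=5.352943073s for kube-apiserver-localhost; that figure is simply observedRunningTime minus podCreationTimestamp. A quick check of the arithmetic using the two timestamps from the log (Python's datetime keeps microseconds, so the trailing 073 ns are dropped):

```python
"""Verify the podStartSLOduration reported above from its two timestamps."""
from datetime import datetime, timezone

created = datetime(2025, 5, 8, 0, 14, 10, tzinfo=timezone.utc)           # podCreationTimestamp
running = datetime(2025, 5, 8, 0, 14, 15, 352943, tzinfo=timezone.utc)   # observedRunningTime

print((running - created).total_seconds())  # 5.352943, matching podStartSLOduration
```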
May 8 00:14:20.960702 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2525) May 8 00:14:21.290629 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2528) May 8 00:14:21.335844 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2528) May 8 00:14:22.966932 systemd[1]: Reload requested from client PID 2536 ('systemctl') (unit session-7.scope)... May 8 00:14:22.966954 systemd[1]: Reloading... May 8 00:14:23.090632 zram_generator::config[2583]: No configuration found. May 8 00:14:23.220191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:14:23.351195 systemd[1]: Reloading finished in 383 ms. May 8 00:14:23.375549 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:14:23.392426 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:14:23.392851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:14:23.392923 systemd[1]: kubelet.service: Consumed 2.263s CPU time, 121.8M memory peak. May 8 00:14:23.406960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:14:23.579617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:14:23.584627 (kubelet)[2625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:14:23.649938 kubelet[2625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:14:23.649938 kubelet[2625]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:14:23.649938 kubelet[2625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:14:23.650470 kubelet[2625]: I0508 00:14:23.650014 2625 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:14:23.658065 kubelet[2625]: I0508 00:14:23.657999 2625 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:14:23.658065 kubelet[2625]: I0508 00:14:23.658034 2625 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:14:23.658483 kubelet[2625]: I0508 00:14:23.658299 2625 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:14:23.659696 kubelet[2625]: I0508 00:14:23.659594 2625 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
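After the reload the kubelet reports "Client rotation is on" and loads its client credential from /var/lib/kubelet/pki/kubelet-client-current.pem, the symlinked PEM bundle that gets replaced on each rotation. A sketch for checking how long that certificate remains valid, assuming the openssl CLI is available on the node (the path is the one in the log line above):

```python
"""Show the validity window of the kubelet client certificate loaded above."""
import subprocess

CERT = "/var/lib/kubelet/pki/kubelet-client-current.pem"  # path from the kubelet log

def cert_validity(path: str) -> str:
    """Return subject, notBefore and notAfter of the PEM certificate at path."""
    out = subprocess.run(
        ["openssl", "x509", "-in", path, "-noout", "-subject", "-startdate", "-enddate"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout

if __name__ == "__main__":
    print(cert_validity(CERT))
```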
May 8 00:14:23.661985 kubelet[2625]: I0508 00:14:23.661947 2625 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:14:23.665171 kubelet[2625]: E0508 00:14:23.665128 2625 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:14:23.665171 kubelet[2625]: I0508 00:14:23.665158 2625 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:14:23.670235 kubelet[2625]: I0508 00:14:23.670196 2625 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:14:23.670355 kubelet[2625]: I0508 00:14:23.670328 2625 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:14:23.670748 kubelet[2625]: I0508 00:14:23.670687 2625 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:14:23.670923 kubelet[2625]: I0508 00:14:23.670731 2625 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:14:23.671057 kubelet[2625]: I0508 00:14:23.670927 2625 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:14:23.671057 kubelet[2625]: I0508 00:14:23.670939 2625 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:14:23.671057 kubelet[2625]: I0508 00:14:23.670971 2625 state_mem.go:36] "Initialized new in-memory state store" May 8 00:14:23.671138 kubelet[2625]: I0508 00:14:23.671097 2625 kubelet.go:408] "Attempting to sync node with API server" May 8 00:14:23.671138 kubelet[2625]: I0508 00:14:23.671111 2625 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:14:23.671181 kubelet[2625]: I0508 00:14:23.671141 2625 
kubelet.go:314] "Adding apiserver pod source" May 8 00:14:23.671181 kubelet[2625]: I0508 00:14:23.671156 2625 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:14:23.672442 kubelet[2625]: I0508 00:14:23.672411 2625 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:14:23.673074 kubelet[2625]: I0508 00:14:23.673041 2625 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:14:23.675659 kubelet[2625]: I0508 00:14:23.673637 2625 server.go:1269] "Started kubelet" May 8 00:14:23.677998 kubelet[2625]: I0508 00:14:23.677964 2625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:14:23.681945 kubelet[2625]: I0508 00:14:23.681924 2625 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:14:23.682525 kubelet[2625]: I0508 00:14:23.682331 2625 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:14:23.682711 kubelet[2625]: I0508 00:14:23.682697 2625 reconciler.go:26] "Reconciler: start to sync state" May 8 00:14:23.682786 kubelet[2625]: I0508 00:14:23.682663 2625 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:14:23.683248 kubelet[2625]: I0508 00:14:23.683209 2625 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:14:23.683911 kubelet[2625]: I0508 00:14:23.683835 2625 server.go:460] "Adding debug handlers to kubelet server" May 8 00:14:23.685470 kubelet[2625]: E0508 00:14:23.685451 2625 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:14:23.686078 kubelet[2625]: I0508 00:14:23.682734 2625 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:14:23.686164 kubelet[2625]: I0508 00:14:23.686140 2625 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:14:23.686377 kubelet[2625]: I0508 00:14:23.686346 2625 factory.go:221] Registration of the systemd container factory successfully May 8 00:14:23.686539 kubelet[2625]: I0508 00:14:23.686487 2625 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:14:23.688241 kubelet[2625]: I0508 00:14:23.688200 2625 factory.go:221] Registration of the containerd container factory successfully May 8 00:14:23.695903 kubelet[2625]: I0508 00:14:23.695757 2625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:14:23.698008 kubelet[2625]: I0508 00:14:23.697925 2625 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:14:23.698121 kubelet[2625]: I0508 00:14:23.697970 2625 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:14:23.698216 kubelet[2625]: I0508 00:14:23.698206 2625 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:14:23.698372 kubelet[2625]: E0508 00:14:23.698301 2625 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:14:23.733121 kubelet[2625]: I0508 00:14:23.733080 2625 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:14:23.733121 kubelet[2625]: I0508 00:14:23.733103 2625 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:14:23.733121 kubelet[2625]: I0508 00:14:23.733136 2625 state_mem.go:36] "Initialized new in-memory state store" May 8 00:14:23.733360 kubelet[2625]: I0508 00:14:23.733349 2625 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:14:23.733393 kubelet[2625]: I0508 00:14:23.733363 2625 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:14:23.733393 kubelet[2625]: I0508 00:14:23.733387 2625 policy_none.go:49] "None policy: Start" May 8 00:14:23.734325 kubelet[2625]: I0508 00:14:23.734290 2625 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:14:23.734438 kubelet[2625]: I0508 00:14:23.734340 2625 state_mem.go:35] "Initializing new in-memory state store" May 8 00:14:23.734636 kubelet[2625]: I0508 00:14:23.734591 2625 state_mem.go:75] "Updated machine memory state" May 8 00:14:23.739746 kubelet[2625]: I0508 00:14:23.739708 2625 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:14:23.740313 kubelet[2625]: I0508 00:14:23.740115 2625 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:14:23.740313 kubelet[2625]: I0508 00:14:23.740142 2625 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:14:23.740410 kubelet[2625]: I0508 00:14:23.740383 2625 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:14:23.845775 kubelet[2625]: I0508 00:14:23.845590 2625 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:14:23.884208 kubelet[2625]: I0508 00:14:23.884143 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:14:23.884208 kubelet[2625]: I0508 00:14:23.884183 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:14:23.884423 kubelet[2625]: I0508 00:14:23.884226 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:14:23.884423 kubelet[2625]: I0508 00:14:23.884245 2625 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f68bbfb5adcf0cba28fbf0460fd1c04-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f68bbfb5adcf0cba28fbf0460fd1c04\") " pod="kube-system/kube-apiserver-localhost" May 8 00:14:23.884423 kubelet[2625]: I0508 00:14:23.884260 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f68bbfb5adcf0cba28fbf0460fd1c04-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2f68bbfb5adcf0cba28fbf0460fd1c04\") " pod="kube-system/kube-apiserver-localhost" May 8 00:14:23.884423 kubelet[2625]: I0508 00:14:23.884276 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f68bbfb5adcf0cba28fbf0460fd1c04-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2f68bbfb5adcf0cba28fbf0460fd1c04\") " pod="kube-system/kube-apiserver-localhost" May 8 00:14:23.884423 kubelet[2625]: I0508 00:14:23.884306 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:14:23.884579 kubelet[2625]: I0508 00:14:23.884321 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:14:23.884579 kubelet[2625]: I0508 00:14:23.884336 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:14:24.091593 kubelet[2625]: E0508 00:14:24.091533 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:14:24.100747 kubelet[2625]: E0508 00:14:24.100590 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:14:24.136861 kubelet[2625]: I0508 00:14:24.136676 2625 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 8 00:14:24.136861 kubelet[2625]: I0508 00:14:24.136788 2625 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:14:24.137805 kubelet[2625]: E0508 00:14:24.137246 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:14:24.544332 sudo[2658]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:14:24.544757 sudo[2658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 00:14:24.672154 
kubelet[2625]: I0508 00:14:24.672110 2625 apiserver.go:52] "Watching apiserver" May 8 00:14:24.682888 kubelet[2625]: I0508 00:14:24.682861 2625 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:14:25.026664 sudo[2658]: pam_unix(sudo:session): session closed for user root May 8 00:14:27.103909 kubelet[2625]: I0508 00:14:27.103866 2625 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:14:27.104421 kubelet[2625]: I0508 00:14:27.104398 2625 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:14:27.104453 containerd[1508]: time="2025-05-08T00:14:27.104200827Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:14:27.252720 sudo[1694]: pam_unix(sudo:session): session closed for user root May 8 00:14:27.254821 sshd[1693]: Connection closed by 10.0.0.1 port 37626 May 8 00:14:27.261308 sshd-session[1690]: pam_unix(sshd:session): session closed for user core May 8 00:14:27.274948 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:37626.service: Deactivated successfully. May 8 00:14:27.278188 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:14:27.278510 systemd[1]: session-7.scope: Consumed 4.608s CPU time, 254.8M memory peak. May 8 00:14:27.280549 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit. May 8 00:14:27.281850 systemd-logind[1492]: Removed session 7. May 8 00:14:32.525898 systemd[1]: Created slice kubepods-besteffort-podbd3ec17a_77bb_4ca5_bd17_7c2393f9f306.slice - libcontainer container kubepods-besteffort-podbd3ec17a_77bb_4ca5_bd17_7c2393f9f306.slice. May 8 00:14:32.544064 systemd[1]: Created slice kubepods-besteffort-podb0ce46fd_6349_4c35_9fcc_1dc15a8bd8c8.slice - libcontainer container kubepods-besteffort-podb0ce46fd_6349_4c35_9fcc_1dc15a8bd8c8.slice. May 8 00:14:32.550586 systemd[1]: Created slice kubepods-burstable-podbd1f9d6d_f453_4a66_afab_00c3b18e02b1.slice - libcontainer container kubepods-burstable-podbd1f9d6d_f453_4a66_afab_00c3b18e02b1.slice. 
May 8 00:14:32.583266 kubelet[2625]: I0508 00:14:32.583198 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd3ec17a-77bb-4ca5-bd17-7c2393f9f306-kube-proxy\") pod \"kube-proxy-vvlw2\" (UID: \"bd3ec17a-77bb-4ca5-bd17-7c2393f9f306\") " pod="kube-system/kube-proxy-vvlw2" May 8 00:14:32.583266 kubelet[2625]: I0508 00:14:32.583259 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-etc-cni-netd\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.583266 kubelet[2625]: I0508 00:14:32.583276 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-clustermesh-secrets\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584011 kubelet[2625]: I0508 00:14:32.583293 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-hubble-tls\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584011 kubelet[2625]: I0508 00:14:32.583316 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjgf2\" (UniqueName: \"kubernetes.io/projected/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-kube-api-access-sjgf2\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584011 kubelet[2625]: I0508 00:14:32.583335 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjfsf\" (UniqueName: \"kubernetes.io/projected/b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8-kube-api-access-fjfsf\") pod \"cilium-operator-5d85765b45-gd7nj\" (UID: \"b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8\") " pod="kube-system/cilium-operator-5d85765b45-gd7nj" May 8 00:14:32.584011 kubelet[2625]: I0508 00:14:32.583353 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-run\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584011 kubelet[2625]: I0508 00:14:32.583367 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-bpf-maps\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584176 kubelet[2625]: I0508 00:14:32.583381 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-config-path\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584176 kubelet[2625]: I0508 00:14:32.583398 2625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd3ec17a-77bb-4ca5-bd17-7c2393f9f306-lib-modules\") pod \"kube-proxy-vvlw2\" (UID: \"bd3ec17a-77bb-4ca5-bd17-7c2393f9f306\") " pod="kube-system/kube-proxy-vvlw2" May 8 00:14:32.584176 kubelet[2625]: I0508 00:14:32.583412 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cni-path\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584176 kubelet[2625]: I0508 00:14:32.583427 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-host-proc-sys-kernel\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584176 kubelet[2625]: I0508 00:14:32.583459 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd3ec17a-77bb-4ca5-bd17-7c2393f9f306-xtables-lock\") pod \"kube-proxy-vvlw2\" (UID: \"bd3ec17a-77bb-4ca5-bd17-7c2393f9f306\") " pod="kube-system/kube-proxy-vvlw2" May 8 00:14:32.584350 kubelet[2625]: I0508 00:14:32.583472 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ngkc\" (UniqueName: \"kubernetes.io/projected/bd3ec17a-77bb-4ca5-bd17-7c2393f9f306-kube-api-access-5ngkc\") pod \"kube-proxy-vvlw2\" (UID: \"bd3ec17a-77bb-4ca5-bd17-7c2393f9f306\") " pod="kube-system/kube-proxy-vvlw2" May 8 00:14:32.584350 kubelet[2625]: I0508 00:14:32.583491 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8-cilium-config-path\") pod \"cilium-operator-5d85765b45-gd7nj\" (UID: \"b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8\") " pod="kube-system/cilium-operator-5d85765b45-gd7nj" May 8 00:14:32.584350 kubelet[2625]: I0508 00:14:32.583505 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-hostproc\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584350 kubelet[2625]: I0508 00:14:32.583522 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-host-proc-sys-net\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584350 kubelet[2625]: I0508 00:14:32.583539 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-cgroup\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584515 kubelet[2625]: I0508 00:14:32.583556 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-lib-modules\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.584515 kubelet[2625]: I0508 00:14:32.583575 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-xtables-lock\") pod \"cilium-mk5fg\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " pod="kube-system/cilium-mk5fg" May 8 00:14:32.841316 containerd[1508]: time="2025-05-08T00:14:32.841159783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvlw2,Uid:bd3ec17a-77bb-4ca5-bd17-7c2393f9f306,Namespace:kube-system,Attempt:0,}" May 8 00:14:32.849339 containerd[1508]: time="2025-05-08T00:14:32.849243534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gd7nj,Uid:b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8,Namespace:kube-system,Attempt:0,}" May 8 00:14:32.854455 containerd[1508]: time="2025-05-08T00:14:32.854389227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mk5fg,Uid:bd1f9d6d-f453-4a66-afab-00c3b18e02b1,Namespace:kube-system,Attempt:0,}" May 8 00:14:32.903406 containerd[1508]: time="2025-05-08T00:14:32.903084821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:32.903406 containerd[1508]: time="2025-05-08T00:14:32.903215576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:32.903589 containerd[1508]: time="2025-05-08T00:14:32.903238458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:32.903589 containerd[1508]: time="2025-05-08T00:14:32.903396462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:32.905764 containerd[1508]: time="2025-05-08T00:14:32.905166008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:32.905764 containerd[1508]: time="2025-05-08T00:14:32.905225965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:32.905764 containerd[1508]: time="2025-05-08T00:14:32.905236304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:32.905764 containerd[1508]: time="2025-05-08T00:14:32.905314214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:32.935554 containerd[1508]: time="2025-05-08T00:14:32.932980314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:32.935554 containerd[1508]: time="2025-05-08T00:14:32.933240634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:32.935554 containerd[1508]: time="2025-05-08T00:14:32.933257443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:32.935554 containerd[1508]: time="2025-05-08T00:14:32.933516731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:33.000062 systemd[1]: Started cri-containerd-d07656ff0c56a216c4622184023704d604cc6b725d37f06ea0fa0fd36da7fcf3.scope - libcontainer container d07656ff0c56a216c4622184023704d604cc6b725d37f06ea0fa0fd36da7fcf3. May 8 00:14:33.006893 systemd[1]: Started cri-containerd-43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17.scope - libcontainer container 43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17. May 8 00:14:33.012737 systemd[1]: Started cri-containerd-0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e.scope - libcontainer container 0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e. May 8 00:14:33.041911 containerd[1508]: time="2025-05-08T00:14:33.041832940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvlw2,Uid:bd3ec17a-77bb-4ca5-bd17-7c2393f9f306,Namespace:kube-system,Attempt:0,} returns sandbox id \"d07656ff0c56a216c4622184023704d604cc6b725d37f06ea0fa0fd36da7fcf3\"" May 8 00:14:33.045934 containerd[1508]: time="2025-05-08T00:14:33.045895241Z" level=info msg="CreateContainer within sandbox \"d07656ff0c56a216c4622184023704d604cc6b725d37f06ea0fa0fd36da7fcf3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:14:33.046425 containerd[1508]: time="2025-05-08T00:14:33.046373194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mk5fg,Uid:bd1f9d6d-f453-4a66-afab-00c3b18e02b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\"" May 8 00:14:33.053955 containerd[1508]: time="2025-05-08T00:14:33.053904586Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:14:33.070575 containerd[1508]: time="2025-05-08T00:14:33.070515145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gd7nj,Uid:b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e\"" May 8 00:14:33.078629 containerd[1508]: time="2025-05-08T00:14:33.078544115Z" level=info msg="CreateContainer within sandbox \"d07656ff0c56a216c4622184023704d604cc6b725d37f06ea0fa0fd36da7fcf3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dbcb1d43eb2e8e0fffd7afa2dde013afa71399f6773a35610b2604143023bd9f\"" May 8 00:14:33.079621 containerd[1508]: time="2025-05-08T00:14:33.079537378Z" level=info msg="StartContainer for \"dbcb1d43eb2e8e0fffd7afa2dde013afa71399f6773a35610b2604143023bd9f\"" May 8 00:14:33.125958 systemd[1]: Started cri-containerd-dbcb1d43eb2e8e0fffd7afa2dde013afa71399f6773a35610b2604143023bd9f.scope - libcontainer container dbcb1d43eb2e8e0fffd7afa2dde013afa71399f6773a35610b2604143023bd9f. 
May 8 00:14:33.173217 containerd[1508]: time="2025-05-08T00:14:33.173164886Z" level=info msg="StartContainer for \"dbcb1d43eb2e8e0fffd7afa2dde013afa71399f6773a35610b2604143023bd9f\" returns successfully" May 8 00:14:33.782195 kubelet[2625]: I0508 00:14:33.781862 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vvlw2" podStartSLOduration=5.781841162 podStartE2EDuration="5.781841162s" podCreationTimestamp="2025-05-08 00:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:14:33.762529699 +0000 UTC m=+10.173270635" watchObservedRunningTime="2025-05-08 00:14:33.781841162 +0000 UTC m=+10.192582088" May 8 00:14:40.527762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379438803.mount: Deactivated successfully. May 8 00:14:46.770813 containerd[1508]: time="2025-05-08T00:14:46.770717121Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:46.772961 containerd[1508]: time="2025-05-08T00:14:46.772915602Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 8 00:14:46.775918 containerd[1508]: time="2025-05-08T00:14:46.775875596Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:46.778307 containerd[1508]: time="2025-05-08T00:14:46.778235265Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.724278414s" May 8 00:14:46.778307 containerd[1508]: time="2025-05-08T00:14:46.778286904Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:14:46.783596 containerd[1508]: time="2025-05-08T00:14:46.783564997Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:14:46.792800 containerd[1508]: time="2025-05-08T00:14:46.792736363Z" level=info msg="CreateContainer within sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:14:46.808790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2503936055.mount: Deactivated successfully. 
May 8 00:14:46.811883 containerd[1508]: time="2025-05-08T00:14:46.811824718Z" level=info msg="CreateContainer within sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\"" May 8 00:14:46.814657 containerd[1508]: time="2025-05-08T00:14:46.814200898Z" level=info msg="StartContainer for \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\"" May 8 00:14:46.853963 systemd[1]: Started cri-containerd-7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb.scope - libcontainer container 7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb. May 8 00:14:46.889574 containerd[1508]: time="2025-05-08T00:14:46.889520827Z" level=info msg="StartContainer for \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\" returns successfully" May 8 00:14:46.903862 systemd[1]: cri-containerd-7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb.scope: Deactivated successfully. May 8 00:14:47.805313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb-rootfs.mount: Deactivated successfully. May 8 00:14:48.635864 containerd[1508]: time="2025-05-08T00:14:48.635790273Z" level=info msg="shim disconnected" id=7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb namespace=k8s.io May 8 00:14:48.635864 containerd[1508]: time="2025-05-08T00:14:48.635856589Z" level=warning msg="cleaning up after shim disconnected" id=7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb namespace=k8s.io May 8 00:14:48.635864 containerd[1508]: time="2025-05-08T00:14:48.635869765Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:48.775287 containerd[1508]: time="2025-05-08T00:14:48.775185639Z" level=info msg="CreateContainer within sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:14:48.796414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701742383.mount: Deactivated successfully. May 8 00:14:48.802124 containerd[1508]: time="2025-05-08T00:14:48.802066176Z" level=info msg="CreateContainer within sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\"" May 8 00:14:48.802932 containerd[1508]: time="2025-05-08T00:14:48.802814401Z" level=info msg="StartContainer for \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\"" May 8 00:14:48.842660 systemd[1]: Started cri-containerd-bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1.scope - libcontainer container bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1. May 8 00:14:48.874878 containerd[1508]: time="2025-05-08T00:14:48.874807567Z" level=info msg="StartContainer for \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\" returns successfully" May 8 00:14:48.890710 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:14:48.891167 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:14:48.891716 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 00:14:48.898397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 8 00:14:48.901097 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:14:48.902651 systemd[1]: cri-containerd-bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1.scope: Deactivated successfully. May 8 00:14:48.923557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1-rootfs.mount: Deactivated successfully. May 8 00:14:48.951276 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:14:48.965158 containerd[1508]: time="2025-05-08T00:14:48.965094362Z" level=info msg="shim disconnected" id=bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1 namespace=k8s.io May 8 00:14:48.965158 containerd[1508]: time="2025-05-08T00:14:48.965152443Z" level=warning msg="cleaning up after shim disconnected" id=bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1 namespace=k8s.io May 8 00:14:48.965158 containerd[1508]: time="2025-05-08T00:14:48.965163865Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:49.778707 containerd[1508]: time="2025-05-08T00:14:49.778643468Z" level=info msg="CreateContainer within sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:14:49.802856 containerd[1508]: time="2025-05-08T00:14:49.802783265Z" level=info msg="CreateContainer within sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\"" May 8 00:14:49.803421 containerd[1508]: time="2025-05-08T00:14:49.803389018Z" level=info msg="StartContainer for \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\"" May 8 00:14:49.837834 systemd[1]: Started cri-containerd-f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81.scope - libcontainer container f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81. May 8 00:14:49.881021 containerd[1508]: time="2025-05-08T00:14:49.880869927Z" level=info msg="StartContainer for \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\" returns successfully" May 8 00:14:49.882328 systemd[1]: cri-containerd-f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81.scope: Deactivated successfully. May 8 00:14:49.910296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81-rootfs.mount: Deactivated successfully. 
May 8 00:14:49.915521 containerd[1508]: time="2025-05-08T00:14:49.915420939Z" level=info msg="shim disconnected" id=f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81 namespace=k8s.io May 8 00:14:49.915521 containerd[1508]: time="2025-05-08T00:14:49.915501473Z" level=warning msg="cleaning up after shim disconnected" id=f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81 namespace=k8s.io May 8 00:14:49.915521 containerd[1508]: time="2025-05-08T00:14:49.915514297Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:50.782146 containerd[1508]: time="2025-05-08T00:14:50.782094842Z" level=info msg="CreateContainer within sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:14:50.816362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437210185.mount: Deactivated successfully. May 8 00:14:51.384851 containerd[1508]: time="2025-05-08T00:14:51.384738912Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:51.470461 containerd[1508]: time="2025-05-08T00:14:51.470360964Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 8 00:14:51.659373 containerd[1508]: time="2025-05-08T00:14:51.659172325Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:14:51.659768 containerd[1508]: time="2025-05-08T00:14:51.659726440Z" level=info msg="CreateContainer within sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\"" May 8 00:14:51.660422 containerd[1508]: time="2025-05-08T00:14:51.660372068Z" level=info msg="StartContainer for \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\"" May 8 00:14:51.661146 containerd[1508]: time="2025-05-08T00:14:51.661103269Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.87749009s" May 8 00:14:51.661202 containerd[1508]: time="2025-05-08T00:14:51.661143816Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 8 00:14:51.663961 containerd[1508]: time="2025-05-08T00:14:51.663924156Z" level=info msg="CreateContainer within sandbox \"0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:14:51.697737 systemd[1]: Started cri-containerd-2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16.scope - libcontainer container 2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16. 
May 8 00:14:51.727747 systemd[1]: cri-containerd-2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16.scope: Deactivated successfully. May 8 00:14:51.747886 containerd[1508]: time="2025-05-08T00:14:51.747820856Z" level=info msg="StartContainer for \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\" returns successfully" May 8 00:14:51.753926 containerd[1508]: time="2025-05-08T00:14:51.753854045Z" level=info msg="CreateContainer within sandbox \"0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\"" May 8 00:14:51.754659 containerd[1508]: time="2025-05-08T00:14:51.754623730Z" level=info msg="StartContainer for \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\"" May 8 00:14:51.788997 systemd[1]: Started cri-containerd-cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b.scope - libcontainer container cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b. May 8 00:14:51.818424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16-rootfs.mount: Deactivated successfully. May 8 00:14:52.008679 containerd[1508]: time="2025-05-08T00:14:52.008567773Z" level=info msg="shim disconnected" id=2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16 namespace=k8s.io May 8 00:14:52.008679 containerd[1508]: time="2025-05-08T00:14:52.008672913Z" level=warning msg="cleaning up after shim disconnected" id=2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16 namespace=k8s.io May 8 00:14:52.008679 containerd[1508]: time="2025-05-08T00:14:52.008686919Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:52.016105 containerd[1508]: time="2025-05-08T00:14:52.016033453Z" level=info msg="StartContainer for \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\" returns successfully" May 8 00:14:52.818518 containerd[1508]: time="2025-05-08T00:14:52.818424673Z" level=info msg="CreateContainer within sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:14:52.863375 containerd[1508]: time="2025-05-08T00:14:52.863308621Z" level=info msg="CreateContainer within sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\"" May 8 00:14:52.865908 containerd[1508]: time="2025-05-08T00:14:52.864080098Z" level=info msg="StartContainer for \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\"" May 8 00:14:52.961847 systemd[1]: Started cri-containerd-74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d.scope - libcontainer container 74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d. May 8 00:14:53.002260 containerd[1508]: time="2025-05-08T00:14:53.002183637Z" level=info msg="StartContainer for \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\" returns successfully" May 8 00:14:53.245181 kubelet[2625]: I0508 00:14:53.245100 2625 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 00:14:53.277828 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:51390.service - OpenSSH per-connection server daemon (10.0.0.1:51390). 
May 8 00:14:53.358826 sshd[3402]: Accepted publickey for core from 10.0.0.1 port 51390 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:14:53.360536 sshd-session[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:53.369324 systemd-logind[1492]: New session 8 of user core. May 8 00:14:53.375004 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:14:53.378778 kubelet[2625]: I0508 00:14:53.378045 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gd7nj" podStartSLOduration=6.788396856 podStartE2EDuration="25.378021291s" podCreationTimestamp="2025-05-08 00:14:28 +0000 UTC" firstStartedPulling="2025-05-08 00:14:33.07270095 +0000 UTC m=+9.483441886" lastFinishedPulling="2025-05-08 00:14:51.662325385 +0000 UTC m=+28.073066321" observedRunningTime="2025-05-08 00:14:52.856819619 +0000 UTC m=+29.267560565" watchObservedRunningTime="2025-05-08 00:14:53.378021291 +0000 UTC m=+29.788762227" May 8 00:14:53.404290 systemd[1]: Created slice kubepods-burstable-pod0624fbd4_77d1_42b3_8ddb_0374fa827ecc.slice - libcontainer container kubepods-burstable-pod0624fbd4_77d1_42b3_8ddb_0374fa827ecc.slice. May 8 00:14:53.408936 systemd[1]: Created slice kubepods-burstable-pod73d49205_76bb_40fa_957e_c0b5294810cf.slice - libcontainer container kubepods-burstable-pod73d49205_76bb_40fa_957e_c0b5294810cf.slice. May 8 00:14:53.459107 kubelet[2625]: I0508 00:14:53.459051 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73d49205-76bb-40fa-957e-c0b5294810cf-config-volume\") pod \"coredns-6f6b679f8f-pr5q5\" (UID: \"73d49205-76bb-40fa-957e-c0b5294810cf\") " pod="kube-system/coredns-6f6b679f8f-pr5q5" May 8 00:14:53.459473 kubelet[2625]: I0508 00:14:53.459346 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77xr\" (UniqueName: \"kubernetes.io/projected/0624fbd4-77d1-42b3-8ddb-0374fa827ecc-kube-api-access-d77xr\") pod \"coredns-6f6b679f8f-8vwdk\" (UID: \"0624fbd4-77d1-42b3-8ddb-0374fa827ecc\") " pod="kube-system/coredns-6f6b679f8f-8vwdk" May 8 00:14:53.459473 kubelet[2625]: I0508 00:14:53.459385 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0624fbd4-77d1-42b3-8ddb-0374fa827ecc-config-volume\") pod \"coredns-6f6b679f8f-8vwdk\" (UID: \"0624fbd4-77d1-42b3-8ddb-0374fa827ecc\") " pod="kube-system/coredns-6f6b679f8f-8vwdk" May 8 00:14:53.459473 kubelet[2625]: I0508 00:14:53.459428 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgq5c\" (UniqueName: \"kubernetes.io/projected/73d49205-76bb-40fa-957e-c0b5294810cf-kube-api-access-lgq5c\") pod \"coredns-6f6b679f8f-pr5q5\" (UID: \"73d49205-76bb-40fa-957e-c0b5294810cf\") " pod="kube-system/coredns-6f6b679f8f-pr5q5" May 8 00:14:53.620567 sshd[3405]: Connection closed by 10.0.0.1 port 51390 May 8 00:14:53.621280 sshd-session[3402]: pam_unix(sshd:session): session closed for user core May 8 00:14:53.626971 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:51390.service: Deactivated successfully. May 8 00:14:53.629546 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:14:53.630631 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit. 
May 8 00:14:53.631721 systemd-logind[1492]: Removed session 8. May 8 00:14:53.725725 containerd[1508]: time="2025-05-08T00:14:53.725534493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8vwdk,Uid:0624fbd4-77d1-42b3-8ddb-0374fa827ecc,Namespace:kube-system,Attempt:0,}" May 8 00:14:53.794089 containerd[1508]: time="2025-05-08T00:14:53.793999134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pr5q5,Uid:73d49205-76bb-40fa-957e-c0b5294810cf,Namespace:kube-system,Attempt:0,}" May 8 00:14:53.888915 kubelet[2625]: I0508 00:14:53.888534 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mk5fg" podStartSLOduration=12.158236519999999 podStartE2EDuration="25.888514273s" podCreationTimestamp="2025-05-08 00:14:28 +0000 UTC" firstStartedPulling="2025-05-08 00:14:33.053170762 +0000 UTC m=+9.463911698" lastFinishedPulling="2025-05-08 00:14:46.783448515 +0000 UTC m=+23.194189451" observedRunningTime="2025-05-08 00:14:53.888236686 +0000 UTC m=+30.298977632" watchObservedRunningTime="2025-05-08 00:14:53.888514273 +0000 UTC m=+30.299255209" May 8 00:14:55.403823 systemd-networkd[1425]: cilium_host: Link UP May 8 00:14:55.404615 systemd-networkd[1425]: cilium_net: Link UP May 8 00:14:55.404895 systemd-networkd[1425]: cilium_net: Gained carrier May 8 00:14:55.405116 systemd-networkd[1425]: cilium_host: Gained carrier May 8 00:14:55.520848 systemd-networkd[1425]: cilium_vxlan: Link UP May 8 00:14:55.520861 systemd-networkd[1425]: cilium_vxlan: Gained carrier May 8 00:14:55.751629 kernel: NET: Registered PF_ALG protocol family May 8 00:14:55.893823 systemd-networkd[1425]: cilium_host: Gained IPv6LL May 8 00:14:56.085872 systemd-networkd[1425]: cilium_net: Gained IPv6LL May 8 00:14:56.539721 systemd-networkd[1425]: lxc_health: Link UP May 8 00:14:56.540055 systemd-networkd[1425]: lxc_health: Gained carrier May 8 00:14:57.079806 kernel: eth0: renamed from tmp7b085 May 8 00:14:57.078351 systemd-networkd[1425]: lxc8e4267693aa0: Link UP May 8 00:14:57.085151 systemd-networkd[1425]: lxc8e4267693aa0: Gained carrier May 8 00:14:57.110952 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL May 8 00:14:57.117276 systemd-networkd[1425]: lxca836381aeae5: Link UP May 8 00:14:57.125691 kernel: eth0: renamed from tmpab535 May 8 00:14:57.129639 systemd-networkd[1425]: lxca836381aeae5: Gained carrier May 8 00:14:58.325770 systemd-networkd[1425]: lxc8e4267693aa0: Gained IPv6LL May 8 00:14:58.389800 systemd-networkd[1425]: lxc_health: Gained IPv6LL May 8 00:14:58.642903 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:33756.service - OpenSSH per-connection server daemon (10.0.0.1:33756). May 8 00:14:58.645894 systemd-networkd[1425]: lxca836381aeae5: Gained IPv6LL May 8 00:14:58.690957 sshd[3861]: Accepted publickey for core from 10.0.0.1 port 33756 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:14:58.692578 sshd-session[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:14:58.697108 systemd-logind[1492]: New session 9 of user core. May 8 00:14:58.712880 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:14:58.860817 sshd[3863]: Connection closed by 10.0.0.1 port 33756 May 8 00:14:58.861242 sshd-session[3861]: pam_unix(sshd:session): session closed for user core May 8 00:14:58.865740 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:33756.service: Deactivated successfully. May 8 00:14:58.868084 systemd[1]: session-9.scope: Deactivated successfully. 
May 8 00:14:58.869086 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit. May 8 00:14:58.870363 systemd-logind[1492]: Removed session 9. May 8 00:15:00.926886 containerd[1508]: time="2025-05-08T00:15:00.926729343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:15:00.926886 containerd[1508]: time="2025-05-08T00:15:00.926803764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:15:00.926886 containerd[1508]: time="2025-05-08T00:15:00.926818021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:15:00.927482 containerd[1508]: time="2025-05-08T00:15:00.926915416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:15:00.955996 systemd[1]: Started cri-containerd-ab535d9fdc989b53fcfc75ea4a3190ea86ec72e53c2bd2e8aed71700feae913f.scope - libcontainer container ab535d9fdc989b53fcfc75ea4a3190ea86ec72e53c2bd2e8aed71700feae913f. May 8 00:15:00.969823 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:15:00.997588 containerd[1508]: time="2025-05-08T00:15:00.997506958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8vwdk,Uid:0624fbd4-77d1-42b3-8ddb-0374fa827ecc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab535d9fdc989b53fcfc75ea4a3190ea86ec72e53c2bd2e8aed71700feae913f\"" May 8 00:15:00.999509 containerd[1508]: time="2025-05-08T00:15:00.999397865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:15:00.999846 containerd[1508]: time="2025-05-08T00:15:00.999704167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:15:00.999846 containerd[1508]: time="2025-05-08T00:15:00.999750715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:15:01.000494 containerd[1508]: time="2025-05-08T00:15:01.000430113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:15:01.003649 containerd[1508]: time="2025-05-08T00:15:01.003217139Z" level=info msg="CreateContainer within sandbox \"ab535d9fdc989b53fcfc75ea4a3190ea86ec72e53c2bd2e8aed71700feae913f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:15:01.033935 systemd[1]: Started cri-containerd-7b085207179232c8ee050fc85f57740a881d60d61cd41a4ea5abfaa46f8f07b5.scope - libcontainer container 7b085207179232c8ee050fc85f57740a881d60d61cd41a4ea5abfaa46f8f07b5. 
May 8 00:15:01.049110 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:15:01.077230 containerd[1508]: time="2025-05-08T00:15:01.077176159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pr5q5,Uid:73d49205-76bb-40fa-957e-c0b5294810cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b085207179232c8ee050fc85f57740a881d60d61cd41a4ea5abfaa46f8f07b5\"" May 8 00:15:01.079364 containerd[1508]: time="2025-05-08T00:15:01.079322870Z" level=info msg="CreateContainer within sandbox \"7b085207179232c8ee050fc85f57740a881d60d61cd41a4ea5abfaa46f8f07b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:15:01.402695 kubelet[2625]: I0508 00:15:01.402643 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:15:01.784540 containerd[1508]: time="2025-05-08T00:15:01.784471416Z" level=info msg="CreateContainer within sandbox \"7b085207179232c8ee050fc85f57740a881d60d61cd41a4ea5abfaa46f8f07b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e67a33d1bebc73b8ecf0ab0989d1cd55d8032c295095937bd9dc19f25afa881a\"" May 8 00:15:01.785129 containerd[1508]: time="2025-05-08T00:15:01.785072546Z" level=info msg="StartContainer for \"e67a33d1bebc73b8ecf0ab0989d1cd55d8032c295095937bd9dc19f25afa881a\"" May 8 00:15:01.789020 containerd[1508]: time="2025-05-08T00:15:01.788971370Z" level=info msg="CreateContainer within sandbox \"ab535d9fdc989b53fcfc75ea4a3190ea86ec72e53c2bd2e8aed71700feae913f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"859185c6e54e95e60c0aa3ea6e2534a290b7277b0b818aad12b46538090b74ea\"" May 8 00:15:01.789791 containerd[1508]: time="2025-05-08T00:15:01.789755407Z" level=info msg="StartContainer for \"859185c6e54e95e60c0aa3ea6e2534a290b7277b0b818aad12b46538090b74ea\"" May 8 00:15:01.816977 systemd[1]: Started cri-containerd-e67a33d1bebc73b8ecf0ab0989d1cd55d8032c295095937bd9dc19f25afa881a.scope - libcontainer container e67a33d1bebc73b8ecf0ab0989d1cd55d8032c295095937bd9dc19f25afa881a. May 8 00:15:01.825854 systemd[1]: Started cri-containerd-859185c6e54e95e60c0aa3ea6e2534a290b7277b0b818aad12b46538090b74ea.scope - libcontainer container 859185c6e54e95e60c0aa3ea6e2534a290b7277b0b818aad12b46538090b74ea. 
May 8 00:15:01.862034 containerd[1508]: time="2025-05-08T00:15:01.861953528Z" level=info msg="StartContainer for \"e67a33d1bebc73b8ecf0ab0989d1cd55d8032c295095937bd9dc19f25afa881a\" returns successfully" May 8 00:15:01.865987 containerd[1508]: time="2025-05-08T00:15:01.865949046Z" level=info msg="StartContainer for \"859185c6e54e95e60c0aa3ea6e2534a290b7277b0b818aad12b46538090b74ea\" returns successfully" May 8 00:15:03.233513 kubelet[2625]: I0508 00:15:03.233035 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8vwdk" podStartSLOduration=35.233009391 podStartE2EDuration="35.233009391s" podCreationTimestamp="2025-05-08 00:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:15:03.086025904 +0000 UTC m=+39.496766840" watchObservedRunningTime="2025-05-08 00:15:03.233009391 +0000 UTC m=+39.643750327" May 8 00:15:03.499771 kubelet[2625]: I0508 00:15:03.498864 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pr5q5" podStartSLOduration=35.498840977 podStartE2EDuration="35.498840977s" podCreationTimestamp="2025-05-08 00:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:15:03.498630509 +0000 UTC m=+39.909371465" watchObservedRunningTime="2025-05-08 00:15:03.498840977 +0000 UTC m=+39.909581913" May 8 00:15:03.880012 systemd[1]: Started sshd@9-10.0.0.113:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). May 8 00:15:03.931637 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:03.933366 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:03.938494 systemd-logind[1492]: New session 10 of user core. May 8 00:15:03.952894 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:15:04.086328 sshd[4060]: Connection closed by 10.0.0.1 port 33768 May 8 00:15:04.086722 sshd-session[4058]: pam_unix(sshd:session): session closed for user core May 8 00:15:04.090717 systemd[1]: sshd@9-10.0.0.113:22-10.0.0.1:33768.service: Deactivated successfully. May 8 00:15:04.092917 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:15:04.093654 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit. May 8 00:15:04.094530 systemd-logind[1492]: Removed session 10. May 8 00:15:09.100887 systemd[1]: Started sshd@10-10.0.0.113:22-10.0.0.1:51450.service - OpenSSH per-connection server daemon (10.0.0.1:51450). May 8 00:15:09.152111 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 51450 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:09.154165 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:09.159225 systemd-logind[1492]: New session 11 of user core. May 8 00:15:09.166795 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:15:09.297584 sshd[4080]: Connection closed by 10.0.0.1 port 51450 May 8 00:15:09.298083 sshd-session[4078]: pam_unix(sshd:session): session closed for user core May 8 00:15:09.302170 systemd[1]: sshd@10-10.0.0.113:22-10.0.0.1:51450.service: Deactivated successfully. May 8 00:15:09.304486 systemd[1]: session-11.scope: Deactivated successfully. 
May 8 00:15:09.305253 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit. May 8 00:15:09.306454 systemd-logind[1492]: Removed session 11. May 8 00:15:14.312075 systemd[1]: Started sshd@11-10.0.0.113:22-10.0.0.1:51452.service - OpenSSH per-connection server daemon (10.0.0.1:51452). May 8 00:15:14.361167 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 51452 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:14.362944 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:14.367448 systemd-logind[1492]: New session 12 of user core. May 8 00:15:14.381753 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:15:14.498194 sshd[4096]: Connection closed by 10.0.0.1 port 51452 May 8 00:15:14.498750 sshd-session[4094]: pam_unix(sshd:session): session closed for user core May 8 00:15:14.514432 systemd[1]: sshd@11-10.0.0.113:22-10.0.0.1:51452.service: Deactivated successfully. May 8 00:15:14.516384 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:15:14.518209 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit. May 8 00:15:14.525055 systemd[1]: Started sshd@12-10.0.0.113:22-10.0.0.1:51460.service - OpenSSH per-connection server daemon (10.0.0.1:51460). May 8 00:15:14.526338 systemd-logind[1492]: Removed session 12. May 8 00:15:14.566214 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 51460 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:14.567800 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:14.572495 systemd-logind[1492]: New session 13 of user core. May 8 00:15:14.581759 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:15:14.868920 sshd[4112]: Connection closed by 10.0.0.1 port 51460 May 8 00:15:14.869166 sshd-session[4109]: pam_unix(sshd:session): session closed for user core May 8 00:15:14.882637 systemd[1]: sshd@12-10.0.0.113:22-10.0.0.1:51460.service: Deactivated successfully. May 8 00:15:14.884617 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:15:14.886104 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit. May 8 00:15:14.891902 systemd[1]: Started sshd@13-10.0.0.113:22-10.0.0.1:51464.service - OpenSSH per-connection server daemon (10.0.0.1:51464). May 8 00:15:14.892895 systemd-logind[1492]: Removed session 13. May 8 00:15:14.934786 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 51464 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:14.936589 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:14.941377 systemd-logind[1492]: New session 14 of user core. May 8 00:15:14.956794 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:15:15.085779 sshd[4126]: Connection closed by 10.0.0.1 port 51464 May 8 00:15:15.086189 sshd-session[4123]: pam_unix(sshd:session): session closed for user core May 8 00:15:15.090424 systemd[1]: sshd@13-10.0.0.113:22-10.0.0.1:51464.service: Deactivated successfully. May 8 00:15:15.092820 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:15:15.093653 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit. May 8 00:15:15.094581 systemd-logind[1492]: Removed session 14. 
May 8 00:15:20.102424 systemd[1]: Started sshd@14-10.0.0.113:22-10.0.0.1:37144.service - OpenSSH per-connection server daemon (10.0.0.1:37144). May 8 00:15:20.146523 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 37144 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:20.148117 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:20.152209 systemd-logind[1492]: New session 15 of user core. May 8 00:15:20.160770 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:15:20.482794 sshd[4142]: Connection closed by 10.0.0.1 port 37144 May 8 00:15:20.483205 sshd-session[4140]: pam_unix(sshd:session): session closed for user core May 8 00:15:20.486906 systemd[1]: sshd@14-10.0.0.113:22-10.0.0.1:37144.service: Deactivated successfully. May 8 00:15:20.489199 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:15:20.490009 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit. May 8 00:15:20.490999 systemd-logind[1492]: Removed session 15. May 8 00:15:25.500413 systemd[1]: Started sshd@15-10.0.0.113:22-10.0.0.1:52524.service - OpenSSH per-connection server daemon (10.0.0.1:52524). May 8 00:15:25.544856 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 52524 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:25.546579 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:25.551133 systemd-logind[1492]: New session 16 of user core. May 8 00:15:25.560779 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:15:25.671258 sshd[4159]: Connection closed by 10.0.0.1 port 52524 May 8 00:15:25.671649 sshd-session[4157]: pam_unix(sshd:session): session closed for user core May 8 00:15:25.676781 systemd[1]: sshd@15-10.0.0.113:22-10.0.0.1:52524.service: Deactivated successfully. May 8 00:15:25.679197 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:15:25.679913 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit. May 8 00:15:25.681070 systemd-logind[1492]: Removed session 16. May 8 00:15:30.698948 systemd[1]: Started sshd@16-10.0.0.113:22-10.0.0.1:52538.service - OpenSSH per-connection server daemon (10.0.0.1:52538). May 8 00:15:30.738329 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 52538 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:30.740098 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:30.744634 systemd-logind[1492]: New session 17 of user core. May 8 00:15:30.753742 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:15:30.865988 sshd[4175]: Connection closed by 10.0.0.1 port 52538 May 8 00:15:30.866393 sshd-session[4173]: pam_unix(sshd:session): session closed for user core May 8 00:15:30.882512 systemd[1]: sshd@16-10.0.0.113:22-10.0.0.1:52538.service: Deactivated successfully. May 8 00:15:30.884793 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:15:30.886492 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit. May 8 00:15:30.898879 systemd[1]: Started sshd@17-10.0.0.113:22-10.0.0.1:52548.service - OpenSSH per-connection server daemon (10.0.0.1:52548). May 8 00:15:30.899947 systemd-logind[1492]: Removed session 17. 
May 8 00:15:30.940590 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 52548 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:30.942193 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:30.946987 systemd-logind[1492]: New session 18 of user core. May 8 00:15:30.958774 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:15:31.591013 sshd[4190]: Connection closed by 10.0.0.1 port 52548 May 8 00:15:31.591527 sshd-session[4187]: pam_unix(sshd:session): session closed for user core May 8 00:15:31.605839 systemd[1]: sshd@17-10.0.0.113:22-10.0.0.1:52548.service: Deactivated successfully. May 8 00:15:31.608141 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:15:31.609962 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit. May 8 00:15:31.614918 systemd[1]: Started sshd@18-10.0.0.113:22-10.0.0.1:52558.service - OpenSSH per-connection server daemon (10.0.0.1:52558). May 8 00:15:31.615911 systemd-logind[1492]: Removed session 18. May 8 00:15:31.664040 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 52558 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:31.665690 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:31.670440 systemd-logind[1492]: New session 19 of user core. May 8 00:15:31.677746 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:15:34.097587 sshd[4203]: Connection closed by 10.0.0.1 port 52558 May 8 00:15:34.098091 sshd-session[4200]: pam_unix(sshd:session): session closed for user core May 8 00:15:34.110235 systemd[1]: sshd@18-10.0.0.113:22-10.0.0.1:52558.service: Deactivated successfully. May 8 00:15:34.113360 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:15:34.115801 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit. May 8 00:15:34.125058 systemd[1]: Started sshd@19-10.0.0.113:22-10.0.0.1:52574.service - OpenSSH per-connection server daemon (10.0.0.1:52574). May 8 00:15:34.126296 systemd-logind[1492]: Removed session 19. May 8 00:15:34.168206 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 52574 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:34.169906 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:34.174612 systemd-logind[1492]: New session 20 of user core. May 8 00:15:34.185770 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:15:34.434753 sshd[4233]: Connection closed by 10.0.0.1 port 52574 May 8 00:15:34.435226 sshd-session[4230]: pam_unix(sshd:session): session closed for user core May 8 00:15:34.448524 systemd[1]: sshd@19-10.0.0.113:22-10.0.0.1:52574.service: Deactivated successfully. May 8 00:15:34.452416 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:15:34.454950 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit. May 8 00:15:34.464165 systemd[1]: Started sshd@20-10.0.0.113:22-10.0.0.1:52582.service - OpenSSH per-connection server daemon (10.0.0.1:52582). May 8 00:15:34.465583 systemd-logind[1492]: Removed session 20. 
May 8 00:15:34.510104 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 52582 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:34.511943 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:34.516727 systemd-logind[1492]: New session 21 of user core. May 8 00:15:34.521748 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:15:34.646777 sshd[4246]: Connection closed by 10.0.0.1 port 52582 May 8 00:15:34.647260 sshd-session[4243]: pam_unix(sshd:session): session closed for user core May 8 00:15:34.652468 systemd[1]: sshd@20-10.0.0.113:22-10.0.0.1:52582.service: Deactivated successfully. May 8 00:15:34.655323 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:15:34.656338 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit. May 8 00:15:34.657543 systemd-logind[1492]: Removed session 21. May 8 00:15:39.666035 systemd[1]: Started sshd@21-10.0.0.113:22-10.0.0.1:43652.service - OpenSSH per-connection server daemon (10.0.0.1:43652). May 8 00:15:39.710331 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 43652 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:39.712225 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:39.716940 systemd-logind[1492]: New session 22 of user core. May 8 00:15:39.724742 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:15:39.833681 sshd[4261]: Connection closed by 10.0.0.1 port 43652 May 8 00:15:39.834079 sshd-session[4259]: pam_unix(sshd:session): session closed for user core May 8 00:15:39.838281 systemd[1]: sshd@21-10.0.0.113:22-10.0.0.1:43652.service: Deactivated successfully. May 8 00:15:39.840424 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:15:39.841160 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit. May 8 00:15:39.842201 systemd-logind[1492]: Removed session 22. May 8 00:15:44.847152 systemd[1]: Started sshd@22-10.0.0.113:22-10.0.0.1:43666.service - OpenSSH per-connection server daemon (10.0.0.1:43666). May 8 00:15:44.892986 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 43666 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:44.894736 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:44.899346 systemd-logind[1492]: New session 23 of user core. May 8 00:15:44.908785 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:15:45.023082 sshd[4279]: Connection closed by 10.0.0.1 port 43666 May 8 00:15:45.023518 sshd-session[4277]: pam_unix(sshd:session): session closed for user core May 8 00:15:45.028647 systemd[1]: sshd@22-10.0.0.113:22-10.0.0.1:43666.service: Deactivated successfully. May 8 00:15:45.031812 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:15:45.032760 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit. May 8 00:15:45.034022 systemd-logind[1492]: Removed session 23. May 8 00:15:50.036400 systemd[1]: Started sshd@23-10.0.0.113:22-10.0.0.1:57010.service - OpenSSH per-connection server daemon (10.0.0.1:57010). 
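The sshd entries above follow a fixed pattern: each inbound connection gets its own socket-activated unit named sshd@<n>-<local address>:22-<peer address>:<port>.service, and systemd-logind wraps the authenticated login in a session-<n>.scope that is torn down as soon as the connection closes. The sketch below is not taken from this journal; it is a minimal illustration, assuming the github.com/coreos/go-systemd/v22 D-Bus bindings are available on the host, of listing whichever per-connection sshd units happen to be active at a given moment.

```go
// Minimal sketch: list active per-connection sshd units via the systemd
// D-Bus API. Assumes github.com/coreos/go-systemd/v22 and that the program
// runs on the host whose journal is shown above.
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()

	conn, err := dbus.NewWithContext(ctx) // connects to systemd on the system bus
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	units, err := conn.ListUnitsContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range units {
		// Per-connection daemons show up as sshd@<n>-<local>-<peer>.service.
		if strings.HasPrefix(u.Name, "sshd@") && strings.HasSuffix(u.Name, ".service") {
			fmt.Printf("%s  %s/%s\n", u.Name, u.ActiveState, u.SubState)
		}
	}
}
```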
May 8 00:15:50.089171 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 57010 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:50.091004 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:50.096025 systemd-logind[1492]: New session 24 of user core. May 8 00:15:50.105743 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:15:50.254589 sshd[4294]: Connection closed by 10.0.0.1 port 57010 May 8 00:15:50.255060 sshd-session[4292]: pam_unix(sshd:session): session closed for user core May 8 00:15:50.260257 systemd[1]: sshd@23-10.0.0.113:22-10.0.0.1:57010.service: Deactivated successfully. May 8 00:15:50.262812 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:15:50.263667 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit. May 8 00:15:50.264585 systemd-logind[1492]: Removed session 24. May 8 00:15:55.268147 systemd[1]: Started sshd@24-10.0.0.113:22-10.0.0.1:58734.service - OpenSSH per-connection server daemon (10.0.0.1:58734). May 8 00:15:55.312368 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 58734 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:55.349681 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:55.354623 systemd-logind[1492]: New session 25 of user core. May 8 00:15:55.368908 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 00:15:55.490852 sshd[4309]: Connection closed by 10.0.0.1 port 58734 May 8 00:15:55.491277 sshd-session[4307]: pam_unix(sshd:session): session closed for user core May 8 00:15:55.510105 systemd[1]: sshd@24-10.0.0.113:22-10.0.0.1:58734.service: Deactivated successfully. May 8 00:15:55.512593 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:15:55.514437 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit. May 8 00:15:55.523218 systemd[1]: Started sshd@25-10.0.0.113:22-10.0.0.1:58750.service - OpenSSH per-connection server daemon (10.0.0.1:58750). May 8 00:15:55.524756 systemd-logind[1492]: Removed session 25. May 8 00:15:55.569876 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 58750 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:55.571887 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:55.577490 systemd-logind[1492]: New session 26 of user core. May 8 00:15:55.587803 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:15:57.091540 containerd[1508]: time="2025-05-08T00:15:57.091196210Z" level=info msg="StopContainer for \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\" with timeout 30 (s)" May 8 00:15:57.093260 containerd[1508]: time="2025-05-08T00:15:57.092758721Z" level=info msg="Stop container \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\" with signal terminated" May 8 00:15:57.111973 systemd[1]: cri-containerd-cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b.scope: Deactivated successfully. 
May 8 00:15:57.128231 containerd[1508]: time="2025-05-08T00:15:57.128163355Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:15:57.138107 containerd[1508]: time="2025-05-08T00:15:57.138056900Z" level=info msg="StopContainer for \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\" with timeout 2 (s)" May 8 00:15:57.138440 containerd[1508]: time="2025-05-08T00:15:57.138404213Z" level=info msg="Stop container \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\" with signal terminated" May 8 00:15:57.143151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b-rootfs.mount: Deactivated successfully. May 8 00:15:57.149554 systemd-networkd[1425]: lxc_health: Link DOWN May 8 00:15:57.149572 systemd-networkd[1425]: lxc_health: Lost carrier May 8 00:15:57.153047 containerd[1508]: time="2025-05-08T00:15:57.152787671Z" level=info msg="shim disconnected" id=cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b namespace=k8s.io May 8 00:15:57.153047 containerd[1508]: time="2025-05-08T00:15:57.153010291Z" level=warning msg="cleaning up after shim disconnected" id=cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b namespace=k8s.io May 8 00:15:57.153047 containerd[1508]: time="2025-05-08T00:15:57.153023686Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:15:57.172367 systemd[1]: cri-containerd-74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d.scope: Deactivated successfully. May 8 00:15:57.173209 systemd[1]: cri-containerd-74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d.scope: Consumed 7.518s CPU time, 125.1M memory peak, 396K read from disk, 13.3M written to disk. May 8 00:15:57.179488 containerd[1508]: time="2025-05-08T00:15:57.179427504Z" level=info msg="StopContainer for \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\" returns successfully" May 8 00:15:57.184736 containerd[1508]: time="2025-05-08T00:15:57.184658261Z" level=info msg="StopPodSandbox for \"0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e\"" May 8 00:15:57.187253 containerd[1508]: time="2025-05-08T00:15:57.187048772Z" level=info msg="Container to stop \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:15:57.190069 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e-shm.mount: Deactivated successfully. May 8 00:15:57.196235 systemd[1]: cri-containerd-0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e.scope: Deactivated successfully. May 8 00:15:57.199754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d-rootfs.mount: Deactivated successfully. 
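The StopContainer entries above show the graceful-stop contract between the kubelet and containerd: the runtime delivers SIGTERM ("with signal terminated") and waits out a grace period (30 s for the cilium-operator container, 2 s for the cilium-agent container) before it would resort to SIGKILL. The following is only a sketch of that SIGTERM-then-SIGKILL pattern written against the containerd Go client, not the code path the kubelet or containerd actually run; the socket path and the k8s.io namespace are standard CRI assumptions, and the container ID is copied from the journal, so it will not resolve on any other host.

```go
// Minimal sketch of the SIGTERM-then-SIGKILL stop pattern using the
// containerd Go client. Socket path and namespace are standard CRI defaults;
// the container ID is copied from the journal above and will not exist on
// another host. This is an illustration, not kubelet/containerd source.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	container, err := client.LoadContainer(ctx, "cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b")
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	exitCh, err := task.Wait(ctx) // subscribe to the exit event before signalling
	if err != nil {
		log.Fatal(err)
	}

	// Graceful stop first, mirroring "Stop container ... with signal terminated".
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}

	select {
	case st := <-exitCh:
		log.Printf("container exited with status %d", st.ExitCode())
	case <-time.After(30 * time.Second): // the 30 s timeout seen in the log
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			log.Fatal(err)
		}
		st := <-exitCh
		log.Printf("container force-killed, status %d", st.ExitCode())
	}
}
```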
May 8 00:15:57.245930 containerd[1508]: time="2025-05-08T00:15:57.245767866Z" level=info msg="shim disconnected" id=74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d namespace=k8s.io May 8 00:15:57.245930 containerd[1508]: time="2025-05-08T00:15:57.245831656Z" level=warning msg="cleaning up after shim disconnected" id=74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d namespace=k8s.io May 8 00:15:57.245930 containerd[1508]: time="2025-05-08T00:15:57.245849600Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:15:57.245930 containerd[1508]: time="2025-05-08T00:15:57.245831255Z" level=info msg="shim disconnected" id=0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e namespace=k8s.io May 8 00:15:57.246247 containerd[1508]: time="2025-05-08T00:15:57.245941824Z" level=warning msg="cleaning up after shim disconnected" id=0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e namespace=k8s.io May 8 00:15:57.246247 containerd[1508]: time="2025-05-08T00:15:57.245955169Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:15:57.262143 containerd[1508]: time="2025-05-08T00:15:57.262021625Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:15:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:15:57.265624 containerd[1508]: time="2025-05-08T00:15:57.265398403Z" level=info msg="TearDown network for sandbox \"0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e\" successfully" May 8 00:15:57.265624 containerd[1508]: time="2025-05-08T00:15:57.265438217Z" level=info msg="StopPodSandbox for \"0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e\" returns successfully" May 8 00:15:57.275373 containerd[1508]: time="2025-05-08T00:15:57.275326142Z" level=info msg="StopContainer for \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\" returns successfully" May 8 00:15:57.276019 containerd[1508]: time="2025-05-08T00:15:57.275819481Z" level=info msg="StopPodSandbox for \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\"" May 8 00:15:57.276019 containerd[1508]: time="2025-05-08T00:15:57.275863344Z" level=info msg="Container to stop \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:15:57.276019 containerd[1508]: time="2025-05-08T00:15:57.275901345Z" level=info msg="Container to stop \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:15:57.276019 containerd[1508]: time="2025-05-08T00:15:57.275931572Z" level=info msg="Container to stop \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:15:57.276019 containerd[1508]: time="2025-05-08T00:15:57.275942182Z" level=info msg="Container to stop \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:15:57.276019 containerd[1508]: time="2025-05-08T00:15:57.275950537Z" level=info msg="Container to stop \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:15:57.283445 systemd[1]: 
cri-containerd-43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17.scope: Deactivated successfully. May 8 00:15:57.311955 containerd[1508]: time="2025-05-08T00:15:57.311883084Z" level=info msg="shim disconnected" id=43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17 namespace=k8s.io May 8 00:15:57.311955 containerd[1508]: time="2025-05-08T00:15:57.311949319Z" level=warning msg="cleaning up after shim disconnected" id=43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17 namespace=k8s.io May 8 00:15:57.311955 containerd[1508]: time="2025-05-08T00:15:57.311958857Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:15:57.328396 containerd[1508]: time="2025-05-08T00:15:57.328343843Z" level=info msg="TearDown network for sandbox \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" successfully" May 8 00:15:57.328396 containerd[1508]: time="2025-05-08T00:15:57.328385772Z" level=info msg="StopPodSandbox for \"43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17\" returns successfully" May 8 00:15:57.359808 kubelet[2625]: I0508 00:15:57.359586 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjfsf\" (UniqueName: \"kubernetes.io/projected/b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8-kube-api-access-fjfsf\") pod \"b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8\" (UID: \"b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8\") " May 8 00:15:57.359808 kubelet[2625]: I0508 00:15:57.359704 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8-cilium-config-path\") pod \"b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8\" (UID: \"b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8\") " May 8 00:15:57.364223 kubelet[2625]: I0508 00:15:57.364176 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8" (UID: "b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:15:57.364298 kubelet[2625]: I0508 00:15:57.364217 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8-kube-api-access-fjfsf" (OuterVolumeSpecName: "kube-api-access-fjfsf") pod "b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8" (UID: "b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8"). InnerVolumeSpecName "kube-api-access-fjfsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:15:57.460024 kubelet[2625]: I0508 00:15:57.459948 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-host-proc-sys-net\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460024 kubelet[2625]: I0508 00:15:57.460015 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-clustermesh-secrets\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460024 kubelet[2625]: I0508 00:15:57.460034 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-etc-cni-netd\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460278 kubelet[2625]: I0508 00:15:57.460055 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-run\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460278 kubelet[2625]: I0508 00:15:57.460076 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjgf2\" (UniqueName: \"kubernetes.io/projected/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-kube-api-access-sjgf2\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460278 kubelet[2625]: I0508 00:15:57.460097 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cni-path\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460278 kubelet[2625]: I0508 00:15:57.460116 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-hostproc\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460278 kubelet[2625]: I0508 00:15:57.460138 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-hubble-tls\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460278 kubelet[2625]: I0508 00:15:57.460157 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-bpf-maps\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460530 kubelet[2625]: I0508 00:15:57.460175 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-host-proc-sys-kernel\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 
00:15:57.460530 kubelet[2625]: I0508 00:15:57.460193 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-xtables-lock\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460530 kubelet[2625]: I0508 00:15:57.460213 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-cgroup\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460530 kubelet[2625]: I0508 00:15:57.460236 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-lib-modules\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460530 kubelet[2625]: I0508 00:15:57.460261 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-config-path\") pod \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\" (UID: \"bd1f9d6d-f453-4a66-afab-00c3b18e02b1\") " May 8 00:15:57.460530 kubelet[2625]: I0508 00:15:57.460301 2625 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fjfsf\" (UniqueName: \"kubernetes.io/projected/b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8-kube-api-access-fjfsf\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.460780 kubelet[2625]: I0508 00:15:57.460315 2625 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.460780 kubelet[2625]: I0508 00:15:57.460091 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:15:57.460780 kubelet[2625]: I0508 00:15:57.460117 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:15:57.460780 kubelet[2625]: I0508 00:15:57.460131 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:15:57.460780 kubelet[2625]: I0508 00:15:57.460148 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cni-path" (OuterVolumeSpecName: "cni-path") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:15:57.460954 kubelet[2625]: I0508 00:15:57.460676 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:15:57.460954 kubelet[2625]: I0508 00:15:57.460804 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-hostproc" (OuterVolumeSpecName: "hostproc") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:15:57.463011 kubelet[2625]: I0508 00:15:57.462881 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:15:57.463011 kubelet[2625]: I0508 00:15:57.462931 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:15:57.463011 kubelet[2625]: I0508 00:15:57.462958 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:15:57.463011 kubelet[2625]: I0508 00:15:57.462981 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:15:57.464277 kubelet[2625]: I0508 00:15:57.464183 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-kube-api-access-sjgf2" (OuterVolumeSpecName: "kube-api-access-sjgf2") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "kube-api-access-sjgf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:15:57.464514 kubelet[2625]: I0508 00:15:57.464489 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:15:57.464771 kubelet[2625]: I0508 00:15:57.464741 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:15:57.465292 kubelet[2625]: I0508 00:15:57.465261 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bd1f9d6d-f453-4a66-afab-00c3b18e02b1" (UID: "bd1f9d6d-f453-4a66-afab-00c3b18e02b1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:15:57.560801 kubelet[2625]: I0508 00:15:57.560726 2625 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.560801 kubelet[2625]: I0508 00:15:57.560777 2625 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.560801 kubelet[2625]: I0508 00:15:57.560792 2625 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.560801 kubelet[2625]: I0508 00:15:57.560809 2625 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.560801 kubelet[2625]: I0508 00:15:57.560818 2625 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.560801 kubelet[2625]: I0508 00:15:57.560826 2625 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.560801 kubelet[2625]: I0508 00:15:57.560837 2625 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.561185 kubelet[2625]: I0508 00:15:57.560846 2625 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 
00:15:57.561185 kubelet[2625]: I0508 00:15:57.560854 2625 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.561185 kubelet[2625]: I0508 00:15:57.560862 2625 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.561185 kubelet[2625]: I0508 00:15:57.560870 2625 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.561185 kubelet[2625]: I0508 00:15:57.560878 2625 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sjgf2\" (UniqueName: \"kubernetes.io/projected/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-kube-api-access-sjgf2\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.561185 kubelet[2625]: I0508 00:15:57.560886 2625 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.561185 kubelet[2625]: I0508 00:15:57.560894 2625 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd1f9d6d-f453-4a66-afab-00c3b18e02b1-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:15:57.707620 systemd[1]: Removed slice kubepods-besteffort-podb0ce46fd_6349_4c35_9fcc_1dc15a8bd8c8.slice - libcontainer container kubepods-besteffort-podb0ce46fd_6349_4c35_9fcc_1dc15a8bd8c8.slice. May 8 00:15:57.708740 systemd[1]: Removed slice kubepods-burstable-podbd1f9d6d_f453_4a66_afab_00c3b18e02b1.slice - libcontainer container kubepods-burstable-podbd1f9d6d_f453_4a66_afab_00c3b18e02b1.slice. May 8 00:15:57.708849 systemd[1]: kubepods-burstable-podbd1f9d6d_f453_4a66_afab_00c3b18e02b1.slice: Consumed 7.649s CPU time, 125.5M memory peak, 404K read from disk, 13.3M written to disk. 
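The slice-removal entries above include per-pod resource totals (CPU time consumed, memory peak, bytes read and written) that systemd accumulated from cgroup v2 accounting before deleting the kubepods-*.slice cgroups. Below is a small sketch of reading two of those counters directly from the cgroup filesystem; the /sys/fs/cgroup mount point and the kubepods slice nesting are the usual systemd-cgroup-driver layout, and the pod slice named in it is the one just removed, so on this host the read would simply fail.

```go
// Minimal sketch: read cgroup v2 accounting for a pod slice. The mount point
// and slice nesting follow the usual systemd cgroup-driver layout; the slice
// named here is the one removed in the journal above, so the read is expected
// to fail once cleanup has run.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	slice := filepath.Join(
		"/sys/fs/cgroup",
		"kubepods.slice",
		"kubepods-burstable.slice",
		"kubepods-burstable-podbd1f9d6d_f453_4a66_afab_00c3b18e02b1.slice",
	)

	cpu, err := os.ReadFile(filepath.Join(slice, "cpu.stat")) // usage_usec, user_usec, system_usec
	if err != nil {
		log.Fatalf("slice already removed or path differs: %v", err)
	}
	peak, err := os.ReadFile(filepath.Join(slice, "memory.peak")) // memory high-water mark, bytes
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("cpu.stat:\n%s", cpu)
	fmt.Printf("memory.peak: %s", peak)
}
```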
May 8 00:15:57.954082 kubelet[2625]: I0508 00:15:57.954049 2625 scope.go:117] "RemoveContainer" containerID="74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d" May 8 00:15:57.963490 containerd[1508]: time="2025-05-08T00:15:57.962935914Z" level=info msg="RemoveContainer for \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\"" May 8 00:15:57.968471 containerd[1508]: time="2025-05-08T00:15:57.968425568Z" level=info msg="RemoveContainer for \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\" returns successfully" May 8 00:15:57.968751 kubelet[2625]: I0508 00:15:57.968717 2625 scope.go:117] "RemoveContainer" containerID="2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16" May 8 00:15:57.969923 containerd[1508]: time="2025-05-08T00:15:57.969896788Z" level=info msg="RemoveContainer for \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\"" May 8 00:15:57.973353 containerd[1508]: time="2025-05-08T00:15:57.973324141Z" level=info msg="RemoveContainer for \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\" returns successfully" May 8 00:15:57.973640 kubelet[2625]: I0508 00:15:57.973530 2625 scope.go:117] "RemoveContainer" containerID="f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81" May 8 00:15:57.975441 containerd[1508]: time="2025-05-08T00:15:57.975400960Z" level=info msg="RemoveContainer for \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\"" May 8 00:15:57.979841 containerd[1508]: time="2025-05-08T00:15:57.979783460Z" level=info msg="RemoveContainer for \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\" returns successfully" May 8 00:15:57.980055 kubelet[2625]: I0508 00:15:57.980026 2625 scope.go:117] "RemoveContainer" containerID="bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1" May 8 00:15:57.981214 containerd[1508]: time="2025-05-08T00:15:57.981186262Z" level=info msg="RemoveContainer for \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\"" May 8 00:15:57.996745 containerd[1508]: time="2025-05-08T00:15:57.996685481Z" level=info msg="RemoveContainer for \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\" returns successfully" May 8 00:15:57.997045 kubelet[2625]: I0508 00:15:57.997007 2625 scope.go:117] "RemoveContainer" containerID="7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb" May 8 00:15:57.999839 containerd[1508]: time="2025-05-08T00:15:57.999794042Z" level=info msg="RemoveContainer for \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\"" May 8 00:15:58.018234 containerd[1508]: time="2025-05-08T00:15:58.018167742Z" level=info msg="RemoveContainer for \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\" returns successfully" May 8 00:15:58.018519 kubelet[2625]: I0508 00:15:58.018450 2625 scope.go:117] "RemoveContainer" containerID="74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d" May 8 00:15:58.018886 containerd[1508]: time="2025-05-08T00:15:58.018817625Z" level=error msg="ContainerStatus for \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\": not found" May 8 00:15:58.026455 kubelet[2625]: E0508 00:15:58.026411 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\": not found" containerID="74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d" May 8 00:15:58.026592 kubelet[2625]: I0508 00:15:58.026468 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d"} err="failed to get container status \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\": rpc error: code = NotFound desc = an error occurred when try to find container \"74eaacac7abd0dcbeb7682654556e07914152b4a6cadbaef8f44c6742d7e273d\": not found" May 8 00:15:58.026592 kubelet[2625]: I0508 00:15:58.026576 2625 scope.go:117] "RemoveContainer" containerID="2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16" May 8 00:15:58.026951 containerd[1508]: time="2025-05-08T00:15:58.026892987Z" level=error msg="ContainerStatus for \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\": not found" May 8 00:15:58.027118 kubelet[2625]: E0508 00:15:58.027085 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\": not found" containerID="2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16" May 8 00:15:58.027253 kubelet[2625]: I0508 00:15:58.027123 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16"} err="failed to get container status \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a28636d4e90cd29296f89249421862ae0e6c5f2b03b423d9151dcfe0ed74b16\": not found" May 8 00:15:58.027253 kubelet[2625]: I0508 00:15:58.027149 2625 scope.go:117] "RemoveContainer" containerID="f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81" May 8 00:15:58.027337 containerd[1508]: time="2025-05-08T00:15:58.027305504Z" level=error msg="ContainerStatus for \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\": not found" May 8 00:15:58.027464 kubelet[2625]: E0508 00:15:58.027440 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\": not found" containerID="f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81" May 8 00:15:58.027464 kubelet[2625]: I0508 00:15:58.027461 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81"} err="failed to get container status \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\": rpc error: code = NotFound desc = an error occurred when try to find container \"f67bfde6fa8a08157364f5059f73114c89742f00b8833c258b2607b601466e81\": not found" May 8 00:15:58.027464 kubelet[2625]: I0508 00:15:58.027476 2625 scope.go:117] 
"RemoveContainer" containerID="bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1" May 8 00:15:58.027685 containerd[1508]: time="2025-05-08T00:15:58.027652276Z" level=error msg="ContainerStatus for \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\": not found" May 8 00:15:58.027804 kubelet[2625]: E0508 00:15:58.027777 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\": not found" containerID="bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1" May 8 00:15:58.027839 kubelet[2625]: I0508 00:15:58.027800 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1"} err="failed to get container status \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"bbbf21828c5953a384974722bb070ccc38cf5804512345309fba3d2fbcb667f1\": not found" May 8 00:15:58.027839 kubelet[2625]: I0508 00:15:58.027814 2625 scope.go:117] "RemoveContainer" containerID="7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb" May 8 00:15:58.027980 containerd[1508]: time="2025-05-08T00:15:58.027949926Z" level=error msg="ContainerStatus for \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\": not found" May 8 00:15:58.028076 kubelet[2625]: E0508 00:15:58.028055 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\": not found" containerID="7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb" May 8 00:15:58.028114 kubelet[2625]: I0508 00:15:58.028076 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb"} err="failed to get container status \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e0bd71fbf8505e75e05f208afc16e2abbf17520f76868e1edbe2f35875542eb\": not found" May 8 00:15:58.028114 kubelet[2625]: I0508 00:15:58.028092 2625 scope.go:117] "RemoveContainer" containerID="cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b" May 8 00:15:58.029595 containerd[1508]: time="2025-05-08T00:15:58.029269480Z" level=info msg="RemoveContainer for \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\"" May 8 00:15:58.055578 containerd[1508]: time="2025-05-08T00:15:58.055507154Z" level=info msg="RemoveContainer for \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\" returns successfully" May 8 00:15:58.055773 kubelet[2625]: I0508 00:15:58.055747 2625 scope.go:117] "RemoveContainer" containerID="cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b" May 8 00:15:58.056027 containerd[1508]: time="2025-05-08T00:15:58.055990403Z" level=error 
msg="ContainerStatus for \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\": not found" May 8 00:15:58.056225 kubelet[2625]: E0508 00:15:58.056174 2625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\": not found" containerID="cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b" May 8 00:15:58.056317 kubelet[2625]: I0508 00:15:58.056228 2625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b"} err="failed to get container status \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\": rpc error: code = NotFound desc = an error occurred when try to find container \"cadac04bc4da16e0e40f03e629ba53d13916e532a3262506fde9d0a18c8eb52b\": not found" May 8 00:15:58.086111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17-rootfs.mount: Deactivated successfully. May 8 00:15:58.086255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c8a82a7061098039cc79f1ba93e100d375eb2c6b2ff6fe8416b08ad45a8ef6e-rootfs.mount: Deactivated successfully. May 8 00:15:58.086343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43fefe36d627daba48c2e8b5c2f6fc6eab88b492c390ac651c19ec5bfef7fe17-shm.mount: Deactivated successfully. May 8 00:15:58.086462 systemd[1]: var-lib-kubelet-pods-b0ce46fd\x2d6349\x2d4c35\x2d9fcc\x2d1dc15a8bd8c8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfjfsf.mount: Deactivated successfully. May 8 00:15:58.086568 systemd[1]: var-lib-kubelet-pods-bd1f9d6d\x2df453\x2d4a66\x2dafab\x2d00c3b18e02b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjgf2.mount: Deactivated successfully. May 8 00:15:58.086672 systemd[1]: var-lib-kubelet-pods-bd1f9d6d\x2df453\x2d4a66\x2dafab\x2d00c3b18e02b1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:15:58.086758 systemd[1]: var-lib-kubelet-pods-bd1f9d6d\x2df453\x2d4a66\x2dafab\x2d00c3b18e02b1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:15:58.766542 kubelet[2625]: E0508 00:15:58.766492 2625 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:15:59.110316 sshd[4325]: Connection closed by 10.0.0.1 port 58750 May 8 00:15:59.124752 systemd[1]: Started sshd@26-10.0.0.113:22-10.0.0.1:58756.service - OpenSSH per-connection server daemon (10.0.0.1:58756). May 8 00:15:59.200844 sshd-session[4322]: pam_unix(sshd:session): session closed for user core May 8 00:15:59.205718 systemd[1]: sshd@25-10.0.0.113:22-10.0.0.1:58750.service: Deactivated successfully. May 8 00:15:59.209082 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:15:59.210119 systemd-logind[1492]: Session 26 logged out. Waiting for processes to exit. May 8 00:15:59.211201 systemd-logind[1492]: Removed session 26. 
May 8 00:15:59.241559 sshd[4480]: Accepted publickey for core from 10.0.0.1 port 58756 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:59.243112 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:59.248201 systemd-logind[1492]: New session 27 of user core. May 8 00:15:59.255774 systemd[1]: Started session-27.scope - Session 27 of User core. May 8 00:15:59.703221 kubelet[2625]: I0508 00:15:59.703169 2625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8" path="/var/lib/kubelet/pods/b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8/volumes" May 8 00:15:59.703974 kubelet[2625]: I0508 00:15:59.703938 2625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1f9d6d-f453-4a66-afab-00c3b18e02b1" path="/var/lib/kubelet/pods/bd1f9d6d-f453-4a66-afab-00c3b18e02b1/volumes" May 8 00:15:59.917555 sshd[4485]: Connection closed by 10.0.0.1 port 58756 May 8 00:15:59.918061 sshd-session[4480]: pam_unix(sshd:session): session closed for user core May 8 00:15:59.931818 systemd[1]: sshd@26-10.0.0.113:22-10.0.0.1:58756.service: Deactivated successfully. May 8 00:15:59.934788 systemd[1]: session-27.scope: Deactivated successfully. May 8 00:15:59.936559 systemd-logind[1492]: Session 27 logged out. Waiting for processes to exit. May 8 00:15:59.944969 systemd[1]: Started sshd@27-10.0.0.113:22-10.0.0.1:58766.service - OpenSSH per-connection server daemon (10.0.0.1:58766). May 8 00:15:59.945997 systemd-logind[1492]: Removed session 27. May 8 00:15:59.988129 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 58766 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:15:59.989944 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:59.994999 systemd-logind[1492]: New session 28 of user core. May 8 00:16:00.000355 systemd[1]: Started session-28.scope - Session 28 of User core. 
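The "Cleaned up orphaned pod volumes dir" entries above refer to the per-pod directories the kubelet keeps under /var/lib/kubelet/pods/<uid>/volumes, the same paths behind the var-lib-kubelet-pods-...-volumes-... mount units deactivated earlier. Below is a standard-library-only sketch that lists whatever plugin/volume subdirectories remain for one pod UID; the UID is the one from the log and has just been cleaned up, so on this host the listing would report that nothing is left.

```go
// Minimal sketch: list what remains under a pod's kubelet volumes directory,
// i.e. the path referred to by "Cleaned up orphaned pod volumes dir". The UID
// is taken from the journal above; on this host the directory has just been
// removed, so the listing is expected to find nothing.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	base := "/var/lib/kubelet/pods/bd1f9d6d-f453-4a66-afab-00c3b18e02b1/volumes"

	plugins, err := os.ReadDir(base) // e.g. kubernetes.io~host-path, kubernetes.io~projected
	if err != nil {
		log.Fatalf("volumes dir already cleaned up: %v", err)
	}
	for _, p := range plugins {
		vols, err := os.ReadDir(filepath.Join(base, p.Name()))
		if err != nil {
			continue
		}
		for _, v := range vols {
			fmt.Printf("%s/%s\n", p.Name(), v.Name())
		}
	}
}
```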
May 8 00:16:00.055854 kubelet[2625]: E0508 00:16:00.055783 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd1f9d6d-f453-4a66-afab-00c3b18e02b1" containerName="apply-sysctl-overwrites" May 8 00:16:00.055854 kubelet[2625]: E0508 00:16:00.055824 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd1f9d6d-f453-4a66-afab-00c3b18e02b1" containerName="cilium-agent" May 8 00:16:00.055854 kubelet[2625]: E0508 00:16:00.055836 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd1f9d6d-f453-4a66-afab-00c3b18e02b1" containerName="mount-cgroup" May 8 00:16:00.055854 kubelet[2625]: E0508 00:16:00.055846 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd1f9d6d-f453-4a66-afab-00c3b18e02b1" containerName="clean-cilium-state" May 8 00:16:00.055854 kubelet[2625]: E0508 00:16:00.055857 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8" containerName="cilium-operator" May 8 00:16:00.055854 kubelet[2625]: E0508 00:16:00.055869 2625 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd1f9d6d-f453-4a66-afab-00c3b18e02b1" containerName="mount-bpf-fs" May 8 00:16:00.056874 kubelet[2625]: I0508 00:16:00.055900 2625 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ce46fd-6349-4c35-9fcc-1dc15a8bd8c8" containerName="cilium-operator" May 8 00:16:00.056874 kubelet[2625]: I0508 00:16:00.055909 2625 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1f9d6d-f453-4a66-afab-00c3b18e02b1" containerName="cilium-agent" May 8 00:16:00.062250 sshd[4500]: Connection closed by 10.0.0.1 port 58766 May 8 00:16:00.065685 sshd-session[4496]: pam_unix(sshd:session): session closed for user core May 8 00:16:00.077815 systemd[1]: sshd@27-10.0.0.113:22-10.0.0.1:58766.service: Deactivated successfully. May 8 00:16:00.081737 systemd[1]: session-28.scope: Deactivated successfully. May 8 00:16:00.085090 systemd-logind[1492]: Session 28 logged out. Waiting for processes to exit. May 8 00:16:00.095037 systemd[1]: Started sshd@28-10.0.0.113:22-10.0.0.1:58782.service - OpenSSH per-connection server daemon (10.0.0.1:58782). May 8 00:16:00.098371 systemd-logind[1492]: Removed session 28. May 8 00:16:00.099584 systemd[1]: Created slice kubepods-burstable-pod1daf3906_e45c_4dbd_a539_3564e2b3d081.slice - libcontainer container kubepods-burstable-pod1daf3906_e45c_4dbd_a539_3564e2b3d081.slice. May 8 00:16:00.139439 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 58782 ssh2: RSA SHA256:/4G/oSon2RG09UPwME85Ak0bUZFxPsMZB/1HHPGVG9k May 8 00:16:00.141703 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:00.147751 systemd-logind[1492]: New session 29 of user core. May 8 00:16:00.163951 systemd[1]: Started session-29.scope - Session 29 of User core. 
May 8 00:16:00.177272 kubelet[2625]: I0508 00:16:00.177229 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1daf3906-e45c-4dbd-a539-3564e2b3d081-bpf-maps\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177272 kubelet[2625]: I0508 00:16:00.177275 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1daf3906-e45c-4dbd-a539-3564e2b3d081-hostproc\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177439 kubelet[2625]: I0508 00:16:00.177350 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1daf3906-e45c-4dbd-a539-3564e2b3d081-lib-modules\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177439 kubelet[2625]: I0508 00:16:00.177379 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1daf3906-e45c-4dbd-a539-3564e2b3d081-clustermesh-secrets\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177439 kubelet[2625]: I0508 00:16:00.177401 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1daf3906-e45c-4dbd-a539-3564e2b3d081-cilium-config-path\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177439 kubelet[2625]: I0508 00:16:00.177420 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1daf3906-e45c-4dbd-a539-3564e2b3d081-host-proc-sys-net\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177546 kubelet[2625]: I0508 00:16:00.177484 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1daf3906-e45c-4dbd-a539-3564e2b3d081-hubble-tls\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177546 kubelet[2625]: I0508 00:16:00.177522 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1daf3906-e45c-4dbd-a539-3564e2b3d081-cni-path\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177620 kubelet[2625]: I0508 00:16:00.177548 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1daf3906-e45c-4dbd-a539-3564e2b3d081-xtables-lock\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177620 kubelet[2625]: I0508 00:16:00.177568 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/1daf3906-e45c-4dbd-a539-3564e2b3d081-cilium-ipsec-secrets\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177686 kubelet[2625]: I0508 00:16:00.177620 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1daf3906-e45c-4dbd-a539-3564e2b3d081-cilium-run\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177686 kubelet[2625]: I0508 00:16:00.177648 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1daf3906-e45c-4dbd-a539-3564e2b3d081-host-proc-sys-kernel\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177686 kubelet[2625]: I0508 00:16:00.177668 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1daf3906-e45c-4dbd-a539-3564e2b3d081-etc-cni-netd\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177762 kubelet[2625]: I0508 00:16:00.177689 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1daf3906-e45c-4dbd-a539-3564e2b3d081-cilium-cgroup\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.177762 kubelet[2625]: I0508 00:16:00.177711 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grq75\" (UniqueName: \"kubernetes.io/projected/1daf3906-e45c-4dbd-a539-3564e2b3d081-kube-api-access-grq75\") pod \"cilium-lmhpw\" (UID: \"1daf3906-e45c-4dbd-a539-3564e2b3d081\") " pod="kube-system/cilium-lmhpw" May 8 00:16:00.405450 containerd[1508]: time="2025-05-08T00:16:00.405299653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmhpw,Uid:1daf3906-e45c-4dbd-a539-3564e2b3d081,Namespace:kube-system,Attempt:0,}" May 8 00:16:00.728817 containerd[1508]: time="2025-05-08T00:16:00.728717696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:16:00.728817 containerd[1508]: time="2025-05-08T00:16:00.728778640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:16:00.728817 containerd[1508]: time="2025-05-08T00:16:00.728798718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:16:00.729066 containerd[1508]: time="2025-05-08T00:16:00.728914265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:16:00.752842 systemd[1]: Started cri-containerd-80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608.scope - libcontainer container 80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608. 
May 8 00:16:00.782901 containerd[1508]: time="2025-05-08T00:16:00.782857385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmhpw,Uid:1daf3906-e45c-4dbd-a539-3564e2b3d081,Namespace:kube-system,Attempt:0,} returns sandbox id \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\"" May 8 00:16:00.787117 containerd[1508]: time="2025-05-08T00:16:00.787067840Z" level=info msg="CreateContainer within sandbox \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:16:01.104507 containerd[1508]: time="2025-05-08T00:16:01.104282918Z" level=info msg="CreateContainer within sandbox \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7554818706d4b7ef4fa3fcad233e70ab64b48d1c00bb04b2af4c053d157df1d9\"" May 8 00:16:01.105167 containerd[1508]: time="2025-05-08T00:16:01.105106248Z" level=info msg="StartContainer for \"7554818706d4b7ef4fa3fcad233e70ab64b48d1c00bb04b2af4c053d157df1d9\"" May 8 00:16:01.136816 systemd[1]: Started cri-containerd-7554818706d4b7ef4fa3fcad233e70ab64b48d1c00bb04b2af4c053d157df1d9.scope - libcontainer container 7554818706d4b7ef4fa3fcad233e70ab64b48d1c00bb04b2af4c053d157df1d9. May 8 00:16:01.254098 systemd[1]: cri-containerd-7554818706d4b7ef4fa3fcad233e70ab64b48d1c00bb04b2af4c053d157df1d9.scope: Deactivated successfully. May 8 00:16:01.257228 containerd[1508]: time="2025-05-08T00:16:01.257182766Z" level=info msg="StartContainer for \"7554818706d4b7ef4fa3fcad233e70ab64b48d1c00bb04b2af4c053d157df1d9\" returns successfully" May 8 00:16:01.445778 containerd[1508]: time="2025-05-08T00:16:01.445583334Z" level=info msg="shim disconnected" id=7554818706d4b7ef4fa3fcad233e70ab64b48d1c00bb04b2af4c053d157df1d9 namespace=k8s.io May 8 00:16:01.445778 containerd[1508]: time="2025-05-08T00:16:01.445655931Z" level=warning msg="cleaning up after shim disconnected" id=7554818706d4b7ef4fa3fcad233e70ab64b48d1c00bb04b2af4c053d157df1d9 namespace=k8s.io May 8 00:16:01.445778 containerd[1508]: time="2025-05-08T00:16:01.445664557Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:01.970176 containerd[1508]: time="2025-05-08T00:16:01.970126130Z" level=info msg="CreateContainer within sandbox \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:16:01.989221 containerd[1508]: time="2025-05-08T00:16:01.989159065Z" level=info msg="CreateContainer within sandbox \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b873ace3ac625b21930c860e6677cd7059c96a7e57f69a665db2bcacfbd8d86\"" May 8 00:16:01.989937 containerd[1508]: time="2025-05-08T00:16:01.989881214Z" level=info msg="StartContainer for \"1b873ace3ac625b21930c860e6677cd7059c96a7e57f69a665db2bcacfbd8d86\"" May 8 00:16:02.028144 systemd[1]: Started cri-containerd-1b873ace3ac625b21930c860e6677cd7059c96a7e57f69a665db2bcacfbd8d86.scope - libcontainer container 1b873ace3ac625b21930c860e6677cd7059c96a7e57f69a665db2bcacfbd8d86. May 8 00:16:02.071093 systemd[1]: cri-containerd-1b873ace3ac625b21930c860e6677cd7059c96a7e57f69a665db2bcacfbd8d86.scope: Deactivated successfully. 
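The containerd entries above cover the first Cilium init step end to end: the sandbox id 80f50330... is returned, the mount-cgroup container 75548187... is created and started, its systemd scope deactivates as soon as the one-shot step exits, and the shim is cleaned up. A minimal Go sketch follows (not part of this log) that connects to containerd's socket and lists containers in the k8s.io namespace, where these IDs would appear. It assumes the v1 containerd Go client module and access to /run/containerd/containerd.sock on the node.

// containers.go: a minimal sketch (not part of this log) listing containers
// in containerd's "k8s.io" namespace, which is where CRI-managed containers
// such as the IDs logged above live. Assumptions: containerd v1 Go client
// module, socket readable by the current user.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed pods and containers are created in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		status := "no task"
		if task, err := c.Task(ctx, nil); err == nil {
			if st, err := task.Status(ctx); err == nil {
				status = string(st.Status)
			}
		}
		fmt.Printf("%s %s\n", c.ID(), status)
	}
}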
May 8 00:16:02.109457 containerd[1508]: time="2025-05-08T00:16:02.109359786Z" level=info msg="StartContainer for \"1b873ace3ac625b21930c860e6677cd7059c96a7e57f69a665db2bcacfbd8d86\" returns successfully" May 8 00:16:02.198379 containerd[1508]: time="2025-05-08T00:16:02.198265913Z" level=info msg="shim disconnected" id=1b873ace3ac625b21930c860e6677cd7059c96a7e57f69a665db2bcacfbd8d86 namespace=k8s.io May 8 00:16:02.198379 containerd[1508]: time="2025-05-08T00:16:02.198358166Z" level=warning msg="cleaning up after shim disconnected" id=1b873ace3ac625b21930c860e6677cd7059c96a7e57f69a665db2bcacfbd8d86 namespace=k8s.io May 8 00:16:02.198379 containerd[1508]: time="2025-05-08T00:16:02.198369779Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:02.287196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b873ace3ac625b21930c860e6677cd7059c96a7e57f69a665db2bcacfbd8d86-rootfs.mount: Deactivated successfully. May 8 00:16:02.975009 containerd[1508]: time="2025-05-08T00:16:02.974950164Z" level=info msg="CreateContainer within sandbox \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:16:03.000893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2421214089.mount: Deactivated successfully. May 8 00:16:03.010670 containerd[1508]: time="2025-05-08T00:16:03.010508290Z" level=info msg="CreateContainer within sandbox \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7ad9c270e0f01c3f290d2bad438d232ecfa3944e3bb49387d94b223d08acb529\"" May 8 00:16:03.011536 containerd[1508]: time="2025-05-08T00:16:03.011460352Z" level=info msg="StartContainer for \"7ad9c270e0f01c3f290d2bad438d232ecfa3944e3bb49387d94b223d08acb529\"" May 8 00:16:03.058434 systemd[1]: Started cri-containerd-7ad9c270e0f01c3f290d2bad438d232ecfa3944e3bb49387d94b223d08acb529.scope - libcontainer container 7ad9c270e0f01c3f290d2bad438d232ecfa3944e3bb49387d94b223d08acb529. May 8 00:16:03.146866 systemd[1]: cri-containerd-7ad9c270e0f01c3f290d2bad438d232ecfa3944e3bb49387d94b223d08acb529.scope: Deactivated successfully. May 8 00:16:03.153204 containerd[1508]: time="2025-05-08T00:16:03.153120672Z" level=info msg="StartContainer for \"7ad9c270e0f01c3f290d2bad438d232ecfa3944e3bb49387d94b223d08acb529\" returns successfully" May 8 00:16:03.241102 containerd[1508]: time="2025-05-08T00:16:03.240881678Z" level=info msg="shim disconnected" id=7ad9c270e0f01c3f290d2bad438d232ecfa3944e3bb49387d94b223d08acb529 namespace=k8s.io May 8 00:16:03.241102 containerd[1508]: time="2025-05-08T00:16:03.240953202Z" level=warning msg="cleaning up after shim disconnected" id=7ad9c270e0f01c3f290d2bad438d232ecfa3944e3bb49387d94b223d08acb529 namespace=k8s.io May 8 00:16:03.241102 containerd[1508]: time="2025-05-08T00:16:03.240965304Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:03.287578 systemd[1]: run-containerd-runc-k8s.io-7ad9c270e0f01c3f290d2bad438d232ecfa3944e3bb49387d94b223d08acb529-runc.GkC432.mount: Deactivated successfully. May 8 00:16:03.287785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ad9c270e0f01c3f290d2bad438d232ecfa3944e3bb49387d94b223d08acb529-rootfs.mount: Deactivated successfully. 
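The same create/start/deactivate/cleanup pattern repeats above for the apply-sysctl-overwrites and mount-bpf-fs steps; the latter typically mounts the BPF filesystem the agent needs. Below is a minimal standard-library Go sketch (not part of this log, file name assumed) that checks for a bpf filesystem mounted at /sys/fs/bpf when run on the node.

// bpffs_check.go: a minimal sketch (not part of this log) that checks whether
// a bpf filesystem is mounted at /sys/fs/bpf, the mount the mount-bpf-fs step
// above is typically responsible for. Standard library only; run on the node.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			found = true
			break
		}
	}
	if found {
		fmt.Println("bpffs is mounted at /sys/fs/bpf")
	} else {
		fmt.Println("no bpffs mount at /sys/fs/bpf")
	}
}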
May 8 00:16:03.767565 kubelet[2625]: E0508 00:16:03.767515 2625 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:16:03.980806 containerd[1508]: time="2025-05-08T00:16:03.980759888Z" level=info msg="CreateContainer within sandbox \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:16:03.998494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3562406610.mount: Deactivated successfully. May 8 00:16:04.012192 containerd[1508]: time="2025-05-08T00:16:04.012107096Z" level=info msg="CreateContainer within sandbox \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"87dc2c5521bd66da7dd2f01c75d78146533b0426a1438e115873ad31f28d41d4\"" May 8 00:16:04.012939 containerd[1508]: time="2025-05-08T00:16:04.012899136Z" level=info msg="StartContainer for \"87dc2c5521bd66da7dd2f01c75d78146533b0426a1438e115873ad31f28d41d4\"" May 8 00:16:04.052915 systemd[1]: Started cri-containerd-87dc2c5521bd66da7dd2f01c75d78146533b0426a1438e115873ad31f28d41d4.scope - libcontainer container 87dc2c5521bd66da7dd2f01c75d78146533b0426a1438e115873ad31f28d41d4. May 8 00:16:04.084780 systemd[1]: cri-containerd-87dc2c5521bd66da7dd2f01c75d78146533b0426a1438e115873ad31f28d41d4.scope: Deactivated successfully. May 8 00:16:04.089347 containerd[1508]: time="2025-05-08T00:16:04.089264328Z" level=info msg="StartContainer for \"87dc2c5521bd66da7dd2f01c75d78146533b0426a1438e115873ad31f28d41d4\" returns successfully" May 8 00:16:04.118503 containerd[1508]: time="2025-05-08T00:16:04.118382690Z" level=info msg="shim disconnected" id=87dc2c5521bd66da7dd2f01c75d78146533b0426a1438e115873ad31f28d41d4 namespace=k8s.io May 8 00:16:04.118503 containerd[1508]: time="2025-05-08T00:16:04.118454234Z" level=warning msg="cleaning up after shim disconnected" id=87dc2c5521bd66da7dd2f01c75d78146533b0426a1438e115873ad31f28d41d4 namespace=k8s.io May 8 00:16:04.118503 containerd[1508]: time="2025-05-08T00:16:04.118462870Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:04.287380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87dc2c5521bd66da7dd2f01c75d78146533b0426a1438e115873ad31f28d41d4-rootfs.mount: Deactivated successfully. May 8 00:16:04.985888 containerd[1508]: time="2025-05-08T00:16:04.985815959Z" level=info msg="CreateContainer within sandbox \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:16:05.387170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556229633.mount: Deactivated successfully. 
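While the clean-cilium-state step runs and the cilium-agent container is being created, the kubelet above still reports the CNI plugin as uninitialized, so the node stays NotReady until Cilium installs its CNI configuration. The sketch below (not part of this log) simply lists the CNI configuration directory on the node; /etc/cni/net.d is the usual default location rather than something stated in the log.

// cni_conf_check.go: a minimal sketch (not part of this log) listing the CNI
// configuration directory the container runtime reads. The "cni plugin not
// initialized" error above persists until a config file appears here.
package main

import (
	"fmt"
	"os"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("cannot read /etc/cni/net.d:", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println("/etc/cni/net.d is empty; the runtime keeps reporting NetworkPluginNotReady")
		return
	}
	for _, e := range entries {
		fmt.Println("found CNI config:", e.Name())
	}
}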
May 8 00:16:05.451260 containerd[1508]: time="2025-05-08T00:16:05.451178415Z" level=info msg="CreateContainer within sandbox \"80f503306cb2de2f2b993a577e4f65dfff2748c6727d3c9961607764620c2608\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8bfee4cf8dbbade236c694088d8c14c3ae241417a27ebe743ddb450f1d281dda\"" May 8 00:16:05.451870 containerd[1508]: time="2025-05-08T00:16:05.451843457Z" level=info msg="StartContainer for \"8bfee4cf8dbbade236c694088d8c14c3ae241417a27ebe743ddb450f1d281dda\"" May 8 00:16:05.485104 systemd[1]: Started cri-containerd-8bfee4cf8dbbade236c694088d8c14c3ae241417a27ebe743ddb450f1d281dda.scope - libcontainer container 8bfee4cf8dbbade236c694088d8c14c3ae241417a27ebe743ddb450f1d281dda. May 8 00:16:05.588453 containerd[1508]: time="2025-05-08T00:16:05.588389221Z" level=info msg="StartContainer for \"8bfee4cf8dbbade236c694088d8c14c3ae241417a27ebe743ddb450f1d281dda\" returns successfully" May 8 00:16:06.075672 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 8 00:16:06.411475 kubelet[2625]: I0508 00:16:06.411276 2625 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:16:06Z","lastTransitionTime":"2025-05-08T00:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 00:16:09.869336 systemd-networkd[1425]: lxc_health: Link UP May 8 00:16:09.869782 systemd-networkd[1425]: lxc_health: Gained carrier May 8 00:16:10.438642 kubelet[2625]: I0508 00:16:10.438490 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lmhpw" podStartSLOduration=11.438455756 podStartE2EDuration="11.438455756s" podCreationTimestamp="2025-05-08 00:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:16:06.01009242 +0000 UTC m=+102.420833367" watchObservedRunningTime="2025-05-08 00:16:10.438455756 +0000 UTC m=+106.849196692" May 8 00:16:11.349977 systemd-networkd[1425]: lxc_health: Gained IPv6LL May 8 00:16:15.364301 sshd[4510]: Connection closed by 10.0.0.1 port 58782 May 8 00:16:15.365187 sshd-session[4507]: pam_unix(sshd:session): session closed for user core May 8 00:16:15.370361 systemd[1]: sshd@28-10.0.0.113:22-10.0.0.1:58782.service: Deactivated successfully. May 8 00:16:15.373324 systemd[1]: session-29.scope: Deactivated successfully. May 8 00:16:15.374593 systemd-logind[1492]: Session 29 logged out. Waiting for processes to exit. May 8 00:16:15.375872 systemd-logind[1492]: Removed session 29.
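After the cilium-agent container starts above, the lxc_health interface comes up, gains carrier and an IPv6 link-local address, and the kubelet records the pod startup duration. The "Node became not ready" condition logged at 00:16:06 reflects the same CNI readiness issue; the minimal Go sketch below (not part of this log) prints the node's conditions via the API so the transition back to Ready can be observed. The node name "localhost" is taken from the log entry, and kubeconfig access is assumed as in the earlier sketches.

// node_ready.go: a minimal sketch (not part of this log) printing the node
// conditions behind the readiness entries above. Assumptions: default
// kubeconfig path, node registered as "localhost" as in the log.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}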