May 15 00:03:38.936548 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 14 22:19:37 -00 2025
May 15 00:03:38.936571 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc
May 15 00:03:38.936583 kernel: BIOS-provided physical RAM map:
May 15 00:03:38.936589 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 15 00:03:38.936596 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 15 00:03:38.936602 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 00:03:38.936610 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 15 00:03:38.936617 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 15 00:03:38.936623 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 00:03:38.936632 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 15 00:03:38.936639 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 00:03:38.936645 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 00:03:38.936656 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 00:03:38.936663 kernel: NX (Execute Disable) protection: active
May 15 00:03:38.936671 kernel: APIC: Static calls initialized
May 15 00:03:38.936683 kernel: SMBIOS 2.8 present.
May 15 00:03:38.936691 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 15 00:03:38.936701 kernel: Hypervisor detected: KVM
May 15 00:03:38.936710 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 00:03:38.936719 kernel: kvm-clock: using sched offset of 3511462670 cycles
May 15 00:03:38.936729 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 00:03:38.936739 kernel: tsc: Detected 2794.748 MHz processor
May 15 00:03:38.936751 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 00:03:38.936763 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 00:03:38.936773 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 15 00:03:38.936790 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 00:03:38.936802 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 00:03:38.936814 kernel: Using GB pages for direct mapping
May 15 00:03:38.936826 kernel: ACPI: Early table checksum verification disabled
May 15 00:03:38.936838 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 15 00:03:38.936850 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:03:38.936862 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:03:38.936872 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:03:38.936885 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 15 00:03:38.936894 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:03:38.936903 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:03:38.936912 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:03:38.936921 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:03:38.936930 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 15 00:03:38.936940 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 15 00:03:38.936954 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 15 00:03:38.936968 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 15 00:03:38.936976 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 15 00:03:38.936986 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 15 00:03:38.936996 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 15 00:03:38.937006 kernel: No NUMA configuration found
May 15 00:03:38.937013 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 15 00:03:38.937021 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 15 00:03:38.937031 kernel: Zone ranges:
May 15 00:03:38.937038 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 00:03:38.937046 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 15 00:03:38.937053 kernel: Normal empty
May 15 00:03:38.937061 kernel: Movable zone start for each node
May 15 00:03:38.937077 kernel: Early memory node ranges
May 15 00:03:38.937084 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 00:03:38.937106 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 15 00:03:38.937115 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 15 00:03:38.937125 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 00:03:38.937136 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 00:03:38.937143 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 15 00:03:38.937151 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 00:03:38.937158 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 00:03:38.937166 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 00:03:38.937173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 00:03:38.937180 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 00:03:38.937188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 00:03:38.937198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 00:03:38.937206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 00:03:38.937213 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 00:03:38.937221 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 00:03:38.937228 kernel: TSC deadline timer available
May 15 00:03:38.937235 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 15 00:03:38.937243 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 00:03:38.937250 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 00:03:38.937260 kernel: kvm-guest: setup PV sched yield
May 15 00:03:38.937267 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 15 00:03:38.937277 kernel: Booting paravirtualized kernel on KVM
May 15 00:03:38.937285 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 00:03:38.937293 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 15 00:03:38.937300 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 15 00:03:38.937308 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 15 00:03:38.937315 kernel: pcpu-alloc: [0] 0 1 2 3
May 15 00:03:38.937322 kernel: kvm-guest: PV spinlocks enabled
May 15 00:03:38.937330 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 00:03:38.937339 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc
May 15 00:03:38.937349 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 00:03:38.937357 kernel: random: crng init done
May 15 00:03:38.937364 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 00:03:38.937372 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 00:03:38.937380 kernel: Fallback order for Node 0: 0
May 15 00:03:38.937387 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 15 00:03:38.937394 kernel: Policy zone: DMA32
May 15 00:03:38.937412 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 00:03:38.937430 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 138948K reserved, 0K cma-reserved)
May 15 00:03:38.937440 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 00:03:38.937462 kernel: ftrace: allocating 37918 entries in 149 pages
May 15 00:03:38.937471 kernel: ftrace: allocated 149 pages with 4 groups
May 15 00:03:38.937491 kernel: Dynamic Preempt: voluntary
May 15 00:03:38.937511 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 00:03:38.937531 kernel: rcu: RCU event tracing is enabled.
May 15 00:03:38.937555 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 00:03:38.937573 kernel: Trampoline variant of Tasks RCU enabled.
May 15 00:03:38.937585 kernel: Rude variant of Tasks RCU enabled.
May 15 00:03:38.937609 kernel: Tracing variant of Tasks RCU enabled.
May 15 00:03:38.937618 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 00:03:38.937628 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 00:03:38.937635 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 15 00:03:38.937643 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 00:03:38.937650 kernel: Console: colour VGA+ 80x25
May 15 00:03:38.937658 kernel: printk: console [ttyS0] enabled
May 15 00:03:38.937665 kernel: ACPI: Core revision 20230628
May 15 00:03:38.937677 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 00:03:38.937687 kernel: APIC: Switch to symmetric I/O mode setup
May 15 00:03:38.937696 kernel: x2apic enabled
May 15 00:03:38.937705 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 00:03:38.937715 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 15 00:03:38.937724 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 15 00:03:38.937734 kernel: kvm-guest: setup PV IPIs
May 15 00:03:38.937756 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 00:03:38.937766 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 15 00:03:38.937776 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 15 00:03:38.937786 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 00:03:38.937795 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 00:03:38.937808 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 00:03:38.937817 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 00:03:38.937826 kernel: Spectre V2 : Mitigation: Retpolines
May 15 00:03:38.937833 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 15 00:03:38.937844 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 15 00:03:38.937851 kernel: RETBleed: Mitigation: untrained return thunk
May 15 00:03:38.937862 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 00:03:38.937870 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 00:03:38.937877 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 15 00:03:38.937886 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 15 00:03:38.937894 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 15 00:03:38.937901 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 00:03:38.937909 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 00:03:38.937920 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 00:03:38.937928 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 00:03:38.937936 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 15 00:03:38.937943 kernel: Freeing SMP alternatives memory: 32K
May 15 00:03:38.937951 kernel: pid_max: default: 32768 minimum: 301
May 15 00:03:38.937959 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 00:03:38.937967 kernel: landlock: Up and running.
May 15 00:03:38.937974 kernel: SELinux: Initializing.
May 15 00:03:38.937982 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:03:38.937992 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:03:38.938000 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 15 00:03:38.938008 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 00:03:38.938016 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 00:03:38.938024 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 00:03:38.938031 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 00:03:38.938039 kernel: ... version: 0
May 15 00:03:38.938049 kernel: ... bit width: 48
May 15 00:03:38.938059 kernel: ... generic registers: 6
May 15 00:03:38.938074 kernel: ... value mask: 0000ffffffffffff
May 15 00:03:38.938081 kernel: ... max period: 00007fffffffffff
May 15 00:03:38.938101 kernel: ... fixed-purpose events: 0
May 15 00:03:38.938109 kernel: ... event mask: 000000000000003f
May 15 00:03:38.938116 kernel: signal: max sigframe size: 1776
May 15 00:03:38.938125 kernel: rcu: Hierarchical SRCU implementation.
May 15 00:03:38.938133 kernel: rcu: Max phase no-delay instances is 400.
May 15 00:03:38.938140 kernel: smp: Bringing up secondary CPUs ...
May 15 00:03:38.938148 kernel: smpboot: x86: Booting SMP configuration:
May 15 00:03:38.938159 kernel: .... node #0, CPUs: #1 #2 #3
May 15 00:03:38.938166 kernel: smp: Brought up 1 node, 4 CPUs
May 15 00:03:38.938174 kernel: smpboot: Max logical packages: 1
May 15 00:03:38.938182 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 15 00:03:38.938189 kernel: devtmpfs: initialized
May 15 00:03:38.938197 kernel: x86/mm: Memory block size: 128MB
May 15 00:03:38.938205 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 00:03:38.938213 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 00:03:38.938221 kernel: pinctrl core: initialized pinctrl subsystem
May 15 00:03:38.938231 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 00:03:38.938238 kernel: audit: initializing netlink subsys (disabled)
May 15 00:03:38.938246 kernel: audit: type=2000 audit(1747267417.983:1): state=initialized audit_enabled=0 res=1
May 15 00:03:38.938254 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 00:03:38.938261 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 00:03:38.938269 kernel: cpuidle: using governor menu
May 15 00:03:38.938277 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 00:03:38.938285 kernel: dca service started, version 1.12.1
May 15 00:03:38.938292 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 15 00:03:38.938303 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 15 00:03:38.938311 kernel: PCI: Using configuration type 1 for base access
May 15 00:03:38.938318 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 00:03:38.938326 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 00:03:38.938334 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 15 00:03:38.938342 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 00:03:38.938349 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 00:03:38.938357 kernel: ACPI: Added _OSI(Module Device)
May 15 00:03:38.938365 kernel: ACPI: Added _OSI(Processor Device)
May 15 00:03:38.938375 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 00:03:38.938383 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 00:03:38.938391 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 00:03:38.938398 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 15 00:03:38.938406 kernel: ACPI: Interpreter enabled
May 15 00:03:38.938415 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 00:03:38.938425 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 00:03:38.938436 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 00:03:38.938445 kernel: PCI: Using E820 reservations for host bridge windows
May 15 00:03:38.938456 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 00:03:38.938464 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 00:03:38.938678 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 00:03:38.938954 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 00:03:38.939162 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 00:03:38.939175 kernel: PCI host bridge to bus 0000:00
May 15 00:03:38.939349 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 00:03:38.939503 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 00:03:38.939629 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 00:03:38.939750 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 15 00:03:38.939870 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 00:03:38.939990 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 15 00:03:38.940145 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 00:03:38.940319 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 15 00:03:38.940542 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 15 00:03:38.940699 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 15 00:03:38.940833 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 15 00:03:38.940985 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 15 00:03:38.941182 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 00:03:38.941375 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 15 00:03:38.941576 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 15 00:03:38.941724 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 15 00:03:38.941856 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 15 00:03:38.942036 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 15 00:03:38.942213 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 15 00:03:38.942353 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 15 00:03:38.942533 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 15 00:03:38.942722 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 15 00:03:38.942864 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 15 00:03:38.942996 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 15 00:03:38.943170 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 15 00:03:38.943303 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 15 00:03:38.943456 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 15 00:03:38.943592 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 00:03:38.943750 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 15 00:03:38.943930 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 15 00:03:38.944117 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 15 00:03:38.944274 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 15 00:03:38.944408 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 15 00:03:38.944423 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 00:03:38.944440 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 00:03:38.944450 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 00:03:38.944458 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 00:03:38.944466 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 00:03:38.944473 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 00:03:38.944481 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 00:03:38.944489 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 00:03:38.944497 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 00:03:38.944505 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 00:03:38.944513 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 00:03:38.944523 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 00:03:38.944531 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 00:03:38.944539 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 00:03:38.944547 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 00:03:38.944555 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 00:03:38.944563 kernel: iommu: Default domain type: Translated
May 15 00:03:38.944571 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 00:03:38.944579 kernel: PCI: Using ACPI for IRQ routing
May 15 00:03:38.944587 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 00:03:38.944597 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 15 00:03:38.944605 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 15 00:03:38.944744 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 00:03:38.944876 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 00:03:38.945006 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 00:03:38.945017 kernel: vgaarb: loaded
May 15 00:03:38.945025 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 00:03:38.945033 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 00:03:38.945046 kernel: clocksource: Switched to clocksource kvm-clock
May 15 00:03:38.945057 kernel: VFS: Disk quotas dquot_6.6.0
May 15 00:03:38.945079 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 00:03:38.945183 kernel: pnp: PnP ACPI init
May 15 00:03:38.945365 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 15 00:03:38.945382 kernel: pnp: PnP ACPI: found 6 devices
May 15 00:03:38.945393 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 00:03:38.945405 kernel: NET: Registered PF_INET protocol family
May 15 00:03:38.945421 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 00:03:38.945432 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 00:03:38.945444 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 00:03:38.945455 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 00:03:38.945466 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 00:03:38.945477 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 00:03:38.945487 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:03:38.945494 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:03:38.945502 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 00:03:38.945514 kernel: NET: Registered PF_XDP protocol family
May 15 00:03:38.945647 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 00:03:38.945767 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 00:03:38.945887 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 00:03:38.946006 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 15 00:03:38.946150 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 00:03:38.946271 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 15 00:03:38.946282 kernel: PCI: CLS 0 bytes, default 64
May 15 00:03:38.946294 kernel: Initialise system trusted keyrings
May 15 00:03:38.946303 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 00:03:38.946311 kernel: Key type asymmetric registered
May 15 00:03:38.946319 kernel: Asymmetric key parser 'x509' registered
May 15 00:03:38.946327 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 15 00:03:38.946335 kernel: io scheduler mq-deadline registered
May 15 00:03:38.946343 kernel: io scheduler kyber registered
May 15 00:03:38.946351 kernel: io scheduler bfq registered
May 15 00:03:38.946358 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 00:03:38.946369 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 00:03:38.946377 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 00:03:38.946385 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 15 00:03:38.946393 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 00:03:38.946402 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 00:03:38.946409 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 00:03:38.946420 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 00:03:38.946431 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 00:03:38.946593 kernel: rtc_cmos 00:04: RTC can wake from S4
May 15 00:03:38.946747 kernel: rtc_cmos 00:04: registered as rtc0
May 15 00:03:38.946765 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
May 15 00:03:38.946912 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T00:03:38 UTC (1747267418)
May 15 00:03:38.947050 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 00:03:38.947061 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 00:03:38.947080 kernel: NET: Registered PF_INET6 protocol family
May 15 00:03:38.947149 kernel: Segment Routing with IPv6
May 15 00:03:38.947157 kernel: In-situ OAM (IOAM) with IPv6
May 15 00:03:38.947170 kernel: NET: Registered PF_PACKET protocol family
May 15 00:03:38.947178 kernel: Key type dns_resolver registered
May 15 00:03:38.947186 kernel: IPI shorthand broadcast: enabled
May 15 00:03:38.947194 kernel: sched_clock: Marking stable (798004097, 108661528)->(933362952, -26697327)
May 15 00:03:38.947202 kernel: registered taskstats version 1
May 15 00:03:38.947210 kernel: Loading compiled-in X.509 certificates
May 15 00:03:38.947218 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: e21d6dc0691a7e1e8bef90d9217bc8c09d6860f3'
May 15 00:03:38.947226 kernel: Key type .fscrypt registered
May 15 00:03:38.947233 kernel: Key type fscrypt-provisioning registered
May 15 00:03:38.947244 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 00:03:38.947252 kernel: ima: Allocated hash algorithm: sha1
May 15 00:03:38.947260 kernel: ima: No architecture policies found
May 15 00:03:38.947268 kernel: clk: Disabling unused clocks
May 15 00:03:38.947276 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 15 00:03:38.947284 kernel: Write protecting the kernel read-only data: 38912k
May 15 00:03:38.947292 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 15 00:03:38.947300 kernel: Run /init as init process
May 15 00:03:38.947308 kernel: with arguments:
May 15 00:03:38.947318 kernel: /init
May 15 00:03:38.947326 kernel: with environment:
May 15 00:03:38.947334 kernel: HOME=/
May 15 00:03:38.947341 kernel: TERM=linux
May 15 00:03:38.947349 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 00:03:38.947358 systemd[1]: Successfully made /usr/ read-only.
May 15 00:03:38.947370 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 00:03:38.947379 systemd[1]: Detected virtualization kvm.
May 15 00:03:38.947390 systemd[1]: Detected architecture x86-64.
May 15 00:03:38.947398 systemd[1]: Running in initrd.
May 15 00:03:38.947406 systemd[1]: No hostname configured, using default hostname.
May 15 00:03:38.947417 systemd[1]: Hostname set to .
May 15 00:03:38.947428 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:03:38.947439 systemd[1]: Queued start job for default target initrd.target.
May 15 00:03:38.947450 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:03:38.947458 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:03:38.947471 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 00:03:38.947492 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 00:03:38.947503 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 00:03:38.947513 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 00:03:38.947525 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 00:03:38.947534 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 00:03:38.947543 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:03:38.947551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:03:38.947560 systemd[1]: Reached target paths.target - Path Units.
May 15 00:03:38.947569 systemd[1]: Reached target slices.target - Slice Units.
May 15 00:03:38.947577 systemd[1]: Reached target swap.target - Swaps.
May 15 00:03:38.947585 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:03:38.947594 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 00:03:38.947606 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 00:03:38.947615 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 00:03:38.947626 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 00:03:38.947634 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:03:38.947643 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 00:03:38.947651 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:03:38.947660 systemd[1]: Reached target sockets.target - Socket Units.
May 15 00:03:38.947669 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 00:03:38.947680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 00:03:38.947688 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 00:03:38.947697 systemd[1]: Starting systemd-fsck-usr.service...
May 15 00:03:38.947705 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 00:03:38.947714 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 00:03:38.947722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:03:38.947731 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 00:03:38.947739 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:03:38.947751 systemd[1]: Finished systemd-fsck-usr.service.
May 15 00:03:38.947760 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 00:03:38.947794 systemd-journald[193]: Collecting audit messages is disabled.
May 15 00:03:38.947817 systemd-journald[193]: Journal started
May 15 00:03:38.947839 systemd-journald[193]: Runtime Journal (/run/log/journal/9f0b7e53568a4cffba35cea0e2069c2f) is 6M, max 48.4M, 42.3M free.
May 15 00:03:38.939269 systemd-modules-load[195]: Inserted module 'overlay'
May 15 00:03:38.973319 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 00:03:38.973346 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 00:03:38.973359 kernel: Bridge firewalling registered
May 15 00:03:38.972034 systemd-modules-load[195]: Inserted module 'br_netfilter'
May 15 00:03:38.976507 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 00:03:38.978951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:03:38.982695 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 00:03:38.999477 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:03:39.003657 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:03:39.007079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 00:03:39.010718 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 00:03:39.022918 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:03:39.026744 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:03:39.029318 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:03:39.032170 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:03:39.051469 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 00:03:39.053720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 00:03:39.069231 dracut-cmdline[229]: dracut-dracut-053
May 15 00:03:39.073650 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc
May 15 00:03:39.104003 systemd-resolved[230]: Positive Trust Anchors:
May 15 00:03:39.104027 systemd-resolved[230]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:03:39.104076 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:03:39.106769 systemd-resolved[230]: Defaulting to hostname 'linux'. May 15 00:03:39.108242 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:03:39.114538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:03:39.194140 kernel: SCSI subsystem initialized May 15 00:03:39.206135 kernel: Loading iSCSI transport class v2.0-870. May 15 00:03:39.218128 kernel: iscsi: registered transport (tcp) May 15 00:03:39.256123 kernel: iscsi: registered transport (qla4xxx) May 15 00:03:39.256192 kernel: QLogic iSCSI HBA Driver May 15 00:03:39.310266 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 00:03:39.333263 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 00:03:39.358449 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 15 00:03:39.358507 kernel: device-mapper: uevent: version 1.0.3 May 15 00:03:39.359675 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 00:03:39.403140 kernel: raid6: avx2x4 gen() 28898 MB/s May 15 00:03:39.420144 kernel: raid6: avx2x2 gen() 27284 MB/s May 15 00:03:39.437280 kernel: raid6: avx2x1 gen() 24410 MB/s May 15 00:03:39.437390 kernel: raid6: using algorithm avx2x4 gen() 28898 MB/s May 15 00:03:39.455626 kernel: raid6: .... xor() 6191 MB/s, rmw enabled May 15 00:03:39.455664 kernel: raid6: using avx2x2 recovery algorithm May 15 00:03:39.483145 kernel: xor: automatically using best checksumming function avx May 15 00:03:39.674129 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 00:03:39.689079 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 00:03:39.701320 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:03:39.718740 systemd-udevd[414]: Using default interface naming scheme 'v255'. May 15 00:03:39.724345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:03:39.740305 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 00:03:39.756229 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation May 15 00:03:39.793031 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:03:39.806303 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:03:39.880976 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:03:39.893230 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 00:03:39.908717 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 00:03:39.912249 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 15 00:03:39.915040 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:03:39.916303 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:03:39.923115 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 15 00:03:39.925997 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 00:03:39.929420 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 00:03:39.984765 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 00:03:39.984811 kernel: GPT:9289727 != 19775487 May 15 00:03:39.984822 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 00:03:39.984833 kernel: GPT:9289727 != 19775487 May 15 00:03:39.984843 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 00:03:39.984854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:03:39.945509 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 00:03:39.993119 kernel: cryptd: max_cpu_qlen set to 1000 May 15 00:03:39.998168 kernel: libata version 3.00 loaded. May 15 00:03:40.004116 kernel: ahci 0000:00:1f.2: version 3.0 May 15 00:03:40.006125 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 00:03:40.011283 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 15 00:03:40.011560 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 00:03:40.014455 kernel: AVX2 version of gcm_enc/dec engaged. May 15 00:03:40.014492 kernel: scsi host0: ahci May 15 00:03:40.015274 kernel: AES CTR mode by8 optimization enabled May 15 00:03:40.022128 kernel: scsi host1: ahci May 15 00:03:40.024290 kernel: scsi host2: ahci May 15 00:03:40.024540 kernel: scsi host3: ahci May 15 00:03:40.026980 kernel: scsi host4: ahci May 15 00:03:40.027612 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 15 00:03:40.043757 kernel: scsi host5: ahci May 15 00:03:40.044027 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 15 00:03:40.044051 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 15 00:03:40.044062 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (470) May 15 00:03:40.044073 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 15 00:03:40.044083 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 15 00:03:40.044124 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 15 00:03:40.044135 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 15 00:03:40.044146 kernel: BTRFS: device fsid 11358d57-dfa4-4197-9524-595753ed5512 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (461) May 15 00:03:40.027768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:03:40.037477 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:03:40.038996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:03:40.039233 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:03:40.040742 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:03:40.057404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:03:40.069587 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 00:03:40.099603 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 00:03:40.184104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 15 00:03:40.206005 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 00:03:40.248200 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 00:03:40.249801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:03:40.266284 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 00:03:40.298735 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:03:40.323722 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:03:40.377125 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 00:03:40.377223 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 00:03:40.377234 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 00:03:40.378131 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 00:03:40.379133 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 00:03:40.380120 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 15 00:03:40.381480 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 15 00:03:40.381510 kernel: ata3.00: applying bridge limits May 15 00:03:40.382161 kernel: ata3.00: configured for UDMA/100 May 15 00:03:40.383128 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 00:03:40.468580 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 15 00:03:40.468954 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 00:03:40.481378 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 00:03:40.538522 disk-uuid[556]: Primary Header is updated. May 15 00:03:40.538522 disk-uuid[556]: Secondary Entries is updated. May 15 00:03:40.538522 disk-uuid[556]: Secondary Header is updated. 
May 15 00:03:40.544138 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:03:40.549132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:03:41.550136 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:03:41.550823 disk-uuid[578]: The operation has completed successfully. May 15 00:03:41.583551 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 00:03:41.583682 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 00:03:41.640257 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 00:03:41.653874 sh[593]: Success May 15 00:03:41.667148 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 15 00:03:41.710520 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 00:03:41.724938 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 00:03:41.728021 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 00:03:41.740513 kernel: BTRFS info (device dm-0): first mount of filesystem 11358d57-dfa4-4197-9524-595753ed5512 May 15 00:03:41.740552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 00:03:41.740564 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 00:03:41.741540 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 00:03:41.742291 kernel: BTRFS info (device dm-0): using free space tree May 15 00:03:41.748148 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 00:03:41.751157 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 00:03:41.765391 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 00:03:41.768933 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 15 00:03:41.790378 kernel: BTRFS info (device vda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1 May 15 00:03:41.790435 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:03:41.790447 kernel: BTRFS info (device vda6): using free space tree May 15 00:03:41.794145 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:03:41.800120 kernel: BTRFS info (device vda6): last unmount of filesystem 26320528-a534-4245-a65e-42f09448b5f1 May 15 00:03:41.886056 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:03:41.902336 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:03:41.936292 systemd-networkd[769]: lo: Link UP May 15 00:03:41.936305 systemd-networkd[769]: lo: Gained carrier May 15 00:03:41.938522 systemd-networkd[769]: Enumeration completed May 15 00:03:41.938980 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:03:41.938986 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:03:41.939877 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:03:41.939982 systemd-networkd[769]: eth0: Link UP May 15 00:03:41.939988 systemd-networkd[769]: eth0: Gained carrier May 15 00:03:41.940008 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:03:41.951823 systemd[1]: Reached target network.target - Network. May 15 00:03:41.971956 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 00:03:41.974221 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:03:41.978509 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 15 00:03:42.032946 ignition[773]: Ignition 2.20.0 May 15 00:03:42.032960 ignition[773]: Stage: fetch-offline May 15 00:03:42.033026 ignition[773]: no configs at "/usr/lib/ignition/base.d" May 15 00:03:42.033041 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:03:42.033184 ignition[773]: parsed url from cmdline: "" May 15 00:03:42.033189 ignition[773]: no config URL provided May 15 00:03:42.033195 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:03:42.033206 ignition[773]: no config at "/usr/lib/ignition/user.ign" May 15 00:03:42.033236 ignition[773]: op(1): [started] loading QEMU firmware config module May 15 00:03:42.033242 ignition[773]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 00:03:42.043520 ignition[773]: op(1): [finished] loading QEMU firmware config module May 15 00:03:42.086528 ignition[773]: parsing config with SHA512: 441db42995042b3d282581358e534281847ae88369fdaa4f4c8ceab1deddd6539d0ecf62feb10fe4366fc66f5c34e29a4a768342d20f8f895421de80c0317dfc May 15 00:03:42.092114 unknown[773]: fetched base config from "system" May 15 00:03:42.092129 unknown[773]: fetched user config from "qemu" May 15 00:03:42.094083 ignition[773]: fetch-offline: fetch-offline passed May 15 00:03:42.094273 ignition[773]: Ignition finished successfully May 15 00:03:42.097297 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:03:42.098905 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 00:03:42.110293 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 15 00:03:42.130291 ignition[784]: Ignition 2.20.0 May 15 00:03:42.130304 ignition[784]: Stage: kargs May 15 00:03:42.130526 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 15 00:03:42.130543 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:03:42.131716 ignition[784]: kargs: kargs passed May 15 00:03:42.131777 ignition[784]: Ignition finished successfully May 15 00:03:42.136326 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 00:03:42.144564 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 00:03:42.157349 ignition[794]: Ignition 2.20.0 May 15 00:03:42.157362 ignition[794]: Stage: disks May 15 00:03:42.157554 ignition[794]: no configs at "/usr/lib/ignition/base.d" May 15 00:03:42.157568 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:03:42.161375 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 00:03:42.158443 ignition[794]: disks: disks passed May 15 00:03:42.163832 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 00:03:42.158508 ignition[794]: Ignition finished successfully May 15 00:03:42.166154 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 00:03:42.168451 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:03:42.168536 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:03:42.168977 systemd[1]: Reached target basic.target - Basic System. May 15 00:03:42.183327 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 00:03:42.199729 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 00:03:42.433602 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 00:03:42.447321 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 15 00:03:42.544131 kernel: EXT4-fs (vda9): mounted filesystem 36fdaeac-383d-468b-a0a4-9f47e3957a15 r/w with ordered data mode. Quota mode: none. May 15 00:03:42.545263 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 00:03:42.545920 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 00:03:42.559257 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:03:42.562579 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 00:03:42.562962 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 00:03:42.563022 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:03:42.563052 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:03:42.574804 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 00:03:42.577556 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) May 15 00:03:42.577281 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 00:03:42.582102 kernel: BTRFS info (device vda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1 May 15 00:03:42.582134 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:03:42.582150 kernel: BTRFS info (device vda6): using free space tree May 15 00:03:42.585106 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:03:42.586869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 00:03:42.620282 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:03:42.625595 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory May 15 00:03:42.630183 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:03:42.634644 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:03:42.738238 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 00:03:42.755262 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 00:03:42.757292 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 00:03:42.763213 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 00:03:42.764610 kernel: BTRFS info (device vda6): last unmount of filesystem 26320528-a534-4245-a65e-42f09448b5f1 May 15 00:03:42.783989 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 00:03:42.788709 ignition[927]: INFO : Ignition 2.20.0 May 15 00:03:42.788709 ignition[927]: INFO : Stage: mount May 15 00:03:42.790550 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:03:42.790550 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:03:42.790550 ignition[927]: INFO : mount: mount passed May 15 00:03:42.790550 ignition[927]: INFO : Ignition finished successfully May 15 00:03:42.794971 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 00:03:42.807239 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 00:03:42.814450 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 15 00:03:42.829130 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (938) May 15 00:03:42.832892 kernel: BTRFS info (device vda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1 May 15 00:03:42.832917 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:03:42.832929 kernel: BTRFS info (device vda6): using free space tree May 15 00:03:42.836113 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:03:42.837997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 00:03:42.869788 ignition[955]: INFO : Ignition 2.20.0 May 15 00:03:42.869788 ignition[955]: INFO : Stage: files May 15 00:03:42.871797 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:03:42.871797 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:03:42.874340 ignition[955]: DEBUG : files: compiled without relabeling support, skipping May 15 00:03:42.875723 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:03:42.877128 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:03:42.879777 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:03:42.881339 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:03:42.883173 unknown[955]: wrote ssh authorized keys file for user: core May 15 00:03:42.884423 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:03:42.886600 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 00:03:42.888501 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 15 00:03:42.928000 
ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 00:03:43.049799 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 00:03:43.049799 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 00:03:43.054282 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 15 00:03:43.246359 systemd-networkd[769]: eth0: Gained IPv6LL May 15 00:03:43.410062 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 00:03:43.522655 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 00:03:43.522655 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 00:03:43.527361 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 15 00:03:43.803371 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 00:03:44.126838 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 00:03:44.126838 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 00:03:44.131755 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:03:44.131755 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:03:44.131755 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 15 00:03:44.131755 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 15 00:03:44.131755 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:03:44.131755 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:03:44.131755 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 15 00:03:44.131755 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 15 00:03:44.147396 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:03:44.151701 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:03:44.153522 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 15 00:03:44.153522 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 15 00:03:44.153522 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 15 00:03:44.153522 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:03:44.153522 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:03:44.153522 ignition[955]: INFO : files: files passed May 15 00:03:44.153522 ignition[955]: INFO : Ignition finished successfully May 15 
00:03:44.155047 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 00:03:44.169222 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 00:03:44.171360 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 00:03:44.173394 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 00:03:44.173541 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 00:03:44.180896 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory May 15 00:03:44.183397 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:03:44.183397 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 00:03:44.186768 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:03:44.189173 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:03:44.192415 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 00:03:44.208274 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 00:03:44.231283 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 00:03:44.231443 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 00:03:44.233786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 00:03:44.234784 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 00:03:44.236711 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 00:03:44.240060 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
May 15 00:03:44.260349 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:03:44.262138 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 00:03:44.278805 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 00:03:44.280184 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:03:44.282389 systemd[1]: Stopped target timers.target - Timer Units. May 15 00:03:44.284464 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 00:03:44.284605 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:03:44.286955 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 00:03:44.288567 systemd[1]: Stopped target basic.target - Basic System. May 15 00:03:44.290681 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 00:03:44.292726 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:03:44.294962 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 00:03:44.297194 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 00:03:44.299398 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:03:44.301764 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 00:03:44.303782 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 00:03:44.306117 systemd[1]: Stopped target swap.target - Swaps. May 15 00:03:44.307925 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 00:03:44.308140 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 00:03:44.310614 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 00:03:44.312299 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
May 15 00:03:44.314522 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 00:03:44.314682 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:03:44.316796 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:03:44.316981 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 00:03:44.319365 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 00:03:44.319529 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:03:44.321407 systemd[1]: Stopped target paths.target - Path Units. May 15 00:03:44.323174 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:03:44.327164 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:03:44.328982 systemd[1]: Stopped target slices.target - Slice Units. May 15 00:03:44.331002 systemd[1]: Stopped target sockets.target - Socket Units. May 15 00:03:44.333065 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:03:44.333222 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:03:44.335314 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:03:44.335433 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:03:44.337814 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:03:44.337994 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:03:44.339944 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:03:44.340110 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 00:03:44.350365 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 00:03:44.350489 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
May 15 00:03:44.350655 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:03:44.352128 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 00:03:44.352361 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 00:03:44.352500 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:03:44.352979 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:03:44.353133 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:03:44.359512 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 00:03:44.359655 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 00:03:44.368207 ignition[1011]: INFO : Ignition 2.20.0 May 15 00:03:44.368207 ignition[1011]: INFO : Stage: umount May 15 00:03:44.368207 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:03:44.368207 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:03:44.368207 ignition[1011]: INFO : umount: umount passed May 15 00:03:44.368207 ignition[1011]: INFO : Ignition finished successfully May 15 00:03:44.370069 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:03:44.370233 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 00:03:44.372281 systemd[1]: Stopped target network.target - Network. May 15 00:03:44.374176 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 00:03:44.374235 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 00:03:44.376013 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:03:44.376067 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 00:03:44.377984 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:03:44.378050 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
May 15 00:03:44.380085 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 00:03:44.380148 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 00:03:44.385980 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 00:03:44.387772 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 00:03:44.391344 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 00:03:44.397890 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:03:44.398071 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 00:03:44.402762 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 00:03:44.403074 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:03:44.403225 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 00:03:44.407430 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 00:03:44.408464 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:03:44.408546 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 00:03:44.417261 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 00:03:44.418227 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 00:03:44.418299 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:03:44.420473 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:03:44.420530 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:03:44.423878 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:03:44.423966 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 00:03:44.425380 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
May 15 00:03:44.425442 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:03:44.427927 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:03:44.431645 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 00:03:44.431731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 00:03:44.492188 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:03:44.492345 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 00:03:44.494497 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 00:03:44.494671 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:03:44.497550 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 00:03:44.497633 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 00:03:44.498795 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 00:03:44.498839 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:03:44.501081 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 00:03:44.501155 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 00:03:44.503235 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 00:03:44.503286 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 00:03:44.505278 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:03:44.505333 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:03:44.521501 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 00:03:44.524158 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
May 15 00:03:44.524249 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:03:44.526741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:03:44.526799 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:03:44.530116 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 00:03:44.530191 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 00:03:44.530644 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 00:03:44.530765 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 00:03:45.107678 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 00:03:45.107880 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 00:03:45.110216 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 00:03:45.112143 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 00:03:45.112218 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 00:03:45.126286 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 00:03:45.135936 systemd[1]: Switching root. May 15 00:03:45.172011 systemd-journald[193]: Journal stopped May 15 00:03:47.211863 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
May 15 00:03:47.211940 kernel: SELinux: policy capability network_peer_controls=1 May 15 00:03:47.211960 kernel: SELinux: policy capability open_perms=1 May 15 00:03:47.211973 kernel: SELinux: policy capability extended_socket_class=1 May 15 00:03:47.211984 kernel: SELinux: policy capability always_check_network=0 May 15 00:03:47.212001 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 00:03:47.212019 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 00:03:47.212030 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 00:03:47.212042 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 00:03:47.212054 kernel: audit: type=1403 audit(1747267426.150:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:03:47.212067 systemd[1]: Successfully loaded SELinux policy in 46.687ms. May 15 00:03:47.212113 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.455ms. May 15 00:03:47.212128 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 00:03:47.212141 systemd[1]: Detected virtualization kvm. May 15 00:03:47.212157 systemd[1]: Detected architecture x86-64. May 15 00:03:47.212170 systemd[1]: Detected first boot. May 15 00:03:47.212182 systemd[1]: Initializing machine ID from VM UUID. May 15 00:03:47.212195 zram_generator::config[1058]: No configuration found. 
May 15 00:03:47.212208 kernel: Guest personality initialized and is inactive May 15 00:03:47.212222 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 00:03:47.212234 kernel: Initialized host personality May 15 00:03:47.212246 kernel: NET: Registered PF_VSOCK protocol family May 15 00:03:47.212260 systemd[1]: Populated /etc with preset unit settings. May 15 00:03:47.212274 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 00:03:47.212287 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 00:03:47.212299 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 00:03:47.212311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 00:03:47.212324 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 00:03:47.212337 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 00:03:47.212350 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 00:03:47.212368 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 00:03:47.212386 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 00:03:47.212403 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 00:03:47.212420 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 00:03:47.212435 systemd[1]: Created slice user.slice - User and Session Slice. May 15 00:03:47.212448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:03:47.212461 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:03:47.212474 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 15 00:03:47.212487 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 00:03:47.212503 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 00:03:47.212519 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:03:47.212531 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 00:03:47.212544 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:03:47.212557 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 00:03:47.212570 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 00:03:47.212582 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 00:03:47.212601 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 00:03:47.212628 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:03:47.212645 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:03:47.212661 systemd[1]: Reached target slices.target - Slice Units. May 15 00:03:47.212677 systemd[1]: Reached target swap.target - Swaps. May 15 00:03:47.212690 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 00:03:47.212702 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 00:03:47.212715 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 00:03:47.212729 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:03:47.212746 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:03:47.212767 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 15 00:03:47.212781 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 00:03:47.212793 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 00:03:47.212806 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 00:03:47.212820 systemd[1]: Mounting media.mount - External Media Directory... May 15 00:03:47.212839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:03:47.212856 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 00:03:47.212883 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 00:03:47.212896 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 00:03:47.212917 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:03:47.212935 systemd[1]: Reached target machines.target - Containers. May 15 00:03:47.212952 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 00:03:47.212968 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:03:47.212983 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:03:47.212997 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 00:03:47.213014 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:03:47.213030 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:03:47.213051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:03:47.213066 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 15 00:03:47.213082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:03:47.213117 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 00:03:47.213134 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 00:03:47.213150 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 00:03:47.213166 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 00:03:47.213182 systemd[1]: Stopped systemd-fsck-usr.service. May 15 00:03:47.213204 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:03:47.213223 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:03:47.213263 systemd-journald[1122]: Collecting audit messages is disabled. May 15 00:03:47.213298 kernel: loop: module loaded May 15 00:03:47.213313 kernel: fuse: init (API version 7.39) May 15 00:03:47.213329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:03:47.213344 systemd-journald[1122]: Journal started May 15 00:03:47.213376 systemd-journald[1122]: Runtime Journal (/run/log/journal/9f0b7e53568a4cffba35cea0e2069c2f) is 6M, max 48.4M, 42.3M free. May 15 00:03:46.799675 systemd[1]: Queued start job for default target multi-user.target. May 15 00:03:46.815908 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 00:03:46.816561 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 00:03:47.218254 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 00:03:47.223392 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
May 15 00:03:47.234697 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 00:03:47.285911 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:03:47.285987 systemd[1]: verity-setup.service: Deactivated successfully. May 15 00:03:47.287126 systemd[1]: Stopped verity-setup.service. May 15 00:03:47.291132 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:03:47.293139 kernel: ACPI: bus type drm_connector registered May 15 00:03:47.343144 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:03:47.345652 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 00:03:47.347350 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 00:03:47.349040 systemd[1]: Mounted media.mount - External Media Directory. May 15 00:03:47.350472 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 00:03:47.352039 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 00:03:47.353414 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 00:03:47.354853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:03:47.372923 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:03:47.373262 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 00:03:47.375380 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:03:47.375692 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:03:47.414690 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:03:47.415029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:03:47.417192 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 15 00:03:47.417493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:03:47.419673 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:03:47.420029 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 00:03:47.421922 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:03:47.422239 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:03:47.424310 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:03:47.426428 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 00:03:47.428609 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 00:03:47.430912 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 00:03:47.450040 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 00:03:47.463667 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 00:03:47.467411 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 00:03:47.468847 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:03:47.468963 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:03:47.471298 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 00:03:47.477211 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 00:03:47.481459 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 00:03:47.482838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 15 00:03:47.487249 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 00:03:47.494312 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 00:03:47.496008 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:03:47.512347 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 00:03:47.514962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:03:47.519439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:03:47.523529 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 00:03:47.529336 systemd-journald[1122]: Time spent on flushing to /var/log/journal/9f0b7e53568a4cffba35cea0e2069c2f is 12.982ms for 965 entries. May 15 00:03:47.529336 systemd-journald[1122]: System Journal (/var/log/journal/9f0b7e53568a4cffba35cea0e2069c2f) is 8M, max 195.6M, 187.6M free. May 15 00:03:47.791370 systemd-journald[1122]: Received client request to flush runtime journal. May 15 00:03:47.791455 kernel: loop0: detected capacity change from 0 to 138176 May 15 00:03:47.791497 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:03:47.791528 kernel: loop1: detected capacity change from 0 to 147912 May 15 00:03:47.791556 kernel: loop2: detected capacity change from 0 to 218376 May 15 00:03:47.791582 kernel: loop3: detected capacity change from 0 to 138176 May 15 00:03:47.529030 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 00:03:47.532600 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:03:47.535324 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
May 15 00:03:47.538031 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 00:03:47.540273 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 00:03:47.561976 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 00:03:47.571315 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 00:03:47.573509 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:03:47.595146 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 00:03:47.598654 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 00:03:47.602411 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 00:03:47.615300 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 00:03:47.631790 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 00:03:47.642365 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:03:47.716189 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. May 15 00:03:47.716203 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. May 15 00:03:47.722798 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:03:47.796155 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 00:03:47.798525 kernel: loop4: detected capacity change from 0 to 147912 May 15 00:03:47.845216 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:03:47.846111 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
May 15 00:03:47.853117 kernel: loop5: detected capacity change from 0 to 218376 May 15 00:03:47.865728 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 00:03:47.866510 (sd-merge)[1198]: Merged extensions into '/usr'. May 15 00:03:47.872024 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)... May 15 00:03:47.872200 systemd[1]: Reloading... May 15 00:03:47.939146 zram_generator::config[1234]: No configuration found. May 15 00:03:48.056486 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:03:48.074533 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:03:48.146645 systemd[1]: Reloading finished in 273 ms. May 15 00:03:48.169639 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 00:03:48.171968 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 00:03:48.191621 systemd[1]: Starting ensure-sysext.service... May 15 00:03:48.198426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:03:48.214379 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... May 15 00:03:48.214403 systemd[1]: Reloading... May 15 00:03:48.224275 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:03:48.224649 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 00:03:48.225773 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 15 00:03:48.226154 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. May 15 00:03:48.226252 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. May 15 00:03:48.231576 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:03:48.231594 systemd-tmpfiles[1269]: Skipping /boot May 15 00:03:48.256618 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:03:48.256632 systemd-tmpfiles[1269]: Skipping /boot May 15 00:03:48.283130 zram_generator::config[1299]: No configuration found. May 15 00:03:48.532944 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:03:48.607725 systemd[1]: Reloading finished in 392 ms. May 15 00:03:48.626134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:03:48.654856 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 00:03:48.666394 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 00:03:48.669362 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 00:03:48.672446 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 00:03:48.677413 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:03:48.684208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:03:48.692962 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 00:03:48.697958 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 15 00:03:48.698199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:03:48.702426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:03:48.708321 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:03:48.711350 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:03:48.715310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:03:48.715462 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:03:48.718970 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 00:03:48.720527 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:03:48.722227 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 00:03:48.724672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:03:48.725196 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:03:48.727474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:03:48.727794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:03:48.730273 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:03:48.730574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:03:48.741444 augenrules[1366]: No rules May 15 00:03:48.756003 systemd[1]: audit-rules.service: Deactivated successfully. 
May 15 00:03:48.756392 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 00:03:48.768181 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 00:03:48.771905 systemd-udevd[1342]: Using default interface naming scheme 'v255'. May 15 00:03:48.775560 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:03:48.775971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:03:48.790608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:03:48.794482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:03:48.807882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:03:48.809673 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:03:48.809799 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:03:48.814175 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 00:03:48.815795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:03:48.817405 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:03:48.824479 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 00:03:48.836669 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 00:03:48.867417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 15 00:03:48.867678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:03:48.869512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:03:48.869743 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:03:48.871759 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:03:48.872008 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:03:48.885672 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 00:03:48.927551 systemd[1]: Finished ensure-sysext.service. May 15 00:03:48.933993 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 00:03:48.936409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:03:48.950132 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1398) May 15 00:03:48.954314 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 15 00:03:48.951543 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 00:03:48.952811 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:03:48.954411 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:03:48.958278 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:03:48.961372 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:03:48.964304 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:03:48.964708 systemd-resolved[1340]: Positive Trust Anchors: May 15 00:03:48.964725 systemd-resolved[1340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:03:48.964760 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:03:48.965789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:03:48.965849 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:03:48.972246 systemd-resolved[1340]: Defaulting to hostname 'linux'. May 15 00:03:48.981002 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:03:48.984629 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 00:03:48.986190 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:03:48.986225 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:03:48.989536 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:03:48.991368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:03:48.991626 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:03:48.994353 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:03:48.994813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:03:48.996722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:03:48.997381 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:03:49.004983 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:03:49.005437 augenrules[1417]: /sbin/augenrules: No change May 15 00:03:49.005522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:03:49.007680 kernel: ACPI: button: Power Button [PWRF] May 15 00:03:49.020105 augenrules[1446]: No rules May 15 00:03:49.025017 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:03:49.025668 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 00:03:49.037566 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 15 00:03:49.044509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 00:03:49.046006 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:03:49.054260 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 00:03:49.055453 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:03:49.055513 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 15 00:03:49.062434 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 00:03:49.062718 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 15 00:03:49.062924 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 00:03:49.097577 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 00:03:49.122301 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 00:03:49.130060 systemd[1]: Reached target time-set.target - System Time Set. May 15 00:03:49.136571 systemd-networkd[1429]: lo: Link UP May 15 00:03:49.166236 kernel: mousedev: PS/2 mouse device common for all mice May 15 00:03:49.136587 systemd-networkd[1429]: lo: Gained carrier May 15 00:03:49.138471 systemd-networkd[1429]: Enumeration completed May 15 00:03:49.165636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:03:49.167204 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:03:49.167233 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:03:49.167240 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:03:49.168566 systemd-networkd[1429]: eth0: Link UP May 15 00:03:49.168580 systemd-networkd[1429]: eth0: Gained carrier May 15 00:03:49.168595 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:03:49.168995 systemd[1]: Reached target network.target - Network. May 15 00:03:49.176337 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 00:03:49.182470 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 15 00:03:49.187850 kernel: kvm_amd: TSC scaling supported May 15 00:03:49.188001 kernel: kvm_amd: Nested Virtualization enabled May 15 00:03:49.188044 kernel: kvm_amd: Nested Paging enabled May 15 00:03:49.188139 kernel: kvm_amd: LBR virtualization supported May 15 00:03:49.188159 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 15 00:03:49.188193 kernel: kvm_amd: Virtual GIF supported May 15 00:03:49.188274 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:03:49.189763 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. May 15 00:03:49.192894 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 00:03:49.192991 systemd-timesyncd[1430]: Initial clock synchronization to Thu 2025-05-15 00:03:49.463197 UTC. May 15 00:03:49.210037 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 00:03:49.218109 kernel: EDAC MC: Ver: 3.0.0 May 15 00:03:49.257411 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 00:03:49.300329 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 00:03:49.302501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:03:49.310879 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:03:49.356653 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 00:03:49.358534 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:03:49.359951 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:03:49.361359 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
May 15 00:03:49.363149 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 00:03:49.365264 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 00:03:49.366743 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 00:03:49.368577 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 00:03:49.370181 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 00:03:49.370229 systemd[1]: Reached target paths.target - Path Units. May 15 00:03:49.371476 systemd[1]: Reached target timers.target - Timer Units. May 15 00:03:49.373914 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 00:03:49.377520 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 00:03:49.381656 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 00:03:49.384400 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 00:03:49.385912 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 00:03:49.390673 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 00:03:49.392433 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 00:03:49.395807 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 00:03:49.397862 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 00:03:49.399206 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:03:49.400350 systemd[1]: Reached target basic.target - Basic System. 
May 15 00:03:49.401590 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 00:03:49.401640 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 00:03:49.403146 systemd[1]: Starting containerd.service - containerd container runtime... May 15 00:03:49.405734 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 00:03:49.408222 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:03:49.409862 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 00:03:49.413949 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 00:03:49.415413 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 00:03:49.420343 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 00:03:49.425168 jq[1478]: false May 15 00:03:49.426074 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 00:03:49.431027 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 15 00:03:49.436059 extend-filesystems[1479]: Found loop3 May 15 00:03:49.436059 extend-filesystems[1479]: Found loop4 May 15 00:03:49.436059 extend-filesystems[1479]: Found loop5 May 15 00:03:49.436059 extend-filesystems[1479]: Found sr0 May 15 00:03:49.436059 extend-filesystems[1479]: Found vda May 15 00:03:49.436059 extend-filesystems[1479]: Found vda1 May 15 00:03:49.436059 extend-filesystems[1479]: Found vda2 May 15 00:03:49.436059 extend-filesystems[1479]: Found vda3 May 15 00:03:49.436059 extend-filesystems[1479]: Found usr May 15 00:03:49.436059 extend-filesystems[1479]: Found vda4 May 15 00:03:49.436059 extend-filesystems[1479]: Found vda6 May 15 00:03:49.436059 extend-filesystems[1479]: Found vda7 May 15 00:03:49.436059 extend-filesystems[1479]: Found vda9 May 15 00:03:49.436059 extend-filesystems[1479]: Checking size of /dev/vda9 May 15 00:03:49.470211 extend-filesystems[1479]: Resized partition /dev/vda9 May 15 00:03:49.444489 dbus-daemon[1477]: [system] SELinux support is enabled May 15 00:03:49.443340 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 00:03:49.458985 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 00:03:49.460422 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 00:03:49.461277 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 00:03:49.464733 systemd[1]: Starting update-engine.service - Update Engine... May 15 00:03:49.471459 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 00:03:49.474369 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 15 00:03:49.478159 extend-filesystems[1500]: resize2fs 1.47.1 (20-May-2024) May 15 00:03:49.481129 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 00:03:49.491166 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1381) May 15 00:03:49.491804 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 00:03:49.492219 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 00:03:49.492872 systemd[1]: motdgen.service: Deactivated successfully. May 15 00:03:49.494299 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 00:03:49.496700 jq[1499]: true May 15 00:03:49.500911 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 00:03:49.501335 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 00:03:49.520162 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 00:03:49.531720 jq[1504]: true May 15 00:03:49.536627 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 00:03:49.537182 update_engine[1497]: I20250515 00:03:49.536828 1497 main.cc:92] Flatcar Update Engine starting May 15 00:03:49.539168 update_engine[1497]: I20250515 00:03:49.538218 1497 update_check_scheduler.cc:74] Next update check in 6m46s May 15 00:03:49.567395 tar[1503]: linux-amd64/LICENSE May 15 00:03:49.570401 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 00:03:49.579970 systemd[1]: Started update-engine.service - Update Engine. May 15 00:03:49.582018 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 15 00:03:49.582060 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 00:03:49.619261 tar[1503]: linux-amd64/helm May 15 00:03:49.584198 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 00:03:49.584218 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 00:03:49.596503 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 00:03:49.620962 systemd-logind[1495]: Watching system buttons on /dev/input/event1 (Power Button) May 15 00:03:49.624259 extend-filesystems[1500]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 00:03:49.624259 extend-filesystems[1500]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 00:03:49.624259 extend-filesystems[1500]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 00:03:49.621001 systemd-logind[1495]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 00:03:49.635299 extend-filesystems[1479]: Resized filesystem in /dev/vda9 May 15 00:03:49.624374 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 00:03:49.624860 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 00:03:49.625084 systemd-logind[1495]: New seat seat0. May 15 00:03:49.635253 systemd[1]: Started systemd-logind.service - User Login Management. May 15 00:03:49.642451 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 00:03:49.651124 bash[1532]: Updated "/home/core/.ssh/authorized_keys" May 15 00:03:49.653907 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 00:03:49.656664 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 15 00:03:49.795793 containerd[1505]: time="2025-05-15T00:03:49.794860290Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 00:03:49.826614 containerd[1505]: time="2025-05-15T00:03:49.826426188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:03:49.829128 containerd[1505]: time="2025-05-15T00:03:49.829069226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:03:49.829224 containerd[1505]: time="2025-05-15T00:03:49.829204249Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:03:49.829340 containerd[1505]: time="2025-05-15T00:03:49.829316039Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 00:03:49.829645 containerd[1505]: time="2025-05-15T00:03:49.829625399Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 00:03:49.829720 containerd[1505]: time="2025-05-15T00:03:49.829701933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 00:03:49.829923 containerd[1505]: time="2025-05-15T00:03:49.829894283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:03:49.829998 containerd[1505]: time="2025-05-15T00:03:49.829981327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 00:03:49.830441 containerd[1505]: time="2025-05-15T00:03:49.830409099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:03:49.830518 containerd[1505]: time="2025-05-15T00:03:49.830500981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 00:03:49.830588 containerd[1505]: time="2025-05-15T00:03:49.830569841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:03:49.830647 containerd[1505]: time="2025-05-15T00:03:49.830632278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:03:49.830923 containerd[1505]: time="2025-05-15T00:03:49.830899118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 00:03:49.831342 containerd[1505]: time="2025-05-15T00:03:49.831313285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:03:49.831670 containerd[1505]: time="2025-05-15T00:03:49.831645057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:03:49.831762 containerd[1505]: time="2025-05-15T00:03:49.831739374Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 00:03:49.831975 containerd[1505]: time="2025-05-15T00:03:49.831952815Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:03:49.832353 containerd[1505]: time="2025-05-15T00:03:49.832129917Z" level=info msg="metadata content store policy set" policy=shared May 15 00:03:49.843372 containerd[1505]: time="2025-05-15T00:03:49.843292525Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 00:03:49.843589 containerd[1505]: time="2025-05-15T00:03:49.843570476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.843752748Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.843785289Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.843820255Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844067989Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844394782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844542209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844560673Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844577455Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844594477Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844619774Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844635434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844654279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844687151Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 00:03:49.846129 containerd[1505]: time="2025-05-15T00:03:49.844705535Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844721665Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844739308Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844764385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844781878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844798089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844824618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844840037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844857190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844872057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844888799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844905450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844925458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844940706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846583 containerd[1505]: time="2025-05-15T00:03:49.844956586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.844972927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.844991001Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845014465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845042177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845056664Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845136453Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845162683Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845175907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845192128Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845205292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845230119Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845244476Z" level=info msg="NRI interface is disabled by configuration." May 15 00:03:49.846980 containerd[1505]: time="2025-05-15T00:03:49.845258602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 00:03:49.847466 containerd[1505]: time="2025-05-15T00:03:49.845638475Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[]
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:03:49.847466 containerd[1505]: time="2025-05-15T00:03:49.845698017Z" level=info msg="Connect containerd service" May 15 00:03:49.847466 containerd[1505]: time="2025-05-15T00:03:49.845758310Z" level=info msg="using legacy CRI server" May 15 00:03:49.847466 containerd[1505]: time="2025-05-15T00:03:49.845769070Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 00:03:49.847466 containerd[1505]: time="2025-05-15T00:03:49.845903151Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:03:49.850583 containerd[1505]: time="2025-05-15T00:03:49.850499283Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" May 15 00:03:49.850968 containerd[1505]: time="2025-05-15T00:03:49.850873565Z" level=info msg="Start subscribing containerd event" May 15 00:03:49.850968 containerd[1505]: time="2025-05-15T00:03:49.850983681Z" level=info msg="Start recovering state" May 15 00:03:49.851152 containerd[1505]: time="2025-05-15T00:03:49.851120738Z" level=info msg="Start event monitor" May 15 00:03:49.851152 containerd[1505]: time="2025-05-15T00:03:49.851151356Z" level=info msg="Start snapshots syncer" May 15 00:03:49.851214 containerd[1505]: time="2025-05-15T00:03:49.851165462Z" level=info msg="Start cni network conf syncer for default" May 15 00:03:49.851214 containerd[1505]: time="2025-05-15T00:03:49.851182173Z" level=info msg="Start streaming server" May 15 00:03:49.851401 containerd[1505]: time="2025-05-15T00:03:49.851374835Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:03:49.851549 containerd[1505]: time="2025-05-15T00:03:49.851528022Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:03:49.851881 systemd[1]: Started containerd.service - containerd container runtime. May 15 00:03:49.852180 containerd[1505]: time="2025-05-15T00:03:49.852158034Z" level=info msg="containerd successfully booted in 0.059168s" May 15 00:03:49.852596 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 00:03:49.886078 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 00:03:49.896362 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 00:03:49.906024 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:03:49.906345 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 00:03:49.916450 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 00:03:49.930562 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
May 15 00:03:49.940438 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 00:03:49.943309 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 00:03:49.945038 systemd[1]: Reached target getty.target - Login Prompts. May 15 00:03:50.082813 tar[1503]: linux-amd64/README.md May 15 00:03:50.099990 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 00:03:50.223306 systemd-networkd[1429]: eth0: Gained IPv6LL May 15 00:03:50.226661 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 00:03:50.239533 systemd[1]: Reached target network-online.target - Network is Online. May 15 00:03:50.253452 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 00:03:50.256596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:03:50.259534 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 00:03:50.291522 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 00:03:50.293772 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 00:03:50.294117 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 00:03:50.297663 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 00:03:51.033113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:03:51.035052 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 00:03:51.036712 systemd[1]: Startup finished in 942ms (kernel) + 7.416s (initrd) + 4.932s (userspace) = 13.291s. 
May 15 00:03:51.037979 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:03:51.504625 kubelet[1594]: E0515 00:03:51.504465 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:03:51.509010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:03:51.509313 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:03:51.509740 systemd[1]: kubelet.service: Consumed 1.040s CPU time, 260.3M memory peak. May 15 00:03:52.329348 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 00:03:52.339442 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:45596.service - OpenSSH per-connection server daemon (10.0.0.1:45596). May 15 00:03:52.391650 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 45596 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:03:52.393933 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:52.408555 systemd-logind[1495]: New session 1 of user core. May 15 00:03:52.410444 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 00:03:52.426438 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 00:03:52.441706 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 00:03:52.454399 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 15 00:03:52.457558 (systemd)[1611]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 00:03:52.460340 systemd-logind[1495]: New session c1 of user core. May 15 00:03:52.623686 systemd[1611]: Queued start job for default target default.target. May 15 00:03:52.634576 systemd[1611]: Created slice app.slice - User Application Slice. May 15 00:03:52.634602 systemd[1611]: Reached target paths.target - Paths. May 15 00:03:52.634645 systemd[1611]: Reached target timers.target - Timers. May 15 00:03:52.636434 systemd[1611]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 00:03:52.649969 systemd[1611]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 00:03:52.650104 systemd[1611]: Reached target sockets.target - Sockets. May 15 00:03:52.650167 systemd[1611]: Reached target basic.target - Basic System. May 15 00:03:52.650212 systemd[1611]: Reached target default.target - Main User Target. May 15 00:03:52.650250 systemd[1611]: Startup finished in 180ms. May 15 00:03:52.651272 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 00:03:52.663325 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 00:03:52.742438 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:45606.service - OpenSSH per-connection server daemon (10.0.0.1:45606). May 15 00:03:52.779182 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 45606 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:03:52.781018 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:52.785465 systemd-logind[1495]: New session 2 of user core. May 15 00:03:52.806277 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 15 00:03:52.861994 sshd[1624]: Connection closed by 10.0.0.1 port 45606 May 15 00:03:52.862416 sshd-session[1622]: pam_unix(sshd:session): session closed for user core May 15 00:03:52.873427 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:45606.service: Deactivated successfully. May 15 00:03:52.875845 systemd[1]: session-2.scope: Deactivated successfully. May 15 00:03:52.877990 systemd-logind[1495]: Session 2 logged out. Waiting for processes to exit. May 15 00:03:52.896568 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:45618.service - OpenSSH per-connection server daemon (10.0.0.1:45618). May 15 00:03:52.898096 systemd-logind[1495]: Removed session 2. May 15 00:03:52.935683 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 45618 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:03:52.937513 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:52.942438 systemd-logind[1495]: New session 3 of user core. May 15 00:03:52.952308 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 00:03:53.004376 sshd[1632]: Connection closed by 10.0.0.1 port 45618 May 15 00:03:53.004730 sshd-session[1629]: pam_unix(sshd:session): session closed for user core May 15 00:03:53.021164 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:45618.service: Deactivated successfully. May 15 00:03:53.023200 systemd[1]: session-3.scope: Deactivated successfully. May 15 00:03:53.024999 systemd-logind[1495]: Session 3 logged out. Waiting for processes to exit. May 15 00:03:53.040564 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:45620.service - OpenSSH per-connection server daemon (10.0.0.1:45620). May 15 00:03:53.041926 systemd-logind[1495]: Removed session 3. 
May 15 00:03:53.077544 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 45620 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:03:53.079137 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:53.083667 systemd-logind[1495]: New session 4 of user core. May 15 00:03:53.093256 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 00:03:53.147696 sshd[1640]: Connection closed by 10.0.0.1 port 45620 May 15 00:03:53.147975 sshd-session[1637]: pam_unix(sshd:session): session closed for user core May 15 00:03:53.163790 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:45620.service: Deactivated successfully. May 15 00:03:53.165730 systemd[1]: session-4.scope: Deactivated successfully. May 15 00:03:53.167547 systemd-logind[1495]: Session 4 logged out. Waiting for processes to exit. May 15 00:03:53.169366 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:45622.service - OpenSSH per-connection server daemon (10.0.0.1:45622). May 15 00:03:53.170204 systemd-logind[1495]: Removed session 4. May 15 00:03:53.210954 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 45622 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:03:53.212597 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:53.217160 systemd-logind[1495]: New session 5 of user core. May 15 00:03:53.233276 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 15 00:03:53.296177 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 00:03:53.296608 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:03:53.312500 sudo[1649]: pam_unix(sudo:session): session closed for user root May 15 00:03:53.314311 sshd[1648]: Connection closed by 10.0.0.1 port 45622 May 15 00:03:53.315005 sshd-session[1645]: pam_unix(sshd:session): session closed for user core May 15 00:03:53.334872 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:45622.service: Deactivated successfully. May 15 00:03:53.337044 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:03:53.338208 systemd-logind[1495]: Session 5 logged out. Waiting for processes to exit. May 15 00:03:53.350604 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:38984.service - OpenSSH per-connection server daemon (10.0.0.1:38984). May 15 00:03:53.351968 systemd-logind[1495]: Removed session 5. May 15 00:03:53.393186 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 38984 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:03:53.395014 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:53.400075 systemd-logind[1495]: New session 6 of user core. May 15 00:03:53.409292 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 15 00:03:53.639840 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 00:03:53.640390 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:03:53.646504 sudo[1659]: pam_unix(sudo:session): session closed for user root May 15 00:03:53.654828 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 00:03:53.655208 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:03:53.678579 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 00:03:53.717258 augenrules[1681]: No rules May 15 00:03:53.719364 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:03:53.719679 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 00:03:53.720989 sudo[1658]: pam_unix(sudo:session): session closed for user root May 15 00:03:53.722626 sshd[1657]: Connection closed by 10.0.0.1 port 38984 May 15 00:03:53.723037 sshd-session[1654]: pam_unix(sshd:session): session closed for user core May 15 00:03:53.736605 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:38984.service: Deactivated successfully. May 15 00:03:53.738556 systemd[1]: session-6.scope: Deactivated successfully. May 15 00:03:53.740262 systemd-logind[1495]: Session 6 logged out. Waiting for processes to exit. May 15 00:03:53.745414 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:38988.service - OpenSSH per-connection server daemon (10.0.0.1:38988). May 15 00:03:53.746423 systemd-logind[1495]: Removed session 6. May 15 00:03:53.783513 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 38988 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:03:53.785599 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:03:53.791449 systemd-logind[1495]: New session 7 of user core. 
May 15 00:03:53.801286 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 00:03:53.857925 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 00:03:53.858409 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:03:54.453522 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 00:03:54.453623 (dockerd)[1712]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 00:03:55.746062 dockerd[1712]: time="2025-05-15T00:03:55.745961099Z" level=info msg="Starting up" May 15 00:03:56.805063 dockerd[1712]: time="2025-05-15T00:03:56.804753448Z" level=info msg="Loading containers: start." May 15 00:03:57.149139 kernel: Initializing XFRM netlink socket May 15 00:03:57.248466 systemd-networkd[1429]: docker0: Link UP May 15 00:03:57.305709 dockerd[1712]: time="2025-05-15T00:03:57.305226926Z" level=info msg="Loading containers: done." May 15 00:03:57.324542 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2021488859-merged.mount: Deactivated successfully. 
May 15 00:03:57.379365 dockerd[1712]: time="2025-05-15T00:03:57.379281154Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 00:03:57.379572 dockerd[1712]: time="2025-05-15T00:03:57.379438230Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 15 00:03:57.379678 dockerd[1712]: time="2025-05-15T00:03:57.379644312Z" level=info msg="Daemon has completed initialization" May 15 00:03:57.680835 dockerd[1712]: time="2025-05-15T00:03:57.680599077Z" level=info msg="API listen on /run/docker.sock" May 15 00:03:57.681001 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 00:03:58.545829 containerd[1505]: time="2025-05-15T00:03:58.545764880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 00:03:59.279967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3079437395.mount: Deactivated successfully. 
May 15 00:04:01.206242 containerd[1505]: time="2025-05-15T00:04:01.206125006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:01.207527 containerd[1505]: time="2025-05-15T00:04:01.207431342Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 15 00:04:01.212132 containerd[1505]: time="2025-05-15T00:04:01.212044296Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:01.230148 containerd[1505]: time="2025-05-15T00:04:01.228044057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:01.232725 containerd[1505]: time="2025-05-15T00:04:01.232519953Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.68670342s" May 15 00:04:01.232725 containerd[1505]: time="2025-05-15T00:04:01.232596973Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 15 00:04:01.233539 containerd[1505]: time="2025-05-15T00:04:01.233483867Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 00:04:01.759756 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 15 00:04:01.773368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:04:02.062651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:04:02.069279 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:04:02.958447 kubelet[1969]: E0515 00:04:02.958378 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:04:02.966812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:04:02.967363 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:04:02.967827 systemd[1]: kubelet.service: Consumed 361ms CPU time, 106.1M memory peak. 
May 15 00:04:06.347272 containerd[1505]: time="2025-05-15T00:04:06.347195130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:06.352773 containerd[1505]: time="2025-05-15T00:04:06.352703308Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 15 00:04:06.360799 containerd[1505]: time="2025-05-15T00:04:06.360707920Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:06.367524 containerd[1505]: time="2025-05-15T00:04:06.367374273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:06.368883 containerd[1505]: time="2025-05-15T00:04:06.368812277Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 5.135270141s" May 15 00:04:06.368883 containerd[1505]: time="2025-05-15T00:04:06.368873435Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 15 00:04:06.369653 containerd[1505]: time="2025-05-15T00:04:06.369608793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 00:04:08.517109 containerd[1505]: time="2025-05-15T00:04:08.516983753Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:08.625330 containerd[1505]: time="2025-05-15T00:04:08.625181641Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 15 00:04:08.683167 containerd[1505]: time="2025-05-15T00:04:08.683079670Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:08.726536 containerd[1505]: time="2025-05-15T00:04:08.726447068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:08.728295 containerd[1505]: time="2025-05-15T00:04:08.728220427Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.358566s" May 15 00:04:08.728295 containerd[1505]: time="2025-05-15T00:04:08.728285196Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 15 00:04:08.728962 containerd[1505]: time="2025-05-15T00:04:08.728884067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 00:04:11.137370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532584999.mount: Deactivated successfully. 
May 15 00:04:12.211324 containerd[1505]: time="2025-05-15T00:04:12.211208951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:12.220310 containerd[1505]: time="2025-05-15T00:04:12.220165716Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 15 00:04:12.231955 containerd[1505]: time="2025-05-15T00:04:12.231809793Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:12.240273 containerd[1505]: time="2025-05-15T00:04:12.240081496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:12.241421 containerd[1505]: time="2025-05-15T00:04:12.241295818Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 3.512325326s" May 15 00:04:12.241421 containerd[1505]: time="2025-05-15T00:04:12.241411402Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 15 00:04:12.242602 containerd[1505]: time="2025-05-15T00:04:12.242381369Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 00:04:13.217793 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 00:04:13.235485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 15 00:04:13.414431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:04:13.419934 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:04:13.578137 kubelet[2001]: E0515 00:04:13.577844 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:04:13.582592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:04:13.582851 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:04:13.583348 systemd[1]: kubelet.service: Consumed 255ms CPU time, 106.3M memory peak. May 15 00:04:14.573981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229503026.mount: Deactivated successfully. 
May 15 00:04:16.435472 containerd[1505]: time="2025-05-15T00:04:16.435374179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:16.438449 containerd[1505]: time="2025-05-15T00:04:16.438383987Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 15 00:04:16.440121 containerd[1505]: time="2025-05-15T00:04:16.440021800Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:16.445216 containerd[1505]: time="2025-05-15T00:04:16.445130179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:04:16.446662 containerd[1505]: time="2025-05-15T00:04:16.446608536Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.204149892s" May 15 00:04:16.446708 containerd[1505]: time="2025-05-15T00:04:16.446660257Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 00:04:16.447317 containerd[1505]: time="2025-05-15T00:04:16.447271165Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 00:04:17.009224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3832190277.mount: Deactivated successfully. 
May 15 00:04:17.020588 containerd[1505]: time="2025-05-15T00:04:17.020505770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:04:17.022922 containerd[1505]: time="2025-05-15T00:04:17.022843143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 15 00:04:17.024411 containerd[1505]: time="2025-05-15T00:04:17.024319951Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:04:17.027462 containerd[1505]: time="2025-05-15T00:04:17.027393793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:04:17.028007 containerd[1505]: time="2025-05-15T00:04:17.027958472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 580.653127ms"
May 15 00:04:17.028007 containerd[1505]: time="2025-05-15T00:04:17.027988555Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 15 00:04:17.029135 containerd[1505]: time="2025-05-15T00:04:17.029080207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 15 00:04:19.106932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673974484.mount: Deactivated successfully.
May 15 00:04:23.833453 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 15 00:04:23.846287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:04:24.013415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:04:24.018971 (kubelet)[2082]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:04:25.028800 kubelet[2082]: E0515 00:04:25.028640 2082 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:04:25.034364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:04:25.034672 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:04:25.035233 systemd[1]: kubelet.service: Consumed 236ms CPU time, 106.4M memory peak.
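The gap between one failure and the next scheduled restart can be read straight off the journal timestamps. A small sketch using the figures from the log (the unit file itself is not shown, so the `RestartSec=10` reading is an assumption):

```python
from datetime import datetime

# Timestamps copied from the journal above (same day, date omitted):
# kubelet.service failed at 00:04:13.582851, restart scheduled at 00:04:23.833453.
failed    = datetime.strptime("00:04:13.582851", "%H:%M:%S.%f")
restarted = datetime.strptime("00:04:23.833453", "%H:%M:%S.%f")

gap = (restarted - failed).total_seconds()
print(f"restart gap: {gap:.1f}s")  # ~10.3s, consistent with a RestartSec=10 unit
```

The counter "restart counter is at 3" above means this is already the third automatic restart of the loop.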
May 15 00:04:28.135384 containerd[1505]: time="2025-05-15T00:04:28.135309912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:04:28.136265 containerd[1505]: time="2025-05-15T00:04:28.136186639Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 15 00:04:28.139573 containerd[1505]: time="2025-05-15T00:04:28.139531836Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:04:28.142809 containerd[1505]: time="2025-05-15T00:04:28.142757312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:04:28.144010 containerd[1505]: time="2025-05-15T00:04:28.143982184Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 11.114845453s"
May 15 00:04:28.144010 containerd[1505]: time="2025-05-15T00:04:28.144009849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 15 00:04:30.380742 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:04:30.380975 systemd[1]: kubelet.service: Consumed 236ms CPU time, 106.4M memory peak.
May 15 00:04:30.397684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:04:30.431121 systemd[1]: Reload requested from client PID 2168 ('systemctl') (unit session-7.scope)...
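Each "Pulled image … size … in …" line above gives enough data to estimate registry throughput. A sketch over the three pulls logged in this section (sizes and durations copied verbatim from the containerd entries):

```python
# (size in bytes, wall-clock duration in seconds) from the PullImage log lines.
pulls = {
    "registry.k8s.io/coredns/coredns:v1.11.3": (18_562_039, 4.204149892),
    "registry.k8s.io/pause:3.10":              (320_368,    0.580653127),
    "registry.k8s.io/etcd:3.5.16-0":           (57_680_541, 11.114845453),
}

for image, (size_bytes, seconds) in pulls.items():
    # Effective throughput; small images like pause:3.10 are dominated by
    # per-request latency, so their rate understates link bandwidth.
    print(f"{image}: {size_bytes / seconds / 1e6:.2f} MB/s")
```

The etcd pull, the largest of the three, works out to roughly 5 MB/s, which is why it alone accounts for ~11 of the seconds between restarts.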
May 15 00:04:30.431135 systemd[1]: Reloading...
May 15 00:04:30.511174 zram_generator::config[2209]: No configuration found.
May 15 00:04:30.986456 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:04:31.111971 systemd[1]: Reloading finished in 680 ms.
May 15 00:04:31.166477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:04:31.171393 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:04:31.172415 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:04:31.172797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:04:31.172860 systemd[1]: kubelet.service: Consumed 159ms CPU time, 91.7M memory peak.
May 15 00:04:31.175019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:04:31.373888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:04:31.380351 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 00:04:31.448063 kubelet[2262]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:04:31.448063 kubelet[2262]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 00:04:31.448063 kubelet[2262]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
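The reload above also flags `docker.socket` for listening below the legacy `/var/run/` directory; systemd rewrites the path on the fly but asks for the unit file to be fixed. One way to silence the warning is a drop-in that resets the listener to `/run` directly (the drop-in path below is hypothetical; only the socket path comes from the log):

```ini
# /etc/systemd/system/docker.socket.d/10-run-path.conf (hypothetical drop-in)
[Socket]
# Empty assignment clears the inherited ListenStream= before re-adding it.
ListenStream=
ListenStream=/run/docker.sock
```

On Flatcar, `/var/run` is a symlink to `/run`, so the change is cosmetic at runtime but keeps systemd quiet.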
May 15 00:04:31.448648 kubelet[2262]: I0515 00:04:31.448223 2262 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:04:31.807518 kubelet[2262]: I0515 00:04:31.807369 2262 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 15 00:04:31.807518 kubelet[2262]: I0515 00:04:31.807413 2262 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:04:31.807710 kubelet[2262]: I0515 00:04:31.807684 2262 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 00:04:31.857872 kubelet[2262]: E0515 00:04:31.857735 2262 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
May 15 00:04:31.864607 kubelet[2262]: I0515 00:04:31.864056 2262 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:04:31.878598 kubelet[2262]: E0515 00:04:31.878500 2262 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 00:04:31.878598 kubelet[2262]: I0515 00:04:31.878570 2262 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 00:04:31.886809 kubelet[2262]: I0515 00:04:31.886686 2262 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 00:04:31.887245 kubelet[2262]: I0515 00:04:31.887037 2262 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 00:04:31.887342 kubelet[2262]: I0515 00:04:31.887115 2262 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 00:04:31.887590 kubelet[2262]: I0515 00:04:31.887348 2262 topology_manager.go:138] "Creating topology manager with none policy"
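The `nodeConfig` dump above is JSON and can be decoded directly; its `HardEvictionThresholds` array carries the kubelet's default eviction limits. A sketch over an excerpt of that array (copied from the log; each threshold is either an absolute quantity or a percentage):

```python
import json

# Excerpt of the HardEvictionThresholds field from the nodeConfig line above.
thresholds = json.loads("""
[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]
""")

for t in thresholds:
    v = t["Value"]
    # A threshold is expressed either as an absolute Quantity or a Percentage.
    limit = v["Quantity"] if v["Quantity"] else f'{v["Percentage"]:.0%}'
    print(f'evict when {t["Signal"]} {t["Operator"]} {limit}')
```

These are the stock defaults (memory.available < 100Mi etc.), confirming no custom eviction config was supplied on this node.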
May 15 00:04:31.887590 kubelet[2262]: I0515 00:04:31.887361 2262 container_manager_linux.go:304] "Creating device plugin manager"
May 15 00:04:31.888846 kubelet[2262]: I0515 00:04:31.888795 2262 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:04:31.895787 kubelet[2262]: I0515 00:04:31.895711 2262 kubelet.go:446] "Attempting to sync node with API server"
May 15 00:04:31.895787 kubelet[2262]: I0515 00:04:31.895765 2262 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 00:04:31.895787 kubelet[2262]: I0515 00:04:31.895798 2262 kubelet.go:352] "Adding apiserver pod source"
May 15 00:04:31.895787 kubelet[2262]: I0515 00:04:31.895813 2262 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 00:04:31.898432 kubelet[2262]: W0515 00:04:31.898314 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
May 15 00:04:31.898432 kubelet[2262]: E0515 00:04:31.898401 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
May 15 00:04:31.899570 kubelet[2262]: W0515 00:04:31.899412 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
May 15 00:04:31.899570 kubelet[2262]: E0515 00:04:31.899501 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
May 15 00:04:31.901649 kubelet[2262]: I0515 00:04:31.901601 2262 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 00:04:31.902203 kubelet[2262]: I0515 00:04:31.902170 2262 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 00:04:31.902272 kubelet[2262]: W0515 00:04:31.902255 2262 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 00:04:31.905611 kubelet[2262]: I0515 00:04:31.905461 2262 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 00:04:31.905611 kubelet[2262]: I0515 00:04:31.905522 2262 server.go:1287] "Started kubelet"
May 15 00:04:31.906384 kubelet[2262]: I0515 00:04:31.906320 2262 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 00:04:31.906951 kubelet[2262]: I0515 00:04:31.906597 2262 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 00:04:31.906951 kubelet[2262]: I0515 00:04:31.906689 2262 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 00:04:31.912584 kubelet[2262]: I0515 00:04:31.912529 2262 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 00:04:31.913444 kubelet[2262]: I0515 00:04:31.913417 2262 server.go:490] "Adding debug handlers to kubelet server"
May 15 00:04:31.913972 kubelet[2262]: I0515 00:04:31.913918 2262 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 00:04:31.917515 kubelet[2262]: I0515 00:04:31.916532 2262 factory.go:221] Registration of the systemd container factory successfully
May 15 00:04:31.917515 kubelet[2262]: I0515 00:04:31.916678 2262 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 00:04:31.920573 kubelet[2262]: E0515 00:04:31.920338 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:04:31.920573 kubelet[2262]: I0515 00:04:31.920383 2262 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 00:04:31.920974 kubelet[2262]: I0515 00:04:31.920919 2262 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 00:04:31.921027 kubelet[2262]: I0515 00:04:31.921004 2262 reconciler.go:26] "Reconciler: start to sync state"
May 15 00:04:31.921786 kubelet[2262]: E0515 00:04:31.916026 2262 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8a821e8583ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:04:31.905489866 +0000 UTC m=+0.515566812,LastTimestamp:2025-05-15 00:04:31.905489866 +0000 UTC m=+0.515566812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 00:04:31.921786 kubelet[2262]: E0515 00:04:31.921420 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms"
May 15 00:04:31.921786 kubelet[2262]: W0515 00:04:31.921489 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
May 15 00:04:31.921786 kubelet[2262]: E0515 00:04:31.921529 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
May 15 00:04:31.923957 kubelet[2262]: I0515 00:04:31.923923 2262 factory.go:221] Registration of the containerd container factory successfully
May 15 00:04:31.924301 kubelet[2262]: E0515 00:04:31.924263 2262 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 00:04:31.937868 kubelet[2262]: I0515 00:04:31.937819 2262 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 00:04:31.937868 kubelet[2262]: I0515 00:04:31.937840 2262 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 00:04:31.937868 kubelet[2262]: I0515 00:04:31.937858 2262 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:04:32.025922 kubelet[2262]: E0515 00:04:32.025840 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:04:32.123254 kubelet[2262]: E0515 00:04:32.123014 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms"
May 15 00:04:32.126121 kubelet[2262]: E0515 00:04:32.126037 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:04:32.227533 kubelet[2262]: E0515 00:04:32.227297 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:04:32.312480 kubelet[2262]: I0515 00:04:32.312062 2262 policy_none.go:49] "None policy: Start"
May 15 00:04:32.312480 kubelet[2262]: I0515 00:04:32.312124 2262 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 00:04:32.312480 kubelet[2262]: I0515 00:04:32.312142 2262 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:04:32.318771 kubelet[2262]: I0515 00:04:32.318700 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 00:04:32.320594 kubelet[2262]: I0515 00:04:32.320568 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 00:04:32.320640 kubelet[2262]: I0515 00:04:32.320603 2262 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 00:04:32.320640 kubelet[2262]: I0515 00:04:32.320631 2262 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 00:04:32.320713 kubelet[2262]: I0515 00:04:32.320641 2262 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 00:04:32.320757 kubelet[2262]: E0515 00:04:32.320706 2262 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:04:32.326415 kubelet[2262]: W0515 00:04:32.326112 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
May 15 00:04:32.326415 kubelet[2262]: E0515 00:04:32.326208 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
May 15 00:04:32.327445 kubelet[2262]: E0515 00:04:32.327407 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:04:32.327665 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 00:04:32.346661 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 00:04:32.352399 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
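The repeated "Failed to ensure lease exists, will retry" errors are expected while the API server's static pod is still coming up, and the retry interval doubles on each failure: the log shows interval="200ms", then "400ms", "800ms", and "1.6s". A minimal sketch of that doubling pattern (the kubelet actually uses client-go's backoff machinery, and the cap shown here is an assumption; only the base and factor match the logged sequence):

```python
def lease_backoff(base=0.2, factor=2.0, cap=7.0):
    # Doubling retry interval, as seen in the controller.go:145 entries above.
    # The cap is an assumption; the log only shows the first four steps.
    interval = base
    while True:
        yield min(interval, cap)
        interval *= factor

gen = lease_backoff()
print([next(gen) for _ in range(4)])  # [0.2, 0.4, 0.8, 1.6]
```

Once the kube-apiserver sandbox starts answering on 10.0.0.106:6443, the lease request succeeds and the interval resets.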
May 15 00:04:32.361698 kubelet[2262]: I0515 00:04:32.361638 2262 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 00:04:32.362105 kubelet[2262]: I0515 00:04:32.361941 2262 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 00:04:32.362105 kubelet[2262]: I0515 00:04:32.361965 2262 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 00:04:32.363139 kubelet[2262]: I0515 00:04:32.362935 2262 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 00:04:32.363565 kubelet[2262]: E0515 00:04:32.363385 2262 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 00:04:32.363565 kubelet[2262]: E0515 00:04:32.363447 2262 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 00:04:32.429956 kubelet[2262]: I0515 00:04:32.429800 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae89a92d125af607a9905f1dc314a0a2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae89a92d125af607a9905f1dc314a0a2\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:04:32.429956 kubelet[2262]: I0515 00:04:32.429846 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae89a92d125af607a9905f1dc314a0a2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae89a92d125af607a9905f1dc314a0a2\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:04:32.429956 kubelet[2262]: I0515 00:04:32.429874 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae89a92d125af607a9905f1dc314a0a2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae89a92d125af607a9905f1dc314a0a2\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:04:32.434356 systemd[1]: Created slice kubepods-burstable-podae89a92d125af607a9905f1dc314a0a2.slice - libcontainer container kubepods-burstable-podae89a92d125af607a9905f1dc314a0a2.slice.
May 15 00:04:32.447937 kubelet[2262]: E0515 00:04:32.447872 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:04:32.452841 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice.
May 15 00:04:32.464181 kubelet[2262]: E0515 00:04:32.464142 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:04:32.465244 kubelet[2262]: I0515 00:04:32.465203 2262 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:04:32.465784 kubelet[2262]: E0515 00:04:32.465743 2262 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost"
May 15 00:04:32.467762 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice.
May 15 00:04:32.470053 kubelet[2262]: E0515 00:04:32.470004 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:04:32.524116 kubelet[2262]: E0515 00:04:32.523998 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms"
May 15 00:04:32.530214 kubelet[2262]: I0515 00:04:32.530063 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:32.530365 kubelet[2262]: I0515 00:04:32.530248 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:32.530365 kubelet[2262]: I0515 00:04:32.530292 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:32.530365 kubelet[2262]: I0515 00:04:32.530313 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:32.530365 kubelet[2262]: I0515 00:04:32.530334 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:32.530365 kubelet[2262]: I0515 00:04:32.530356 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 15 00:04:32.668272 kubelet[2262]: I0515 00:04:32.667572 2262 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:04:32.668970 kubelet[2262]: E0515 00:04:32.668910 2262 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost"
May 15 00:04:32.750161 containerd[1505]: time="2025-05-15T00:04:32.749894192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae89a92d125af607a9905f1dc314a0a2,Namespace:kube-system,Attempt:0,}"
May 15 00:04:32.766333 containerd[1505]: time="2025-05-15T00:04:32.766270950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 15 00:04:32.772317 containerd[1505]: time="2025-05-15T00:04:32.772268456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 15 00:04:33.071590 kubelet[2262]: I0515 00:04:33.071421 2262 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:04:33.072079 kubelet[2262]: E0515 00:04:33.071989 2262 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost"
May 15 00:04:33.134917 kubelet[2262]: W0515 00:04:33.134806 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
May 15 00:04:33.134917 kubelet[2262]: E0515 00:04:33.134913 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
May 15 00:04:33.268507 kubelet[2262]: W0515 00:04:33.268409 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
May 15 00:04:33.268507 kubelet[2262]: E0515 00:04:33.268471 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
May 15 00:04:33.325034 kubelet[2262]: E0515 00:04:33.324865 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="1.6s"
May 15 00:04:33.380463 kubelet[2262]: W0515 00:04:33.380320 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
May 15 00:04:33.380463 kubelet[2262]: E0515 00:04:33.380468 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
May 15 00:04:33.533069 kubelet[2262]: W0515 00:04:33.532942 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
May 15 00:04:33.533069 kubelet[2262]: E0515 00:04:33.533030 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
May 15 00:04:33.645884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116728731.mount: Deactivated successfully.
May 15 00:04:33.661227 containerd[1505]: time="2025-05-15T00:04:33.661141587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:04:33.667531 containerd[1505]: time="2025-05-15T00:04:33.667425860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 15 00:04:33.674438 containerd[1505]: time="2025-05-15T00:04:33.674294781Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:04:33.676030 containerd[1505]: time="2025-05-15T00:04:33.675956187Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:04:33.677678 containerd[1505]: time="2025-05-15T00:04:33.677454918Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:04:33.679030 containerd[1505]: time="2025-05-15T00:04:33.678914121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:04:33.680121 containerd[1505]: time="2025-05-15T00:04:33.680044298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:04:33.681597 containerd[1505]: time="2025-05-15T00:04:33.681544543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:04:33.685306 
containerd[1505]: time="2025-05-15T00:04:33.685039900Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 934.992022ms" May 15 00:04:33.688217 containerd[1505]: time="2025-05-15T00:04:33.688158163Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 915.785833ms" May 15 00:04:33.689702 containerd[1505]: time="2025-05-15T00:04:33.689635477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 923.225845ms" May 15 00:04:33.874511 kubelet[2262]: I0515 00:04:33.874443 2262 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 00:04:33.874955 kubelet[2262]: E0515 00:04:33.874829 2262 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" May 15 00:04:33.918036 kubelet[2262]: E0515 00:04:33.917840 2262 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 15 00:04:34.006803 containerd[1505]: time="2025-05-15T00:04:34.005217644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:04:34.006803 containerd[1505]: time="2025-05-15T00:04:34.006632472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:04:34.006803 containerd[1505]: time="2025-05-15T00:04:34.006650973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:04:34.006803 containerd[1505]: time="2025-05-15T00:04:34.006741865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:04:34.008262 containerd[1505]: time="2025-05-15T00:04:34.008114309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:04:34.008262 containerd[1505]: time="2025-05-15T00:04:34.008175585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:04:34.008262 containerd[1505]: time="2025-05-15T00:04:34.008186730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:04:34.009178 containerd[1505]: time="2025-05-15T00:04:34.009080232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:04:34.012248 containerd[1505]: time="2025-05-15T00:04:34.011803403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:04:34.012248 containerd[1505]: time="2025-05-15T00:04:34.011900148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:04:34.012248 containerd[1505]: time="2025-05-15T00:04:34.011923550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:04:34.012248 containerd[1505]: time="2025-05-15T00:04:34.012057988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:04:34.059430 systemd[1]: Started cri-containerd-a739565bae9904997cd33807dee46d4e1f19ede139ae0608b70bb6ffece25fe6.scope - libcontainer container a739565bae9904997cd33807dee46d4e1f19ede139ae0608b70bb6ffece25fe6. May 15 00:04:34.079225 systemd[1]: Started cri-containerd-2bb4c6edb989425586dfdf68b109c5bfb3f881eca29fc125ba96353eac524aa2.scope - libcontainer container 2bb4c6edb989425586dfdf68b109c5bfb3f881eca29fc125ba96353eac524aa2. May 15 00:04:34.085281 systemd[1]: Started cri-containerd-daa6bd68bc5f3352ef8ed2e491ec73bb52df86593d75a3d9295c7cc422840937.scope - libcontainer container daa6bd68bc5f3352ef8ed2e491ec73bb52df86593d75a3d9295c7cc422840937. 
May 15 00:04:34.184289 containerd[1505]: time="2025-05-15T00:04:34.180574510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"a739565bae9904997cd33807dee46d4e1f19ede139ae0608b70bb6ffece25fe6\"" May 15 00:04:34.187423 containerd[1505]: time="2025-05-15T00:04:34.187373953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"daa6bd68bc5f3352ef8ed2e491ec73bb52df86593d75a3d9295c7cc422840937\"" May 15 00:04:34.189272 containerd[1505]: time="2025-05-15T00:04:34.189241356Z" level=info msg="CreateContainer within sandbox \"a739565bae9904997cd33807dee46d4e1f19ede139ae0608b70bb6ffece25fe6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:04:34.190305 containerd[1505]: time="2025-05-15T00:04:34.190277614Z" level=info msg="CreateContainer within sandbox \"daa6bd68bc5f3352ef8ed2e491ec73bb52df86593d75a3d9295c7cc422840937\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:04:34.197999 containerd[1505]: time="2025-05-15T00:04:34.197868984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae89a92d125af607a9905f1dc314a0a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bb4c6edb989425586dfdf68b109c5bfb3f881eca29fc125ba96353eac524aa2\"" May 15 00:04:34.200686 containerd[1505]: time="2025-05-15T00:04:34.200639290Z" level=info msg="CreateContainer within sandbox \"2bb4c6edb989425586dfdf68b109c5bfb3f881eca29fc125ba96353eac524aa2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:04:34.632789 containerd[1505]: time="2025-05-15T00:04:34.632683229Z" level=info msg="CreateContainer within sandbox \"2bb4c6edb989425586dfdf68b109c5bfb3f881eca29fc125ba96353eac524aa2\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"14dd83dc948119f16581984e37f02d7d8eec143f6ef75b11940af0354b1d09de\"" May 15 00:04:34.633545 containerd[1505]: time="2025-05-15T00:04:34.633511647Z" level=info msg="StartContainer for \"14dd83dc948119f16581984e37f02d7d8eec143f6ef75b11940af0354b1d09de\"" May 15 00:04:34.634684 containerd[1505]: time="2025-05-15T00:04:34.634627703Z" level=info msg="CreateContainer within sandbox \"a739565bae9904997cd33807dee46d4e1f19ede139ae0608b70bb6ffece25fe6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"df29eabd912f4a138c95f769bec119abd403723b7aaa637b1b5f6cdbf0be0847\"" May 15 00:04:34.635125 containerd[1505]: time="2025-05-15T00:04:34.635069833Z" level=info msg="StartContainer for \"df29eabd912f4a138c95f769bec119abd403723b7aaa637b1b5f6cdbf0be0847\"" May 15 00:04:34.641152 containerd[1505]: time="2025-05-15T00:04:34.641083533Z" level=info msg="CreateContainer within sandbox \"daa6bd68bc5f3352ef8ed2e491ec73bb52df86593d75a3d9295c7cc422840937\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9418810d491ea6ab7b83327780856ca74baa1944ab8df4d7898d0af84de032c1\"" May 15 00:04:34.644427 containerd[1505]: time="2025-05-15T00:04:34.644382031Z" level=info msg="StartContainer for \"9418810d491ea6ab7b83327780856ca74baa1944ab8df4d7898d0af84de032c1\"" May 15 00:04:34.687383 systemd[1]: Started cri-containerd-14dd83dc948119f16581984e37f02d7d8eec143f6ef75b11940af0354b1d09de.scope - libcontainer container 14dd83dc948119f16581984e37f02d7d8eec143f6ef75b11940af0354b1d09de. May 15 00:04:34.689680 systemd[1]: Started cri-containerd-df29eabd912f4a138c95f769bec119abd403723b7aaa637b1b5f6cdbf0be0847.scope - libcontainer container df29eabd912f4a138c95f769bec119abd403723b7aaa637b1b5f6cdbf0be0847. 
May 15 00:04:34.694749 systemd[1]: Started cri-containerd-9418810d491ea6ab7b83327780856ca74baa1944ab8df4d7898d0af84de032c1.scope - libcontainer container 9418810d491ea6ab7b83327780856ca74baa1944ab8df4d7898d0af84de032c1. May 15 00:04:34.745959 update_engine[1497]: I20250515 00:04:34.745887 1497 update_attempter.cc:509] Updating boot flags... May 15 00:04:34.770126 containerd[1505]: time="2025-05-15T00:04:34.769639270Z" level=info msg="StartContainer for \"9418810d491ea6ab7b83327780856ca74baa1944ab8df4d7898d0af84de032c1\" returns successfully" May 15 00:04:34.770126 containerd[1505]: time="2025-05-15T00:04:34.769789474Z" level=info msg="StartContainer for \"14dd83dc948119f16581984e37f02d7d8eec143f6ef75b11940af0354b1d09de\" returns successfully" May 15 00:04:34.790586 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2532) May 15 00:04:34.798623 containerd[1505]: time="2025-05-15T00:04:34.798544551Z" level=info msg="StartContainer for \"df29eabd912f4a138c95f769bec119abd403723b7aaa637b1b5f6cdbf0be0847\" returns successfully" May 15 00:04:34.961138 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2537) May 15 00:04:35.336177 kubelet[2262]: E0515 00:04:35.335676 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:04:35.338574 kubelet[2262]: E0515 00:04:35.338523 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:04:35.339780 kubelet[2262]: E0515 00:04:35.339739 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:04:35.477907 kubelet[2262]: I0515 00:04:35.477211 2262 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 
15 00:04:35.640843 systemd[1]: run-containerd-runc-k8s.io-14dd83dc948119f16581984e37f02d7d8eec143f6ef75b11940af0354b1d09de-runc.Mu45fO.mount: Deactivated successfully. May 15 00:04:36.342502 kubelet[2262]: E0515 00:04:36.342428 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:04:36.343055 kubelet[2262]: E0515 00:04:36.342765 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:04:36.343607 kubelet[2262]: E0515 00:04:36.343572 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:04:36.756658 kubelet[2262]: E0515 00:04:36.756456 2262 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 00:04:36.910983 kubelet[2262]: I0515 00:04:36.910904 2262 apiserver.go:52] "Watching apiserver" May 15 00:04:36.914035 kubelet[2262]: I0515 00:04:36.913984 2262 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 15 00:04:36.921554 kubelet[2262]: I0515 00:04:36.921474 2262 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 00:04:36.921709 kubelet[2262]: I0515 00:04:36.921579 2262 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:04:36.926602 kubelet[2262]: E0515 00:04:36.926550 2262 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 00:04:36.926718 kubelet[2262]: I0515 00:04:36.926611 2262 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" May 15 00:04:36.928952 kubelet[2262]: E0515 00:04:36.928890 2262 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 15 00:04:36.928952 kubelet[2262]: I0515 00:04:36.928941 2262 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 00:04:36.930622 kubelet[2262]: E0515 00:04:36.930568 2262 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 00:04:37.342440 kubelet[2262]: I0515 00:04:37.342402 2262 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 00:04:37.342611 kubelet[2262]: I0515 00:04:37.342504 2262 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 00:04:37.344397 kubelet[2262]: E0515 00:04:37.344373 2262 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 00:04:37.344473 kubelet[2262]: E0515 00:04:37.344372 2262 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 00:04:39.233789 kubelet[2262]: I0515 00:04:39.233732 2262 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 00:04:39.529920 systemd[1]: Reload requested from client PID 2554 ('systemctl') (unit session-7.scope)... May 15 00:04:39.529938 systemd[1]: Reloading... 
May 15 00:04:39.634155 zram_generator::config[2601]: No configuration found. May 15 00:04:39.779634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:04:39.914998 systemd[1]: Reloading finished in 384 ms. May 15 00:04:39.947995 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:04:39.969930 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:04:39.970287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:04:39.970352 systemd[1]: kubelet.service: Consumed 1.460s CPU time, 128.7M memory peak. May 15 00:04:39.983690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:04:40.538637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:04:40.544316 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:04:40.639713 kubelet[2643]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:04:40.639713 kubelet[2643]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 00:04:40.639713 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:04:40.640298 kubelet[2643]: I0515 00:04:40.639789 2643 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:04:40.647011 kubelet[2643]: I0515 00:04:40.646952 2643 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 00:04:40.647011 kubelet[2643]: I0515 00:04:40.646993 2643 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:04:40.647441 kubelet[2643]: I0515 00:04:40.647413 2643 server.go:954] "Client rotation is on, will bootstrap in background" May 15 00:04:40.649083 kubelet[2643]: I0515 00:04:40.649054 2643 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 00:04:40.652226 kubelet[2643]: I0515 00:04:40.652004 2643 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:04:40.658616 kubelet[2643]: E0515 00:04:40.658561 2643 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:04:40.658616 kubelet[2643]: I0515 00:04:40.658602 2643 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:04:40.663621 kubelet[2643]: I0515 00:04:40.663573 2643 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:04:40.663905 kubelet[2643]: I0515 00:04:40.663859 2643 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:04:40.664071 kubelet[2643]: I0515 00:04:40.663897 2643 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 00:04:40.664071 kubelet[2643]: I0515 00:04:40.664069 2643 topology_manager.go:138] "Creating topology manager with none policy" 
May 15 00:04:40.664232 kubelet[2643]: I0515 00:04:40.664078 2643 container_manager_linux.go:304] "Creating device plugin manager" May 15 00:04:40.664232 kubelet[2643]: I0515 00:04:40.664142 2643 state_mem.go:36] "Initialized new in-memory state store" May 15 00:04:40.664332 kubelet[2643]: I0515 00:04:40.664313 2643 kubelet.go:446] "Attempting to sync node with API server" May 15 00:04:40.664332 kubelet[2643]: I0515 00:04:40.664328 2643 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:04:40.664388 kubelet[2643]: I0515 00:04:40.664344 2643 kubelet.go:352] "Adding apiserver pod source" May 15 00:04:40.664388 kubelet[2643]: I0515 00:04:40.664356 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:04:40.665484 kubelet[2643]: I0515 00:04:40.665433 2643 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:04:40.665835 kubelet[2643]: I0515 00:04:40.665817 2643 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:04:40.666323 kubelet[2643]: I0515 00:04:40.666297 2643 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 00:04:40.666361 kubelet[2643]: I0515 00:04:40.666335 2643 server.go:1287] "Started kubelet" May 15 00:04:40.669648 kubelet[2643]: I0515 00:04:40.669590 2643 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:04:40.669905 kubelet[2643]: I0515 00:04:40.669876 2643 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:04:40.669946 kubelet[2643]: I0515 00:04:40.669927 2643 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:04:40.670157 kubelet[2643]: I0515 00:04:40.670134 2643 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:04:40.670287 kubelet[2643]: I0515 00:04:40.670184 2643 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:04:40.674277 kubelet[2643]: I0515 00:04:40.673891 2643 server.go:490] "Adding debug handlers to kubelet server" May 15 00:04:40.677114 kubelet[2643]: I0515 00:04:40.676769 2643 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 00:04:40.677114 kubelet[2643]: E0515 00:04:40.676911 2643 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:04:40.677248 kubelet[2643]: I0515 00:04:40.677234 2643 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:04:40.677422 kubelet[2643]: I0515 00:04:40.677410 2643 reconciler.go:26] "Reconciler: start to sync state" May 15 00:04:40.681441 kubelet[2643]: I0515 00:04:40.681422 2643 factory.go:221] Registration of the systemd container factory successfully May 15 00:04:40.682200 kubelet[2643]: I0515 00:04:40.682137 2643 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:04:40.684282 kubelet[2643]: E0515 00:04:40.684202 2643 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:04:40.684396 kubelet[2643]: I0515 00:04:40.684371 2643 factory.go:221] Registration of the containerd container factory successfully May 15 00:04:40.687855 kubelet[2643]: I0515 00:04:40.687823 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:04:40.689316 kubelet[2643]: I0515 00:04:40.689284 2643 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:04:40.689354 kubelet[2643]: I0515 00:04:40.689324 2643 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 00:04:40.689354 kubelet[2643]: I0515 00:04:40.689348 2643 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 00:04:40.689409 kubelet[2643]: I0515 00:04:40.689356 2643 kubelet.go:2388] "Starting kubelet main sync loop" May 15 00:04:40.689435 kubelet[2643]: E0515 00:04:40.689405 2643 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:04:40.719439 kubelet[2643]: I0515 00:04:40.719392 2643 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 00:04:40.719439 kubelet[2643]: I0515 00:04:40.719417 2643 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 00:04:40.719439 kubelet[2643]: I0515 00:04:40.719436 2643 state_mem.go:36] "Initialized new in-memory state store" May 15 00:04:40.719960 kubelet[2643]: I0515 00:04:40.719577 2643 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:04:40.719960 kubelet[2643]: I0515 00:04:40.719588 2643 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:04:40.719960 kubelet[2643]: I0515 00:04:40.719605 2643 policy_none.go:49] "None policy: Start" May 15 00:04:40.719960 kubelet[2643]: I0515 00:04:40.719626 2643 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 00:04:40.719960 kubelet[2643]: I0515 00:04:40.719636 2643 state_mem.go:35] "Initializing new in-memory state store" May 15 00:04:40.719960 kubelet[2643]: I0515 00:04:40.719723 2643 state_mem.go:75] "Updated machine memory state" May 15 00:04:40.726848 kubelet[2643]: I0515 00:04:40.726819 2643 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:04:40.727012 kubelet[2643]: I0515 
00:04:40.726996 2643 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 00:04:40.727058 kubelet[2643]: I0515 00:04:40.727015 2643 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 00:04:40.727267 kubelet[2643]: I0515 00:04:40.727246 2643 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 00:04:40.728290 kubelet[2643]: E0515 00:04:40.728263 2643 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 00:04:40.790398 kubelet[2643]: I0515 00:04:40.790291 2643 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 00:04:40.790398 kubelet[2643]: I0515 00:04:40.790326 2643 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 00:04:40.790525 kubelet[2643]: I0515 00:04:40.790295 2643 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:40.829375 kubelet[2643]: I0515 00:04:40.829348 2643 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:04:40.879243 kubelet[2643]: I0515 00:04:40.879175 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:40.879243 kubelet[2643]: I0515 00:04:40.879233 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:40.879449 kubelet[2643]: I0515 00:04:40.879267 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 15 00:04:40.879449 kubelet[2643]: I0515 00:04:40.879295 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:40.879449 kubelet[2643]: I0515 00:04:40.879326 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae89a92d125af607a9905f1dc314a0a2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae89a92d125af607a9905f1dc314a0a2\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:04:40.879449 kubelet[2643]: I0515 00:04:40.879349 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae89a92d125af607a9905f1dc314a0a2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae89a92d125af607a9905f1dc314a0a2\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:04:40.879449 kubelet[2643]: I0515 00:04:40.879373 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae89a92d125af607a9905f1dc314a0a2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae89a92d125af607a9905f1dc314a0a2\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:04:40.879604 kubelet[2643]: I0515 00:04:40.879444 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:40.879604 kubelet[2643]: I0515 00:04:40.879495 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:04:40.909986 kubelet[2643]: E0515 00:04:40.909841 2643 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 15 00:04:40.997893 kubelet[2643]: I0515 00:04:40.997834 2643 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
May 15 00:04:40.998138 kubelet[2643]: I0515 00:04:40.997974 2643 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 15 00:04:41.180214 sudo[2677]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 15 00:04:41.180693 sudo[2677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 15 00:04:41.665169 kubelet[2643]: I0515 00:04:41.665124 2643 apiserver.go:52] "Watching apiserver"
May 15 00:04:41.678400 kubelet[2643]: I0515 00:04:41.678343 2643 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 15 00:04:41.702126 kubelet[2643]: I0515 00:04:41.700322 2643 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 00:04:41.702126 kubelet[2643]: I0515 00:04:41.700619 2643 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 00:04:41.743318 sudo[2677]: pam_unix(sudo:session): session closed for user root
May 15 00:04:41.961570 kubelet[2643]: I0515 00:04:41.960767 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.960747723 podStartE2EDuration="2.960747723s" podCreationTimestamp="2025-05-15 00:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:04:41.96047699 +0000 UTC m=+1.408996556" watchObservedRunningTime="2025-05-15 00:04:41.960747723 +0000 UTC m=+1.409267279"
May 15 00:04:41.961570 kubelet[2643]: E0515 00:04:41.960997 2643 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 15 00:04:41.961570 kubelet[2643]: E0515 00:04:41.961346 2643 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 00:04:42.019348 kubelet[2643]: I0515 00:04:42.019232 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.019210262 podStartE2EDuration="2.019210262s" podCreationTimestamp="2025-05-15 00:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:04:42.019182613 +0000 UTC m=+1.467702169" watchObservedRunningTime="2025-05-15 00:04:42.019210262 +0000 UTC m=+1.467729828"
May 15 00:04:43.751177 sudo[1693]: pam_unix(sudo:session): session closed for user root
May 15 00:04:43.752781 sshd[1692]: Connection closed by 10.0.0.1 port 38988
May 15 00:04:43.755813 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
May 15 00:04:43.761213 systemd-logind[1495]: Session 7 logged out. Waiting for processes to exit.
May 15 00:04:43.761789 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:38988.service: Deactivated successfully.
May 15 00:04:43.764602 systemd[1]: session-7.scope: Deactivated successfully.
May 15 00:04:43.764881 systemd[1]: session-7.scope: Consumed 5.024s CPU time, 254.6M memory peak.
May 15 00:04:43.766576 systemd-logind[1495]: Removed session 7.
May 15 00:04:44.499054 kubelet[2643]: I0515 00:04:44.499009 2643 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 15 00:04:44.499659 kubelet[2643]: I0515 00:04:44.499542 2643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 00:04:44.499703 containerd[1505]: time="2025-05-15T00:04:44.499353040Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 15 00:04:44.573977 kubelet[2643]: I0515 00:04:44.573113 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.573053146 podStartE2EDuration="4.573053146s" podCreationTimestamp="2025-05-15 00:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:04:42.12818009 +0000 UTC m=+1.576699636" watchObservedRunningTime="2025-05-15 00:04:44.573053146 +0000 UTC m=+4.021572702"
May 15 00:04:44.942688 systemd[1]: Created slice kubepods-besteffort-podafd1674d_b635_4260_bc4e_4afb54e0519e.slice - libcontainer container kubepods-besteffort-podafd1674d_b635_4260_bc4e_4afb54e0519e.slice.
May 15 00:04:44.965900 systemd[1]: Created slice kubepods-burstable-pode6da0ac1_727d_4ba9_9691_bc8d9332c446.slice - libcontainer container kubepods-burstable-pode6da0ac1_727d_4ba9_9691_bc8d9332c446.slice.
May 15 00:04:45.107795 kubelet[2643]: I0515 00:04:45.107691 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-config-path\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.107795 kubelet[2643]: I0515 00:04:45.107755 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-host-proc-sys-net\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.107795 kubelet[2643]: I0515 00:04:45.107770 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mvhb\" (UniqueName: \"kubernetes.io/projected/e6da0ac1-727d-4ba9-9691-bc8d9332c446-kube-api-access-2mvhb\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.107795 kubelet[2643]: I0515 00:04:45.107786 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afd1674d-b635-4260-bc4e-4afb54e0519e-xtables-lock\") pod \"kube-proxy-rkl8l\" (UID: \"afd1674d-b635-4260-bc4e-4afb54e0519e\") " pod="kube-system/kube-proxy-rkl8l"
May 15 00:04:45.107795 kubelet[2643]: I0515 00:04:45.107801 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82bqk\" (UniqueName: \"kubernetes.io/projected/afd1674d-b635-4260-bc4e-4afb54e0519e-kube-api-access-82bqk\") pod \"kube-proxy-rkl8l\" (UID: \"afd1674d-b635-4260-bc4e-4afb54e0519e\") " pod="kube-system/kube-proxy-rkl8l"
May 15 00:04:45.108190 kubelet[2643]: I0515 00:04:45.107818 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-host-proc-sys-kernel\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.108190 kubelet[2643]: I0515 00:04:45.107836 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-lib-modules\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.108190 kubelet[2643]: I0515 00:04:45.107850 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-xtables-lock\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.108190 kubelet[2643]: I0515 00:04:45.107865 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afd1674d-b635-4260-bc4e-4afb54e0519e-lib-modules\") pod \"kube-proxy-rkl8l\" (UID: \"afd1674d-b635-4260-bc4e-4afb54e0519e\") " pod="kube-system/kube-proxy-rkl8l"
May 15 00:04:45.108190 kubelet[2643]: I0515 00:04:45.107899 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-run\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.108190 kubelet[2643]: I0515 00:04:45.107916 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-cgroup\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.108376 kubelet[2643]: I0515 00:04:45.107931 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6da0ac1-727d-4ba9-9691-bc8d9332c446-hubble-tls\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.108376 kubelet[2643]: I0515 00:04:45.107969 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-hostproc\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.108376 kubelet[2643]: I0515 00:04:45.107999 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6da0ac1-727d-4ba9-9691-bc8d9332c446-clustermesh-secrets\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.108376 kubelet[2643]: I0515 00:04:45.108027 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afd1674d-b635-4260-bc4e-4afb54e0519e-kube-proxy\") pod \"kube-proxy-rkl8l\" (UID: \"afd1674d-b635-4260-bc4e-4afb54e0519e\") " pod="kube-system/kube-proxy-rkl8l"
May 15 00:04:45.108376 kubelet[2643]: I0515 00:04:45.108052 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-bpf-maps\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.108376 kubelet[2643]: I0515 00:04:45.108069 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cni-path\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.108564 kubelet[2643]: I0515 00:04:45.108085 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-etc-cni-netd\") pod \"cilium-nc2jx\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") " pod="kube-system/cilium-nc2jx"
May 15 00:04:45.256006 systemd[1]: Created slice kubepods-besteffort-pod2db0b230_560e_4513_a6ed_4816a80ace05.slice - libcontainer container kubepods-besteffort-pod2db0b230_560e_4513_a6ed_4816a80ace05.slice.
May 15 00:04:45.257293 containerd[1505]: time="2025-05-15T00:04:45.257240627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkl8l,Uid:afd1674d-b635-4260-bc4e-4afb54e0519e,Namespace:kube-system,Attempt:0,}"
May 15 00:04:45.271047 containerd[1505]: time="2025-05-15T00:04:45.270937926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nc2jx,Uid:e6da0ac1-727d-4ba9-9691-bc8d9332c446,Namespace:kube-system,Attempt:0,}"
May 15 00:04:45.310493 kubelet[2643]: I0515 00:04:45.310405 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2db0b230-560e-4513-a6ed-4816a80ace05-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6db9f\" (UID: \"2db0b230-560e-4513-a6ed-4816a80ace05\") " pod="kube-system/cilium-operator-6c4d7847fc-6db9f"
May 15 00:04:45.310493 kubelet[2643]: I0515 00:04:45.310469 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbb9q\" (UniqueName: \"kubernetes.io/projected/2db0b230-560e-4513-a6ed-4816a80ace05-kube-api-access-mbb9q\") pod \"cilium-operator-6c4d7847fc-6db9f\" (UID: \"2db0b230-560e-4513-a6ed-4816a80ace05\") " pod="kube-system/cilium-operator-6c4d7847fc-6db9f"
May 15 00:04:45.511730 containerd[1505]: time="2025-05-15T00:04:45.511480116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:04:45.511730 containerd[1505]: time="2025-05-15T00:04:45.511581638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:04:45.511730 containerd[1505]: time="2025-05-15T00:04:45.511624527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:04:45.512670 containerd[1505]: time="2025-05-15T00:04:45.511742242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:04:45.516873 containerd[1505]: time="2025-05-15T00:04:45.516761819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:04:45.517170 containerd[1505]: time="2025-05-15T00:04:45.517119213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:04:45.517263 containerd[1505]: time="2025-05-15T00:04:45.517189399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:04:45.517643 containerd[1505]: time="2025-05-15T00:04:45.517474442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:04:45.535315 systemd[1]: Started cri-containerd-69461bee90cb29b193a1dad4d72c4d4473b5ef0476526f5a44813560add8da40.scope - libcontainer container 69461bee90cb29b193a1dad4d72c4d4473b5ef0476526f5a44813560add8da40.
May 15 00:04:45.540297 systemd[1]: Started cri-containerd-bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741.scope - libcontainer container bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741.
May 15 00:04:45.562172 containerd[1505]: time="2025-05-15T00:04:45.562008383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6db9f,Uid:2db0b230-560e-4513-a6ed-4816a80ace05,Namespace:kube-system,Attempt:0,}"
May 15 00:04:45.576501 containerd[1505]: time="2025-05-15T00:04:45.576349301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rkl8l,Uid:afd1674d-b635-4260-bc4e-4afb54e0519e,Namespace:kube-system,Attempt:0,} returns sandbox id \"69461bee90cb29b193a1dad4d72c4d4473b5ef0476526f5a44813560add8da40\""
May 15 00:04:45.579542 containerd[1505]: time="2025-05-15T00:04:45.579501061Z" level=info msg="CreateContainer within sandbox \"69461bee90cb29b193a1dad4d72c4d4473b5ef0476526f5a44813560add8da40\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 00:04:45.586840 containerd[1505]: time="2025-05-15T00:04:45.586781394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nc2jx,Uid:e6da0ac1-727d-4ba9-9691-bc8d9332c446,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\""
May 15 00:04:45.591381 containerd[1505]: time="2025-05-15T00:04:45.591325873Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 15 00:04:45.668782 containerd[1505]: time="2025-05-15T00:04:45.668386472Z" level=info msg="CreateContainer within sandbox \"69461bee90cb29b193a1dad4d72c4d4473b5ef0476526f5a44813560add8da40\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8d422900e23a2a97cb1659f1c15bc2ab34f111e39152c57c8040a40de0121232\""
May 15 00:04:45.668984 containerd[1505]: time="2025-05-15T00:04:45.668846399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:04:45.668984 containerd[1505]: time="2025-05-15T00:04:45.668941327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:04:45.668984 containerd[1505]: time="2025-05-15T00:04:45.668957391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:04:45.669336 containerd[1505]: time="2025-05-15T00:04:45.669198001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:04:45.669804 containerd[1505]: time="2025-05-15T00:04:45.669757927Z" level=info msg="StartContainer for \"8d422900e23a2a97cb1659f1c15bc2ab34f111e39152c57c8040a40de0121232\""
May 15 00:04:45.705417 systemd[1]: Started cri-containerd-3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07.scope - libcontainer container 3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07.
May 15 00:04:45.709867 systemd[1]: Started cri-containerd-8d422900e23a2a97cb1659f1c15bc2ab34f111e39152c57c8040a40de0121232.scope - libcontainer container 8d422900e23a2a97cb1659f1c15bc2ab34f111e39152c57c8040a40de0121232.
May 15 00:04:45.759787 containerd[1505]: time="2025-05-15T00:04:45.759623928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6db9f,Uid:2db0b230-560e-4513-a6ed-4816a80ace05,Namespace:kube-system,Attempt:0,} returns sandbox id \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\""
May 15 00:04:45.774288 containerd[1505]: time="2025-05-15T00:04:45.774130482Z" level=info msg="StartContainer for \"8d422900e23a2a97cb1659f1c15bc2ab34f111e39152c57c8040a40de0121232\" returns successfully"
May 15 00:04:46.732598 kubelet[2643]: I0515 00:04:46.732243 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rkl8l" podStartSLOduration=2.732223356 podStartE2EDuration="2.732223356s" podCreationTimestamp="2025-05-15 00:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:04:46.73202257 +0000 UTC m=+6.180542126" watchObservedRunningTime="2025-05-15 00:04:46.732223356 +0000 UTC m=+6.180742912"
May 15 00:04:52.821647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1077592601.mount: Deactivated successfully.
May 15 00:05:01.694243 containerd[1505]: time="2025-05-15T00:05:01.694160999Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:05:01.725499 containerd[1505]: time="2025-05-15T00:05:01.725394281Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 15 00:05:01.787992 containerd[1505]: time="2025-05-15T00:05:01.787895474Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:05:01.847733 containerd[1505]: time="2025-05-15T00:05:01.847635276Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.256242276s"
May 15 00:05:01.847733 containerd[1505]: time="2025-05-15T00:05:01.847697560Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 15 00:05:01.853718 containerd[1505]: time="2025-05-15T00:05:01.853423860Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 15 00:05:01.864747 containerd[1505]: time="2025-05-15T00:05:01.864679816Z" level=info msg="CreateContainer within sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 00:05:01.912507 containerd[1505]: time="2025-05-15T00:05:01.912415488Z" level=info msg="CreateContainer within sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\""
May 15 00:05:01.914399 containerd[1505]: time="2025-05-15T00:05:01.914342043Z" level=info msg="StartContainer for \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\""
May 15 00:05:01.957286 systemd[1]: Started cri-containerd-7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab.scope - libcontainer container 7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab.
May 15 00:05:02.157559 containerd[1505]: time="2025-05-15T00:05:02.157466121Z" level=info msg="StartContainer for \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\" returns successfully"
May 15 00:05:02.165467 systemd[1]: cri-containerd-7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab.scope: Deactivated successfully.
May 15 00:05:02.901807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab-rootfs.mount: Deactivated successfully.
May 15 00:05:03.830762 containerd[1505]: time="2025-05-15T00:05:03.830613856Z" level=info msg="shim disconnected" id=7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab namespace=k8s.io
May 15 00:05:03.830762 containerd[1505]: time="2025-05-15T00:05:03.830683615Z" level=warning msg="cleaning up after shim disconnected" id=7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab namespace=k8s.io
May 15 00:05:03.830762 containerd[1505]: time="2025-05-15T00:05:03.830695388Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:03.945185 containerd[1505]: time="2025-05-15T00:05:03.945083815Z" level=info msg="CreateContainer within sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 00:05:04.003727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168902966.mount: Deactivated successfully.
May 15 00:05:04.007448 containerd[1505]: time="2025-05-15T00:05:04.007368013Z" level=info msg="CreateContainer within sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\""
May 15 00:05:04.008162 containerd[1505]: time="2025-05-15T00:05:04.008128274Z" level=info msg="StartContainer for \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\""
May 15 00:05:04.046403 systemd[1]: Started cri-containerd-7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15.scope - libcontainer container 7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15.
May 15 00:05:04.086356 containerd[1505]: time="2025-05-15T00:05:04.086194096Z" level=info msg="StartContainer for \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\" returns successfully"
May 15 00:05:04.098340 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 00:05:04.098594 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 00:05:04.100893 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 15 00:05:04.107944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:05:04.110453 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 00:05:04.111463 systemd[1]: cri-containerd-7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15.scope: Deactivated successfully.
May 15 00:05:04.129516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:05:04.155852 containerd[1505]: time="2025-05-15T00:05:04.155753530Z" level=info msg="shim disconnected" id=7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15 namespace=k8s.io
May 15 00:05:04.155852 containerd[1505]: time="2025-05-15T00:05:04.155832817Z" level=warning msg="cleaning up after shim disconnected" id=7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15 namespace=k8s.io
May 15 00:05:04.155852 containerd[1505]: time="2025-05-15T00:05:04.155847636Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:04.949361 containerd[1505]: time="2025-05-15T00:05:04.949052764Z" level=info msg="CreateContainer within sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 00:05:05.000128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15-rootfs.mount: Deactivated successfully.
May 15 00:05:05.011023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173952785.mount: Deactivated successfully.
May 15 00:05:05.096821 containerd[1505]: time="2025-05-15T00:05:05.096771531Z" level=info msg="CreateContainer within sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\""
May 15 00:05:05.097728 containerd[1505]: time="2025-05-15T00:05:05.097661618Z" level=info msg="StartContainer for \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\""
May 15 00:05:05.129451 systemd[1]: Started cri-containerd-17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424.scope - libcontainer container 17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424.
May 15 00:05:05.170506 systemd[1]: cri-containerd-17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424.scope: Deactivated successfully.
May 15 00:05:05.171925 containerd[1505]: time="2025-05-15T00:05:05.171019503Z" level=info msg="StartContainer for \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\" returns successfully"
May 15 00:05:05.371670 containerd[1505]: time="2025-05-15T00:05:05.371598268Z" level=info msg="shim disconnected" id=17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424 namespace=k8s.io
May 15 00:05:05.371670 containerd[1505]: time="2025-05-15T00:05:05.371661734Z" level=warning msg="cleaning up after shim disconnected" id=17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424 namespace=k8s.io
May 15 00:05:05.371670 containerd[1505]: time="2025-05-15T00:05:05.371673036Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:05.445266 containerd[1505]: time="2025-05-15T00:05:05.445198895Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:05:05.447233 containerd[1505]: time="2025-05-15T00:05:05.447188465Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 15 00:05:05.449811 containerd[1505]: time="2025-05-15T00:05:05.449772004Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:05:05.451344 containerd[1505]: time="2025-05-15T00:05:05.451292122Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.597822401s"
May 15 00:05:05.451344 containerd[1505]: time="2025-05-15T00:05:05.451328905Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 15 00:05:05.453567 containerd[1505]: time="2025-05-15T00:05:05.453518502Z" level=info msg="CreateContainer within sandbox \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 15 00:05:05.469770 containerd[1505]: time="2025-05-15T00:05:05.469727481Z" level=info msg="CreateContainer within sandbox \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\""
May 15 00:05:05.470294 containerd[1505]: time="2025-05-15T00:05:05.470270730Z" level=info msg="StartContainer for \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\""
May 15 00:05:05.499272 systemd[1]: Started cri-containerd-0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e.scope - libcontainer container 0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e.
May 15 00:05:05.529576 containerd[1505]: time="2025-05-15T00:05:05.529514054Z" level=info msg="StartContainer for \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\" returns successfully"
May 15 00:05:05.953635 containerd[1505]: time="2025-05-15T00:05:05.953584010Z" level=info msg="CreateContainer within sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:05:06.123430 containerd[1505]: time="2025-05-15T00:05:06.123366516Z" level=info msg="CreateContainer within sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\""
May 15 00:05:06.124057 containerd[1505]: time="2025-05-15T00:05:06.123998028Z" level=info msg="StartContainer for \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\""
May 15 00:05:06.160490 systemd[1]: Started cri-containerd-9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2.scope - libcontainer container 9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2.
May 15 00:05:06.189695 systemd[1]: cri-containerd-9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2.scope: Deactivated successfully.
May 15 00:05:06.248159 containerd[1505]: time="2025-05-15T00:05:06.247987569Z" level=info msg="StartContainer for \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\" returns successfully" May 15 00:05:06.284631 kubelet[2643]: I0515 00:05:06.284488 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6db9f" podStartSLOduration=1.592978418 podStartE2EDuration="21.284456757s" podCreationTimestamp="2025-05-15 00:04:45 +0000 UTC" firstStartedPulling="2025-05-15 00:04:45.76064845 +0000 UTC m=+5.209168006" lastFinishedPulling="2025-05-15 00:05:05.452126789 +0000 UTC m=+24.900646345" observedRunningTime="2025-05-15 00:05:06.282749912 +0000 UTC m=+25.731269468" watchObservedRunningTime="2025-05-15 00:05:06.284456757 +0000 UTC m=+25.732976314" May 15 00:05:06.286276 containerd[1505]: time="2025-05-15T00:05:06.286187399Z" level=info msg="shim disconnected" id=9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2 namespace=k8s.io May 15 00:05:06.286276 containerd[1505]: time="2025-05-15T00:05:06.286267197Z" level=warning msg="cleaning up after shim disconnected" id=9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2 namespace=k8s.io May 15 00:05:06.286276 containerd[1505]: time="2025-05-15T00:05:06.286276495Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:05:06.959350 containerd[1505]: time="2025-05-15T00:05:06.959303546Z" level=info msg="CreateContainer within sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:05:06.997389 containerd[1505]: time="2025-05-15T00:05:06.997331256Z" level=info msg="CreateContainer within sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\"" May 15 00:05:06.998617 
containerd[1505]: time="2025-05-15T00:05:06.997892218Z" level=info msg="StartContainer for \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\"" May 15 00:05:07.000581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2-rootfs.mount: Deactivated successfully. May 15 00:05:07.045397 systemd[1]: Started cri-containerd-54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4.scope - libcontainer container 54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4. May 15 00:05:07.081995 containerd[1505]: time="2025-05-15T00:05:07.081944720Z" level=info msg="StartContainer for \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\" returns successfully" May 15 00:05:07.228989 kubelet[2643]: I0515 00:05:07.227340 2643 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 00:05:07.266164 systemd[1]: Created slice kubepods-burstable-pod695ca6de_3899_4e38_9155_8da8d6b59ad3.slice - libcontainer container kubepods-burstable-pod695ca6de_3899_4e38_9155_8da8d6b59ad3.slice. May 15 00:05:07.275366 systemd[1]: Created slice kubepods-burstable-podada6bd42_1c23_410e_9141_ced21c6f40cf.slice - libcontainer container kubepods-burstable-podada6bd42_1c23_410e_9141_ced21c6f40cf.slice. 
May 15 00:05:07.360217 kubelet[2643]: I0515 00:05:07.360138 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ada6bd42-1c23-410e-9141-ced21c6f40cf-config-volume\") pod \"coredns-668d6bf9bc-k779j\" (UID: \"ada6bd42-1c23-410e-9141-ced21c6f40cf\") " pod="kube-system/coredns-668d6bf9bc-k779j" May 15 00:05:07.360217 kubelet[2643]: I0515 00:05:07.360225 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/695ca6de-3899-4e38-9155-8da8d6b59ad3-config-volume\") pod \"coredns-668d6bf9bc-qpn9q\" (UID: \"695ca6de-3899-4e38-9155-8da8d6b59ad3\") " pod="kube-system/coredns-668d6bf9bc-qpn9q" May 15 00:05:07.360729 kubelet[2643]: I0515 00:05:07.360249 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drxg6\" (UniqueName: \"kubernetes.io/projected/695ca6de-3899-4e38-9155-8da8d6b59ad3-kube-api-access-drxg6\") pod \"coredns-668d6bf9bc-qpn9q\" (UID: \"695ca6de-3899-4e38-9155-8da8d6b59ad3\") " pod="kube-system/coredns-668d6bf9bc-qpn9q" May 15 00:05:07.360729 kubelet[2643]: I0515 00:05:07.360275 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6mc\" (UniqueName: \"kubernetes.io/projected/ada6bd42-1c23-410e-9141-ced21c6f40cf-kube-api-access-vv6mc\") pod \"coredns-668d6bf9bc-k779j\" (UID: \"ada6bd42-1c23-410e-9141-ced21c6f40cf\") " pod="kube-system/coredns-668d6bf9bc-k779j" May 15 00:05:07.575618 containerd[1505]: time="2025-05-15T00:05:07.575553852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpn9q,Uid:695ca6de-3899-4e38-9155-8da8d6b59ad3,Namespace:kube-system,Attempt:0,}" May 15 00:05:07.579658 containerd[1505]: time="2025-05-15T00:05:07.579605327Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-k779j,Uid:ada6bd42-1c23-410e-9141-ced21c6f40cf,Namespace:kube-system,Attempt:0,}" May 15 00:05:07.983798 kubelet[2643]: I0515 00:05:07.983324 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nc2jx" podStartSLOduration=7.720953321 podStartE2EDuration="23.983298296s" podCreationTimestamp="2025-05-15 00:04:44 +0000 UTC" firstStartedPulling="2025-05-15 00:04:45.590677403 +0000 UTC m=+5.039196959" lastFinishedPulling="2025-05-15 00:05:01.853022378 +0000 UTC m=+21.301541934" observedRunningTime="2025-05-15 00:05:07.981055965 +0000 UTC m=+27.429575521" watchObservedRunningTime="2025-05-15 00:05:07.983298296 +0000 UTC m=+27.431817862" May 15 00:05:09.499353 systemd-networkd[1429]: cilium_host: Link UP May 15 00:05:09.499579 systemd-networkd[1429]: cilium_net: Link UP May 15 00:05:09.499841 systemd-networkd[1429]: cilium_net: Gained carrier May 15 00:05:09.500080 systemd-networkd[1429]: cilium_host: Gained carrier May 15 00:05:09.625143 systemd-networkd[1429]: cilium_vxlan: Link UP May 15 00:05:09.625159 systemd-networkd[1429]: cilium_vxlan: Gained carrier May 15 00:05:09.857214 kernel: NET: Registered PF_ALG protocol family May 15 00:05:10.158333 systemd-networkd[1429]: cilium_net: Gained IPv6LL May 15 00:05:10.542235 systemd-networkd[1429]: cilium_host: Gained IPv6LL May 15 00:05:10.592888 systemd-networkd[1429]: lxc_health: Link UP May 15 00:05:10.606923 systemd-networkd[1429]: lxc_health: Gained carrier May 15 00:05:10.734290 systemd-networkd[1429]: cilium_vxlan: Gained IPv6LL May 15 00:05:10.947465 systemd-networkd[1429]: lxcd399c6e8a484: Link UP May 15 00:05:10.951133 kernel: eth0: renamed from tmpbe782 May 15 00:05:10.958755 systemd-networkd[1429]: lxcd399c6e8a484: Gained carrier May 15 00:05:10.963435 systemd-networkd[1429]: lxc38d7d5e94cf6: Link UP May 15 00:05:10.975134 kernel: eth0: renamed from tmpc0243 May 15 00:05:10.980846 systemd-networkd[1429]: lxc38d7d5e94cf6: 
Gained carrier May 15 00:05:12.273267 systemd-networkd[1429]: lxc_health: Gained IPv6LL May 15 00:05:12.334272 systemd-networkd[1429]: lxc38d7d5e94cf6: Gained IPv6LL May 15 00:05:12.782276 systemd-networkd[1429]: lxcd399c6e8a484: Gained IPv6LL May 15 00:05:14.504481 containerd[1505]: time="2025-05-15T00:05:14.504161053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:05:14.504481 containerd[1505]: time="2025-05-15T00:05:14.504244206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:05:14.504481 containerd[1505]: time="2025-05-15T00:05:14.504258364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:05:14.504481 containerd[1505]: time="2025-05-15T00:05:14.504354885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:05:14.507752 containerd[1505]: time="2025-05-15T00:05:14.507254064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:05:14.507752 containerd[1505]: time="2025-05-15T00:05:14.507347438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:05:14.507752 containerd[1505]: time="2025-05-15T00:05:14.507370343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:05:14.507752 containerd[1505]: time="2025-05-15T00:05:14.507669462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:05:14.547270 systemd[1]: Started cri-containerd-be78295f3ab56467eef77bf67d369216543e0c42cff7031ef44ce283084dedff.scope - libcontainer container be78295f3ab56467eef77bf67d369216543e0c42cff7031ef44ce283084dedff. May 15 00:05:14.553618 systemd[1]: Started cri-containerd-c0243a3534b842c08b8b49f4db5e76b0101f5c5e6e62adab94a727acac6d37f4.scope - libcontainer container c0243a3534b842c08b8b49f4db5e76b0101f5c5e6e62adab94a727acac6d37f4. May 15 00:05:14.564578 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:05:14.573167 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:05:14.596354 containerd[1505]: time="2025-05-15T00:05:14.596309002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k779j,Uid:ada6bd42-1c23-410e-9141-ced21c6f40cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"be78295f3ab56467eef77bf67d369216543e0c42cff7031ef44ce283084dedff\"" May 15 00:05:14.599171 containerd[1505]: time="2025-05-15T00:05:14.599075890Z" level=info msg="CreateContainer within sandbox \"be78295f3ab56467eef77bf67d369216543e0c42cff7031ef44ce283084dedff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:05:14.606009 containerd[1505]: time="2025-05-15T00:05:14.605983994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpn9q,Uid:695ca6de-3899-4e38-9155-8da8d6b59ad3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0243a3534b842c08b8b49f4db5e76b0101f5c5e6e62adab94a727acac6d37f4\"" May 15 00:05:14.608447 containerd[1505]: time="2025-05-15T00:05:14.608418458Z" level=info msg="CreateContainer within sandbox \"c0243a3534b842c08b8b49f4db5e76b0101f5c5e6e62adab94a727acac6d37f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:05:14.707848 containerd[1505]: 
time="2025-05-15T00:05:14.707779632Z" level=info msg="CreateContainer within sandbox \"be78295f3ab56467eef77bf67d369216543e0c42cff7031ef44ce283084dedff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"265eccec58ef26ace07cd9caf84523e6bbec820525bad6c07313fb0349cf036b\"" May 15 00:05:14.708371 containerd[1505]: time="2025-05-15T00:05:14.708338242Z" level=info msg="StartContainer for \"265eccec58ef26ace07cd9caf84523e6bbec820525bad6c07313fb0349cf036b\"" May 15 00:05:14.710697 containerd[1505]: time="2025-05-15T00:05:14.710596889Z" level=info msg="CreateContainer within sandbox \"c0243a3534b842c08b8b49f4db5e76b0101f5c5e6e62adab94a727acac6d37f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9308036a54485c0315b660b7f23746293c88bb60d9d236cf5a0bded897b99cbd\"" May 15 00:05:14.711127 containerd[1505]: time="2025-05-15T00:05:14.711072927Z" level=info msg="StartContainer for \"9308036a54485c0315b660b7f23746293c88bb60d9d236cf5a0bded897b99cbd\"" May 15 00:05:14.742268 systemd[1]: Started cri-containerd-265eccec58ef26ace07cd9caf84523e6bbec820525bad6c07313fb0349cf036b.scope - libcontainer container 265eccec58ef26ace07cd9caf84523e6bbec820525bad6c07313fb0349cf036b. May 15 00:05:14.746059 systemd[1]: Started cri-containerd-9308036a54485c0315b660b7f23746293c88bb60d9d236cf5a0bded897b99cbd.scope - libcontainer container 9308036a54485c0315b660b7f23746293c88bb60d9d236cf5a0bded897b99cbd. 
May 15 00:05:14.780396 containerd[1505]: time="2025-05-15T00:05:14.780261053Z" level=info msg="StartContainer for \"265eccec58ef26ace07cd9caf84523e6bbec820525bad6c07313fb0349cf036b\" returns successfully" May 15 00:05:14.780396 containerd[1505]: time="2025-05-15T00:05:14.780261053Z" level=info msg="StartContainer for \"9308036a54485c0315b660b7f23746293c88bb60d9d236cf5a0bded897b99cbd\" returns successfully" May 15 00:05:15.002448 kubelet[2643]: I0515 00:05:15.002363 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k779j" podStartSLOduration=30.002336495 podStartE2EDuration="30.002336495s" podCreationTimestamp="2025-05-15 00:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:05:15.000711436 +0000 UTC m=+34.449230992" watchObservedRunningTime="2025-05-15 00:05:15.002336495 +0000 UTC m=+34.450856051" May 15 00:05:15.514458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3543928053.mount: Deactivated successfully. May 15 00:05:16.079716 kubelet[2643]: I0515 00:05:16.079642 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qpn9q" podStartSLOduration=31.079585151 podStartE2EDuration="31.079585151s" podCreationTimestamp="2025-05-15 00:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:05:15.018759203 +0000 UTC m=+34.467278779" watchObservedRunningTime="2025-05-15 00:05:16.079585151 +0000 UTC m=+35.528104707" May 15 00:05:19.695909 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:38912.service - OpenSSH per-connection server daemon (10.0.0.1:38912). 
May 15 00:05:19.749185 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 38912 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:19.751518 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:19.757017 systemd-logind[1495]: New session 8 of user core. May 15 00:05:19.764279 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 00:05:20.059168 sshd[4042]: Connection closed by 10.0.0.1 port 38912 May 15 00:05:20.059594 sshd-session[4040]: pam_unix(sshd:session): session closed for user core May 15 00:05:20.065149 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:38912.service: Deactivated successfully. May 15 00:05:20.067635 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:05:20.068774 systemd-logind[1495]: Session 8 logged out. Waiting for processes to exit. May 15 00:05:20.069964 systemd-logind[1495]: Removed session 8. May 15 00:05:25.082553 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:37464.service - OpenSSH per-connection server daemon (10.0.0.1:37464). May 15 00:05:25.120401 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 37464 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:25.122063 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:25.127204 systemd-logind[1495]: New session 9 of user core. May 15 00:05:25.134390 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 00:05:25.256148 sshd[4061]: Connection closed by 10.0.0.1 port 37464 May 15 00:05:25.256878 sshd-session[4059]: pam_unix(sshd:session): session closed for user core May 15 00:05:25.262648 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:37464.service: Deactivated successfully. May 15 00:05:25.265776 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:05:25.267053 systemd-logind[1495]: Session 9 logged out. Waiting for processes to exit. 
May 15 00:05:25.269026 systemd-logind[1495]: Removed session 9. May 15 00:05:30.271555 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:37470.service - OpenSSH per-connection server daemon (10.0.0.1:37470). May 15 00:05:30.342926 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 37470 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:30.344839 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:30.350176 systemd-logind[1495]: New session 10 of user core. May 15 00:05:30.360291 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 00:05:30.475521 sshd[4077]: Connection closed by 10.0.0.1 port 37470 May 15 00:05:30.475955 sshd-session[4075]: pam_unix(sshd:session): session closed for user core May 15 00:05:30.480863 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:37470.service: Deactivated successfully. May 15 00:05:30.483176 systemd[1]: session-10.scope: Deactivated successfully. May 15 00:05:30.483945 systemd-logind[1495]: Session 10 logged out. Waiting for processes to exit. May 15 00:05:30.485036 systemd-logind[1495]: Removed session 10. May 15 00:05:35.497698 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:43874.service - OpenSSH per-connection server daemon (10.0.0.1:43874). May 15 00:05:35.536699 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 43874 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:35.538529 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:35.543448 systemd-logind[1495]: New session 11 of user core. May 15 00:05:35.550241 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 15 00:05:35.669673 sshd[4094]: Connection closed by 10.0.0.1 port 43874 May 15 00:05:35.670047 sshd-session[4092]: pam_unix(sshd:session): session closed for user core May 15 00:05:35.686682 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:43874.service: Deactivated successfully. May 15 00:05:35.689704 systemd[1]: session-11.scope: Deactivated successfully. May 15 00:05:35.691961 systemd-logind[1495]: Session 11 logged out. Waiting for processes to exit. May 15 00:05:35.700688 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:43882.service - OpenSSH per-connection server daemon (10.0.0.1:43882). May 15 00:05:35.702005 systemd-logind[1495]: Removed session 11. May 15 00:05:35.740836 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 43882 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:35.742927 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:35.749225 systemd-logind[1495]: New session 12 of user core. May 15 00:05:35.757370 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 00:05:35.918606 sshd[4110]: Connection closed by 10.0.0.1 port 43882 May 15 00:05:35.919193 sshd-session[4107]: pam_unix(sshd:session): session closed for user core May 15 00:05:35.930873 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:43882.service: Deactivated successfully. May 15 00:05:35.933387 systemd[1]: session-12.scope: Deactivated successfully. May 15 00:05:35.936506 systemd-logind[1495]: Session 12 logged out. Waiting for processes to exit. May 15 00:05:35.945166 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:43890.service - OpenSSH per-connection server daemon (10.0.0.1:43890). May 15 00:05:35.947295 systemd-logind[1495]: Removed session 12. 
May 15 00:05:35.995486 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 43890 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:35.997712 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:36.003901 systemd-logind[1495]: New session 13 of user core. May 15 00:05:36.022448 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 00:05:36.141314 sshd[4123]: Connection closed by 10.0.0.1 port 43890 May 15 00:05:36.141818 sshd-session[4120]: pam_unix(sshd:session): session closed for user core May 15 00:05:36.146703 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:43890.service: Deactivated successfully. May 15 00:05:36.149111 systemd[1]: session-13.scope: Deactivated successfully. May 15 00:05:36.149900 systemd-logind[1495]: Session 13 logged out. Waiting for processes to exit. May 15 00:05:36.151120 systemd-logind[1495]: Removed session 13. May 15 00:05:41.157818 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:43898.service - OpenSSH per-connection server daemon (10.0.0.1:43898). May 15 00:05:41.201576 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 43898 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:41.203833 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:41.209985 systemd-logind[1495]: New session 14 of user core. May 15 00:05:41.220343 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 00:05:41.344744 sshd[4140]: Connection closed by 10.0.0.1 port 43898 May 15 00:05:41.345188 sshd-session[4138]: pam_unix(sshd:session): session closed for user core May 15 00:05:41.350233 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:43898.service: Deactivated successfully. May 15 00:05:41.353389 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:05:41.354500 systemd-logind[1495]: Session 14 logged out. Waiting for processes to exit. 
May 15 00:05:41.356011 systemd-logind[1495]: Removed session 14. May 15 00:05:46.376837 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:59888.service - OpenSSH per-connection server daemon (10.0.0.1:59888). May 15 00:05:46.415833 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 59888 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:46.417776 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:46.422396 systemd-logind[1495]: New session 15 of user core. May 15 00:05:46.436334 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 00:05:46.550714 sshd[4158]: Connection closed by 10.0.0.1 port 59888 May 15 00:05:46.551188 sshd-session[4156]: pam_unix(sshd:session): session closed for user core May 15 00:05:46.556072 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:59888.service: Deactivated successfully. May 15 00:05:46.558584 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:05:46.559408 systemd-logind[1495]: Session 15 logged out. Waiting for processes to exit. May 15 00:05:46.560420 systemd-logind[1495]: Removed session 15. May 15 00:05:51.564593 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:59896.service - OpenSSH per-connection server daemon (10.0.0.1:59896). May 15 00:05:51.612553 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 59896 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:51.614653 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:51.620672 systemd-logind[1495]: New session 16 of user core. May 15 00:05:51.631364 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 15 00:05:51.761320 sshd[4173]: Connection closed by 10.0.0.1 port 59896 May 15 00:05:51.761874 sshd-session[4171]: pam_unix(sshd:session): session closed for user core May 15 00:05:51.772607 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:59896.service: Deactivated successfully. May 15 00:05:51.775352 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:05:51.776471 systemd-logind[1495]: Session 16 logged out. Waiting for processes to exit. May 15 00:05:51.786665 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:59904.service - OpenSSH per-connection server daemon (10.0.0.1:59904). May 15 00:05:51.787939 systemd-logind[1495]: Removed session 16. May 15 00:05:51.832415 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 59904 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:51.834404 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:51.839733 systemd-logind[1495]: New session 17 of user core. May 15 00:05:51.848274 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 00:05:52.543328 sshd[4189]: Connection closed by 10.0.0.1 port 59904 May 15 00:05:52.543938 sshd-session[4186]: pam_unix(sshd:session): session closed for user core May 15 00:05:52.564315 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:59904.service: Deactivated successfully. May 15 00:05:52.567135 systemd[1]: session-17.scope: Deactivated successfully. May 15 00:05:52.569075 systemd-logind[1495]: Session 17 logged out. Waiting for processes to exit. May 15 00:05:52.576709 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:59912.service - OpenSSH per-connection server daemon (10.0.0.1:59912). May 15 00:05:52.578049 systemd-logind[1495]: Removed session 17. 
May 15 00:05:52.624281 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 59912 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:52.626342 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:52.632029 systemd-logind[1495]: New session 18 of user core. May 15 00:05:52.640354 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 00:05:54.492052 sshd[4203]: Connection closed by 10.0.0.1 port 59912 May 15 00:05:54.492675 sshd-session[4200]: pam_unix(sshd:session): session closed for user core May 15 00:05:54.513970 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:59912.service: Deactivated successfully. May 15 00:05:54.518022 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:05:54.521383 systemd-logind[1495]: Session 18 logged out. Waiting for processes to exit. May 15 00:05:54.532170 systemd[1]: Started sshd@18-10.0.0.106:22-10.0.0.1:36214.service - OpenSSH per-connection server daemon (10.0.0.1:36214). May 15 00:05:54.535304 systemd-logind[1495]: Removed session 18. May 15 00:05:54.573674 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 36214 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:54.575648 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:54.582232 systemd-logind[1495]: New session 19 of user core. May 15 00:05:54.592495 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 00:05:55.315200 sshd[4224]: Connection closed by 10.0.0.1 port 36214 May 15 00:05:55.315697 sshd-session[4221]: pam_unix(sshd:session): session closed for user core May 15 00:05:55.330608 systemd[1]: sshd@18-10.0.0.106:22-10.0.0.1:36214.service: Deactivated successfully. May 15 00:05:55.332916 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:05:55.333769 systemd-logind[1495]: Session 19 logged out. Waiting for processes to exit. 
May 15 00:05:55.346377 systemd[1]: Started sshd@19-10.0.0.106:22-10.0.0.1:36220.service - OpenSSH per-connection server daemon (10.0.0.1:36220). May 15 00:05:55.347366 systemd-logind[1495]: Removed session 19. May 15 00:05:55.385845 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 36220 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:05:55.387655 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:55.392793 systemd-logind[1495]: New session 20 of user core. May 15 00:05:55.401259 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 00:05:55.812728 sshd[4238]: Connection closed by 10.0.0.1 port 36220 May 15 00:05:55.813227 sshd-session[4235]: pam_unix(sshd:session): session closed for user core May 15 00:05:55.817526 systemd[1]: sshd@19-10.0.0.106:22-10.0.0.1:36220.service: Deactivated successfully. May 15 00:05:55.820145 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:05:55.821019 systemd-logind[1495]: Session 20 logged out. Waiting for processes to exit. May 15 00:05:55.822223 systemd-logind[1495]: Removed session 20. May 15 00:06:00.837344 systemd[1]: Started sshd@20-10.0.0.106:22-10.0.0.1:36230.service - OpenSSH per-connection server daemon (10.0.0.1:36230). May 15 00:06:00.883104 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 36230 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:06:00.885060 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:00.891104 systemd-logind[1495]: New session 21 of user core. May 15 00:06:00.900549 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 15 00:06:01.033603 sshd[4253]: Connection closed by 10.0.0.1 port 36230 May 15 00:06:01.034074 sshd-session[4251]: pam_unix(sshd:session): session closed for user core May 15 00:06:01.039614 systemd[1]: sshd@20-10.0.0.106:22-10.0.0.1:36230.service: Deactivated successfully. May 15 00:06:01.041860 systemd[1]: session-21.scope: Deactivated successfully. May 15 00:06:01.043063 systemd-logind[1495]: Session 21 logged out. Waiting for processes to exit. May 15 00:06:01.044505 systemd-logind[1495]: Removed session 21. May 15 00:06:06.049706 systemd[1]: Started sshd@21-10.0.0.106:22-10.0.0.1:47892.service - OpenSSH per-connection server daemon (10.0.0.1:47892). May 15 00:06:06.094619 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 47892 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:06:06.096499 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:06.101464 systemd-logind[1495]: New session 22 of user core. May 15 00:06:06.111275 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 00:06:06.234628 sshd[4271]: Connection closed by 10.0.0.1 port 47892 May 15 00:06:06.235110 sshd-session[4269]: pam_unix(sshd:session): session closed for user core May 15 00:06:06.240539 systemd[1]: sshd@21-10.0.0.106:22-10.0.0.1:47892.service: Deactivated successfully. May 15 00:06:06.242997 systemd[1]: session-22.scope: Deactivated successfully. May 15 00:06:06.244305 systemd-logind[1495]: Session 22 logged out. Waiting for processes to exit. May 15 00:06:06.246611 systemd-logind[1495]: Removed session 22. May 15 00:06:11.249559 systemd[1]: Started sshd@22-10.0.0.106:22-10.0.0.1:47896.service - OpenSSH per-connection server daemon (10.0.0.1:47896). 
May 15 00:06:11.293137 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 47896 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:06:11.295017 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:11.299657 systemd-logind[1495]: New session 23 of user core. May 15 00:06:11.309274 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 00:06:11.427178 sshd[4286]: Connection closed by 10.0.0.1 port 47896 May 15 00:06:11.427679 sshd-session[4284]: pam_unix(sshd:session): session closed for user core May 15 00:06:11.432649 systemd[1]: sshd@22-10.0.0.106:22-10.0.0.1:47896.service: Deactivated successfully. May 15 00:06:11.435671 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:06:11.436562 systemd-logind[1495]: Session 23 logged out. Waiting for processes to exit. May 15 00:06:11.437654 systemd-logind[1495]: Removed session 23. May 15 00:06:16.450594 systemd[1]: Started sshd@23-10.0.0.106:22-10.0.0.1:59348.service - OpenSSH per-connection server daemon (10.0.0.1:59348). May 15 00:06:16.492594 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 59348 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc May 15 00:06:16.495181 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:16.501002 systemd-logind[1495]: New session 24 of user core. May 15 00:06:16.511350 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 00:06:16.640057 sshd[4303]: Connection closed by 10.0.0.1 port 59348 May 15 00:06:16.640522 sshd-session[4301]: pam_unix(sshd:session): session closed for user core May 15 00:06:16.645807 systemd[1]: sshd@23-10.0.0.106:22-10.0.0.1:59348.service: Deactivated successfully. May 15 00:06:16.648628 systemd[1]: session-24.scope: Deactivated successfully. May 15 00:06:16.649503 systemd-logind[1495]: Session 24 logged out. Waiting for processes to exit. 
May 15 00:06:16.650834 systemd-logind[1495]: Removed session 24.
May 15 00:06:21.654164 systemd[1]: Started sshd@24-10.0.0.106:22-10.0.0.1:59360.service - OpenSSH per-connection server daemon (10.0.0.1:59360).
May 15 00:06:21.695503 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 59360 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc
May 15 00:06:21.697187 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:21.702006 systemd-logind[1495]: New session 25 of user core.
May 15 00:06:21.709272 systemd[1]: Started session-25.scope - Session 25 of User core.
May 15 00:06:21.824249 sshd[4319]: Connection closed by 10.0.0.1 port 59360
May 15 00:06:21.824769 sshd-session[4317]: pam_unix(sshd:session): session closed for user core
May 15 00:06:21.838437 systemd[1]: sshd@24-10.0.0.106:22-10.0.0.1:59360.service: Deactivated successfully.
May 15 00:06:21.840639 systemd[1]: session-25.scope: Deactivated successfully.
May 15 00:06:21.842426 systemd-logind[1495]: Session 25 logged out. Waiting for processes to exit.
May 15 00:06:21.853396 systemd[1]: Started sshd@25-10.0.0.106:22-10.0.0.1:59362.service - OpenSSH per-connection server daemon (10.0.0.1:59362).
May 15 00:06:21.854593 systemd-logind[1495]: Removed session 25.
May 15 00:06:21.890803 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 59362 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc
May 15 00:06:21.892798 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:21.898337 systemd-logind[1495]: New session 26 of user core.
May 15 00:06:21.915393 systemd[1]: Started session-26.scope - Session 26 of User core.
May 15 00:06:23.691016 containerd[1505]: time="2025-05-15T00:06:23.690925133Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:06:23.693629 containerd[1505]: time="2025-05-15T00:06:23.693590048Z" level=info msg="StopContainer for \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\" with timeout 2 (s)"
May 15 00:06:23.699103 containerd[1505]: time="2025-05-15T00:06:23.699032996Z" level=info msg="Stop container \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\" with signal terminated"
May 15 00:06:23.706854 systemd-networkd[1429]: lxc_health: Link DOWN
May 15 00:06:23.706865 systemd-networkd[1429]: lxc_health: Lost carrier
May 15 00:06:23.730939 containerd[1505]: time="2025-05-15T00:06:23.730856431Z" level=info msg="StopContainer for \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\" with timeout 30 (s)"
May 15 00:06:23.731351 systemd[1]: cri-containerd-54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4.scope: Deactivated successfully.
May 15 00:06:23.733156 containerd[1505]: time="2025-05-15T00:06:23.731643673Z" level=info msg="Stop container \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\" with signal terminated"
May 15 00:06:23.731904 systemd[1]: cri-containerd-54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4.scope: Consumed 7.431s CPU time, 125.2M memory peak, 224K read from disk, 13.3M written to disk.
May 15 00:06:23.744683 systemd[1]: cri-containerd-0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e.scope: Deactivated successfully.
May 15 00:06:23.760521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4-rootfs.mount: Deactivated successfully.
May 15 00:06:23.773289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e-rootfs.mount: Deactivated successfully.
May 15 00:06:23.791324 containerd[1505]: time="2025-05-15T00:06:23.791238846Z" level=info msg="shim disconnected" id=54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4 namespace=k8s.io
May 15 00:06:23.791324 containerd[1505]: time="2025-05-15T00:06:23.791321506Z" level=warning msg="cleaning up after shim disconnected" id=54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4 namespace=k8s.io
May 15 00:06:23.791324 containerd[1505]: time="2025-05-15T00:06:23.791335191Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:06:23.791600 containerd[1505]: time="2025-05-15T00:06:23.791472695Z" level=info msg="shim disconnected" id=0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e namespace=k8s.io
May 15 00:06:23.791600 containerd[1505]: time="2025-05-15T00:06:23.791543843Z" level=warning msg="cleaning up after shim disconnected" id=0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e namespace=k8s.io
May 15 00:06:23.791600 containerd[1505]: time="2025-05-15T00:06:23.791556617Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:06:23.821497 containerd[1505]: time="2025-05-15T00:06:23.821422204Z" level=info msg="StopContainer for \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\" returns successfully"
May 15 00:06:23.821672 containerd[1505]: time="2025-05-15T00:06:23.821568815Z" level=info msg="StopContainer for \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\" returns successfully"
May 15 00:06:23.825338 containerd[1505]: time="2025-05-15T00:06:23.825143748Z" level=info msg="StopPodSandbox for \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\""
May 15 00:06:23.826351 containerd[1505]: time="2025-05-15T00:06:23.826267414Z" level=info msg="StopPodSandbox for \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\""
May 15 00:06:23.838368 containerd[1505]: time="2025-05-15T00:06:23.825191068Z" level=info msg="Container to stop \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:06:23.842109 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07-shm.mount: Deactivated successfully.
May 15 00:06:23.846846 containerd[1505]: time="2025-05-15T00:06:23.826361055Z" level=info msg="Container to stop \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:06:23.846846 containerd[1505]: time="2025-05-15T00:06:23.846842328Z" level=info msg="Container to stop \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:06:23.847054 containerd[1505]: time="2025-05-15T00:06:23.846860704Z" level=info msg="Container to stop \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:06:23.847054 containerd[1505]: time="2025-05-15T00:06:23.846874010Z" level=info msg="Container to stop \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:06:23.847054 containerd[1505]: time="2025-05-15T00:06:23.846886413Z" level=info msg="Container to stop \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:06:23.847066 systemd[1]: cri-containerd-3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07.scope: Deactivated successfully.
May 15 00:06:23.852184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741-shm.mount: Deactivated successfully.
May 15 00:06:23.862326 systemd[1]: cri-containerd-bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741.scope: Deactivated successfully.
May 15 00:06:23.891600 containerd[1505]: time="2025-05-15T00:06:23.891516231Z" level=info msg="shim disconnected" id=3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07 namespace=k8s.io
May 15 00:06:23.891600 containerd[1505]: time="2025-05-15T00:06:23.891590633Z" level=warning msg="cleaning up after shim disconnected" id=3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07 namespace=k8s.io
May 15 00:06:23.891600 containerd[1505]: time="2025-05-15T00:06:23.891607646Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:06:23.892147 containerd[1505]: time="2025-05-15T00:06:23.891569042Z" level=info msg="shim disconnected" id=bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741 namespace=k8s.io
May 15 00:06:23.892239 containerd[1505]: time="2025-05-15T00:06:23.892214963Z" level=warning msg="cleaning up after shim disconnected" id=bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741 namespace=k8s.io
May 15 00:06:23.892280 containerd[1505]: time="2025-05-15T00:06:23.892237446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:06:23.910187 containerd[1505]: time="2025-05-15T00:06:23.910117035Z" level=info msg="TearDown network for sandbox \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\" successfully"
May 15 00:06:23.910187 containerd[1505]: time="2025-05-15T00:06:23.910160158Z" level=info msg="StopPodSandbox for \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\" returns successfully"
May 15 00:06:23.910815 containerd[1505]: time="2025-05-15T00:06:23.910781431Z" level=info msg="TearDown network for sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" successfully"
May 15 00:06:23.910815 containerd[1505]: time="2025-05-15T00:06:23.910812090Z" level=info msg="StopPodSandbox for \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" returns successfully"
May 15 00:06:23.958113 kubelet[2643]: I0515 00:06:23.957865 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mvhb\" (UniqueName: \"kubernetes.io/projected/e6da0ac1-727d-4ba9-9691-bc8d9332c446-kube-api-access-2mvhb\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.958113 kubelet[2643]: I0515 00:06:23.957948 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-run\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.958113 kubelet[2643]: I0515 00:06:23.957976 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-host-proc-sys-kernel\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.958113 kubelet[2643]: I0515 00:06:23.957995 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cni-path\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.958113 kubelet[2643]: I0515 00:06:23.958024 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2db0b230-560e-4513-a6ed-4816a80ace05-cilium-config-path\") pod \"2db0b230-560e-4513-a6ed-4816a80ace05\" (UID: \"2db0b230-560e-4513-a6ed-4816a80ace05\") "
May 15 00:06:23.958113 kubelet[2643]: I0515 00:06:23.958040 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-host-proc-sys-net\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.958902 kubelet[2643]: I0515 00:06:23.958066 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6da0ac1-727d-4ba9-9691-bc8d9332c446-hubble-tls\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.958902 kubelet[2643]: I0515 00:06:23.958136 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-hostproc\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.958902 kubelet[2643]: I0515 00:06:23.958137 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:06:23.958902 kubelet[2643]: I0515 00:06:23.958180 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-etc-cni-netd\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.958902 kubelet[2643]: I0515 00:06:23.958239 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:06:23.958902 kubelet[2643]: I0515 00:06:23.958277 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-cgroup\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.959155 kubelet[2643]: I0515 00:06:23.958281 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:06:23.959155 kubelet[2643]: I0515 00:06:23.958313 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cni-path" (OuterVolumeSpecName: "cni-path") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:06:23.959155 kubelet[2643]: I0515 00:06:23.958312 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6da0ac1-727d-4ba9-9691-bc8d9332c446-clustermesh-secrets\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.959155 kubelet[2643]: I0515 00:06:23.958348 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-lib-modules\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.959155 kubelet[2643]: I0515 00:06:23.958368 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-xtables-lock\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.959339 kubelet[2643]: I0515 00:06:23.958392 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-config-path\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.959339 kubelet[2643]: I0515 00:06:23.958413 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-bpf-maps\") pod \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\" (UID: \"e6da0ac1-727d-4ba9-9691-bc8d9332c446\") "
May 15 00:06:23.959339 kubelet[2643]: I0515 00:06:23.958443 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbb9q\" (UniqueName: \"kubernetes.io/projected/2db0b230-560e-4513-a6ed-4816a80ace05-kube-api-access-mbb9q\") pod \"2db0b230-560e-4513-a6ed-4816a80ace05\" (UID: \"2db0b230-560e-4513-a6ed-4816a80ace05\") "
May 15 00:06:23.959339 kubelet[2643]: I0515 00:06:23.958491 2643 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-run\") on node \"localhost\" DevicePath \"\""
May 15 00:06:23.959339 kubelet[2643]: I0515 00:06:23.958508 2643 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 15 00:06:23.959339 kubelet[2643]: I0515 00:06:23.958525 2643 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cni-path\") on node \"localhost\" DevicePath \"\""
May 15 00:06:23.959339 kubelet[2643]: I0515 00:06:23.958537 2643 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 15 00:06:23.959747 kubelet[2643]: I0515 00:06:23.959218 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:06:23.959747 kubelet[2643]: I0515 00:06:23.959347 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-hostproc" (OuterVolumeSpecName: "hostproc") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:06:23.963289 kubelet[2643]: I0515 00:06:23.963226 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6da0ac1-727d-4ba9-9691-bc8d9332c446-kube-api-access-2mvhb" (OuterVolumeSpecName: "kube-api-access-2mvhb") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "kube-api-access-2mvhb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 00:06:23.963396 kubelet[2643]: I0515 00:06:23.963285 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:06:23.963396 kubelet[2643]: I0515 00:06:23.963364 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:06:23.963396 kubelet[2643]: I0515 00:06:23.963389 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:06:23.966313 kubelet[2643]: I0515 00:06:23.966256 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:06:23.966711 kubelet[2643]: I0515 00:06:23.966660 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2db0b230-560e-4513-a6ed-4816a80ace05-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2db0b230-560e-4513-a6ed-4816a80ace05" (UID: "2db0b230-560e-4513-a6ed-4816a80ace05"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 15 00:06:23.967398 kubelet[2643]: I0515 00:06:23.967342 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6da0ac1-727d-4ba9-9691-bc8d9332c446-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 15 00:06:23.968394 kubelet[2643]: I0515 00:06:23.968347 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6da0ac1-727d-4ba9-9691-bc8d9332c446-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 00:06:23.968451 kubelet[2643]: I0515 00:06:23.968406 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2db0b230-560e-4513-a6ed-4816a80ace05-kube-api-access-mbb9q" (OuterVolumeSpecName: "kube-api-access-mbb9q") pod "2db0b230-560e-4513-a6ed-4816a80ace05" (UID: "2db0b230-560e-4513-a6ed-4816a80ace05"). InnerVolumeSpecName "kube-api-access-mbb9q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 00:06:23.969597 kubelet[2643]: I0515 00:06:23.969543 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e6da0ac1-727d-4ba9-9691-bc8d9332c446" (UID: "e6da0ac1-727d-4ba9-9691-bc8d9332c446"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 15 00:06:24.058855 kubelet[2643]: I0515 00:06:24.058769 2643 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6da0ac1-727d-4ba9-9691-bc8d9332c446-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.058855 kubelet[2643]: I0515 00:06:24.058823 2643 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-hostproc\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.058855 kubelet[2643]: I0515 00:06:24.058836 2643 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.058855 kubelet[2643]: I0515 00:06:24.058846 2643 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6da0ac1-727d-4ba9-9691-bc8d9332c446-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.058855 kubelet[2643]: I0515 00:06:24.058863 2643 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-lib-modules\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.058855 kubelet[2643]: I0515 00:06:24.058875 2643 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.058855 kubelet[2643]: I0515 00:06:24.058883 2643 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6da0ac1-727d-4ba9-9691-bc8d9332c446-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.059329 kubelet[2643]: I0515 00:06:24.058896 2643 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.059329 kubelet[2643]: I0515 00:06:24.058907 2643 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mbb9q\" (UniqueName: \"kubernetes.io/projected/2db0b230-560e-4513-a6ed-4816a80ace05-kube-api-access-mbb9q\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.059329 kubelet[2643]: I0515 00:06:24.058919 2643 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2mvhb\" (UniqueName: \"kubernetes.io/projected/e6da0ac1-727d-4ba9-9691-bc8d9332c446-kube-api-access-2mvhb\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.059329 kubelet[2643]: I0515 00:06:24.058929 2643 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2db0b230-560e-4513-a6ed-4816a80ace05-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.059329 kubelet[2643]: I0515 00:06:24.058939 2643 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6da0ac1-727d-4ba9-9691-bc8d9332c446-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 15 00:06:24.126626 kubelet[2643]: I0515 00:06:24.126579 2643 scope.go:117] "RemoveContainer" containerID="54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4"
May 15 00:06:24.133651 systemd[1]: Removed slice kubepods-burstable-pode6da0ac1_727d_4ba9_9691_bc8d9332c446.slice - libcontainer container kubepods-burstable-pode6da0ac1_727d_4ba9_9691_bc8d9332c446.slice.
May 15 00:06:24.133772 systemd[1]: kubepods-burstable-pode6da0ac1_727d_4ba9_9691_bc8d9332c446.slice: Consumed 7.558s CPU time, 125.5M memory peak, 248K read from disk, 13.3M written to disk.
May 15 00:06:24.135737 containerd[1505]: time="2025-05-15T00:06:24.135042473Z" level=info msg="RemoveContainer for \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\""
May 15 00:06:24.138759 systemd[1]: Removed slice kubepods-besteffort-pod2db0b230_560e_4513_a6ed_4816a80ace05.slice - libcontainer container kubepods-besteffort-pod2db0b230_560e_4513_a6ed_4816a80ace05.slice.
May 15 00:06:24.141139 containerd[1505]: time="2025-05-15T00:06:24.141059216Z" level=info msg="RemoveContainer for \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\" returns successfully"
May 15 00:06:24.141415 kubelet[2643]: I0515 00:06:24.141372 2643 scope.go:117] "RemoveContainer" containerID="9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2"
May 15 00:06:24.143700 containerd[1505]: time="2025-05-15T00:06:24.143655131Z" level=info msg="RemoveContainer for \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\""
May 15 00:06:24.148641 containerd[1505]: time="2025-05-15T00:06:24.148259771Z" level=info msg="RemoveContainer for \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\" returns successfully"
May 15 00:06:24.148815 kubelet[2643]: I0515 00:06:24.148626 2643 scope.go:117] "RemoveContainer" containerID="17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424"
May 15 00:06:24.149930 containerd[1505]: time="2025-05-15T00:06:24.149874903Z" level=info msg="RemoveContainer for \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\""
May 15 00:06:24.155873 containerd[1505]: time="2025-05-15T00:06:24.155817322Z" level=info msg="RemoveContainer for \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\" returns successfully"
May 15 00:06:24.156376 kubelet[2643]: I0515 00:06:24.156044 2643 scope.go:117] "RemoveContainer" containerID="7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15"
May 15 00:06:24.157459 containerd[1505]: time="2025-05-15T00:06:24.157150362Z" level=info msg="RemoveContainer for \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\""
May 15 00:06:24.162233 containerd[1505]: time="2025-05-15T00:06:24.162113451Z" level=info msg="RemoveContainer for \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\" returns successfully"
May 15 00:06:24.162793 kubelet[2643]: I0515 00:06:24.162468 2643 scope.go:117] "RemoveContainer" containerID="7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab"
May 15 00:06:24.163824 containerd[1505]: time="2025-05-15T00:06:24.163757998Z" level=info msg="RemoveContainer for \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\""
May 15 00:06:24.168442 containerd[1505]: time="2025-05-15T00:06:24.168391365Z" level=info msg="RemoveContainer for \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\" returns successfully"
May 15 00:06:24.168685 kubelet[2643]: I0515 00:06:24.168641 2643 scope.go:117] "RemoveContainer" containerID="54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4"
May 15 00:06:24.169263 containerd[1505]: time="2025-05-15T00:06:24.169194226Z" level=error msg="ContainerStatus for \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\": not found"
May 15 00:06:24.169446 kubelet[2643]: E0515 00:06:24.169412 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\": not found" containerID="54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4"
May 15 00:06:24.169559 kubelet[2643]: I0515 00:06:24.169459 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4"} err="failed to get container status \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\": rpc error: code = NotFound desc = an error occurred when try to find container \"54ce2d2cb145821bf2c38effec44ece0f4e6e22016618567cd29e3ef77507cd4\": not found"
May 15 00:06:24.169596 kubelet[2643]: I0515 00:06:24.169561 2643 scope.go:117] "RemoveContainer" containerID="9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2"
May 15 00:06:24.169792 containerd[1505]: time="2025-05-15T00:06:24.169760323Z" level=error msg="ContainerStatus for \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\": not found"
May 15 00:06:24.170011 kubelet[2643]: E0515 00:06:24.169969 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\": not found" containerID="9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2"
May 15 00:06:24.170072 kubelet[2643]: I0515 00:06:24.170026 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2"} err="failed to get container status \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9eb635211100a8e52f43700cba4cf3d47cf66d8913b3e8399e3bcfb97317bde2\": not found"
May 15 00:06:24.170072 kubelet[2643]: I0515 00:06:24.170066 2643 scope.go:117] "RemoveContainer" containerID="17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424"
May 15 00:06:24.170470 containerd[1505]: time="2025-05-15T00:06:24.170420591Z" level=error msg="ContainerStatus for \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\": not found"
May 15 00:06:24.170643 kubelet[2643]: E0515 00:06:24.170616 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\": not found" containerID="17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424"
May 15 00:06:24.170706 kubelet[2643]: I0515 00:06:24.170645 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424"} err="failed to get container status \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\": rpc error: code = NotFound desc = an error occurred when try to find container \"17c4d809acd7f0c03bd5dfb3b222eb1cecb4fa328cbd30c1eeb43cdd34a0a424\": not found"
May 15 00:06:24.170706 kubelet[2643]: I0515 00:06:24.170665 2643 scope.go:117] "RemoveContainer" containerID="7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15"
May 15 00:06:24.170903 containerd[1505]: time="2025-05-15T00:06:24.170865325Z" level=error msg="ContainerStatus for \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\": not found"
May 15 00:06:24.171033 kubelet[2643]: E0515 00:06:24.171008 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\": not
found" containerID="7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15" May 15 00:06:24.171070 kubelet[2643]: I0515 00:06:24.171036 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15"} err="failed to get container status \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c49b12ad9df389d9f453f7023d0166da6aeaf7de22e149fe590e8535b980c15\": not found" May 15 00:06:24.171070 kubelet[2643]: I0515 00:06:24.171055 2643 scope.go:117] "RemoveContainer" containerID="7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab" May 15 00:06:24.171285 containerd[1505]: time="2025-05-15T00:06:24.171236658Z" level=error msg="ContainerStatus for \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\": not found" May 15 00:06:24.171384 kubelet[2643]: E0515 00:06:24.171358 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\": not found" containerID="7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab" May 15 00:06:24.171424 kubelet[2643]: I0515 00:06:24.171383 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab"} err="failed to get container status \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"7429919d3817669936b17b758b7bcd62e9e0fb1ca57c1469650ce3e9878a97ab\": not found" May 15 
00:06:24.171424 kubelet[2643]: I0515 00:06:24.171405 2643 scope.go:117] "RemoveContainer" containerID="0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e" May 15 00:06:24.172328 containerd[1505]: time="2025-05-15T00:06:24.172293647Z" level=info msg="RemoveContainer for \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\"" May 15 00:06:24.177900 containerd[1505]: time="2025-05-15T00:06:24.177833303Z" level=info msg="RemoveContainer for \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\" returns successfully" May 15 00:06:24.178135 kubelet[2643]: I0515 00:06:24.177974 2643 scope.go:117] "RemoveContainer" containerID="0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e" May 15 00:06:24.178185 containerd[1505]: time="2025-05-15T00:06:24.178127920Z" level=error msg="ContainerStatus for \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\": not found" May 15 00:06:24.178276 kubelet[2643]: E0515 00:06:24.178241 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\": not found" containerID="0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e" May 15 00:06:24.178355 kubelet[2643]: I0515 00:06:24.178274 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e"} err="failed to get container status \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0be6c1bdd39e514050c70ff9506a57517bd3f5b46dfaa79cbae9eba8f293a41e\": not found" May 15 00:06:24.664960 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07-rootfs.mount: Deactivated successfully. May 15 00:06:24.665128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741-rootfs.mount: Deactivated successfully. May 15 00:06:24.665216 systemd[1]: var-lib-kubelet-pods-2db0b230\x2d560e\x2d4513\x2da6ed\x2d4816a80ace05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmbb9q.mount: Deactivated successfully. May 15 00:06:24.665307 systemd[1]: var-lib-kubelet-pods-e6da0ac1\x2d727d\x2d4ba9\x2d9691\x2dbc8d9332c446-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2mvhb.mount: Deactivated successfully. May 15 00:06:24.665394 systemd[1]: var-lib-kubelet-pods-e6da0ac1\x2d727d\x2d4ba9\x2d9691\x2dbc8d9332c446-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 00:06:24.665483 systemd[1]: var-lib-kubelet-pods-e6da0ac1\x2d727d\x2d4ba9\x2d9691\x2dbc8d9332c446-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 00:06:24.693649 kubelet[2643]: I0515 00:06:24.693599 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2db0b230-560e-4513-a6ed-4816a80ace05" path="/var/lib/kubelet/pods/2db0b230-560e-4513-a6ed-4816a80ace05/volumes" May 15 00:06:24.694392 kubelet[2643]: I0515 00:06:24.694339 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6da0ac1-727d-4ba9-9691-bc8d9332c446" path="/var/lib/kubelet/pods/e6da0ac1-727d-4ba9-9691-bc8d9332c446/volumes" May 15 00:06:25.260540 sshd[4335]: Connection closed by 10.0.0.1 port 59362 May 15 00:06:25.260968 sshd-session[4332]: pam_unix(sshd:session): session closed for user core May 15 00:06:25.271299 systemd[1]: sshd@25-10.0.0.106:22-10.0.0.1:59362.service: Deactivated successfully. May 15 00:06:25.273484 systemd[1]: session-26.scope: Deactivated successfully. 
May 15 00:06:25.275326 systemd-logind[1495]: Session 26 logged out. Waiting for processes to exit.
May 15 00:06:25.280394 systemd[1]: Started sshd@26-10.0.0.106:22-10.0.0.1:38728.service - OpenSSH per-connection server daemon (10.0.0.1:38728).
May 15 00:06:25.281484 systemd-logind[1495]: Removed session 26.
May 15 00:06:25.317591 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 38728 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc
May 15 00:06:25.319301 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:25.324066 systemd-logind[1495]: New session 27 of user core.
May 15 00:06:25.335454 systemd[1]: Started session-27.scope - Session 27 of User core.
May 15 00:06:25.750200 kubelet[2643]: E0515 00:06:25.750137 2643 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 00:06:25.999346 sshd[4499]: Connection closed by 10.0.0.1 port 38728
May 15 00:06:26.001729 sshd-session[4496]: pam_unix(sshd:session): session closed for user core
May 15 00:06:26.019760 kubelet[2643]: I0515 00:06:26.019687 2643 memory_manager.go:355] "RemoveStaleState removing state" podUID="e6da0ac1-727d-4ba9-9691-bc8d9332c446" containerName="cilium-agent"
May 15 00:06:26.019760 kubelet[2643]: I0515 00:06:26.019724 2643 memory_manager.go:355] "RemoveStaleState removing state" podUID="2db0b230-560e-4513-a6ed-4816a80ace05" containerName="cilium-operator"
May 15 00:06:26.024134 systemd[1]: sshd@26-10.0.0.106:22-10.0.0.1:38728.service: Deactivated successfully.
May 15 00:06:26.024755 kubelet[2643]: W0515 00:06:26.024463 2643 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
May 15 00:06:26.024755 kubelet[2643]: E0515 00:06:26.024530 2643 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
May 15 00:06:26.024755 kubelet[2643]: W0515 00:06:26.024611 2643 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
May 15 00:06:26.024755 kubelet[2643]: E0515 00:06:26.024632 2643 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
May 15 00:06:26.024755 kubelet[2643]: W0515 00:06:26.024690 2643 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
May 15 00:06:26.025622 kubelet[2643]: E0515 00:06:26.024709 2643 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
May 15 00:06:26.026912 kubelet[2643]: I0515 00:06:26.026832 2643 status_manager.go:890] "Failed to get status for pod" podUID="3f32ca6b-a41d-4309-b81a-0e02714a1dbd" pod="kube-system/cilium-m88n2" err="pods \"cilium-m88n2\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object"
May 15 00:06:26.027601 kubelet[2643]: W0515 00:06:26.027513 2643 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
May 15 00:06:26.027601 kubelet[2643]: E0515 00:06:26.027547 2643 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
May 15 00:06:26.028998 systemd[1]: session-27.scope: Deactivated successfully.
May 15 00:06:26.030482 systemd-logind[1495]: Session 27 logged out. Waiting for processes to exit.
May 15 00:06:26.035896 systemd-logind[1495]: Removed session 27.
May 15 00:06:26.050256 systemd[1]: Started sshd@27-10.0.0.106:22-10.0.0.1:38742.service - OpenSSH per-connection server daemon (10.0.0.1:38742).
May 15 00:06:26.056086 systemd[1]: Created slice kubepods-burstable-pod3f32ca6b_a41d_4309_b81a_0e02714a1dbd.slice - libcontainer container kubepods-burstable-pod3f32ca6b_a41d_4309_b81a_0e02714a1dbd.slice.
May 15 00:06:26.071348 kubelet[2643]: I0515 00:06:26.071289 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-xtables-lock\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.071772 kubelet[2643]: I0515 00:06:26.071591 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-cilium-config-path\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073080 kubelet[2643]: I0515 00:06:26.072733 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-hubble-tls\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073080 kubelet[2643]: I0515 00:06:26.072772 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-clustermesh-secrets\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073080 kubelet[2643]: I0515 00:06:26.072794 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-cilium-run\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073080 kubelet[2643]: I0515 00:06:26.072815 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-cilium-cgroup\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073080 kubelet[2643]: I0515 00:06:26.072835 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-cni-path\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073080 kubelet[2643]: I0515 00:06:26.072869 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-bpf-maps\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073300 kubelet[2643]: I0515 00:06:26.072892 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-cilium-ipsec-secrets\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073300 kubelet[2643]: I0515 00:06:26.072912 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-host-proc-sys-kernel\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073300 kubelet[2643]: I0515 00:06:26.072936 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-lib-modules\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073300 kubelet[2643]: I0515 00:06:26.072956 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-hostproc\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073300 kubelet[2643]: I0515 00:06:26.072978 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-host-proc-sys-net\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073300 kubelet[2643]: I0515 00:06:26.073027 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-etc-cni-netd\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.073479 kubelet[2643]: I0515 00:06:26.073056 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgb9m\" (UniqueName: \"kubernetes.io/projected/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-kube-api-access-tgb9m\") pod \"cilium-m88n2\" (UID: \"3f32ca6b-a41d-4309-b81a-0e02714a1dbd\") " pod="kube-system/cilium-m88n2"
May 15 00:06:26.093167 sshd[4512]: Accepted publickey for core from 10.0.0.1 port 38742 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc
May 15 00:06:26.095554 sshd-session[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:26.103913 systemd-logind[1495]: New session 28 of user core.
May 15 00:06:26.113182 systemd[1]: Started session-28.scope - Session 28 of User core.
May 15 00:06:26.169042 sshd[4516]: Connection closed by 10.0.0.1 port 38742
May 15 00:06:26.169568 sshd-session[4512]: pam_unix(sshd:session): session closed for user core
May 15 00:06:26.188208 systemd[1]: sshd@27-10.0.0.106:22-10.0.0.1:38742.service: Deactivated successfully.
May 15 00:06:26.191368 systemd[1]: session-28.scope: Deactivated successfully.
May 15 00:06:26.193591 systemd-logind[1495]: Session 28 logged out. Waiting for processes to exit.
May 15 00:06:26.202555 systemd[1]: Started sshd@28-10.0.0.106:22-10.0.0.1:38756.service - OpenSSH per-connection server daemon (10.0.0.1:38756).
May 15 00:06:26.203968 systemd-logind[1495]: Removed session 28.
May 15 00:06:26.243426 sshd[4524]: Accepted publickey for core from 10.0.0.1 port 38756 ssh2: RSA SHA256:Pqx89Dg+7GN5o/lXv4n120h9YNtXES8cENk1JyfmIpc
May 15 00:06:26.245954 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:06:26.252645 systemd-logind[1495]: New session 29 of user core.
May 15 00:06:26.263430 systemd[1]: Started session-29.scope - Session 29 of User core.
May 15 00:06:27.175014 kubelet[2643]: E0515 00:06:27.174942 2643 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 15 00:06:27.175518 kubelet[2643]: E0515 00:06:27.175058 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-cilium-config-path podName:3f32ca6b-a41d-4309-b81a-0e02714a1dbd nodeName:}" failed. No retries permitted until 2025-05-15 00:06:27.675036385 +0000 UTC m=+107.123555941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-cilium-config-path") pod "cilium-m88n2" (UID: "3f32ca6b-a41d-4309-b81a-0e02714a1dbd") : failed to sync configmap cache: timed out waiting for the condition
May 15 00:06:27.183948 kubelet[2643]: E0515 00:06:27.183888 2643 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
May 15 00:06:27.183948 kubelet[2643]: E0515 00:06:27.183936 2643 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-m88n2: failed to sync secret cache: timed out waiting for the condition
May 15 00:06:27.184115 kubelet[2643]: E0515 00:06:27.183996 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-hubble-tls podName:3f32ca6b-a41d-4309-b81a-0e02714a1dbd nodeName:}" failed. No retries permitted until 2025-05-15 00:06:27.683977756 +0000 UTC m=+107.132497312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-hubble-tls") pod "cilium-m88n2" (UID: "3f32ca6b-a41d-4309-b81a-0e02714a1dbd") : failed to sync secret cache: timed out waiting for the condition
May 15 00:06:27.184115 kubelet[2643]: E0515 00:06:27.183899 2643 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
May 15 00:06:27.184115 kubelet[2643]: E0515 00:06:27.184066 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-cilium-ipsec-secrets podName:3f32ca6b-a41d-4309-b81a-0e02714a1dbd nodeName:}" failed. No retries permitted until 2025-05-15 00:06:27.684045366 +0000 UTC m=+107.132564922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/3f32ca6b-a41d-4309-b81a-0e02714a1dbd-cilium-ipsec-secrets") pod "cilium-m88n2" (UID: "3f32ca6b-a41d-4309-b81a-0e02714a1dbd") : failed to sync secret cache: timed out waiting for the condition
May 15 00:06:27.861860 containerd[1505]: time="2025-05-15T00:06:27.861793069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m88n2,Uid:3f32ca6b-a41d-4309-b81a-0e02714a1dbd,Namespace:kube-system,Attempt:0,}"
May 15 00:06:28.281701 containerd[1505]: time="2025-05-15T00:06:28.281416374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:06:28.281701 containerd[1505]: time="2025-05-15T00:06:28.281484474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:06:28.281701 containerd[1505]: time="2025-05-15T00:06:28.281497649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:06:28.281701 containerd[1505]: time="2025-05-15T00:06:28.281593304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:06:28.311407 systemd[1]: Started cri-containerd-1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959.scope - libcontainer container 1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959.
May 15 00:06:28.335621 containerd[1505]: time="2025-05-15T00:06:28.335579743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m88n2,Uid:3f32ca6b-a41d-4309-b81a-0e02714a1dbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\""
May 15 00:06:28.338747 containerd[1505]: time="2025-05-15T00:06:28.338711352Z" level=info msg="CreateContainer within sandbox \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 00:06:28.691192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262215802.mount: Deactivated successfully.
May 15 00:06:28.868641 containerd[1505]: time="2025-05-15T00:06:28.868570990Z" level=info msg="CreateContainer within sandbox \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1808837c55e9a432dc4f48767056e27509bdb744bf467bd9098a6c387c34128f\""
May 15 00:06:28.869409 containerd[1505]: time="2025-05-15T00:06:28.869380545Z" level=info msg="StartContainer for \"1808837c55e9a432dc4f48767056e27509bdb744bf467bd9098a6c387c34128f\""
May 15 00:06:28.904466 systemd[1]: Started cri-containerd-1808837c55e9a432dc4f48767056e27509bdb744bf467bd9098a6c387c34128f.scope - libcontainer container 1808837c55e9a432dc4f48767056e27509bdb744bf467bd9098a6c387c34128f.
May 15 00:06:29.034810 systemd[1]: cri-containerd-1808837c55e9a432dc4f48767056e27509bdb744bf467bd9098a6c387c34128f.scope: Deactivated successfully.
May 15 00:06:29.039699 containerd[1505]: time="2025-05-15T00:06:29.039630476Z" level=info msg="StartContainer for \"1808837c55e9a432dc4f48767056e27509bdb744bf467bd9098a6c387c34128f\" returns successfully"
May 15 00:06:29.061133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1808837c55e9a432dc4f48767056e27509bdb744bf467bd9098a6c387c34128f-rootfs.mount: Deactivated successfully.
May 15 00:06:29.237226 containerd[1505]: time="2025-05-15T00:06:29.237127394Z" level=info msg="shim disconnected" id=1808837c55e9a432dc4f48767056e27509bdb744bf467bd9098a6c387c34128f namespace=k8s.io
May 15 00:06:29.237226 containerd[1505]: time="2025-05-15T00:06:29.237191818Z" level=warning msg="cleaning up after shim disconnected" id=1808837c55e9a432dc4f48767056e27509bdb744bf467bd9098a6c387c34128f namespace=k8s.io
May 15 00:06:29.237226 containerd[1505]: time="2025-05-15T00:06:29.237202870Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:06:30.146483 containerd[1505]: time="2025-05-15T00:06:30.146422195Z" level=info msg="CreateContainer within sandbox \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 00:06:30.751787 kubelet[2643]: E0515 00:06:30.751735 2643 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 00:06:30.752465 containerd[1505]: time="2025-05-15T00:06:30.752182861Z" level=info msg="CreateContainer within sandbox \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"080661acf03332147dd7f2e5a8ebd864955f917f8e07eada73c77aeb30d481fd\""
May 15 00:06:30.752656 containerd[1505]: time="2025-05-15T00:06:30.752616083Z" level=info msg="StartContainer for \"080661acf03332147dd7f2e5a8ebd864955f917f8e07eada73c77aeb30d481fd\""
May 15 00:06:30.793407 systemd[1]: Started cri-containerd-080661acf03332147dd7f2e5a8ebd864955f917f8e07eada73c77aeb30d481fd.scope - libcontainer container 080661acf03332147dd7f2e5a8ebd864955f917f8e07eada73c77aeb30d481fd.
May 15 00:06:30.834881 systemd[1]: cri-containerd-080661acf03332147dd7f2e5a8ebd864955f917f8e07eada73c77aeb30d481fd.scope: Deactivated successfully.
May 15 00:06:30.901702 containerd[1505]: time="2025-05-15T00:06:30.901619221Z" level=info msg="StartContainer for \"080661acf03332147dd7f2e5a8ebd864955f917f8e07eada73c77aeb30d481fd\" returns successfully"
May 15 00:06:30.942838 containerd[1505]: time="2025-05-15T00:06:30.942751772Z" level=info msg="shim disconnected" id=080661acf03332147dd7f2e5a8ebd864955f917f8e07eada73c77aeb30d481fd namespace=k8s.io
May 15 00:06:30.942838 containerd[1505]: time="2025-05-15T00:06:30.942824442Z" level=warning msg="cleaning up after shim disconnected" id=080661acf03332147dd7f2e5a8ebd864955f917f8e07eada73c77aeb30d481fd namespace=k8s.io
May 15 00:06:30.942838 containerd[1505]: time="2025-05-15T00:06:30.942838639Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:06:31.151340 containerd[1505]: time="2025-05-15T00:06:31.151271923Z" level=info msg="CreateContainer within sandbox \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 00:06:31.513774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-080661acf03332147dd7f2e5a8ebd864955f917f8e07eada73c77aeb30d481fd-rootfs.mount: Deactivated successfully.
May 15 00:06:31.517179 containerd[1505]: time="2025-05-15T00:06:31.517119959Z" level=info msg="CreateContainer within sandbox \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"38654bbd6193ff5be48ee9b050b625b5118f008591a3a804c9ce8a89a1471a4f\""
May 15 00:06:31.517774 containerd[1505]: time="2025-05-15T00:06:31.517732196Z" level=info msg="StartContainer for \"38654bbd6193ff5be48ee9b050b625b5118f008591a3a804c9ce8a89a1471a4f\""
May 15 00:06:31.554490 systemd[1]: Started cri-containerd-38654bbd6193ff5be48ee9b050b625b5118f008591a3a804c9ce8a89a1471a4f.scope - libcontainer container 38654bbd6193ff5be48ee9b050b625b5118f008591a3a804c9ce8a89a1471a4f.
May 15 00:06:31.595948 systemd[1]: cri-containerd-38654bbd6193ff5be48ee9b050b625b5118f008591a3a804c9ce8a89a1471a4f.scope: Deactivated successfully.
May 15 00:06:31.609438 containerd[1505]: time="2025-05-15T00:06:31.609361753Z" level=info msg="StartContainer for \"38654bbd6193ff5be48ee9b050b625b5118f008591a3a804c9ce8a89a1471a4f\" returns successfully"
May 15 00:06:31.634401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38654bbd6193ff5be48ee9b050b625b5118f008591a3a804c9ce8a89a1471a4f-rootfs.mount: Deactivated successfully.
May 15 00:06:31.836958 containerd[1505]: time="2025-05-15T00:06:31.836877511Z" level=info msg="shim disconnected" id=38654bbd6193ff5be48ee9b050b625b5118f008591a3a804c9ce8a89a1471a4f namespace=k8s.io
May 15 00:06:31.836958 containerd[1505]: time="2025-05-15T00:06:31.836947525Z" level=warning msg="cleaning up after shim disconnected" id=38654bbd6193ff5be48ee9b050b625b5118f008591a3a804c9ce8a89a1471a4f namespace=k8s.io
May 15 00:06:31.836958 containerd[1505]: time="2025-05-15T00:06:31.836959910Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:06:32.155823 containerd[1505]: time="2025-05-15T00:06:32.155687423Z" level=info msg="CreateContainer within sandbox \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:06:32.396399 containerd[1505]: time="2025-05-15T00:06:32.396290839Z" level=info msg="CreateContainer within sandbox \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f67749b08dba6f235fc1067a4ea4fe393a04fc3dd88cb6c7bf0edb771e4eab3\""
May 15 00:06:32.397146 containerd[1505]: time="2025-05-15T00:06:32.397066521Z" level=info msg="StartContainer for \"7f67749b08dba6f235fc1067a4ea4fe393a04fc3dd88cb6c7bf0edb771e4eab3\""
May 15 00:06:32.440417 systemd[1]: Started cri-containerd-7f67749b08dba6f235fc1067a4ea4fe393a04fc3dd88cb6c7bf0edb771e4eab3.scope - libcontainer container 7f67749b08dba6f235fc1067a4ea4fe393a04fc3dd88cb6c7bf0edb771e4eab3.
May 15 00:06:32.470487 systemd[1]: cri-containerd-7f67749b08dba6f235fc1067a4ea4fe393a04fc3dd88cb6c7bf0edb771e4eab3.scope: Deactivated successfully.
May 15 00:06:32.476731 containerd[1505]: time="2025-05-15T00:06:32.476673196Z" level=info msg="StartContainer for \"7f67749b08dba6f235fc1067a4ea4fe393a04fc3dd88cb6c7bf0edb771e4eab3\" returns successfully"
May 15 00:06:32.506058 containerd[1505]: time="2025-05-15T00:06:32.505956859Z" level=info msg="shim disconnected" id=7f67749b08dba6f235fc1067a4ea4fe393a04fc3dd88cb6c7bf0edb771e4eab3 namespace=k8s.io
May 15 00:06:32.506058 containerd[1505]: time="2025-05-15T00:06:32.506046380Z" level=warning msg="cleaning up after shim disconnected" id=7f67749b08dba6f235fc1067a4ea4fe393a04fc3dd88cb6c7bf0edb771e4eab3 namespace=k8s.io
May 15 00:06:32.506058 containerd[1505]: time="2025-05-15T00:06:32.506055919Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:06:32.513705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006665231.mount: Deactivated successfully.
May 15 00:06:33.160204 containerd[1505]: time="2025-05-15T00:06:33.160139208Z" level=info msg="CreateContainer within sandbox \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 00:06:33.182269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730702234.mount: Deactivated successfully.
May 15 00:06:33.186515 containerd[1505]: time="2025-05-15T00:06:33.186455371Z" level=info msg="CreateContainer within sandbox \"1493be452c43b9c991496c7cd39fe8b21124d804f516c937efbe7aeb976f1959\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f75f0352b824bbf6cef178c4f58089937c17aa2ba1288fe28f3e10bf8131a657\"" May 15 00:06:33.187757 containerd[1505]: time="2025-05-15T00:06:33.187682843Z" level=info msg="StartContainer for \"f75f0352b824bbf6cef178c4f58089937c17aa2ba1288fe28f3e10bf8131a657\"" May 15 00:06:33.226335 systemd[1]: Started cri-containerd-f75f0352b824bbf6cef178c4f58089937c17aa2ba1288fe28f3e10bf8131a657.scope - libcontainer container f75f0352b824bbf6cef178c4f58089937c17aa2ba1288fe28f3e10bf8131a657. May 15 00:06:33.252012 kubelet[2643]: I0515 00:06:33.251941 2643 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:06:33Z","lastTransitionTime":"2025-05-15T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 00:06:33.262686 containerd[1505]: time="2025-05-15T00:06:33.262628641Z" level=info msg="StartContainer for \"f75f0352b824bbf6cef178c4f58089937c17aa2ba1288fe28f3e10bf8131a657\" returns successfully" May 15 00:06:33.787179 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 15 00:06:34.181489 kubelet[2643]: I0515 00:06:34.181402 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m88n2" podStartSLOduration=8.181376636 podStartE2EDuration="8.181376636s" podCreationTimestamp="2025-05-15 00:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:06:34.180863809 +0000 UTC m=+113.629383365" watchObservedRunningTime="2025-05-15 
00:06:34.181376636 +0000 UTC m=+113.629896192" May 15 00:06:36.939593 systemd[1]: run-containerd-runc-k8s.io-f75f0352b824bbf6cef178c4f58089937c17aa2ba1288fe28f3e10bf8131a657-runc.MevGtr.mount: Deactivated successfully. May 15 00:06:37.318555 systemd-networkd[1429]: lxc_health: Link UP May 15 00:06:37.328995 systemd-networkd[1429]: lxc_health: Gained carrier May 15 00:06:39.246408 systemd-networkd[1429]: lxc_health: Gained IPv6LL May 15 00:06:40.685717 containerd[1505]: time="2025-05-15T00:06:40.685663083Z" level=info msg="StopPodSandbox for \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\"" May 15 00:06:40.686233 containerd[1505]: time="2025-05-15T00:06:40.685775349Z" level=info msg="TearDown network for sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" successfully" May 15 00:06:40.686233 containerd[1505]: time="2025-05-15T00:06:40.685843781Z" level=info msg="StopPodSandbox for \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" returns successfully" May 15 00:06:40.686500 containerd[1505]: time="2025-05-15T00:06:40.686471310Z" level=info msg="RemovePodSandbox for \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\"" May 15 00:06:40.686540 containerd[1505]: time="2025-05-15T00:06:40.686514693Z" level=info msg="Forcibly stopping sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\"" May 15 00:06:40.686611 containerd[1505]: time="2025-05-15T00:06:40.686590509Z" level=info msg="TearDown network for sandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" successfully" May 15 00:06:40.727181 containerd[1505]: time="2025-05-15T00:06:40.727083798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:06:40.727465 containerd[1505]: time="2025-05-15T00:06:40.727210201Z" level=info msg="RemovePodSandbox \"bc10ca354887e9881110b09c2e3cbf80fbca5f4d511b2b69745096658cf24741\" returns successfully" May 15 00:06:40.728080 containerd[1505]: time="2025-05-15T00:06:40.727885632Z" level=info msg="StopPodSandbox for \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\"" May 15 00:06:40.728080 containerd[1505]: time="2025-05-15T00:06:40.727991036Z" level=info msg="TearDown network for sandbox \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\" successfully" May 15 00:06:40.728080 containerd[1505]: time="2025-05-15T00:06:40.728003790Z" level=info msg="StopPodSandbox for \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\" returns successfully" May 15 00:06:40.728553 containerd[1505]: time="2025-05-15T00:06:40.728509083Z" level=info msg="RemovePodSandbox for \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\"" May 15 00:06:40.728673 containerd[1505]: time="2025-05-15T00:06:40.728555022Z" level=info msg="Forcibly stopping sandbox \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\"" May 15 00:06:40.728711 containerd[1505]: time="2025-05-15T00:06:40.728652289Z" level=info msg="TearDown network for sandbox \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\" successfully" May 15 00:06:40.732737 containerd[1505]: time="2025-05-15T00:06:40.732680860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:06:40.732857 containerd[1505]: time="2025-05-15T00:06:40.732771936Z" level=info msg="RemovePodSandbox \"3db0202f642ccf5bcad0431aacb8fbf1eadf579e72911ec6de749e5adc74fd07\" returns successfully" May 15 00:06:43.441754 sshd[4527]: Connection closed by 10.0.0.1 port 38756 May 15 00:06:43.442312 sshd-session[4524]: pam_unix(sshd:session): session closed for user core May 15 00:06:43.447421 systemd[1]: sshd@28-10.0.0.106:22-10.0.0.1:38756.service: Deactivated successfully. May 15 00:06:43.449816 systemd[1]: session-29.scope: Deactivated successfully. May 15 00:06:43.450734 systemd-logind[1495]: Session 29 logged out. Waiting for processes to exit. May 15 00:06:43.451656 systemd-logind[1495]: Removed session 29.