May 16 05:23:19.037646 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 03:57:41 -00 2025 May 16 05:23:19.037670 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3d34a100811c7d0b0788e18c6e7d09762e715c8b4f61568827372c0312d26be3 May 16 05:23:19.037679 kernel: BIOS-provided physical RAM map: May 16 05:23:19.037686 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 16 05:23:19.037692 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 16 05:23:19.037699 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 16 05:23:19.037707 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 16 05:23:19.037715 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 16 05:23:19.037726 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 16 05:23:19.037732 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 16 05:23:19.037739 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 16 05:23:19.037746 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 16 05:23:19.037752 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 16 05:23:19.037759 kernel: NX (Execute Disable) protection: active May 16 05:23:19.037769 kernel: APIC: Static calls initialized May 16 05:23:19.037776 kernel: SMBIOS 2.8 present. 
May 16 05:23:19.037786 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 16 05:23:19.037793 kernel: DMI: Memory slots populated: 1/1 May 16 05:23:19.037800 kernel: Hypervisor detected: KVM May 16 05:23:19.037807 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 16 05:23:19.037815 kernel: kvm-clock: using sched offset of 4179342942 cycles May 16 05:23:19.037822 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 16 05:23:19.037830 kernel: tsc: Detected 2794.748 MHz processor May 16 05:23:19.037839 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 16 05:23:19.037847 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 16 05:23:19.037855 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 16 05:23:19.037862 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 16 05:23:19.037870 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 16 05:23:19.037877 kernel: Using GB pages for direct mapping May 16 05:23:19.037884 kernel: ACPI: Early table checksum verification disabled May 16 05:23:19.037964 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 16 05:23:19.037975 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:23:19.037987 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:23:19.037994 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:23:19.038001 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 16 05:23:19.038009 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:23:19.038016 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:23:19.038023 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 
05:23:19.038031 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 05:23:19.038038 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 16 05:23:19.038051 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 16 05:23:19.038059 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 16 05:23:19.038066 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 16 05:23:19.038074 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 16 05:23:19.038081 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 16 05:23:19.038089 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 16 05:23:19.038098 kernel: No NUMA configuration found May 16 05:23:19.038105 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 16 05:23:19.038113 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] May 16 05:23:19.038121 kernel: Zone ranges: May 16 05:23:19.038128 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 16 05:23:19.038136 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 16 05:23:19.038143 kernel: Normal empty May 16 05:23:19.038150 kernel: Device empty May 16 05:23:19.038158 kernel: Movable zone start for each node May 16 05:23:19.038165 kernel: Early memory node ranges May 16 05:23:19.038175 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 16 05:23:19.038182 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 16 05:23:19.038190 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 16 05:23:19.038197 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 16 05:23:19.038205 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 16 05:23:19.038212 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 16 05:23:19.038219 kernel: ACPI: PM-Timer IO Port: 0x608 May 
16 05:23:19.038230 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 16 05:23:19.038237 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 16 05:23:19.038247 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 16 05:23:19.038255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 16 05:23:19.038264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 16 05:23:19.038272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 16 05:23:19.038279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 16 05:23:19.038287 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 16 05:23:19.038294 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 16 05:23:19.038302 kernel: TSC deadline timer available May 16 05:23:19.038309 kernel: CPU topo: Max. logical packages: 1 May 16 05:23:19.038319 kernel: CPU topo: Max. logical dies: 1 May 16 05:23:19.038326 kernel: CPU topo: Max. dies per package: 1 May 16 05:23:19.038333 kernel: CPU topo: Max. threads per core: 1 May 16 05:23:19.038341 kernel: CPU topo: Num. cores per package: 4 May 16 05:23:19.038348 kernel: CPU topo: Num. 
threads per package: 4 May 16 05:23:19.038355 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs May 16 05:23:19.038363 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 16 05:23:19.038370 kernel: kvm-guest: KVM setup pv remote TLB flush May 16 05:23:19.038378 kernel: kvm-guest: setup PV sched yield May 16 05:23:19.038385 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 16 05:23:19.038395 kernel: Booting paravirtualized kernel on KVM May 16 05:23:19.038402 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 16 05:23:19.038410 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 16 05:23:19.038418 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 May 16 05:23:19.038425 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 May 16 05:23:19.038432 kernel: pcpu-alloc: [0] 0 1 2 3 May 16 05:23:19.038440 kernel: kvm-guest: PV spinlocks enabled May 16 05:23:19.038447 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 16 05:23:19.038456 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3d34a100811c7d0b0788e18c6e7d09762e715c8b4f61568827372c0312d26be3 May 16 05:23:19.038466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 16 05:23:19.038473 kernel: random: crng init done May 16 05:23:19.038481 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 05:23:19.038488 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 05:23:19.038496 kernel: Fallback order for Node 0: 0 May 16 05:23:19.038503 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 May 16 05:23:19.038511 kernel: Policy zone: DMA32 May 16 05:23:19.038518 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 05:23:19.038528 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 16 05:23:19.038535 kernel: ftrace: allocating 40065 entries in 157 pages May 16 05:23:19.038542 kernel: ftrace: allocated 157 pages with 5 groups May 16 05:23:19.038550 kernel: Dynamic Preempt: voluntary May 16 05:23:19.038557 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 05:23:19.038565 kernel: rcu: RCU event tracing is enabled. May 16 05:23:19.038573 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 16 05:23:19.038581 kernel: Trampoline variant of Tasks RCU enabled. May 16 05:23:19.038591 kernel: Rude variant of Tasks RCU enabled. May 16 05:23:19.038601 kernel: Tracing variant of Tasks RCU enabled. May 16 05:23:19.038608 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 16 05:23:19.038616 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 16 05:23:19.038623 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 16 05:23:19.038631 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 16 05:23:19.038638 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
May 16 05:23:19.038646 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 16 05:23:19.038654 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 16 05:23:19.038670 kernel: Console: colour VGA+ 80x25 May 16 05:23:19.038678 kernel: printk: legacy console [ttyS0] enabled May 16 05:23:19.038685 kernel: ACPI: Core revision 20240827 May 16 05:23:19.038693 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 16 05:23:19.038703 kernel: APIC: Switch to symmetric I/O mode setup May 16 05:23:19.038711 kernel: x2apic enabled May 16 05:23:19.038721 kernel: APIC: Switched APIC routing to: physical x2apic May 16 05:23:19.038729 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 16 05:23:19.038737 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 16 05:23:19.038747 kernel: kvm-guest: setup PV IPIs May 16 05:23:19.038755 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 16 05:23:19.038763 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 16 05:23:19.038771 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 16 05:23:19.038779 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 16 05:23:19.038787 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 16 05:23:19.038795 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 16 05:23:19.038802 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 16 05:23:19.038812 kernel: Spectre V2 : Mitigation: Retpolines May 16 05:23:19.038820 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 16 05:23:19.038828 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 16 05:23:19.038836 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 16 05:23:19.038844 kernel: RETBleed: Mitigation: untrained return thunk May 16 05:23:19.038852 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 16 05:23:19.038859 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 16 05:23:19.038867 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 16 05:23:19.038876 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 16 05:23:19.038886 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 16 05:23:19.038914 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 16 05:23:19.038922 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 16 05:23:19.038930 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 16 05:23:19.038938 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 16 05:23:19.038946 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
May 16 05:23:19.038953 kernel: Freeing SMP alternatives memory: 32K May 16 05:23:19.038962 kernel: pid_max: default: 32768 minimum: 301 May 16 05:23:19.038972 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 16 05:23:19.038980 kernel: landlock: Up and running. May 16 05:23:19.038987 kernel: SELinux: Initializing. May 16 05:23:19.038995 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 05:23:19.039005 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 05:23:19.039013 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 16 05:23:19.039021 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 16 05:23:19.039029 kernel: ... version: 0 May 16 05:23:19.039037 kernel: ... bit width: 48 May 16 05:23:19.039046 kernel: ... generic registers: 6 May 16 05:23:19.039054 kernel: ... value mask: 0000ffffffffffff May 16 05:23:19.039062 kernel: ... max period: 00007fffffffffff May 16 05:23:19.039070 kernel: ... fixed-purpose events: 0 May 16 05:23:19.039078 kernel: ... event mask: 000000000000003f May 16 05:23:19.039085 kernel: signal: max sigframe size: 1776 May 16 05:23:19.039093 kernel: rcu: Hierarchical SRCU implementation. May 16 05:23:19.039101 kernel: rcu: Max phase no-delay instances is 400. May 16 05:23:19.039109 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 16 05:23:19.039117 kernel: smp: Bringing up secondary CPUs ... May 16 05:23:19.039126 kernel: smpboot: x86: Booting SMP configuration: May 16 05:23:19.039134 kernel: .... 
node #0, CPUs: #1 #2 #3 May 16 05:23:19.039142 kernel: smp: Brought up 1 node, 4 CPUs May 16 05:23:19.039150 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 16 05:23:19.039158 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 136904K reserved, 0K cma-reserved) May 16 05:23:19.039166 kernel: devtmpfs: initialized May 16 05:23:19.039173 kernel: x86/mm: Memory block size: 128MB May 16 05:23:19.039181 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 05:23:19.039189 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 16 05:23:19.039199 kernel: pinctrl core: initialized pinctrl subsystem May 16 05:23:19.039207 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 05:23:19.039215 kernel: audit: initializing netlink subsys (disabled) May 16 05:23:19.039223 kernel: audit: type=2000 audit(1747372995.711:1): state=initialized audit_enabled=0 res=1 May 16 05:23:19.039231 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 05:23:19.039238 kernel: thermal_sys: Registered thermal governor 'user_space' May 16 05:23:19.039246 kernel: cpuidle: using governor menu May 16 05:23:19.039254 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 05:23:19.039262 kernel: dca service started, version 1.12.1 May 16 05:23:19.039272 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] May 16 05:23:19.039280 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 16 05:23:19.039287 kernel: PCI: Using configuration type 1 for base access May 16 05:23:19.039295 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 16 05:23:19.039303 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 16 05:23:19.039311 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 16 05:23:19.039319 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 16 05:23:19.039327 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 16 05:23:19.039336 kernel: ACPI: Added _OSI(Module Device) May 16 05:23:19.039344 kernel: ACPI: Added _OSI(Processor Device) May 16 05:23:19.039352 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 05:23:19.039360 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 05:23:19.039367 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 16 05:23:19.039375 kernel: ACPI: Interpreter enabled May 16 05:23:19.039383 kernel: ACPI: PM: (supports S0 S3 S5) May 16 05:23:19.039390 kernel: ACPI: Using IOAPIC for interrupt routing May 16 05:23:19.039398 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 16 05:23:19.039406 kernel: PCI: Using E820 reservations for host bridge windows May 16 05:23:19.039416 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 16 05:23:19.039424 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 05:23:19.039605 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 05:23:19.039729 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 16 05:23:19.039849 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 16 05:23:19.039859 kernel: PCI host bridge to bus 0000:00 May 16 05:23:19.040028 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 16 05:23:19.040146 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 16 05:23:19.040258 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 16 05:23:19.040365 kernel: pci_bus 0000:00: root bus 
resource [mem 0x9d000000-0xafffffff window] May 16 05:23:19.040472 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 16 05:23:19.040579 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 16 05:23:19.040686 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 05:23:19.040825 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint May 16 05:23:19.040982 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint May 16 05:23:19.041104 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] May 16 05:23:19.041233 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] May 16 05:23:19.041354 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] May 16 05:23:19.041472 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 16 05:23:19.041600 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 16 05:23:19.041726 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] May 16 05:23:19.041845 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] May 16 05:23:19.042026 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] May 16 05:23:19.042180 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 16 05:23:19.042305 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] May 16 05:23:19.042424 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] May 16 05:23:19.042543 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] May 16 05:23:19.042676 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 16 05:23:19.042797 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] May 16 05:23:19.042957 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] May 16 05:23:19.043082 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] 
May 16 05:23:19.043202 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] May 16 05:23:19.043328 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint May 16 05:23:19.043453 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 16 05:23:19.043579 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint May 16 05:23:19.043697 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] May 16 05:23:19.043813 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] May 16 05:23:19.043964 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint May 16 05:23:19.044087 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] May 16 05:23:19.044098 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 16 05:23:19.044110 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 16 05:23:19.044118 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 16 05:23:19.044126 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 16 05:23:19.044134 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 16 05:23:19.044142 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 16 05:23:19.044149 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 16 05:23:19.044157 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 16 05:23:19.044165 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 16 05:23:19.044172 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 16 05:23:19.044182 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 16 05:23:19.044190 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 16 05:23:19.044198 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 16 05:23:19.044205 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 16 05:23:19.044213 kernel: ACPI: PCI: Interrupt link GSIG 
configured for IRQ 22 May 16 05:23:19.044221 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 16 05:23:19.044228 kernel: iommu: Default domain type: Translated May 16 05:23:19.044236 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 16 05:23:19.044244 kernel: PCI: Using ACPI for IRQ routing May 16 05:23:19.044254 kernel: PCI: pci_cache_line_size set to 64 bytes May 16 05:23:19.044261 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 16 05:23:19.044269 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 16 05:23:19.044388 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 16 05:23:19.044505 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 16 05:23:19.044623 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 16 05:23:19.044633 kernel: vgaarb: loaded May 16 05:23:19.044641 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 16 05:23:19.044652 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 16 05:23:19.044660 kernel: clocksource: Switched to clocksource kvm-clock May 16 05:23:19.044667 kernel: VFS: Disk quotas dquot_6.6.0 May 16 05:23:19.044675 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 05:23:19.044683 kernel: pnp: PnP ACPI init May 16 05:23:19.044821 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 16 05:23:19.044833 kernel: pnp: PnP ACPI: found 6 devices May 16 05:23:19.044841 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 16 05:23:19.044853 kernel: NET: Registered PF_INET protocol family May 16 05:23:19.044860 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 16 05:23:19.044868 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 16 05:23:19.044876 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 
05:23:19.044884 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 05:23:19.044916 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 16 05:23:19.044925 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 16 05:23:19.044933 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 05:23:19.044940 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 05:23:19.044951 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 05:23:19.044959 kernel: NET: Registered PF_XDP protocol family May 16 05:23:19.045073 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 16 05:23:19.045181 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 16 05:23:19.045289 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 16 05:23:19.045396 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 16 05:23:19.045504 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 16 05:23:19.045611 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 16 05:23:19.045621 kernel: PCI: CLS 0 bytes, default 64 May 16 05:23:19.045633 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 16 05:23:19.045641 kernel: Initialise system trusted keyrings May 16 05:23:19.045649 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 16 05:23:19.045657 kernel: Key type asymmetric registered May 16 05:23:19.045665 kernel: Asymmetric key parser 'x509' registered May 16 05:23:19.045672 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 16 05:23:19.045680 kernel: io scheduler mq-deadline registered May 16 05:23:19.045688 kernel: io scheduler kyber registered May 16 05:23:19.045696 kernel: io scheduler bfq registered May 16 05:23:19.045706 kernel: ioatdma: Intel(R) QuickData 
Technology Driver 5.00 May 16 05:23:19.045714 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 16 05:23:19.045722 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 16 05:23:19.045730 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 16 05:23:19.045737 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 05:23:19.045745 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 16 05:23:19.045753 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 16 05:23:19.045761 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 16 05:23:19.045769 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 16 05:23:19.045944 kernel: rtc_cmos 00:04: RTC can wake from S4 May 16 05:23:19.045971 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 16 05:23:19.046092 kernel: rtc_cmos 00:04: registered as rtc0 May 16 05:23:19.046205 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T05:23:18 UTC (1747372998) May 16 05:23:19.046316 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 16 05:23:19.046327 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 16 05:23:19.046335 kernel: NET: Registered PF_INET6 protocol family May 16 05:23:19.046347 kernel: Segment Routing with IPv6 May 16 05:23:19.046355 kernel: In-situ OAM (IOAM) with IPv6 May 16 05:23:19.046362 kernel: NET: Registered PF_PACKET protocol family May 16 05:23:19.046370 kernel: Key type dns_resolver registered May 16 05:23:19.046378 kernel: IPI shorthand broadcast: enabled May 16 05:23:19.046386 kernel: sched_clock: Marking stable (2977002782, 122122818)->(3125180265, -26054665) May 16 05:23:19.046394 kernel: registered taskstats version 1 May 16 05:23:19.046402 kernel: hpet: Lost 2 RTC interrupts May 16 05:23:19.046409 kernel: Loading compiled-in X.509 certificates May 16 05:23:19.046417 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 
6.12.20-flatcar: bdcb483d7df54fb18e2c40c35e34935cfa928c44' May 16 05:23:19.046427 kernel: Demotion targets for Node 0: null May 16 05:23:19.046434 kernel: Key type .fscrypt registered May 16 05:23:19.046442 kernel: Key type fscrypt-provisioning registered May 16 05:23:19.046450 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 05:23:19.046458 kernel: ima: Allocated hash algorithm: sha1 May 16 05:23:19.046465 kernel: ima: No architecture policies found May 16 05:23:19.046473 kernel: clk: Disabling unused clocks May 16 05:23:19.046481 kernel: Warning: unable to open an initial console. May 16 05:23:19.046491 kernel: Freeing unused kernel image (initmem) memory: 54416K May 16 05:23:19.046499 kernel: Write protecting the kernel read-only data: 24576k May 16 05:23:19.046507 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 16 05:23:19.046515 kernel: Run /init as init process May 16 05:23:19.046522 kernel: with arguments: May 16 05:23:19.046530 kernel: /init May 16 05:23:19.046537 kernel: with environment: May 16 05:23:19.046545 kernel: HOME=/ May 16 05:23:19.046553 kernel: TERM=linux May 16 05:23:19.046562 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 05:23:19.046585 systemd[1]: Successfully made /usr/ read-only. May 16 05:23:19.046599 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 05:23:19.046608 systemd[1]: Detected virtualization kvm. May 16 05:23:19.046616 systemd[1]: Detected architecture x86-64. May 16 05:23:19.046626 systemd[1]: Running in initrd. May 16 05:23:19.046635 systemd[1]: No hostname configured, using default hostname. May 16 05:23:19.046645 systemd[1]: Hostname set to . 
May 16 05:23:19.046654 systemd[1]: Initializing machine ID from VM UUID. May 16 05:23:19.046662 systemd[1]: Queued start job for default target initrd.target. May 16 05:23:19.046671 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 05:23:19.046679 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 05:23:19.046689 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 05:23:19.046697 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 05:23:19.046706 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 05:23:19.046717 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 05:23:19.046727 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 05:23:19.046736 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 05:23:19.046744 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 05:23:19.046753 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 05:23:19.046762 systemd[1]: Reached target paths.target - Path Units. May 16 05:23:19.046770 systemd[1]: Reached target slices.target - Slice Units. May 16 05:23:19.046781 systemd[1]: Reached target swap.target - Swaps. May 16 05:23:19.046789 systemd[1]: Reached target timers.target - Timer Units. May 16 05:23:19.046798 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 05:23:19.046806 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 05:23:19.046815 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
May 16 05:23:19.046823 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 16 05:23:19.046832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 05:23:19.046840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 05:23:19.046849 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 05:23:19.046860 systemd[1]: Reached target sockets.target - Socket Units. May 16 05:23:19.046868 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 05:23:19.046877 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 05:23:19.046888 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 05:23:19.046919 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 16 05:23:19.046930 systemd[1]: Starting systemd-fsck-usr.service... May 16 05:23:19.046939 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 05:23:19.046948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 05:23:19.046957 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:23:19.046966 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 05:23:19.046975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 05:23:19.046986 systemd[1]: Finished systemd-fsck-usr.service. May 16 05:23:19.046995 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 05:23:19.047021 systemd-journald[220]: Collecting audit messages is disabled. May 16 05:23:19.047043 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 16 05:23:19.047052 systemd-journald[220]: Journal started May 16 05:23:19.047071 systemd-journald[220]: Runtime Journal (/run/log/journal/6fb945a55d074cbe9281c54422109374) is 6M, max 48.6M, 42.5M free. May 16 05:23:19.034993 systemd-modules-load[221]: Inserted module 'overlay' May 16 05:23:19.094400 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 05:23:19.094422 kernel: Bridge firewalling registered May 16 05:23:19.063785 systemd-modules-load[221]: Inserted module 'br_netfilter' May 16 05:23:19.097092 systemd[1]: Started systemd-journald.service - Journal Service. May 16 05:23:19.097499 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 05:23:19.099790 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:23:19.105224 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 05:23:19.127679 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 05:23:19.130854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 05:23:19.134576 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 05:23:19.144046 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 05:23:19.144879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 05:23:19.147499 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 05:23:19.152886 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 05:23:19.156851 systemd-tmpfiles[249]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 16 05:23:19.170046 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
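The bridge warning above is the kernel noting that, on recent kernels, bridged traffic is no longer passed through arp/ip/ip6tables unless the br_netfilter module is loaded explicitly. On a systemd-based host like this one, a persistent way to load it is a modules-load.d drop-in; the file path below follows the standard convention and is illustrative, not taken from this log:

```ini
# /etc/modules-load.d/br_netfilter.conf
# Load br_netfilter at boot so bridged frames are again visible to
# ip/ip6tables (required by many container-networking setups).
br_netfilter
```

A one-off `modprobe br_netfilter` has the same effect for the current boot, which is what the initrd did here via systemd-modules-load.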
May 16 05:23:19.177854 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 05:23:19.184363 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3d34a100811c7d0b0788e18c6e7d09762e715c8b4f61568827372c0312d26be3 May 16 05:23:19.226323 systemd-resolved[269]: Positive Trust Anchors: May 16 05:23:19.226339 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 05:23:19.226368 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 05:23:19.228850 systemd-resolved[269]: Defaulting to hostname 'linux'. May 16 05:23:19.230044 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 05:23:19.237077 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 05:23:19.285943 kernel: SCSI subsystem initialized May 16 05:23:19.294943 kernel: Loading iSCSI transport class v2.0-870. 
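The "Positive Trust Anchors" line shows systemd-resolved's built-in DNSSEC root anchor (the root-zone DS record) plus the default negative anchors for private and special-use domains. If an operator needed to supply an additional positive anchor, resolved reads them from dnssec-trust-anchors.d files; a minimal sketch, using the same DS record format the log prints (file name illustrative):

```
; /etc/systemd/dnssec-trust-anchors.d/root.positive
; One DS or DNSKEY record per line, same syntax as the log line above.
. IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
```

The negative anchors listed (home.arpa, the RFC 1918 reverse zones, .local, and so on) disable DNSSEC validation for names that can never validate against the public root.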
May 16 05:23:19.305948 kernel: iscsi: registered transport (tcp) May 16 05:23:19.331165 kernel: iscsi: registered transport (qla4xxx) May 16 05:23:19.331249 kernel: QLogic iSCSI HBA Driver May 16 05:23:19.358181 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 05:23:19.373254 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 05:23:19.375073 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 05:23:19.450871 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 05:23:19.453349 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 05:23:19.520934 kernel: raid6: avx2x4 gen() 30717 MB/s May 16 05:23:19.537920 kernel: raid6: avx2x2 gen() 31461 MB/s May 16 05:23:19.555004 kernel: raid6: avx2x1 gen() 26002 MB/s May 16 05:23:19.555022 kernel: raid6: using algorithm avx2x2 gen() 31461 MB/s May 16 05:23:19.573011 kernel: raid6: .... xor() 19931 MB/s, rmw enabled May 16 05:23:19.573052 kernel: raid6: using avx2x2 recovery algorithm May 16 05:23:19.629932 kernel: xor: automatically using best checksumming function avx May 16 05:23:19.804945 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 05:23:19.814092 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 05:23:19.816591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 05:23:19.855657 systemd-udevd[473]: Using default interface naming scheme 'v255'. May 16 05:23:19.861064 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 05:23:19.875000 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 05:23:19.904988 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation May 16 05:23:19.933679 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 16 05:23:19.937251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 05:23:20.012574 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 05:23:20.017738 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 05:23:20.058045 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 16 05:23:20.106438 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 16 05:23:20.106455 kernel: cryptd: max_cpu_qlen set to 1000 May 16 05:23:20.106471 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 05:23:20.106949 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 05:23:20.106967 kernel: GPT:9289727 != 19775487 May 16 05:23:20.106978 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 05:23:20.106989 kernel: GPT:9289727 != 19775487 May 16 05:23:20.106999 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 05:23:20.107009 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:23:20.094350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 05:23:20.108336 kernel: libata version 3.00 loaded. May 16 05:23:20.094474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:23:20.099492 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:23:20.105173 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:23:20.106974 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
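The GPT warnings above ("GPT:9289727 != 19775487") mean the virtual disk was grown after partitioning: the backup GPT header still sits where the old end of the disk was, rather than at the new last sector, and the kernel suggests repairing it. A sketch of the usual fix, assuming /dev/vda is the affected device (Flatcar normally handles this itself on first boot; these commands require root and should only be run against the correct disk):

```
# Relocate the backup GPT data structures to the true end of the disk.
sgdisk -e /dev/vda

# Alternatively, GNU Parted detects the mismatch when printing the
# partition table and interactively offers to "Fix" it.
parted /dev/vda print
```

After the repair, re-reading the partition table should no longer produce the "Alternate GPT header not at the end of the disk" warning.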
May 16 05:23:20.116959 kernel: AES CTR mode by8 optimization enabled May 16 05:23:20.125928 kernel: ahci 0000:00:1f.2: version 3.0 May 16 05:23:20.157443 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 16 05:23:20.157465 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 16 05:23:20.157626 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 16 05:23:20.157767 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 16 05:23:20.157932 kernel: scsi host0: ahci May 16 05:23:20.158094 kernel: scsi host1: ahci May 16 05:23:20.158277 kernel: scsi host2: ahci May 16 05:23:20.158420 kernel: scsi host3: ahci May 16 05:23:20.158561 kernel: scsi host4: ahci May 16 05:23:20.158702 kernel: scsi host5: ahci May 16 05:23:20.158842 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 May 16 05:23:20.158853 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 May 16 05:23:20.158868 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 May 16 05:23:20.158881 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 May 16 05:23:20.158917 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 May 16 05:23:20.158929 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 May 16 05:23:20.171789 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 05:23:20.194826 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 05:23:20.197353 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 05:23:20.197827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 16 05:23:20.212739 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 05:23:20.222558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 05:23:20.225547 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 05:23:20.252319 disk-uuid[638]: Primary Header is updated. May 16 05:23:20.252319 disk-uuid[638]: Secondary Entries is updated. May 16 05:23:20.252319 disk-uuid[638]: Secondary Header is updated. May 16 05:23:20.256921 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:23:20.261937 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:23:20.470933 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 16 05:23:20.471011 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 16 05:23:20.471924 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 16 05:23:20.472934 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 16 05:23:20.473928 kernel: ata3.00: applying bridge limits May 16 05:23:20.473951 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 16 05:23:20.474922 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 16 05:23:20.475921 kernel: ata3.00: configured for UDMA/100 May 16 05:23:20.475952 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 16 05:23:20.476921 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 16 05:23:20.534937 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 16 05:23:20.560965 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 05:23:20.561045 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 16 05:23:20.854141 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 05:23:20.856803 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 05:23:20.859239 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 16 05:23:20.861494 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 05:23:20.864327 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 05:23:20.896609 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 05:23:21.262933 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 05:23:21.263195 disk-uuid[639]: The operation has completed successfully. May 16 05:23:21.293614 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 05:23:21.293772 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 05:23:21.340646 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 05:23:21.365999 sh[668]: Success May 16 05:23:21.385876 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 05:23:21.385968 kernel: device-mapper: uevent: version 1.0.3 May 16 05:23:21.385987 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 16 05:23:21.394968 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 16 05:23:21.426575 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 05:23:21.429636 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 05:23:21.443874 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 16 05:23:21.452468 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 16 05:23:21.452498 kernel: BTRFS: device fsid 902d5020-5ef8-4867-9c12-521b17a28d91 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (680) May 16 05:23:21.452916 kernel: BTRFS info (device dm-0): first mount of filesystem 902d5020-5ef8-4867-9c12-521b17a28d91 May 16 05:23:21.455437 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 05:23:21.455451 kernel: BTRFS info (device dm-0): using free-space-tree May 16 05:23:21.460034 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 05:23:21.462252 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 16 05:23:21.462790 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 05:23:21.466364 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 05:23:21.468983 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 05:23:21.503587 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (713) May 16 05:23:21.503634 kernel: BTRFS info (device vda6): first mount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:23:21.503651 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 05:23:21.505130 kernel: BTRFS info (device vda6): using free-space-tree May 16 05:23:21.511828 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 05:23:21.513756 kernel: BTRFS info (device vda6): last unmount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:23:21.513572 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 16 05:23:21.589680 ignition[754]: Ignition 2.21.0 May 16 05:23:21.590348 ignition[754]: Stage: fetch-offline May 16 05:23:21.590409 ignition[754]: no configs at "/usr/lib/ignition/base.d" May 16 05:23:21.590425 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:23:21.590556 ignition[754]: parsed url from cmdline: "" May 16 05:23:21.590560 ignition[754]: no config URL provided May 16 05:23:21.590566 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" May 16 05:23:21.590575 ignition[754]: no config at "/usr/lib/ignition/user.ign" May 16 05:23:21.590609 ignition[754]: op(1): [started] loading QEMU firmware config module May 16 05:23:21.590615 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 05:23:21.600235 ignition[754]: op(1): [finished] loading QEMU firmware config module May 16 05:23:21.617708 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 05:23:21.623326 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 05:23:21.644762 ignition[754]: parsing config with SHA512: aa889df9af42b49c032ce634fa7d1e0a800bc05006ad7acc4cd2956867b837ed85e06e71a474ff3ebd9fa97faa702b3b51e4c8fb0c0612c919dd0e88ae937ea6 May 16 05:23:21.648369 unknown[754]: fetched base config from "system" May 16 05:23:21.648383 unknown[754]: fetched user config from "qemu" May 16 05:23:21.648720 ignition[754]: fetch-offline: fetch-offline passed May 16 05:23:21.648773 ignition[754]: Ignition finished successfully May 16 05:23:21.652048 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 05:23:21.667047 systemd-networkd[860]: lo: Link UP May 16 05:23:21.667059 systemd-networkd[860]: lo: Gained carrier May 16 05:23:21.668624 systemd-networkd[860]: Enumeration completed May 16 05:23:21.668719 systemd[1]: Started systemd-networkd.service - Network Configuration. 
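In the fetch-offline stage above, Ignition found no config at /usr/lib/ignition/user.ign and no URL on the kernel command line, so it loaded the user config from the QEMU firmware config device (qemu_fw_cfg) instead. For orientation, the config it parses is JSON in the Ignition spec format; a minimal sketch of what such a config could look like, consistent with the "adding ssh keys to user core" operation later in this log (spec version and key shown are illustrative):

```json
{
  "ignition": { "version": "3.4.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example-key"]
      }
    ]
  }
}
```

Under QEMU this config is typically passed with `-fw_cfg name=opt/org.flatcar-linux/config,file=ignition.json`, which matches the "loading QEMU firmware config module" step above.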
May 16 05:23:21.669022 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 05:23:21.669026 systemd-networkd[860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 05:23:21.670615 systemd-networkd[860]: eth0: Link UP May 16 05:23:21.670619 systemd-networkd[860]: eth0: Gained carrier May 16 05:23:21.670627 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 05:23:21.671566 systemd[1]: Reached target network.target - Network. May 16 05:23:21.673937 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 05:23:21.674763 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 05:23:21.690943 systemd-networkd[860]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 05:23:21.710604 ignition[864]: Ignition 2.21.0 May 16 05:23:21.710628 ignition[864]: Stage: kargs May 16 05:23:21.710973 ignition[864]: no configs at "/usr/lib/ignition/base.d" May 16 05:23:21.710995 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:23:21.711871 ignition[864]: kargs: kargs passed May 16 05:23:21.711957 ignition[864]: Ignition finished successfully May 16 05:23:21.718945 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 05:23:21.722555 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
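The networkd lines above show eth0 being matched by the shipped catch-all unit zz-default.network ("based on potentially unpredictable interface name" is networkd's caution that a MAC- or path-based match would be more robust) and then acquiring 10.0.0.92/16 over DHCP. The general shape of such a catch-all unit, as a sketch rather than the exact contents of Flatcar's file:

```ini
# Shape of a low-priority catch-all like zz-default.network:
# match any remaining Ethernet-style interface and run DHCP on it.
[Match]
Name=eth*

[Network]
DHCP=yes
```

A more specific unit with a lexically earlier name (or a `[Match] MACAddress=` stanza) would take precedence over the zz-default fallback.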
May 16 05:23:21.753537 ignition[873]: Ignition 2.21.0 May 16 05:23:21.753552 ignition[873]: Stage: disks May 16 05:23:21.753676 ignition[873]: no configs at "/usr/lib/ignition/base.d" May 16 05:23:21.753688 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:23:21.755390 ignition[873]: disks: disks passed May 16 05:23:21.755518 ignition[873]: Ignition finished successfully May 16 05:23:21.759355 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 05:23:21.760399 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 05:23:21.761953 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 05:23:21.762429 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 05:23:21.762763 systemd[1]: Reached target sysinit.target - System Initialization. May 16 05:23:21.763271 systemd[1]: Reached target basic.target - Basic System. May 16 05:23:21.764635 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 05:23:21.802750 systemd-fsck[883]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 16 05:23:21.810253 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 05:23:21.813441 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 05:23:21.926938 kernel: EXT4-fs (vda9): mounted filesystem c6031f04-b45d-4ec8-a78e-9b0eb2cfd779 r/w with ordered data mode. Quota mode: none. May 16 05:23:21.928155 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 05:23:21.930424 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 05:23:21.933731 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 05:23:21.936245 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 05:23:21.938169 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 16 05:23:21.938216 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 05:23:21.938237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 05:23:21.956260 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 05:23:21.959735 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 05:23:21.965147 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (891) May 16 05:23:21.965179 kernel: BTRFS info (device vda6): first mount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:23:21.965190 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 05:23:21.965201 kernel: BTRFS info (device vda6): using free-space-tree May 16 05:23:21.968933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 05:23:21.998585 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory May 16 05:23:22.004319 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory May 16 05:23:22.009422 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory May 16 05:23:22.014200 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory May 16 05:23:22.106234 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 05:23:22.107543 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 05:23:22.109517 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 05:23:22.134972 kernel: BTRFS info (device vda6): last unmount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:23:22.147046 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 16 05:23:22.163160 ignition[1005]: INFO : Ignition 2.21.0 May 16 05:23:22.163160 ignition[1005]: INFO : Stage: mount May 16 05:23:22.165671 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 05:23:22.166713 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:23:22.168868 ignition[1005]: INFO : mount: mount passed May 16 05:23:22.169689 ignition[1005]: INFO : Ignition finished successfully May 16 05:23:22.172726 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 05:23:22.174316 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 05:23:22.451623 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 05:23:22.453339 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 05:23:22.483862 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1017) May 16 05:23:22.483905 kernel: BTRFS info (device vda6): first mount of filesystem 537c4928-e0e3-497e-abf9-5cf4aa6e5693 May 16 05:23:22.483917 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 05:23:22.485387 kernel: BTRFS info (device vda6): using free-space-tree May 16 05:23:22.488862 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 05:23:22.516887 ignition[1034]: INFO : Ignition 2.21.0 May 16 05:23:22.516887 ignition[1034]: INFO : Stage: files May 16 05:23:22.516887 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 05:23:22.516887 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 05:23:22.522470 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping May 16 05:23:22.524326 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 05:23:22.524326 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 05:23:22.528859 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 05:23:22.530598 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 05:23:22.532460 unknown[1034]: wrote ssh authorized keys file for user: core May 16 05:23:22.533675 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 05:23:22.535102 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 16 05:23:22.537034 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 16 05:23:22.580987 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 05:23:22.688756 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 16 05:23:22.690838 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 05:23:22.690838 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 16 05:23:22.770063 systemd-networkd[860]: eth0: Gained IPv6LL May 16 05:23:23.030342 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 05:23:23.122252 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 05:23:23.122252 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 05:23:23.126084 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 05:23:23.126084 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 05:23:23.126084 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 05:23:23.126084 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 05:23:23.126084 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 05:23:23.126084 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 05:23:23.126084 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 05:23:23.138153 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 05:23:23.140025 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 05:23:23.141785 
ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 05:23:23.146550 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 05:23:23.146550 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 05:23:23.151231 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 16 05:23:23.800204 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 05:23:24.199828 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 16 05:23:24.199828 ignition[1034]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 05:23:24.204278 ignition[1034]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 05:23:24.223778 ignition[1034]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 05:23:24.223778 ignition[1034]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 16 05:23:24.223778 ignition[1034]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 16 05:23:24.228992 ignition[1034]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 05:23:24.228992 ignition[1034]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 05:23:24.228992 ignition[1034]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 16 05:23:24.228992 ignition[1034]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 16 05:23:24.245071 ignition[1034]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 05:23:24.248873 ignition[1034]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 05:23:24.250610 ignition[1034]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 16 05:23:24.250610 ignition[1034]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 16 05:23:24.250610 ignition[1034]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 16 05:23:24.250610 ignition[1034]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 05:23:24.250610 ignition[1034]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 05:23:24.250610 ignition[1034]: INFO : files: files passed May 16 05:23:24.250610 ignition[1034]: INFO : Ignition finished successfully May 16 05:23:24.258983 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 05:23:24.262424 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 05:23:24.265413 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 05:23:24.278558 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 05:23:24.278693 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 16 05:23:24.282640 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory
May 16 05:23:24.286739 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 05:23:24.286739 initrd-setup-root-after-ignition[1065]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 05:23:24.289905 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 05:23:24.293588 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 05:23:24.294120 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 05:23:24.295199 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 05:23:24.362055 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 05:23:24.362180 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 05:23:24.364436 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 05:23:24.366419 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 05:23:24.368352 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 05:23:24.369168 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 05:23:24.408342 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 05:23:24.411997 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 05:23:24.436922 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 05:23:24.437274 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 05:23:24.437628 systemd[1]: Stopped target timers.target - Timer Units.
May 16 05:23:24.437990 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 05:23:24.438098 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 05:23:24.444838 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 05:23:24.445448 systemd[1]: Stopped target basic.target - Basic System.
May 16 05:23:24.445766 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 05:23:24.446447 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 05:23:24.446775 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 05:23:24.447285 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 16 05:23:24.447624 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 05:23:24.447969 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 05:23:24.460268 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 05:23:24.460827 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 05:23:24.461336 systemd[1]: Stopped target swap.target - Swaps.
May 16 05:23:24.461767 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 05:23:24.461928 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 05:23:24.469406 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 05:23:24.470040 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 05:23:24.471194 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 05:23:24.473269 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 05:23:24.473942 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 05:23:24.474063 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 05:23:24.481199 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 05:23:24.481336 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 05:23:24.481755 systemd[1]: Stopped target paths.target - Path Units.
May 16 05:23:24.482268 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 05:23:24.487616 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 05:23:24.489238 systemd[1]: Stopped target slices.target - Slice Units.
May 16 05:23:24.489598 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 05:23:24.495221 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 05:23:24.495355 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 05:23:24.495832 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 05:23:24.495951 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 05:23:24.496471 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 05:23:24.496600 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 05:23:24.501335 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 05:23:24.501436 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 05:23:24.506400 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 05:23:24.510732 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 05:23:24.511849 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 05:23:24.512063 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 05:23:24.514704 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 05:23:24.514805 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 05:23:24.526385 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 05:23:24.526604 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 05:23:24.615869 ignition[1090]: INFO : Ignition 2.21.0
May 16 05:23:24.615869 ignition[1090]: INFO : Stage: umount
May 16 05:23:24.618139 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 05:23:24.618139 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 05:23:24.620935 ignition[1090]: INFO : umount: umount passed
May 16 05:23:24.620935 ignition[1090]: INFO : Ignition finished successfully
May 16 05:23:24.621344 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 05:23:24.621491 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 05:23:24.623667 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 05:23:24.624221 systemd[1]: Stopped target network.target - Network.
May 16 05:23:24.624990 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 05:23:24.625048 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 05:23:24.625404 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 05:23:24.625454 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 05:23:24.631433 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 05:23:24.631491 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 05:23:24.632194 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 05:23:24.632244 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 05:23:24.634757 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 05:23:24.636639 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 05:23:24.641463 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 05:23:24.641613 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 05:23:24.645967 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 16 05:23:24.646366 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 05:23:24.646434 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 05:23:24.649824 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 16 05:23:24.655993 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 05:23:24.656116 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 05:23:24.660013 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 16 05:23:24.661353 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 16 05:23:24.662922 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 05:23:24.662966 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 05:23:24.695652 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 05:23:24.697458 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 05:23:24.697519 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 05:23:24.700206 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 05:23:24.700257 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 05:23:24.703131 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 05:23:24.703176 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 05:23:24.703711 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 05:23:24.705133 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 05:23:24.720215 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 05:23:24.720351 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 05:23:24.729710 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 05:23:24.729980 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 05:23:24.730615 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 05:23:24.730667 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 05:23:24.733351 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 05:23:24.733391 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 05:23:24.735325 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 05:23:24.735380 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 05:23:24.739043 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 05:23:24.739094 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 05:23:24.741757 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 05:23:24.741809 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 05:23:24.745522 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 05:23:24.745924 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 16 05:23:24.745980 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 16 05:23:24.750109 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 05:23:24.750159 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 05:23:24.753321 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 16 05:23:24.753373 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 05:23:24.756924 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 05:23:24.756978 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 05:23:24.757450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 05:23:24.757493 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 05:23:24.778131 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 05:23:24.778246 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 05:23:25.295182 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 05:23:25.295322 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 05:23:25.296421 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 05:23:25.297735 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 05:23:25.297804 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 05:23:25.299346 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 05:23:25.325077 systemd[1]: Switching root.
May 16 05:23:25.370628 systemd-journald[220]: Journal stopped
May 16 05:23:26.909375 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 16 05:23:26.909448 kernel: SELinux: policy capability network_peer_controls=1
May 16 05:23:26.909463 kernel: SELinux: policy capability open_perms=1
May 16 05:23:26.909475 kernel: SELinux: policy capability extended_socket_class=1
May 16 05:23:26.909487 kernel: SELinux: policy capability always_check_network=0
May 16 05:23:26.909499 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 05:23:26.909515 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 05:23:26.909532 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 05:23:26.909544 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 05:23:26.909556 kernel: SELinux: policy capability userspace_initial_context=0
May 16 05:23:26.909568 kernel: audit: type=1403 audit(1747373005.988:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 05:23:26.909581 systemd[1]: Successfully loaded SELinux policy in 58.887ms.
May 16 05:23:26.909612 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.854ms.
May 16 05:23:26.909626 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 05:23:26.909639 systemd[1]: Detected virtualization kvm.
May 16 05:23:26.909654 systemd[1]: Detected architecture x86-64.
May 16 05:23:26.909667 systemd[1]: Detected first boot.
May 16 05:23:26.909680 systemd[1]: Initializing machine ID from VM UUID.
May 16 05:23:26.909693 zram_generator::config[1135]: No configuration found.
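The systemd 256.8 banner above encodes its compile-time options as a run of +FLAG/-FLAG tokens. A minimal Python sketch splitting that string into enabled and disabled feature sets (the string is copied from the log entry above):

```python
# Feature string as logged by "systemd 256.8 running in system mode (...)".
features = (
    "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
    "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
    "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
    "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
    "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE"
)

tokens = features.split()
# "+" means the feature was compiled in, "-" means it was left out.
enabled = {tok[1:] for tok in tokens if tok.startswith("+")}
disabled = {tok[1:] for tok in tokens if tok.startswith("-")}
```

This explains, for example, why the tmpfiles messages later in the log report "ACLs are not supported, ignoring": ACL is in the disabled set of this build.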
May 16 05:23:26.909707 kernel: Guest personality initialized and is inactive
May 16 05:23:26.909724 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 16 05:23:26.909737 kernel: Initialized host personality
May 16 05:23:26.909748 kernel: NET: Registered PF_VSOCK protocol family
May 16 05:23:26.909763 systemd[1]: Populated /etc with preset unit settings.
May 16 05:23:26.909777 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 16 05:23:26.909790 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 05:23:26.909802 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 05:23:26.909816 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 05:23:26.909829 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 05:23:26.909848 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 05:23:26.909861 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 05:23:26.909874 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 05:23:26.909892 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 05:23:26.909929 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 05:23:26.909942 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 05:23:26.909954 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 05:23:26.909967 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 05:23:26.909980 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 05:23:26.909994 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 05:23:26.910013 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 05:23:26.910026 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 05:23:26.910043 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 05:23:26.910056 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 16 05:23:26.910069 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 05:23:26.910082 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 05:23:26.910098 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 05:23:26.910111 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 05:23:26.910123 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 05:23:26.910139 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 05:23:26.910152 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 05:23:26.910165 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 05:23:26.910177 systemd[1]: Reached target slices.target - Slice Units.
May 16 05:23:26.910190 systemd[1]: Reached target swap.target - Swaps.
May 16 05:23:26.910202 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 05:23:26.910217 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 05:23:26.910229 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 16 05:23:26.910242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 05:23:26.910254 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 05:23:26.910270 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 05:23:26.910282 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 05:23:26.910295 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 05:23:26.910308 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 05:23:26.910326 systemd[1]: Mounting media.mount - External Media Directory...
May 16 05:23:26.910339 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 05:23:26.910352 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 05:23:26.910364 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 05:23:26.910379 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 05:23:26.910392 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 05:23:26.910405 systemd[1]: Reached target machines.target - Containers.
May 16 05:23:26.910418 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 05:23:26.910431 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 05:23:26.910444 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 05:23:26.910456 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 05:23:26.910469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 05:23:26.910482 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 05:23:26.910498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 05:23:26.910511 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 05:23:26.910523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 05:23:26.910536 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 05:23:26.910549 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 05:23:26.910562 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 05:23:26.910574 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 05:23:26.910587 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 05:23:26.910602 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 05:23:26.910615 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 05:23:26.910628 kernel: loop: module loaded
May 16 05:23:26.910640 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 05:23:26.910652 kernel: fuse: init (API version 7.41)
May 16 05:23:26.910664 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 05:23:26.910677 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 05:23:26.910690 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 16 05:23:26.910703 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 05:23:26.910718 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 05:23:26.910731 systemd[1]: Stopped verity-setup.service.
May 16 05:23:26.910744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 05:23:26.910757 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 05:23:26.910771 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 05:23:26.910786 systemd[1]: Mounted media.mount - External Media Directory.
May 16 05:23:26.910800 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 05:23:26.910815 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 05:23:26.910828 kernel: ACPI: bus type drm_connector registered
May 16 05:23:26.910846 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 05:23:26.910884 systemd-journald[1213]: Collecting audit messages is disabled.
May 16 05:23:26.910922 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 05:23:26.910936 systemd-journald[1213]: Journal started
May 16 05:23:26.910959 systemd-journald[1213]: Runtime Journal (/run/log/journal/6fb945a55d074cbe9281c54422109374) is 6M, max 48.6M, 42.5M free.
May 16 05:23:26.641250 systemd[1]: Queued start job for default target multi-user.target.
May 16 05:23:26.663446 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 05:23:26.664041 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 05:23:26.913160 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 05:23:26.914448 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 05:23:26.916974 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 05:23:26.917331 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 05:23:26.918859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 05:23:26.919229 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 05:23:26.920685 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 05:23:26.920924 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 05:23:26.922301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 05:23:26.922516 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 05:23:26.924051 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 05:23:26.924258 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 05:23:26.925693 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 05:23:26.925927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 05:23:26.927366 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 05:23:26.928799 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 05:23:26.930388 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 05:23:26.932015 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 05:23:26.949806 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 05:23:26.952651 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 05:23:26.954925 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 05:23:26.956242 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 05:23:26.956271 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 05:23:26.958295 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 05:23:26.961094 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 05:23:26.962319 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 05:23:27.190030 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 05:23:27.195021 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 05:23:27.196463 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 05:23:27.198389 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 05:23:27.199584 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 05:23:27.204432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 05:23:27.207180 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 05:23:27.214843 systemd-journald[1213]: Time spent on flushing to /var/log/journal/6fb945a55d074cbe9281c54422109374 is 27.393ms for 981 entries.
May 16 05:23:27.214843 systemd-journald[1213]: System Journal (/var/log/journal/6fb945a55d074cbe9281c54422109374) is 8M, max 195.6M, 187.6M free.
May 16 05:23:27.278958 systemd-journald[1213]: Received client request to flush runtime journal.
May 16 05:23:27.279040 kernel: loop0: detected capacity change from 0 to 146240
May 16 05:23:27.218165 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 05:23:27.221430 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 05:23:27.223747 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 05:23:27.235220 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 05:23:27.236877 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 05:23:27.244123 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 05:23:27.263235 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 05:23:27.272245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 05:23:27.282802 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 05:23:27.286193 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
May 16 05:23:27.286210 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
May 16 05:23:27.289951 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 05:23:27.294031 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 05:23:27.296116 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 05:23:27.300138 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 05:23:27.314929 kernel: loop1: detected capacity change from 0 to 113872
May 16 05:23:27.337998 kernel: loop2: detected capacity change from 0 to 229808
May 16 05:23:27.345008 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 05:23:27.348337 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 05:23:27.387935 kernel: loop3: detected capacity change from 0 to 146240
May 16 05:23:27.392164 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
May 16 05:23:27.392186 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
May 16 05:23:27.404072 kernel: loop4: detected capacity change from 0 to 113872
May 16 05:23:27.412921 kernel: loop5: detected capacity change from 0 to 229808
May 16 05:23:27.415440 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 05:23:27.423478 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 05:23:27.424218 (sd-merge)[1279]: Merged extensions into '/usr'.
May 16 05:23:27.428998 systemd[1]: Reload requested from client PID 1254 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 05:23:27.429015 systemd[1]: Reloading...
May 16 05:23:27.506798 zram_generator::config[1311]: No configuration found.
May 16 05:23:27.602442 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 05:23:27.656462 ldconfig[1249]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 05:23:27.685291 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 05:23:27.685755 systemd[1]: Reloading finished in 256 ms.
May 16 05:23:27.713666 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 05:23:27.715198 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 05:23:27.732338 systemd[1]: Starting ensure-sysext.service...
May 16 05:23:27.734219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 05:23:27.746778 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)...
May 16 05:23:27.746795 systemd[1]: Reloading...
May 16 05:23:27.763258 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 16 05:23:27.763297 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 16 05:23:27.763705 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 05:23:27.764105 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 05:23:27.765442 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 05:23:27.765950 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. May 16 05:23:27.766089 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. May 16 05:23:27.771349 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. May 16 05:23:27.771365 systemd-tmpfiles[1344]: Skipping /boot May 16 05:23:27.791962 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. May 16 05:23:27.791982 systemd-tmpfiles[1344]: Skipping /boot May 16 05:23:27.823938 zram_generator::config[1371]: No configuration found. May 16 05:23:27.925917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 05:23:28.008751 systemd[1]: Reloading finished in 261 ms. May 16 05:23:28.033799 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 05:23:28.056031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 05:23:28.066094 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 05:23:28.068631 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 05:23:28.079352 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
May 16 05:23:28.083106 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 05:23:28.088129 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 05:23:28.090540 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 05:23:28.095995 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:23:28.096175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 05:23:28.101154 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 05:23:28.104782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 05:23:28.111999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 05:23:28.113350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 05:23:28.113451 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 05:23:28.116979 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 05:23:28.120031 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:23:28.122323 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 05:23:28.126188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 05:23:28.126447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 05:23:28.128157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 16 05:23:28.134349 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 05:23:28.137628 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 05:23:28.137872 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 05:23:28.138935 systemd-udevd[1415]: Using default interface naming scheme 'v255'. May 16 05:23:28.150807 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 05:23:28.156312 augenrules[1444]: No rules May 16 05:23:28.158845 systemd[1]: audit-rules.service: Deactivated successfully. May 16 05:23:28.159156 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 05:23:28.161653 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:23:28.161985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 05:23:28.163812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 05:23:28.166672 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 05:23:28.173287 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 05:23:28.177077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 05:23:28.178265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 05:23:28.178383 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 05:23:28.179949 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 16 05:23:28.181109 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 05:23:28.182325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 05:23:28.187533 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 05:23:28.189973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 05:23:28.190202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 05:23:28.192252 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 05:23:28.193779 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 05:23:28.194990 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 05:23:28.196636 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 05:23:28.197142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 05:23:28.207429 systemd[1]: Finished ensure-sysext.service. May 16 05:23:28.214699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 05:23:28.222097 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 05:23:28.242023 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 05:23:28.249468 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 05:23:28.250589 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 05:23:28.250673 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 05:23:28.253664 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
May 16 05:23:28.254799 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 05:23:28.332715 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 16 05:23:28.351575 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 05:23:28.359025 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 05:23:28.371921 kernel: mousedev: PS/2 mouse device common for all mice May 16 05:23:28.383916 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 05:23:28.385836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 16 05:23:28.393050 kernel: ACPI: button: Power Button [PWRF] May 16 05:23:28.409921 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 16 05:23:28.410222 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 16 05:23:28.482761 systemd-networkd[1493]: lo: Link UP May 16 05:23:28.482774 systemd-networkd[1493]: lo: Gained carrier May 16 05:23:28.484445 systemd-networkd[1493]: Enumeration completed May 16 05:23:28.484540 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 05:23:28.486275 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 05:23:28.486280 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 16 05:23:28.486862 systemd-networkd[1493]: eth0: Link UP May 16 05:23:28.487054 systemd-networkd[1493]: eth0: Gained carrier May 16 05:23:28.487068 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 05:23:28.488571 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 16 05:23:28.493680 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 05:23:28.499982 systemd-networkd[1493]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 05:23:28.518134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 05:23:28.531622 systemd-resolved[1413]: Positive Trust Anchors: May 16 05:23:28.531642 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 05:23:28.531674 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 05:23:28.533814 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 05:23:28.542866 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 16 05:23:28.546956 systemd-resolved[1413]: Defaulting to hostname 'linux'. May 16 05:23:28.552006 systemd-timesyncd[1495]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
May 16 05:23:28.552322 systemd-timesyncd[1495]: Initial clock synchronization to Fri 2025-05-16 05:23:28.754915 UTC. May 16 05:23:28.556145 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 05:23:28.559034 systemd[1]: Reached target network.target - Network. May 16 05:23:28.562040 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 05:23:28.563376 systemd[1]: Reached target time-set.target - System Time Set. May 16 05:23:28.607589 kernel: kvm_amd: TSC scaling supported May 16 05:23:28.607639 kernel: kvm_amd: Nested Virtualization enabled May 16 05:23:28.607663 kernel: kvm_amd: Nested Paging enabled May 16 05:23:28.608138 kernel: kvm_amd: LBR virtualization supported May 16 05:23:28.609489 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 16 05:23:28.609514 kernel: kvm_amd: Virtual GIF supported May 16 05:23:28.644927 kernel: EDAC MC: Ver: 3.0.0 May 16 05:23:28.676045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 05:23:28.677599 systemd[1]: Reached target sysinit.target - System Initialization. May 16 05:23:28.678833 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 05:23:28.680143 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 05:23:28.681431 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 16 05:23:28.682789 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 05:23:28.684123 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 05:23:28.685479 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 05:23:28.686761 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
May 16 05:23:28.686801 systemd[1]: Reached target paths.target - Path Units. May 16 05:23:28.687879 systemd[1]: Reached target timers.target - Timer Units. May 16 05:23:28.689988 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 05:23:28.693330 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 05:23:28.697421 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 16 05:23:28.698992 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 16 05:23:28.700309 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 16 05:23:28.704246 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 05:23:28.705846 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 16 05:23:28.708166 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 05:23:28.710243 systemd[1]: Reached target sockets.target - Socket Units. May 16 05:23:28.711405 systemd[1]: Reached target basic.target - Basic System. May 16 05:23:28.712606 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 05:23:28.712645 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 05:23:28.714281 systemd[1]: Starting containerd.service - containerd container runtime... May 16 05:23:28.716612 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 05:23:28.718809 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 05:23:28.721075 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 05:23:28.724429 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 16 05:23:28.726584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 05:23:28.741039 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 16 05:23:28.744206 jq[1542]: false May 16 05:23:28.744024 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 05:23:28.746638 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 05:23:28.749416 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 05:23:28.751988 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 05:23:28.753964 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing passwd entry cache May 16 05:23:28.754213 oslogin_cache_refresh[1544]: Refreshing passwd entry cache May 16 05:23:28.758471 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 16 05:23:28.761102 extend-filesystems[1543]: Found loop3 May 16 05:23:28.761102 extend-filesystems[1543]: Found loop4 May 16 05:23:28.761102 extend-filesystems[1543]: Found loop5 May 16 05:23:28.761102 extend-filesystems[1543]: Found sr0 May 16 05:23:28.761102 extend-filesystems[1543]: Found vda May 16 05:23:28.761102 extend-filesystems[1543]: Found vda1 May 16 05:23:28.761102 extend-filesystems[1543]: Found vda2 May 16 05:23:28.761102 extend-filesystems[1543]: Found vda3 May 16 05:23:28.761102 extend-filesystems[1543]: Found usr May 16 05:23:28.761102 extend-filesystems[1543]: Found vda4 May 16 05:23:28.761102 extend-filesystems[1543]: Found vda6 May 16 05:23:28.761102 extend-filesystems[1543]: Found vda7 May 16 05:23:28.761102 extend-filesystems[1543]: Found vda9 May 16 05:23:28.761102 extend-filesystems[1543]: Checking size of /dev/vda9 May 16 05:23:28.760781 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 05:23:28.764870 oslogin_cache_refresh[1544]: Failure getting users, quitting May 16 05:23:28.785662 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting users, quitting May 16 05:23:28.785662 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 16 05:23:28.785662 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing group entry cache May 16 05:23:28.785662 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting groups, quitting May 16 05:23:28.785662 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 16 05:23:28.761550 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 16 05:23:28.764891 oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 16 05:23:28.764069 systemd[1]: Starting update-engine.service - Update Engine... May 16 05:23:28.764958 oslogin_cache_refresh[1544]: Refreshing group entry cache May 16 05:23:28.766071 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 05:23:28.769851 oslogin_cache_refresh[1544]: Failure getting groups, quitting May 16 05:23:28.768408 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 05:23:28.769861 oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 16 05:23:28.769196 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 05:23:28.769433 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 05:23:28.774627 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 16 05:23:28.779355 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 16 05:23:28.781720 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 05:23:28.782040 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 05:23:28.791607 jq[1554]: true May 16 05:23:28.796530 extend-filesystems[1543]: Resized partition /dev/vda9 May 16 05:23:28.813467 extend-filesystems[1574]: resize2fs 1.47.2 (1-Jan-2025) May 16 05:23:28.819862 jq[1571]: true May 16 05:23:28.824943 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 05:23:28.826502 update_engine[1552]: I20250516 05:23:28.826078 1552 main.cc:92] Flatcar Update Engine starting May 16 05:23:28.827096 systemd[1]: motdgen.service: Deactivated successfully. May 16 05:23:28.827390 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 16 05:23:28.836364 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 05:23:28.844383 tar[1556]: linux-amd64/LICENSE May 16 05:23:28.846053 tar[1556]: linux-amd64/helm May 16 05:23:28.859098 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 05:23:28.861027 dbus-daemon[1540]: [system] SELinux support is enabled May 16 05:23:28.861215 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 05:23:28.884332 update_engine[1552]: I20250516 05:23:28.868806 1552 update_check_scheduler.cc:74] Next update check in 6m51s May 16 05:23:28.867114 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 05:23:28.867140 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 05:23:28.868647 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 05:23:28.868661 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 05:23:28.870350 systemd[1]: Started update-engine.service - Update Engine. May 16 05:23:28.873118 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 05:23:28.889908 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 05:23:28.889908 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 05:23:28.889908 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
May 16 05:23:28.898374 extend-filesystems[1543]: Resized filesystem in /dev/vda9 May 16 05:23:28.889968 systemd-logind[1550]: Watching system buttons on /dev/input/event2 (Power Button) May 16 05:23:28.889988 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 05:23:28.891039 systemd-logind[1550]: New seat seat0. May 16 05:23:28.891229 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 05:23:28.892375 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 05:23:28.900386 systemd[1]: Started systemd-logind.service - User Login Management. May 16 05:23:28.910964 bash[1597]: Updated "/home/core/.ssh/authorized_keys" May 16 05:23:28.914951 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 05:23:28.918635 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 16 05:23:28.951018 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 05:23:29.144794 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 05:23:29.172836 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 05:23:29.179629 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 05:23:29.198091 systemd[1]: issuegen.service: Deactivated successfully. May 16 05:23:29.198374 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 05:23:29.201538 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 05:23:29.274199 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 05:23:29.279979 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 05:23:29.286334 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 05:23:29.287894 systemd[1]: Reached target getty.target - Login Prompts. 
May 16 05:23:29.337656 containerd[1566]: time="2025-05-16T05:23:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 05:23:29.339212 containerd[1566]: time="2025-05-16T05:23:29.339153334Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 16 05:23:29.352983 containerd[1566]: time="2025-05-16T05:23:29.352928031Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.309µs" May 16 05:23:29.352983 containerd[1566]: time="2025-05-16T05:23:29.352968831Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 05:23:29.353081 containerd[1566]: time="2025-05-16T05:23:29.352989987Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 05:23:29.353234 containerd[1566]: time="2025-05-16T05:23:29.353203609Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 05:23:29.353234 containerd[1566]: time="2025-05-16T05:23:29.353226953Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 05:23:29.353276 containerd[1566]: time="2025-05-16T05:23:29.353254190Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 05:23:29.353365 containerd[1566]: time="2025-05-16T05:23:29.353337847Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 05:23:29.353365 containerd[1566]: time="2025-05-16T05:23:29.353354657Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 
05:23:29.353700 containerd[1566]: time="2025-05-16T05:23:29.353669330Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 05:23:29.353700 containerd[1566]: time="2025-05-16T05:23:29.353688142Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 05:23:29.353744 containerd[1566]: time="2025-05-16T05:23:29.353699824Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 05:23:29.353744 containerd[1566]: time="2025-05-16T05:23:29.353708672Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 05:23:29.353845 containerd[1566]: time="2025-05-16T05:23:29.353818570Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 05:23:29.354117 containerd[1566]: time="2025-05-16T05:23:29.354086955Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 05:23:29.354142 containerd[1566]: time="2025-05-16T05:23:29.354123132Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 05:23:29.354142 containerd[1566]: time="2025-05-16T05:23:29.354134578Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 05:23:29.354192 containerd[1566]: time="2025-05-16T05:23:29.354166717Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 05:23:29.354458 
containerd[1566]: time="2025-05-16T05:23:29.354409313Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 05:23:29.354562 containerd[1566]: time="2025-05-16T05:23:29.354533945Z" level=info msg="metadata content store policy set" policy=shared May 16 05:23:29.361097 containerd[1566]: time="2025-05-16T05:23:29.361056321Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 05:23:29.361145 containerd[1566]: time="2025-05-16T05:23:29.361113788Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 05:23:29.361145 containerd[1566]: time="2025-05-16T05:23:29.361131049Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 05:23:29.361199 containerd[1566]: time="2025-05-16T05:23:29.361146389Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 05:23:29.361199 containerd[1566]: time="2025-05-16T05:23:29.361162140Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 05:23:29.361199 containerd[1566]: time="2025-05-16T05:23:29.361172272Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 05:23:29.361199 containerd[1566]: time="2025-05-16T05:23:29.361185196Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 05:23:29.361199 containerd[1566]: time="2025-05-16T05:23:29.361200999Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 05:23:29.361300 containerd[1566]: time="2025-05-16T05:23:29.361212826Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 05:23:29.361300 containerd[1566]: 
time="2025-05-16T05:23:29.361223500Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 16 05:23:29.361300 containerd[1566]: time="2025-05-16T05:23:29.361232584Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 16 05:23:29.361300 containerd[1566]: time="2025-05-16T05:23:29.361244986Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 16 05:23:29.361414 containerd[1566]: time="2025-05-16T05:23:29.361385317Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 16 05:23:29.361414 containerd[1566]: time="2025-05-16T05:23:29.361411795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 16 05:23:29.361458 containerd[1566]: time="2025-05-16T05:23:29.361441396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 16 05:23:29.361458 containerd[1566]: time="2025-05-16T05:23:29.361453818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 16 05:23:29.361509 containerd[1566]: time="2025-05-16T05:23:29.361465254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 16 05:23:29.361509 containerd[1566]: time="2025-05-16T05:23:29.361475929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 16 05:23:29.361509 containerd[1566]: time="2025-05-16T05:23:29.361486821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 16 05:23:29.361509 containerd[1566]: time="2025-05-16T05:23:29.361499715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 16 05:23:29.361603 containerd[1566]: time="2025-05-16T05:23:29.361510935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 16 05:23:29.361603 containerd[1566]: time="2025-05-16T05:23:29.361521939Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 16 05:23:29.361603 containerd[1566]: time="2025-05-16T05:23:29.361531967Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 16 05:23:29.361663 containerd[1566]: time="2025-05-16T05:23:29.361613456Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 16 05:23:29.361663 containerd[1566]: time="2025-05-16T05:23:29.361627481Z" level=info msg="Start snapshots syncer"
May 16 05:23:29.361663 containerd[1566]: time="2025-05-16T05:23:29.361659631Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 16 05:23:29.361995 containerd[1566]: time="2025-05-16T05:23:29.361950682Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 16 05:23:29.362335 containerd[1566]: time="2025-05-16T05:23:29.362306218Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 16 05:23:29.363594 containerd[1566]: time="2025-05-16T05:23:29.363571155Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 16 05:23:29.363789 containerd[1566]: time="2025-05-16T05:23:29.363754774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 16 05:23:29.363831 containerd[1566]: time="2025-05-16T05:23:29.363802572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 16 05:23:29.363831 containerd[1566]: time="2025-05-16T05:23:29.363814357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 16 05:23:29.363831 containerd[1566]: time="2025-05-16T05:23:29.363824375Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 16 05:23:29.363925 containerd[1566]: time="2025-05-16T05:23:29.363837866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 16 05:23:29.363925 containerd[1566]: time="2025-05-16T05:23:29.363856967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 16 05:23:29.363925 containerd[1566]: time="2025-05-16T05:23:29.363867713Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 16 05:23:29.363925 containerd[1566]: time="2025-05-16T05:23:29.363890277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 16 05:23:29.363925 containerd[1566]: time="2025-05-16T05:23:29.363900645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 16 05:23:29.363925 containerd[1566]: time="2025-05-16T05:23:29.363911135Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 16 05:23:29.366101 containerd[1566]: time="2025-05-16T05:23:29.366066693Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 05:23:29.366101 containerd[1566]: time="2025-05-16T05:23:29.366096809Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 05:23:29.366151 containerd[1566]: time="2025-05-16T05:23:29.366107093Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 05:23:29.366151 containerd[1566]: time="2025-05-16T05:23:29.366116844Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 05:23:29.366151 containerd[1566]: time="2025-05-16T05:23:29.366125465Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 16 05:23:29.366151 containerd[1566]: time="2025-05-16T05:23:29.366134774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 16 05:23:29.366151 containerd[1566]: time="2025-05-16T05:23:29.366145542Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 16 05:23:29.366265 containerd[1566]: time="2025-05-16T05:23:29.366173037Z" level=info msg="runtime interface created"
May 16 05:23:29.366265 containerd[1566]: time="2025-05-16T05:23:29.366180249Z" level=info msg="created NRI interface"
May 16 05:23:29.366265 containerd[1566]: time="2025-05-16T05:23:29.366188315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 16 05:23:29.366265 containerd[1566]: time="2025-05-16T05:23:29.366199514Z" level=info msg="Connect containerd service"
May 16 05:23:29.366265 containerd[1566]: time="2025-05-16T05:23:29.366224277Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 16 05:23:29.367164 containerd[1566]: time="2025-05-16T05:23:29.367112544Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525132063Z" level=info msg="Start subscribing containerd event"
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525210449Z" level=info msg="Start recovering state"
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525348756Z" level=info msg="Start event monitor"
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525370199Z" level=info msg="Start cni network conf syncer for default"
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525377166Z" level=info msg="Start streaming server"
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525400808Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525408350Z" level=info msg="runtime interface starting up..."
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525414915Z" level=info msg="starting plugins..."
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525431889Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525469874Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525542640Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 16 05:23:29.525761 containerd[1566]: time="2025-05-16T05:23:29.525613689Z" level=info msg="containerd successfully booted in 0.222968s"
May 16 05:23:29.526084 systemd[1]: Started containerd.service - containerd container runtime.
May 16 05:23:29.564090 tar[1556]: linux-amd64/README.md
May 16 05:23:29.595893 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 05:23:29.811058 systemd-networkd[1493]: eth0: Gained IPv6LL
May 16 05:23:29.815043 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 16 05:23:29.817294 systemd[1]: Reached target network-online.target - Network is Online.
May 16 05:23:29.820362 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 16 05:23:29.823297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 05:23:29.826137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 16 05:23:29.868223 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 16 05:23:29.901371 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 16 05:23:29.901691 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 16 05:23:29.903572 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 05:23:30.993070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 05:23:30.994853 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 05:23:30.996344 systemd[1]: Startup finished in 3.176s (kernel) + 7.222s (initrd) + 5.064s (userspace) = 15.463s.
May 16 05:23:31.024129 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 05:23:31.737497 kubelet[1669]: E0516 05:23:31.737417 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 05:23:31.741719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 05:23:31.741973 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 05:23:31.742385 systemd[1]: kubelet.service: Consumed 1.652s CPU time, 267.3M memory peak.
May 16 05:23:32.500396 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 05:23:32.501774 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:57680.service - OpenSSH per-connection server daemon (10.0.0.1:57680).
May 16 05:23:32.552914 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 57680 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:23:32.555204 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:23:32.562317 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 05:23:32.563448 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 05:23:32.570009 systemd-logind[1550]: New session 1 of user core.
May 16 05:23:32.598203 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 05:23:32.601537 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 05:23:32.622466 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 05:23:32.624952 systemd-logind[1550]: New session c1 of user core.
May 16 05:23:32.779738 systemd[1687]: Queued start job for default target default.target.
May 16 05:23:32.799215 systemd[1687]: Created slice app.slice - User Application Slice.
May 16 05:23:32.799241 systemd[1687]: Reached target paths.target - Paths.
May 16 05:23:32.799281 systemd[1687]: Reached target timers.target - Timers.
May 16 05:23:32.800910 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 05:23:32.813633 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 05:23:32.813765 systemd[1687]: Reached target sockets.target - Sockets.
May 16 05:23:32.813808 systemd[1687]: Reached target basic.target - Basic System.
May 16 05:23:32.813849 systemd[1687]: Reached target default.target - Main User Target.
May 16 05:23:32.813880 systemd[1687]: Startup finished in 181ms.
May 16 05:23:32.814580 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 05:23:32.816351 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 05:23:32.885847 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:57682.service - OpenSSH per-connection server daemon (10.0.0.1:57682).
May 16 05:23:32.938868 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 57682 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:23:32.940312 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:23:32.944642 systemd-logind[1550]: New session 2 of user core.
May 16 05:23:32.954049 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 05:23:33.008791 sshd[1700]: Connection closed by 10.0.0.1 port 57682
May 16 05:23:33.009179 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
May 16 05:23:33.020426 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:57682.service: Deactivated successfully.
May 16 05:23:33.022118 systemd[1]: session-2.scope: Deactivated successfully.
May 16 05:23:33.022825 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit.
May 16 05:23:33.025641 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:57692.service - OpenSSH per-connection server daemon (10.0.0.1:57692).
May 16 05:23:33.026202 systemd-logind[1550]: Removed session 2.
May 16 05:23:33.076875 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 57692 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:23:33.078322 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:23:33.082733 systemd-logind[1550]: New session 3 of user core.
May 16 05:23:33.098029 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 05:23:33.146633 sshd[1708]: Connection closed by 10.0.0.1 port 57692
May 16 05:23:33.146896 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
May 16 05:23:33.158511 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:57692.service: Deactivated successfully.
May 16 05:23:33.160343 systemd[1]: session-3.scope: Deactivated successfully.
May 16 05:23:33.161027 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit.
May 16 05:23:33.163970 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:57704.service - OpenSSH per-connection server daemon (10.0.0.1:57704).
May 16 05:23:33.164494 systemd-logind[1550]: Removed session 3.
May 16 05:23:33.217343 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 57704 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:23:33.218665 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:23:33.222621 systemd-logind[1550]: New session 4 of user core.
May 16 05:23:33.237031 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 05:23:33.290648 sshd[1716]: Connection closed by 10.0.0.1 port 57704
May 16 05:23:33.290954 sshd-session[1714]: pam_unix(sshd:session): session closed for user core
May 16 05:23:33.301468 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:57704.service: Deactivated successfully.
May 16 05:23:33.303266 systemd[1]: session-4.scope: Deactivated successfully.
May 16 05:23:33.303951 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit.
May 16 05:23:33.306554 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:57708.service - OpenSSH per-connection server daemon (10.0.0.1:57708).
May 16 05:23:33.307219 systemd-logind[1550]: Removed session 4.
May 16 05:23:33.359392 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 57708 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:23:33.360691 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:23:33.364746 systemd-logind[1550]: New session 5 of user core.
May 16 05:23:33.378058 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 05:23:33.435791 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 05:23:33.436148 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 05:23:33.453553 sudo[1725]: pam_unix(sudo:session): session closed for user root
May 16 05:23:33.455107 sshd[1724]: Connection closed by 10.0.0.1 port 57708
May 16 05:23:33.455415 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
May 16 05:23:33.468724 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:57708.service: Deactivated successfully.
May 16 05:23:33.470624 systemd[1]: session-5.scope: Deactivated successfully.
May 16 05:23:33.471335 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit.
May 16 05:23:33.474506 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:57724.service - OpenSSH per-connection server daemon (10.0.0.1:57724).
May 16 05:23:33.475097 systemd-logind[1550]: Removed session 5.
May 16 05:23:33.523327 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 57724 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:23:33.524725 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:23:33.528862 systemd-logind[1550]: New session 6 of user core.
May 16 05:23:33.538030 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 05:23:33.592835 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 05:23:33.593179 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 05:23:33.600766 sudo[1735]: pam_unix(sudo:session): session closed for user root
May 16 05:23:33.606688 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 05:23:33.607017 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 05:23:33.616217 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 05:23:33.663889 augenrules[1757]: No rules
May 16 05:23:33.664881 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 05:23:33.665179 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 05:23:33.666368 sudo[1734]: pam_unix(sudo:session): session closed for user root
May 16 05:23:33.667899 sshd[1733]: Connection closed by 10.0.0.1 port 57724
May 16 05:23:33.668229 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
May 16 05:23:33.680483 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:57724.service: Deactivated successfully.
May 16 05:23:33.682458 systemd[1]: session-6.scope: Deactivated successfully.
May 16 05:23:33.683223 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit.
May 16 05:23:33.686323 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:57732.service - OpenSSH per-connection server daemon (10.0.0.1:57732).
May 16 05:23:33.686904 systemd-logind[1550]: Removed session 6.
May 16 05:23:33.736934 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 57732 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:23:33.738204 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:23:33.742541 systemd-logind[1550]: New session 7 of user core.
May 16 05:23:33.752051 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 05:23:33.805772 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 05:23:33.806116 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 05:23:34.562847 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 05:23:34.583338 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 05:23:35.228081 dockerd[1790]: time="2025-05-16T05:23:35.227991989Z" level=info msg="Starting up"
May 16 05:23:35.230355 dockerd[1790]: time="2025-05-16T05:23:35.230288829Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 16 05:23:35.963918 dockerd[1790]: time="2025-05-16T05:23:35.963840806Z" level=info msg="Loading containers: start."
May 16 05:23:35.973963 kernel: Initializing XFRM netlink socket
May 16 05:23:36.242647 systemd-networkd[1493]: docker0: Link UP
May 16 05:23:36.248051 dockerd[1790]: time="2025-05-16T05:23:36.247997734Z" level=info msg="Loading containers: done."
May 16 05:23:36.266319 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck617574758-merged.mount: Deactivated successfully.
May 16 05:23:36.266893 dockerd[1790]: time="2025-05-16T05:23:36.266867264Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 05:23:36.266969 dockerd[1790]: time="2025-05-16T05:23:36.266957871Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 16 05:23:36.267104 dockerd[1790]: time="2025-05-16T05:23:36.267081854Z" level=info msg="Initializing buildkit"
May 16 05:23:36.299784 dockerd[1790]: time="2025-05-16T05:23:36.299723091Z" level=info msg="Completed buildkit initialization"
May 16 05:23:36.304354 dockerd[1790]: time="2025-05-16T05:23:36.304321223Z" level=info msg="Daemon has completed initialization"
May 16 05:23:36.304560 dockerd[1790]: time="2025-05-16T05:23:36.304500010Z" level=info msg="API listen on /run/docker.sock"
May 16 05:23:36.304543 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 05:23:36.928085 containerd[1566]: time="2025-05-16T05:23:36.928020502Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 16 05:23:37.721359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892787257.mount: Deactivated successfully.
May 16 05:23:38.683065 containerd[1566]: time="2025-05-16T05:23:38.683002293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:38.683651 containerd[1566]: time="2025-05-16T05:23:38.683586160Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403"
May 16 05:23:38.684641 containerd[1566]: time="2025-05-16T05:23:38.684596026Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:38.687269 containerd[1566]: time="2025-05-16T05:23:38.687213546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:38.688058 containerd[1566]: time="2025-05-16T05:23:38.687994239Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.759913746s"
May 16 05:23:38.688058 containerd[1566]: time="2025-05-16T05:23:38.688050422Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\""
May 16 05:23:38.688739 containerd[1566]: time="2025-05-16T05:23:38.688703010Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 16 05:23:39.840060 containerd[1566]: time="2025-05-16T05:23:39.839996281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:39.840761 containerd[1566]: time="2025-05-16T05:23:39.840694289Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390"
May 16 05:23:39.841764 containerd[1566]: time="2025-05-16T05:23:39.841731303Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:39.844265 containerd[1566]: time="2025-05-16T05:23:39.844211892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:39.845062 containerd[1566]: time="2025-05-16T05:23:39.845026558Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.156292218s"
May 16 05:23:39.845108 containerd[1566]: time="2025-05-16T05:23:39.845061880Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\""
May 16 05:23:39.845744 containerd[1566]: time="2025-05-16T05:23:39.845704940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 16 05:23:41.153311 containerd[1566]: time="2025-05-16T05:23:41.153234894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:41.154068 containerd[1566]: time="2025-05-16T05:23:41.154002918Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960"
May 16 05:23:41.155204 containerd[1566]: time="2025-05-16T05:23:41.155165216Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:41.157659 containerd[1566]: time="2025-05-16T05:23:41.157597212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:41.158506 containerd[1566]: time="2025-05-16T05:23:41.158468467Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.312737264s"
May 16 05:23:41.158506 containerd[1566]: time="2025-05-16T05:23:41.158500864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\""
May 16 05:23:41.159145 containerd[1566]: time="2025-05-16T05:23:41.159017825Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 16 05:23:41.833401 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 05:23:41.835047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 05:23:42.086637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 05:23:42.091200 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 05:23:42.182863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount963406858.mount: Deactivated successfully.
May 16 05:23:42.332987 kubelet[2075]: E0516 05:23:42.332890 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 05:23:42.340794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 05:23:42.341029 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 05:23:42.341453 systemd[1]: kubelet.service: Consumed 282ms CPU time, 109.2M memory peak.
May 16 05:23:43.719323 containerd[1566]: time="2025-05-16T05:23:43.719245383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:43.746035 containerd[1566]: time="2025-05-16T05:23:43.745878336Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075"
May 16 05:23:43.780164 containerd[1566]: time="2025-05-16T05:23:43.780099949Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:43.807811 containerd[1566]: time="2025-05-16T05:23:43.807526205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:43.808648 containerd[1566]: time="2025-05-16T05:23:43.808607971Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 2.649559604s"
May 16 05:23:43.808705 containerd[1566]: time="2025-05-16T05:23:43.808651191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\""
May 16 05:23:43.810195 containerd[1566]: time="2025-05-16T05:23:43.810152265Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 16 05:23:45.374764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164315155.mount: Deactivated successfully.
May 16 05:23:46.405084 containerd[1566]: time="2025-05-16T05:23:46.405000682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:46.405945 containerd[1566]: time="2025-05-16T05:23:46.405865978Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
May 16 05:23:46.407082 containerd[1566]: time="2025-05-16T05:23:46.407035664Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:46.410558 containerd[1566]: time="2025-05-16T05:23:46.410513865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:23:46.411583 containerd[1566]: time="2025-05-16T05:23:46.411540859Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.601355131s"
May 16 05:23:46.411583 containerd[1566]: time="2025-05-16T05:23:46.411577092Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
May 16 05:23:46.412183 containerd[1566]: time="2025-05-16T05:23:46.412155376Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 05:23:46.887366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590149823.mount: Deactivated successfully.
May 16 05:23:46.894090 containerd[1566]: time="2025-05-16T05:23:46.894056135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 05:23:46.894874 containerd[1566]: time="2025-05-16T05:23:46.894834630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 16 05:23:46.895980 containerd[1566]: time="2025-05-16T05:23:46.895926858Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 05:23:46.898115 containerd[1566]: time="2025-05-16T05:23:46.898073543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 05:23:46.898817 containerd[1566]: time="2025-05-16T05:23:46.898779894Z" level=info msg="Pulled
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 486.591359ms" May 16 05:23:46.898865 containerd[1566]: time="2025-05-16T05:23:46.898818036Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 05:23:46.899299 containerd[1566]: time="2025-05-16T05:23:46.899263953Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 16 05:23:49.021926 containerd[1566]: time="2025-05-16T05:23:49.021829512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:23:49.022742 containerd[1566]: time="2025-05-16T05:23:49.022716408Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 16 05:23:49.024187 containerd[1566]: time="2025-05-16T05:23:49.024150474Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:23:49.029264 containerd[1566]: time="2025-05-16T05:23:49.029211681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:23:49.030666 containerd[1566]: time="2025-05-16T05:23:49.030638250Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest 
\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.131340354s" May 16 05:23:49.030721 containerd[1566]: time="2025-05-16T05:23:49.030681406Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 16 05:23:51.623097 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:23:51.623393 systemd[1]: kubelet.service: Consumed 282ms CPU time, 109.2M memory peak. May 16 05:23:51.626071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:23:51.656363 systemd[1]: Reload requested from client PID 2185 ('systemctl') (unit session-7.scope)... May 16 05:23:51.656395 systemd[1]: Reloading... May 16 05:23:51.733925 zram_generator::config[2232]: No configuration found. May 16 05:23:52.039088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 05:23:52.160665 systemd[1]: Reloading finished in 503 ms. May 16 05:23:52.231543 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 05:23:52.231644 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 05:23:52.231966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:23:52.232010 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98.2M memory peak. May 16 05:23:52.233681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:23:52.417631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 05:23:52.430204 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 05:23:52.618675 kubelet[2276]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 05:23:52.618675 kubelet[2276]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 16 05:23:52.618675 kubelet[2276]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 05:23:52.620331 kubelet[2276]: I0516 05:23:52.618726 2276 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 05:23:52.987071 kubelet[2276]: I0516 05:23:52.987027 2276 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 16 05:23:52.987071 kubelet[2276]: I0516 05:23:52.987057 2276 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 05:23:52.987287 kubelet[2276]: I0516 05:23:52.987265 2276 server.go:956] "Client rotation is on, will bootstrap in background"
May 16 05:23:53.014623 kubelet[2276]: I0516 05:23:53.014569 2276 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 05:23:53.015210 kubelet[2276]: E0516 05:23:53.014962 2276 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 16 05:23:53.024070 kubelet[2276]: I0516 05:23:53.024023 2276 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 16 05:23:53.030437 kubelet[2276]: I0516 05:23:53.030403 2276 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 05:23:53.030686 kubelet[2276]: I0516 05:23:53.030653 2276 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 05:23:53.030845 kubelet[2276]: I0516 05:23:53.030678 2276 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 05:23:53.030965 kubelet[2276]: I0516 05:23:53.030850 2276 topology_manager.go:138] "Creating topology manager with none policy"
May 16 05:23:53.030965 kubelet[2276]: I0516 05:23:53.030862 2276 container_manager_linux.go:303] "Creating device plugin manager"
May 16 05:23:53.031060 kubelet[2276]: I0516 05:23:53.031042 2276 state_mem.go:36] "Initialized new in-memory state store"
May 16 05:23:53.033596 kubelet[2276]: I0516 05:23:53.033568 2276 kubelet.go:480] "Attempting to sync node with API server"
May 16 05:23:53.033596 kubelet[2276]: I0516 05:23:53.033589 2276 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 05:23:53.033702 kubelet[2276]: I0516 05:23:53.033619 2276 kubelet.go:386] "Adding apiserver pod source"
May 16 05:23:53.033702 kubelet[2276]: I0516 05:23:53.033642 2276 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 05:23:53.041348 kubelet[2276]: I0516 05:23:53.041201 2276 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 16 05:23:53.041660 kubelet[2276]: I0516 05:23:53.041634 2276 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 16 05:23:53.043025 kubelet[2276]: E0516 05:23:53.042985 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 16 05:23:53.043025 kubelet[2276]: E0516 05:23:53.042984 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 16 05:23:53.043974 kubelet[2276]: W0516 05:23:53.043945 2276 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 16 05:23:53.046980 kubelet[2276]: I0516 05:23:53.046956 2276 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 16 05:23:53.047032 kubelet[2276]: I0516 05:23:53.047013 2276 server.go:1289] "Started kubelet"
May 16 05:23:53.049729 kubelet[2276]: I0516 05:23:53.049697 2276 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 05:23:53.050187 kubelet[2276]: I0516 05:23:53.050121 2276 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 05:23:53.050490 kubelet[2276]: I0516 05:23:53.050458 2276 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 05:23:53.050532 kubelet[2276]: I0516 05:23:53.050514 2276 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 16 05:23:53.052190 kubelet[2276]: I0516 05:23:53.051413 2276 server.go:317] "Adding debug handlers to kubelet server"
May 16 05:23:53.052190 kubelet[2276]: E0516 05:23:53.051695 2276 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 05:23:53.052190 kubelet[2276]: I0516 05:23:53.051961 2276 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 05:23:53.052358 kubelet[2276]: E0516 05:23:53.051051 2276 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fea83fcdcb3bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 05:23:53.046979519 +0000 UTC m=+0.612760495,LastTimestamp:2025-05-16 05:23:53.046979519 +0000 UTC m=+0.612760495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 05:23:53.053042 kubelet[2276]: E0516 05:23:53.053025 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:53.053147 kubelet[2276]: I0516 05:23:53.053137 2276 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 16 05:23:53.053434 kubelet[2276]: I0516 05:23:53.053422 2276 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 16 05:23:53.053581 kubelet[2276]: I0516 05:23:53.053553 2276 reconciler.go:26] "Reconciler: start to sync state"
May 16 05:23:53.053852 kubelet[2276]: I0516 05:23:53.053820 2276 factory.go:223] Registration of the systemd container factory successfully
May 16 05:23:53.053959 kubelet[2276]: I0516 05:23:53.053930 2276 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 05:23:53.054224 kubelet[2276]: E0516 05:23:53.054203 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 16 05:23:53.054848 kubelet[2276]: E0516 05:23:53.054824 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms"
May 16 05:23:53.057347 kubelet[2276]: I0516 05:23:53.057329 2276 factory.go:223] Registration of the containerd container factory successfully
May 16 05:23:53.085123 kubelet[2276]: I0516 05:23:53.085073 2276 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 16 05:23:53.085438 kubelet[2276]: I0516 05:23:53.085413 2276 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 16 05:23:53.085438 kubelet[2276]: I0516 05:23:53.085432 2276 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 16 05:23:53.085501 kubelet[2276]: I0516 05:23:53.085446 2276 state_mem.go:36] "Initialized new in-memory state store"
May 16 05:23:53.086445 kubelet[2276]: I0516 05:23:53.086401 2276 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 16 05:23:53.086445 kubelet[2276]: I0516 05:23:53.086431 2276 status_manager.go:230] "Starting to sync pod status with apiserver"
May 16 05:23:53.086526 kubelet[2276]: I0516 05:23:53.086455 2276 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 16 05:23:53.086526 kubelet[2276]: I0516 05:23:53.086462 2276 kubelet.go:2436] "Starting kubelet main sync loop"
May 16 05:23:53.086526 kubelet[2276]: E0516 05:23:53.086499 2276 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 05:23:53.087175 kubelet[2276]: E0516 05:23:53.087134 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 16 05:23:53.154313 kubelet[2276]: E0516 05:23:53.154273 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:53.187549 kubelet[2276]: E0516 05:23:53.187499 2276 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 05:23:53.255029 kubelet[2276]: E0516 05:23:53.254881 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:53.255443 kubelet[2276]: E0516 05:23:53.255399 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms"
May 16 05:23:53.356049 kubelet[2276]: E0516 05:23:53.356013 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:53.388287 kubelet[2276]: E0516 05:23:53.388228 2276 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 05:23:53.456889 kubelet[2276]: E0516 05:23:53.456849 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:53.557045 kubelet[2276]: E0516 05:23:53.556923 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:53.656782 kubelet[2276]: E0516 05:23:53.656719 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms"
May 16 05:23:53.657787 kubelet[2276]: E0516 05:23:53.657751 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:53.758456 kubelet[2276]: E0516 05:23:53.758423 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:53.788617 kubelet[2276]: E0516 05:23:53.788587 2276 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 05:23:53.859206 kubelet[2276]: E0516 05:23:53.859101 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:53.957160 kubelet[2276]: E0516 05:23:53.957116 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 16 05:23:53.959412 kubelet[2276]: E0516 05:23:53.959368 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:54.059848 kubelet[2276]: E0516 05:23:54.059811 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:54.081514 kubelet[2276]: E0516 05:23:54.081458 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 16 05:23:54.160714 kubelet[2276]: E0516 05:23:54.160562 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:54.261254 kubelet[2276]: E0516 05:23:54.261193 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:54.277831 kubelet[2276]: E0516 05:23:54.277793 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 16 05:23:54.361565 kubelet[2276]: E0516 05:23:54.361499 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:54.457469 kubelet[2276]: E0516 05:23:54.457406 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s"
May 16 05:23:54.462660 kubelet[2276]: E0516 05:23:54.462608 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:54.517653 kubelet[2276]: E0516 05:23:54.517594 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 16 05:23:54.563628 kubelet[2276]: E0516 05:23:54.563591 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:54.588743 kubelet[2276]: E0516 05:23:54.588713 2276 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 05:23:54.644459 kubelet[2276]: I0516 05:23:54.644413 2276 policy_none.go:49] "None policy: Start"
May 16 05:23:54.644459 kubelet[2276]: I0516 05:23:54.644461 2276 memory_manager.go:186] "Starting memorymanager" policy="None"
May 16 05:23:54.644541 kubelet[2276]: I0516 05:23:54.644479 2276 state_mem.go:35] "Initializing new in-memory state store"
May 16 05:23:54.652694 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 16 05:23:54.663893 kubelet[2276]: E0516 05:23:54.663845 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 05:23:54.667812 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 16 05:23:54.670998 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 16 05:23:54.686797 kubelet[2276]: E0516 05:23:54.686775 2276 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 16 05:23:54.687052 kubelet[2276]: I0516 05:23:54.687039 2276 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 05:23:54.687378 kubelet[2276]: I0516 05:23:54.687056 2276 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 05:23:54.687378 kubelet[2276]: I0516 05:23:54.687284 2276 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 05:23:54.688191 kubelet[2276]: E0516 05:23:54.688155 2276 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 16 05:23:54.688191 kubelet[2276]: E0516 05:23:54.688192 2276 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 16 05:23:54.788663 kubelet[2276]: I0516 05:23:54.788556 2276 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 05:23:54.788960 kubelet[2276]: E0516 05:23:54.788931 2276 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
May 16 05:23:54.989828 kubelet[2276]: I0516 05:23:54.989812 2276 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 05:23:54.990093 kubelet[2276]: E0516 05:23:54.990065 2276 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
May 16 05:23:55.096723 kubelet[2276]: E0516 05:23:55.096592 2276 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 16 05:23:55.322803 kubelet[2276]: E0516 05:23:55.322648 2276 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fea83fcdcb3bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 05:23:53.046979519 +0000 UTC m=+0.612760495,LastTimestamp:2025-05-16 05:23:53.046979519 +0000 UTC m=+0.612760495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 05:23:55.392341 kubelet[2276]: I0516 05:23:55.392249 2276 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 05:23:55.392601 kubelet[2276]: E0516 05:23:55.392562 2276 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
May 16 05:23:56.058707 kubelet[2276]: E0516 05:23:56.058640 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="3.2s"
May 16 05:23:56.193977 kubelet[2276]: I0516 05:23:56.193927 2276 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 05:23:56.194406 kubelet[2276]: E0516 05:23:56.194341 2276 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
May 16 05:23:56.255164 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice.
May 16 05:23:56.273798 kubelet[2276]: I0516 05:23:56.273718 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost"
May 16 05:23:56.273798 kubelet[2276]: I0516 05:23:56.273767 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 05:23:56.273798 kubelet[2276]: I0516 05:23:56.273789 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 05:23:56.274009 kubelet[2276]: I0516 05:23:56.273823 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 05:23:56.274009 kubelet[2276]: I0516 05:23:56.273840 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 05:23:56.274009 kubelet[2276]: I0516 05:23:56.273855 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 05:23:56.274812 kubelet[2276]: E0516 05:23:56.274774 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 05:23:56.281058 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice.
May 16 05:23:56.282749 kubelet[2276]: E0516 05:23:56.282715 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 05:23:56.294963 systemd[1]: Created slice kubepods-burstable-pod68018c61ca0b832366fa183ef8833944.slice - libcontainer container kubepods-burstable-pod68018c61ca0b832366fa183ef8833944.slice.
May 16 05:23:56.296794 kubelet[2276]: E0516 05:23:56.296768 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:23:56.374532 kubelet[2276]: I0516 05:23:56.374416 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68018c61ca0b832366fa183ef8833944-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"68018c61ca0b832366fa183ef8833944\") " pod="kube-system/kube-apiserver-localhost" May 16 05:23:56.374532 kubelet[2276]: I0516 05:23:56.374509 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68018c61ca0b832366fa183ef8833944-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"68018c61ca0b832366fa183ef8833944\") " pod="kube-system/kube-apiserver-localhost" May 16 05:23:56.374676 kubelet[2276]: I0516 05:23:56.374545 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68018c61ca0b832366fa183ef8833944-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"68018c61ca0b832366fa183ef8833944\") " pod="kube-system/kube-apiserver-localhost" May 16 05:23:56.495422 kubelet[2276]: E0516 05:23:56.495376 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 16 05:23:56.575837 kubelet[2276]: E0516 05:23:56.575772 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 16 05:23:56.576598 containerd[1566]: time="2025-05-16T05:23:56.576550325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 16 05:23:56.583945 kubelet[2276]: E0516 05:23:56.583884 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:23:56.584432 containerd[1566]: time="2025-05-16T05:23:56.584391201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 16 05:23:56.597819 kubelet[2276]: E0516 05:23:56.597775 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:23:56.598316 containerd[1566]: time="2025-05-16T05:23:56.598267015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:68018c61ca0b832366fa183ef8833944,Namespace:kube-system,Attempt:0,}" May 16 05:23:56.607770 containerd[1566]: time="2025-05-16T05:23:56.607706376Z" level=info msg="connecting to shim 677e672f06ed59d3c1922d0ddbbb9638aba41550104083715c3723f917cfd9cb" address="unix:///run/containerd/s/2731ba0edb8b333d2598419b6603b2707aa1ceec36efca60495f577c5355f92d" namespace=k8s.io protocol=ttrpc version=3 May 16 05:23:56.619642 containerd[1566]: time="2025-05-16T05:23:56.619564078Z" level=info msg="connecting to shim 624dc3a5fd6a791b3e266f0889a59b26ce8c6f12a71a21e99b9d761f1ecf009c" address="unix:///run/containerd/s/9ba0ab9824cff2a436ce15f2ebe7dc52a82b6c02e82f44acf32dc56f4459bcad" namespace=k8s.io protocol=ttrpc version=3 May 16 05:23:56.636011 containerd[1566]: time="2025-05-16T05:23:56.635733944Z" level=info msg="connecting to shim 
1ad46628390e4ac0edd5799c1762e7095212f764db45dab4bc894137735aa735" address="unix:///run/containerd/s/9ed7923cea2d4427a600615a8134c6b63f75ca64bc74b4e865cb371516b75481" namespace=k8s.io protocol=ttrpc version=3 May 16 05:23:56.653133 systemd[1]: Started cri-containerd-624dc3a5fd6a791b3e266f0889a59b26ce8c6f12a71a21e99b9d761f1ecf009c.scope - libcontainer container 624dc3a5fd6a791b3e266f0889a59b26ce8c6f12a71a21e99b9d761f1ecf009c. May 16 05:23:56.654776 systemd[1]: Started cri-containerd-677e672f06ed59d3c1922d0ddbbb9638aba41550104083715c3723f917cfd9cb.scope - libcontainer container 677e672f06ed59d3c1922d0ddbbb9638aba41550104083715c3723f917cfd9cb. May 16 05:23:56.659285 systemd[1]: Started cri-containerd-1ad46628390e4ac0edd5799c1762e7095212f764db45dab4bc894137735aa735.scope - libcontainer container 1ad46628390e4ac0edd5799c1762e7095212f764db45dab4bc894137735aa735. May 16 05:23:56.707680 containerd[1566]: time="2025-05-16T05:23:56.707631104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"677e672f06ed59d3c1922d0ddbbb9638aba41550104083715c3723f917cfd9cb\"" May 16 05:23:56.709669 kubelet[2276]: E0516 05:23:56.709103 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:23:56.712735 containerd[1566]: time="2025-05-16T05:23:56.712691199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"624dc3a5fd6a791b3e266f0889a59b26ce8c6f12a71a21e99b9d761f1ecf009c\"" May 16 05:23:56.713633 kubelet[2276]: E0516 05:23:56.713616 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
May 16 05:23:56.715323 containerd[1566]: time="2025-05-16T05:23:56.715293111Z" level=info msg="CreateContainer within sandbox \"677e672f06ed59d3c1922d0ddbbb9638aba41550104083715c3723f917cfd9cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 05:23:56.715674 containerd[1566]: time="2025-05-16T05:23:56.715624221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:68018c61ca0b832366fa183ef8833944,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ad46628390e4ac0edd5799c1762e7095212f764db45dab4bc894137735aa735\"" May 16 05:23:56.716345 kubelet[2276]: E0516 05:23:56.716284 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:23:56.718842 containerd[1566]: time="2025-05-16T05:23:56.718788147Z" level=info msg="CreateContainer within sandbox \"624dc3a5fd6a791b3e266f0889a59b26ce8c6f12a71a21e99b9d761f1ecf009c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 05:23:56.721699 containerd[1566]: time="2025-05-16T05:23:56.721650959Z" level=info msg="CreateContainer within sandbox \"1ad46628390e4ac0edd5799c1762e7095212f764db45dab4bc894137735aa735\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 05:23:56.728078 containerd[1566]: time="2025-05-16T05:23:56.728037570Z" level=info msg="Container 76a158f2940c469a4cc5f393e9c2b802ec638b5cf503d15a3ed3be8b8d268841: CDI devices from CRI Config.CDIDevices: []" May 16 05:23:56.737723 containerd[1566]: time="2025-05-16T05:23:56.737680425Z" level=info msg="Container a42dd55d50d5a84583615b917861783ec32d069b4d75392ebdd92f91980c655a: CDI devices from CRI Config.CDIDevices: []" May 16 05:23:56.738109 containerd[1566]: time="2025-05-16T05:23:56.738081114Z" level=info msg="CreateContainer within sandbox \"677e672f06ed59d3c1922d0ddbbb9638aba41550104083715c3723f917cfd9cb\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"76a158f2940c469a4cc5f393e9c2b802ec638b5cf503d15a3ed3be8b8d268841\"" May 16 05:23:56.738888 containerd[1566]: time="2025-05-16T05:23:56.738743545Z" level=info msg="StartContainer for \"76a158f2940c469a4cc5f393e9c2b802ec638b5cf503d15a3ed3be8b8d268841\"" May 16 05:23:56.740016 containerd[1566]: time="2025-05-16T05:23:56.739984311Z" level=info msg="connecting to shim 76a158f2940c469a4cc5f393e9c2b802ec638b5cf503d15a3ed3be8b8d268841" address="unix:///run/containerd/s/2731ba0edb8b333d2598419b6603b2707aa1ceec36efca60495f577c5355f92d" protocol=ttrpc version=3 May 16 05:23:56.740527 containerd[1566]: time="2025-05-16T05:23:56.740495985Z" level=info msg="Container 1ab3045787547bce68c637d2a6d6a039b4f1fa0b011cbb17775b95d3e7da5cfc: CDI devices from CRI Config.CDIDevices: []" May 16 05:23:56.749707 containerd[1566]: time="2025-05-16T05:23:56.749662777Z" level=info msg="CreateContainer within sandbox \"624dc3a5fd6a791b3e266f0889a59b26ce8c6f12a71a21e99b9d761f1ecf009c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a42dd55d50d5a84583615b917861783ec32d069b4d75392ebdd92f91980c655a\"" May 16 05:23:56.750442 containerd[1566]: time="2025-05-16T05:23:56.750384641Z" level=info msg="StartContainer for \"a42dd55d50d5a84583615b917861783ec32d069b4d75392ebdd92f91980c655a\"" May 16 05:23:56.751719 containerd[1566]: time="2025-05-16T05:23:56.751370893Z" level=info msg="connecting to shim a42dd55d50d5a84583615b917861783ec32d069b4d75392ebdd92f91980c655a" address="unix:///run/containerd/s/9ba0ab9824cff2a436ce15f2ebe7dc52a82b6c02e82f44acf32dc56f4459bcad" protocol=ttrpc version=3 May 16 05:23:56.751719 containerd[1566]: time="2025-05-16T05:23:56.751427278Z" level=info msg="CreateContainer within sandbox \"1ad46628390e4ac0edd5799c1762e7095212f764db45dab4bc894137735aa735\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"1ab3045787547bce68c637d2a6d6a039b4f1fa0b011cbb17775b95d3e7da5cfc\"" May 16 05:23:56.752053 containerd[1566]: time="2025-05-16T05:23:56.752034888Z" level=info msg="StartContainer for \"1ab3045787547bce68c637d2a6d6a039b4f1fa0b011cbb17775b95d3e7da5cfc\"" May 16 05:23:56.753342 containerd[1566]: time="2025-05-16T05:23:56.753322736Z" level=info msg="connecting to shim 1ab3045787547bce68c637d2a6d6a039b4f1fa0b011cbb17775b95d3e7da5cfc" address="unix:///run/containerd/s/9ed7923cea2d4427a600615a8134c6b63f75ca64bc74b4e865cb371516b75481" protocol=ttrpc version=3 May 16 05:23:56.764295 systemd[1]: Started cri-containerd-76a158f2940c469a4cc5f393e9c2b802ec638b5cf503d15a3ed3be8b8d268841.scope - libcontainer container 76a158f2940c469a4cc5f393e9c2b802ec638b5cf503d15a3ed3be8b8d268841. May 16 05:23:56.768614 systemd[1]: Started cri-containerd-a42dd55d50d5a84583615b917861783ec32d069b4d75392ebdd92f91980c655a.scope - libcontainer container a42dd55d50d5a84583615b917861783ec32d069b4d75392ebdd92f91980c655a. May 16 05:23:56.772597 systemd[1]: Started cri-containerd-1ab3045787547bce68c637d2a6d6a039b4f1fa0b011cbb17775b95d3e7da5cfc.scope - libcontainer container 1ab3045787547bce68c637d2a6d6a039b4f1fa0b011cbb17775b95d3e7da5cfc. 
May 16 05:23:56.777030 kubelet[2276]: E0516 05:23:56.776523 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 16 05:23:56.823134 containerd[1566]: time="2025-05-16T05:23:56.822392335Z" level=info msg="StartContainer for \"76a158f2940c469a4cc5f393e9c2b802ec638b5cf503d15a3ed3be8b8d268841\" returns successfully" May 16 05:23:56.834977 containerd[1566]: time="2025-05-16T05:23:56.834924589Z" level=info msg="StartContainer for \"a42dd55d50d5a84583615b917861783ec32d069b4d75392ebdd92f91980c655a\" returns successfully" May 16 05:23:56.838244 containerd[1566]: time="2025-05-16T05:23:56.838211229Z" level=info msg="StartContainer for \"1ab3045787547bce68c637d2a6d6a039b4f1fa0b011cbb17775b95d3e7da5cfc\" returns successfully" May 16 05:23:57.098531 kubelet[2276]: E0516 05:23:57.098491 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:23:57.098948 kubelet[2276]: E0516 05:23:57.098618 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:23:57.100911 kubelet[2276]: E0516 05:23:57.100554 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:23:57.100911 kubelet[2276]: E0516 05:23:57.100636 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:23:57.102639 kubelet[2276]: E0516 05:23:57.102613 2276 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:23:57.102723 kubelet[2276]: E0516 05:23:57.102702 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:23:57.797430 kubelet[2276]: I0516 05:23:57.797392 2276 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 05:23:58.108479 kubelet[2276]: E0516 05:23:58.108339 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:23:58.108479 kubelet[2276]: E0516 05:23:58.108462 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:23:58.108960 kubelet[2276]: E0516 05:23:58.108600 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:23:58.108960 kubelet[2276]: E0516 05:23:58.108723 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 05:23:58.108960 kubelet[2276]: E0516 05:23:58.108742 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:23:58.108960 kubelet[2276]: E0516 05:23:58.108800 2276 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:23:58.388535 kubelet[2276]: I0516 05:23:58.387727 2276 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 
05:23:58.454309 kubelet[2276]: I0516 05:23:58.454255 2276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 05:23:58.459401 kubelet[2276]: E0516 05:23:58.459366 2276 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 16 05:23:58.459401 kubelet[2276]: I0516 05:23:58.459388 2276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 05:23:58.460565 kubelet[2276]: E0516 05:23:58.460515 2276 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 05:23:58.460565 kubelet[2276]: I0516 05:23:58.460548 2276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 05:23:58.462166 kubelet[2276]: E0516 05:23:58.462142 2276 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 16 05:23:59.044183 kubelet[2276]: I0516 05:23:59.044145 2276 apiserver.go:52] "Watching apiserver" May 16 05:23:59.053606 kubelet[2276]: I0516 05:23:59.053576 2276 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 05:24:00.360778 systemd[1]: Reload requested from client PID 2560 ('systemctl') (unit session-7.scope)... May 16 05:24:00.360796 systemd[1]: Reloading... May 16 05:24:00.436928 zram_generator::config[2607]: No configuration found. 
May 16 05:24:00.520413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 05:24:00.648079 systemd[1]: Reloading finished in 286 ms. May 16 05:24:00.675792 kubelet[2276]: I0516 05:24:00.675738 2276 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 05:24:00.675906 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:24:00.697112 systemd[1]: kubelet.service: Deactivated successfully. May 16 05:24:00.697427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:24:00.697477 systemd[1]: kubelet.service: Consumed 1.151s CPU time, 132.4M memory peak. May 16 05:24:00.699337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:24:00.913597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:24:00.930566 (kubelet)[2648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 05:24:00.967129 kubelet[2648]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 05:24:00.967525 kubelet[2648]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 05:24:00.967525 kubelet[2648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 05:24:00.967596 kubelet[2648]: I0516 05:24:00.967569 2648 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 05:24:00.973474 kubelet[2648]: I0516 05:24:00.973445 2648 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 16 05:24:00.973474 kubelet[2648]: I0516 05:24:00.973465 2648 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 05:24:00.975129 kubelet[2648]: I0516 05:24:00.973949 2648 server.go:956] "Client rotation is on, will bootstrap in background" May 16 05:24:00.975720 kubelet[2648]: I0516 05:24:00.975697 2648 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 16 05:24:00.977578 kubelet[2648]: I0516 05:24:00.977550 2648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 05:24:00.981088 kubelet[2648]: I0516 05:24:00.981069 2648 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 05:24:00.986891 kubelet[2648]: I0516 05:24:00.986867 2648 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 05:24:00.987122 kubelet[2648]: I0516 05:24:00.987090 2648 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 05:24:00.987278 kubelet[2648]: I0516 05:24:00.987117 2648 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 05:24:00.987355 kubelet[2648]: I0516 05:24:00.987285 2648 topology_manager.go:138] "Creating topology manager with none policy" May 16 05:24:00.987355 
kubelet[2648]: I0516 05:24:00.987293 2648 container_manager_linux.go:303] "Creating device plugin manager" May 16 05:24:00.987355 kubelet[2648]: I0516 05:24:00.987335 2648 state_mem.go:36] "Initialized new in-memory state store" May 16 05:24:00.987533 kubelet[2648]: I0516 05:24:00.987515 2648 kubelet.go:480] "Attempting to sync node with API server" May 16 05:24:00.987558 kubelet[2648]: I0516 05:24:00.987533 2648 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 05:24:00.987558 kubelet[2648]: I0516 05:24:00.987554 2648 kubelet.go:386] "Adding apiserver pod source" May 16 05:24:00.987602 kubelet[2648]: I0516 05:24:00.987568 2648 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 05:24:00.989142 kubelet[2648]: I0516 05:24:00.989112 2648 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 05:24:00.990090 kubelet[2648]: I0516 05:24:00.989996 2648 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 16 05:24:00.996577 kubelet[2648]: I0516 05:24:00.996539 2648 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 05:24:00.998604 kubelet[2648]: I0516 05:24:00.998361 2648 server.go:1289] "Started kubelet" May 16 05:24:00.999876 kubelet[2648]: I0516 05:24:00.999804 2648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 05:24:01.000415 kubelet[2648]: I0516 05:24:00.998892 2648 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 16 05:24:01.001920 kubelet[2648]: I0516 05:24:01.001874 2648 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 05:24:01.003427 kubelet[2648]: I0516 05:24:01.003391 2648 server.go:317] "Adding debug handlers to kubelet server" May 16 05:24:01.004488 
kubelet[2648]: I0516 05:24:01.003487 2648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 05:24:01.004488 kubelet[2648]: I0516 05:24:01.003800 2648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 05:24:01.006096 kubelet[2648]: I0516 05:24:01.006068 2648 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 05:24:01.006381 kubelet[2648]: E0516 05:24:01.006349 2648 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 05:24:01.006596 kubelet[2648]: I0516 05:24:01.006563 2648 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 05:24:01.007021 kubelet[2648]: I0516 05:24:01.006695 2648 reconciler.go:26] "Reconciler: start to sync state" May 16 05:24:01.009068 kubelet[2648]: I0516 05:24:01.009016 2648 factory.go:223] Registration of the systemd container factory successfully May 16 05:24:01.009273 kubelet[2648]: I0516 05:24:01.009247 2648 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 05:24:01.010674 kubelet[2648]: I0516 05:24:01.010656 2648 factory.go:223] Registration of the containerd container factory successfully May 16 05:24:01.010935 kubelet[2648]: E0516 05:24:01.010881 2648 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 05:24:01.021723 kubelet[2648]: I0516 05:24:01.021664 2648 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 16 05:24:01.023345 kubelet[2648]: I0516 05:24:01.023327 2648 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 16 05:24:01.024801 kubelet[2648]: I0516 05:24:01.023537 2648 status_manager.go:230] "Starting to sync pod status with apiserver" May 16 05:24:01.024801 kubelet[2648]: I0516 05:24:01.023568 2648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 05:24:01.024801 kubelet[2648]: I0516 05:24:01.023576 2648 kubelet.go:2436] "Starting kubelet main sync loop" May 16 05:24:01.024801 kubelet[2648]: E0516 05:24:01.023616 2648 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 05:24:01.046592 kubelet[2648]: I0516 05:24:01.046554 2648 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 05:24:01.046592 kubelet[2648]: I0516 05:24:01.046579 2648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 05:24:01.046592 kubelet[2648]: I0516 05:24:01.046601 2648 state_mem.go:36] "Initialized new in-memory state store" May 16 05:24:01.046772 kubelet[2648]: I0516 05:24:01.046725 2648 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 05:24:01.046772 kubelet[2648]: I0516 05:24:01.046737 2648 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 05:24:01.046772 kubelet[2648]: I0516 05:24:01.046752 2648 policy_none.go:49] "None policy: Start" May 16 05:24:01.046772 kubelet[2648]: I0516 05:24:01.046761 2648 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 05:24:01.046772 kubelet[2648]: I0516 05:24:01.046770 2648 state_mem.go:35] "Initializing new in-memory state store" May 16 05:24:01.046881 kubelet[2648]: I0516 05:24:01.046852 2648 state_mem.go:75] "Updated machine memory state" May 16 05:24:01.050981 kubelet[2648]: E0516 05:24:01.050955 2648 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 16 05:24:01.051140 kubelet[2648]: I0516 
05:24:01.051119 2648 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 05:24:01.051180 kubelet[2648]: I0516 05:24:01.051138 2648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 05:24:01.051320 kubelet[2648]: I0516 05:24:01.051290 2648 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 05:24:01.052280 kubelet[2648]: E0516 05:24:01.052261 2648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 05:24:01.124783 kubelet[2648]: I0516 05:24:01.124723 2648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 05:24:01.124783 kubelet[2648]: I0516 05:24:01.124766 2648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 05:24:01.124989 kubelet[2648]: I0516 05:24:01.124836 2648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 05:24:01.155997 kubelet[2648]: I0516 05:24:01.155951 2648 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 05:24:01.161298 kubelet[2648]: I0516 05:24:01.161272 2648 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 16 05:24:01.161390 kubelet[2648]: I0516 05:24:01.161332 2648 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 05:24:01.308136 kubelet[2648]: I0516 05:24:01.308103 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:24:01.308136 kubelet[2648]: I0516 05:24:01.308131 2648 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:24:01.308336 kubelet[2648]: I0516 05:24:01.308150 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:24:01.308336 kubelet[2648]: I0516 05:24:01.308167 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 16 05:24:01.308336 kubelet[2648]: I0516 05:24:01.308191 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:24:01.308336 kubelet[2648]: I0516 05:24:01.308244 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:24:01.308336 kubelet[2648]: I0516 
05:24:01.308276 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68018c61ca0b832366fa183ef8833944-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"68018c61ca0b832366fa183ef8833944\") " pod="kube-system/kube-apiserver-localhost" May 16 05:24:01.308458 kubelet[2648]: I0516 05:24:01.308298 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68018c61ca0b832366fa183ef8833944-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"68018c61ca0b832366fa183ef8833944\") " pod="kube-system/kube-apiserver-localhost" May 16 05:24:01.308458 kubelet[2648]: I0516 05:24:01.308319 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68018c61ca0b832366fa183ef8833944-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"68018c61ca0b832366fa183ef8833944\") " pod="kube-system/kube-apiserver-localhost" May 16 05:24:01.362587 sudo[2689]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 05:24:01.362941 sudo[2689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 16 05:24:01.429416 kubelet[2648]: E0516 05:24:01.429383 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:01.431466 kubelet[2648]: E0516 05:24:01.431433 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:01.431515 kubelet[2648]: E0516 05:24:01.431480 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:01.817640 sudo[2689]: pam_unix(sudo:session): session closed for user root May 16 05:24:01.988062 kubelet[2648]: I0516 05:24:01.988019 2648 apiserver.go:52] "Watching apiserver" May 16 05:24:02.007655 kubelet[2648]: I0516 05:24:02.007619 2648 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 05:24:02.034747 kubelet[2648]: E0516 05:24:02.034641 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:02.035242 kubelet[2648]: I0516 05:24:02.035210 2648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 05:24:02.035494 kubelet[2648]: I0516 05:24:02.035483 2648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 05:24:02.130613 kubelet[2648]: E0516 05:24:02.129316 2648 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 05:24:02.130613 kubelet[2648]: E0516 05:24:02.129494 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:02.130816 kubelet[2648]: E0516 05:24:02.130679 2648 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 05:24:02.130816 kubelet[2648]: E0516 05:24:02.130810 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:02.131645 kubelet[2648]: I0516 05:24:02.131591 2648 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.131568069 podStartE2EDuration="1.131568069s" podCreationTimestamp="2025-05-16 05:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:24:02.129155631 +0000 UTC m=+1.193462084" watchObservedRunningTime="2025-05-16 05:24:02.131568069 +0000 UTC m=+1.195874522" May 16 05:24:02.150940 kubelet[2648]: I0516 05:24:02.150833 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.150809448 podStartE2EDuration="1.150809448s" podCreationTimestamp="2025-05-16 05:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:24:02.139263924 +0000 UTC m=+1.203570377" watchObservedRunningTime="2025-05-16 05:24:02.150809448 +0000 UTC m=+1.215115891" May 16 05:24:02.151614 kubelet[2648]: I0516 05:24:02.151278 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.151267906 podStartE2EDuration="1.151267906s" podCreationTimestamp="2025-05-16 05:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:24:02.151255347 +0000 UTC m=+1.215561800" watchObservedRunningTime="2025-05-16 05:24:02.151267906 +0000 UTC m=+1.215574349" May 16 05:24:03.036226 kubelet[2648]: E0516 05:24:03.036181 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:03.037360 kubelet[2648]: E0516 05:24:03.036675 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:03.065361 sudo[1769]: pam_unix(sudo:session): session closed for user root May 16 05:24:03.066697 sshd[1768]: Connection closed by 10.0.0.1 port 57732 May 16 05:24:03.067142 sshd-session[1766]: pam_unix(sshd:session): session closed for user core May 16 05:24:03.071607 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:57732.service: Deactivated successfully. May 16 05:24:03.073959 systemd[1]: session-7.scope: Deactivated successfully. May 16 05:24:03.074185 systemd[1]: session-7.scope: Consumed 5.196s CPU time, 260.9M memory peak. May 16 05:24:03.075602 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. May 16 05:24:03.076850 systemd-logind[1550]: Removed session 7. May 16 05:24:06.651793 kubelet[2648]: E0516 05:24:06.651757 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:06.771096 kubelet[2648]: E0516 05:24:06.771062 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:07.006436 kubelet[2648]: I0516 05:24:07.006407 2648 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 05:24:07.006750 containerd[1566]: time="2025-05-16T05:24:07.006715706Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 16 05:24:07.007095 kubelet[2648]: I0516 05:24:07.006869 2648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 05:24:07.041754 kubelet[2648]: E0516 05:24:07.041699 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:07.041866 kubelet[2648]: E0516 05:24:07.041763 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:07.962403 systemd[1]: Created slice kubepods-besteffort-pod583c7eda_da76_47ce_8550_4995ce57acb1.slice - libcontainer container kubepods-besteffort-pod583c7eda_da76_47ce_8550_4995ce57acb1.slice. May 16 05:24:07.973580 systemd[1]: Created slice kubepods-burstable-pod980596e2_010b_4b7b_ac96_02194acc7e8a.slice - libcontainer container kubepods-burstable-pod980596e2_010b_4b7b_ac96_02194acc7e8a.slice. 
May 16 05:24:08.052062 kubelet[2648]: I0516 05:24:08.052021 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-host-proc-sys-net\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052062 kubelet[2648]: I0516 05:24:08.052059 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7ht\" (UniqueName: \"kubernetes.io/projected/583c7eda-da76-47ce-8550-4995ce57acb1-kube-api-access-gb7ht\") pod \"kube-proxy-mfrzn\" (UID: \"583c7eda-da76-47ce-8550-4995ce57acb1\") " pod="kube-system/kube-proxy-mfrzn" May 16 05:24:08.052513 kubelet[2648]: I0516 05:24:08.052078 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-run\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052513 kubelet[2648]: I0516 05:24:08.052094 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/583c7eda-da76-47ce-8550-4995ce57acb1-xtables-lock\") pod \"kube-proxy-mfrzn\" (UID: \"583c7eda-da76-47ce-8550-4995ce57acb1\") " pod="kube-system/kube-proxy-mfrzn" May 16 05:24:08.052513 kubelet[2648]: I0516 05:24:08.052109 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/583c7eda-da76-47ce-8550-4995ce57acb1-lib-modules\") pod \"kube-proxy-mfrzn\" (UID: \"583c7eda-da76-47ce-8550-4995ce57acb1\") " pod="kube-system/kube-proxy-mfrzn" May 16 05:24:08.052513 kubelet[2648]: I0516 05:24:08.052122 2648 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cni-path\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052513 kubelet[2648]: I0516 05:24:08.052135 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-etc-cni-netd\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052513 kubelet[2648]: I0516 05:24:08.052150 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-xtables-lock\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052655 kubelet[2648]: I0516 05:24:08.052163 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-bpf-maps\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052655 kubelet[2648]: I0516 05:24:08.052176 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-lib-modules\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052655 kubelet[2648]: I0516 05:24:08.052192 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/980596e2-010b-4b7b-ac96-02194acc7e8a-clustermesh-secrets\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052655 kubelet[2648]: I0516 05:24:08.052206 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-config-path\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052655 kubelet[2648]: I0516 05:24:08.052233 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-host-proc-sys-kernel\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052655 kubelet[2648]: I0516 05:24:08.052247 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/980596e2-010b-4b7b-ac96-02194acc7e8a-hubble-tls\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052802 kubelet[2648]: I0516 05:24:08.052260 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbhgf\" (UniqueName: \"kubernetes.io/projected/980596e2-010b-4b7b-ac96-02194acc7e8a-kube-api-access-bbhgf\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052802 kubelet[2648]: I0516 05:24:08.052274 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/583c7eda-da76-47ce-8550-4995ce57acb1-kube-proxy\") pod 
\"kube-proxy-mfrzn\" (UID: \"583c7eda-da76-47ce-8550-4995ce57acb1\") " pod="kube-system/kube-proxy-mfrzn" May 16 05:24:08.052802 kubelet[2648]: I0516 05:24:08.052287 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-hostproc\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.052802 kubelet[2648]: I0516 05:24:08.052333 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-cgroup\") pod \"cilium-l624s\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") " pod="kube-system/cilium-l624s" May 16 05:24:08.106876 systemd[1]: Created slice kubepods-besteffort-podb970f8ae_a417_4783_88a7_907fedd6bf99.slice - libcontainer container kubepods-besteffort-podb970f8ae_a417_4783_88a7_907fedd6bf99.slice. 
May 16 05:24:08.154078 kubelet[2648]: I0516 05:24:08.153498 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4462\" (UniqueName: \"kubernetes.io/projected/b970f8ae-a417-4783-88a7-907fedd6bf99-kube-api-access-r4462\") pod \"cilium-operator-6c4d7847fc-x7str\" (UID: \"b970f8ae-a417-4783-88a7-907fedd6bf99\") " pod="kube-system/cilium-operator-6c4d7847fc-x7str" May 16 05:24:08.154078 kubelet[2648]: I0516 05:24:08.153585 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b970f8ae-a417-4783-88a7-907fedd6bf99-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-x7str\" (UID: \"b970f8ae-a417-4783-88a7-907fedd6bf99\") " pod="kube-system/cilium-operator-6c4d7847fc-x7str" May 16 05:24:08.270999 kubelet[2648]: E0516 05:24:08.270910 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:08.272018 containerd[1566]: time="2025-05-16T05:24:08.271968391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mfrzn,Uid:583c7eda-da76-47ce-8550-4995ce57acb1,Namespace:kube-system,Attempt:0,}" May 16 05:24:08.277402 kubelet[2648]: E0516 05:24:08.277368 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:08.277911 containerd[1566]: time="2025-05-16T05:24:08.277859104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l624s,Uid:980596e2-010b-4b7b-ac96-02194acc7e8a,Namespace:kube-system,Attempt:0,}" May 16 05:24:08.322924 containerd[1566]: time="2025-05-16T05:24:08.322750623Z" level=info msg="connecting to shim a62bdf8008721727038c5fc617fea48fa477689295dffe5432f81d82dbc3a7d3" 
address="unix:///run/containerd/s/76c3990402e7b54fe9182262476511577da317d5307a9b0dadb4b235249e1439" namespace=k8s.io protocol=ttrpc version=3 May 16 05:24:08.323698 containerd[1566]: time="2025-05-16T05:24:08.323638043Z" level=info msg="connecting to shim 2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f" address="unix:///run/containerd/s/fbb9f69deb992cbcefabeca289900d3ba57b77189e8076009f6dfb8c41d58af5" namespace=k8s.io protocol=ttrpc version=3 May 16 05:24:08.378032 systemd[1]: Started cri-containerd-2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f.scope - libcontainer container 2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f. May 16 05:24:08.380091 systemd[1]: Started cri-containerd-a62bdf8008721727038c5fc617fea48fa477689295dffe5432f81d82dbc3a7d3.scope - libcontainer container a62bdf8008721727038c5fc617fea48fa477689295dffe5432f81d82dbc3a7d3. May 16 05:24:08.409165 containerd[1566]: time="2025-05-16T05:24:08.409122335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l624s,Uid:980596e2-010b-4b7b-ac96-02194acc7e8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\"" May 16 05:24:08.409634 kubelet[2648]: E0516 05:24:08.409609 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:08.410092 containerd[1566]: time="2025-05-16T05:24:08.410063993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-x7str,Uid:b970f8ae-a417-4783-88a7-907fedd6bf99,Namespace:kube-system,Attempt:0,}" May 16 05:24:08.411164 kubelet[2648]: E0516 05:24:08.411128 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:08.412994 containerd[1566]: 
time="2025-05-16T05:24:08.412959417Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 05:24:08.419527 containerd[1566]: time="2025-05-16T05:24:08.419478624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mfrzn,Uid:583c7eda-da76-47ce-8550-4995ce57acb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a62bdf8008721727038c5fc617fea48fa477689295dffe5432f81d82dbc3a7d3\"" May 16 05:24:08.420297 kubelet[2648]: E0516 05:24:08.420273 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:08.424715 containerd[1566]: time="2025-05-16T05:24:08.424678179Z" level=info msg="CreateContainer within sandbox \"a62bdf8008721727038c5fc617fea48fa477689295dffe5432f81d82dbc3a7d3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 05:24:08.436677 containerd[1566]: time="2025-05-16T05:24:08.436627739Z" level=info msg="Container 78df9ee8c6d4f9db0fa4c02d7bfad52bf3905deecc4e532e109283065712ce66: CDI devices from CRI Config.CDIDevices: []" May 16 05:24:08.437786 containerd[1566]: time="2025-05-16T05:24:08.437733580Z" level=info msg="connecting to shim e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1" address="unix:///run/containerd/s/e3f84d7d9080ded07fc3a823c42c2fbf718688ab117d406d1f3d6c1a5c582156" namespace=k8s.io protocol=ttrpc version=3 May 16 05:24:08.448713 containerd[1566]: time="2025-05-16T05:24:08.448664525Z" level=info msg="CreateContainer within sandbox \"a62bdf8008721727038c5fc617fea48fa477689295dffe5432f81d82dbc3a7d3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"78df9ee8c6d4f9db0fa4c02d7bfad52bf3905deecc4e532e109283065712ce66\"" May 16 05:24:08.449827 containerd[1566]: time="2025-05-16T05:24:08.449794167Z" level=info msg="StartContainer for 
\"78df9ee8c6d4f9db0fa4c02d7bfad52bf3905deecc4e532e109283065712ce66\"" May 16 05:24:08.453930 containerd[1566]: time="2025-05-16T05:24:08.452755805Z" level=info msg="connecting to shim 78df9ee8c6d4f9db0fa4c02d7bfad52bf3905deecc4e532e109283065712ce66" address="unix:///run/containerd/s/76c3990402e7b54fe9182262476511577da317d5307a9b0dadb4b235249e1439" protocol=ttrpc version=3 May 16 05:24:08.465064 systemd[1]: Started cri-containerd-e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1.scope - libcontainer container e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1. May 16 05:24:08.473030 systemd[1]: Started cri-containerd-78df9ee8c6d4f9db0fa4c02d7bfad52bf3905deecc4e532e109283065712ce66.scope - libcontainer container 78df9ee8c6d4f9db0fa4c02d7bfad52bf3905deecc4e532e109283065712ce66. May 16 05:24:08.510635 containerd[1566]: time="2025-05-16T05:24:08.510550034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-x7str,Uid:b970f8ae-a417-4783-88a7-907fedd6bf99,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1\"" May 16 05:24:08.511715 kubelet[2648]: E0516 05:24:08.511684 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:08.522706 containerd[1566]: time="2025-05-16T05:24:08.522596453Z" level=info msg="StartContainer for \"78df9ee8c6d4f9db0fa4c02d7bfad52bf3905deecc4e532e109283065712ce66\" returns successfully" May 16 05:24:08.839138 kubelet[2648]: E0516 05:24:08.838956 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:09.047815 kubelet[2648]: E0516 05:24:09.047769 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:09.047815 kubelet[2648]: E0516 05:24:09.047819 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:09.059484 kubelet[2648]: I0516 05:24:09.059368 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mfrzn" podStartSLOduration=2.059298577 podStartE2EDuration="2.059298577s" podCreationTimestamp="2025-05-16 05:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:24:09.058275228 +0000 UTC m=+8.122581681" watchObservedRunningTime="2025-05-16 05:24:09.059298577 +0000 UTC m=+8.123605030" May 16 05:24:10.050678 kubelet[2648]: E0516 05:24:10.050628 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:13.303876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882951448.mount: Deactivated successfully. May 16 05:24:14.533672 update_engine[1552]: I20250516 05:24:14.533593 1552 update_attempter.cc:509] Updating boot flags... 
May 16 05:24:22.310062 containerd[1566]: time="2025-05-16T05:24:22.309982903Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:24:22.310782 containerd[1566]: time="2025-05-16T05:24:22.310758888Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 05:24:22.311927 containerd[1566]: time="2025-05-16T05:24:22.311885137Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:24:22.313152 containerd[1566]: time="2025-05-16T05:24:22.313121188Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.90012344s" May 16 05:24:22.313152 containerd[1566]: time="2025-05-16T05:24:22.313150547Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 05:24:22.314265 containerd[1566]: time="2025-05-16T05:24:22.314055370Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 05:24:22.317571 containerd[1566]: time="2025-05-16T05:24:22.317528390Z" level=info msg="CreateContainer within sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 05:24:22.325929 containerd[1566]: time="2025-05-16T05:24:22.325867756Z" level=info msg="Container 1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a: CDI devices from CRI Config.CDIDevices: []" May 16 05:24:22.331914 containerd[1566]: time="2025-05-16T05:24:22.331866424Z" level=info msg="CreateContainer within sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\"" May 16 05:24:22.332318 containerd[1566]: time="2025-05-16T05:24:22.332282562Z" level=info msg="StartContainer for \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\"" May 16 05:24:22.333296 containerd[1566]: time="2025-05-16T05:24:22.333270704Z" level=info msg="connecting to shim 1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a" address="unix:///run/containerd/s/fbb9f69deb992cbcefabeca289900d3ba57b77189e8076009f6dfb8c41d58af5" protocol=ttrpc version=3 May 16 05:24:22.359084 systemd[1]: Started cri-containerd-1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a.scope - libcontainer container 1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a. May 16 05:24:22.389854 containerd[1566]: time="2025-05-16T05:24:22.389800855Z" level=info msg="StartContainer for \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\" returns successfully" May 16 05:24:22.399688 systemd[1]: cri-containerd-1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a.scope: Deactivated successfully. 
May 16 05:24:22.402388 containerd[1566]: time="2025-05-16T05:24:22.402227478Z" level=info msg="received exit event container_id:\"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\" id:\"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\" pid:3089 exited_at:{seconds:1747373062 nanos:401576295}" May 16 05:24:22.402388 containerd[1566]: time="2025-05-16T05:24:22.402368672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\" id:\"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\" pid:3089 exited_at:{seconds:1747373062 nanos:401576295}" May 16 05:24:22.426390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a-rootfs.mount: Deactivated successfully. May 16 05:24:23.075118 kubelet[2648]: E0516 05:24:23.075078 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:23.080743 containerd[1566]: time="2025-05-16T05:24:23.080687620Z" level=info msg="CreateContainer within sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 05:24:23.090359 containerd[1566]: time="2025-05-16T05:24:23.090321869Z" level=info msg="Container 53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc: CDI devices from CRI Config.CDIDevices: []" May 16 05:24:23.097017 containerd[1566]: time="2025-05-16T05:24:23.096993654Z" level=info msg="CreateContainer within sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\"" May 16 05:24:23.097543 containerd[1566]: 
time="2025-05-16T05:24:23.097498569Z" level=info msg="StartContainer for \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\"" May 16 05:24:23.098645 containerd[1566]: time="2025-05-16T05:24:23.098600563Z" level=info msg="connecting to shim 53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc" address="unix:///run/containerd/s/fbb9f69deb992cbcefabeca289900d3ba57b77189e8076009f6dfb8c41d58af5" protocol=ttrpc version=3 May 16 05:24:23.120078 systemd[1]: Started cri-containerd-53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc.scope - libcontainer container 53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc. May 16 05:24:23.149788 containerd[1566]: time="2025-05-16T05:24:23.149745001Z" level=info msg="StartContainer for \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\" returns successfully" May 16 05:24:23.164842 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 05:24:23.165099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 05:24:23.165329 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 05:24:23.167224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 05:24:23.169299 systemd[1]: cri-containerd-53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc.scope: Deactivated successfully. 
May 16 05:24:23.171127 containerd[1566]: time="2025-05-16T05:24:23.171092355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\" id:\"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\" pid:3137 exited_at:{seconds:1747373063 nanos:170439503}" May 16 05:24:23.171202 containerd[1566]: time="2025-05-16T05:24:23.171146434Z" level=info msg="received exit event container_id:\"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\" id:\"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\" pid:3137 exited_at:{seconds:1747373063 nanos:170439503}" May 16 05:24:23.187995 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 05:24:23.894742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714334316.mount: Deactivated successfully. May 16 05:24:24.079167 kubelet[2648]: E0516 05:24:24.079130 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:24:24.084028 containerd[1566]: time="2025-05-16T05:24:24.083935784Z" level=info msg="CreateContainer within sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 05:24:24.095240 containerd[1566]: time="2025-05-16T05:24:24.095185376Z" level=info msg="Container 7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636: CDI devices from CRI Config.CDIDevices: []" May 16 05:24:24.099870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344819659.mount: Deactivated successfully. 
May 16 05:24:24.108032 containerd[1566]: time="2025-05-16T05:24:24.107996187Z" level=info msg="CreateContainer within sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\""
May 16 05:24:24.108618 containerd[1566]: time="2025-05-16T05:24:24.108580057Z" level=info msg="StartContainer for \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\""
May 16 05:24:24.110066 containerd[1566]: time="2025-05-16T05:24:24.110037748Z" level=info msg="connecting to shim 7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636" address="unix:///run/containerd/s/fbb9f69deb992cbcefabeca289900d3ba57b77189e8076009f6dfb8c41d58af5" protocol=ttrpc version=3
May 16 05:24:24.137069 systemd[1]: Started cri-containerd-7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636.scope - libcontainer container 7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636.
May 16 05:24:24.184284 systemd[1]: cri-containerd-7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636.scope: Deactivated successfully.
May 16 05:24:24.188562 containerd[1566]: time="2025-05-16T05:24:24.188518111Z" level=info msg="received exit event container_id:\"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\" id:\"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\" pid:3192 exited_at:{seconds:1747373064 nanos:186161829}"
May 16 05:24:24.188701 containerd[1566]: time="2025-05-16T05:24:24.188595977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\" id:\"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\" pid:3192 exited_at:{seconds:1747373064 nanos:186161829}"
May 16 05:24:24.189352 containerd[1566]: time="2025-05-16T05:24:24.189319507Z" level=info msg="StartContainer for \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\" returns successfully"
May 16 05:24:24.562964 containerd[1566]: time="2025-05-16T05:24:24.562839445Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:24:24.563646 containerd[1566]: time="2025-05-16T05:24:24.563619848Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 16 05:24:24.564917 containerd[1566]: time="2025-05-16T05:24:24.564872709Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 05:24:24.565890 containerd[1566]: time="2025-05-16T05:24:24.565862853Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.251764124s"
May 16 05:24:24.565956 containerd[1566]: time="2025-05-16T05:24:24.565891259Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 16 05:24:24.570206 containerd[1566]: time="2025-05-16T05:24:24.570161115Z" level=info msg="CreateContainer within sandbox \"e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 16 05:24:24.577200 containerd[1566]: time="2025-05-16T05:24:24.577166694Z" level=info msg="Container 0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d: CDI devices from CRI Config.CDIDevices: []"
May 16 05:24:24.584075 containerd[1566]: time="2025-05-16T05:24:24.584043555Z" level=info msg="CreateContainer within sandbox \"e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\""
May 16 05:24:24.584694 containerd[1566]: time="2025-05-16T05:24:24.584494538Z" level=info msg="StartContainer for \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\""
May 16 05:24:24.585270 containerd[1566]: time="2025-05-16T05:24:24.585211014Z" level=info msg="connecting to shim 0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d" address="unix:///run/containerd/s/e3f84d7d9080ded07fc3a823c42c2fbf718688ab117d406d1f3d6c1a5c582156" protocol=ttrpc version=3
May 16 05:24:24.617075 systemd[1]: Started cri-containerd-0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d.scope - libcontainer container 0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d.
May 16 05:24:24.650030 containerd[1566]: time="2025-05-16T05:24:24.649917285Z" level=info msg="StartContainer for \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" returns successfully"
May 16 05:24:25.081316 kubelet[2648]: E0516 05:24:25.081276 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:25.084372 kubelet[2648]: E0516 05:24:25.084351 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:25.166936 containerd[1566]: time="2025-05-16T05:24:25.166870596Z" level=info msg="CreateContainer within sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 05:24:25.401583 kubelet[2648]: I0516 05:24:25.401442 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-x7str" podStartSLOduration=1.348054815 podStartE2EDuration="17.401323695s" podCreationTimestamp="2025-05-16 05:24:08 +0000 UTC" firstStartedPulling="2025-05-16 05:24:08.513179546 +0000 UTC m=+7.577486009" lastFinishedPulling="2025-05-16 05:24:24.566448446 +0000 UTC m=+23.630754889" observedRunningTime="2025-05-16 05:24:25.400447203 +0000 UTC m=+24.464753646" watchObservedRunningTime="2025-05-16 05:24:25.401323695 +0000 UTC m=+24.465630148"
May 16 05:24:25.434451 containerd[1566]: time="2025-05-16T05:24:25.434402979Z" level=info msg="Container f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9: CDI devices from CRI Config.CDIDevices: []"
May 16 05:24:25.438065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673864684.mount: Deactivated successfully.
May 16 05:24:25.669312 containerd[1566]: time="2025-05-16T05:24:25.669177531Z" level=info msg="CreateContainer within sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\""
May 16 05:24:25.670094 containerd[1566]: time="2025-05-16T05:24:25.670056517Z" level=info msg="StartContainer for \"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\""
May 16 05:24:25.671083 containerd[1566]: time="2025-05-16T05:24:25.671013160Z" level=info msg="connecting to shim f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9" address="unix:///run/containerd/s/fbb9f69deb992cbcefabeca289900d3ba57b77189e8076009f6dfb8c41d58af5" protocol=ttrpc version=3
May 16 05:24:25.697031 systemd[1]: Started cri-containerd-f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9.scope - libcontainer container f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9.
May 16 05:24:25.726831 systemd[1]: cri-containerd-f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9.scope: Deactivated successfully.
May 16 05:24:25.727420 containerd[1566]: time="2025-05-16T05:24:25.727379868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\" id:\"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\" pid:3274 exited_at:{seconds:1747373065 nanos:727063757}"
May 16 05:24:25.729477 containerd[1566]: time="2025-05-16T05:24:25.729444265Z" level=info msg="received exit event container_id:\"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\" id:\"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\" pid:3274 exited_at:{seconds:1747373065 nanos:727063757}"
May 16 05:24:25.736749 containerd[1566]: time="2025-05-16T05:24:25.736696319Z" level=info msg="StartContainer for \"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\" returns successfully"
May 16 05:24:25.749212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9-rootfs.mount: Deactivated successfully.
May 16 05:24:26.088805 kubelet[2648]: E0516 05:24:26.088766 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:26.089357 kubelet[2648]: E0516 05:24:26.088836 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:26.094389 containerd[1566]: time="2025-05-16T05:24:26.094342420Z" level=info msg="CreateContainer within sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 05:24:26.108200 containerd[1566]: time="2025-05-16T05:24:26.108163237Z" level=info msg="Container ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611: CDI devices from CRI Config.CDIDevices: []"
May 16 05:24:26.115387 containerd[1566]: time="2025-05-16T05:24:26.115344946Z" level=info msg="CreateContainer within sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\""
May 16 05:24:26.115906 containerd[1566]: time="2025-05-16T05:24:26.115865924Z" level=info msg="StartContainer for \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\""
May 16 05:24:26.117070 containerd[1566]: time="2025-05-16T05:24:26.117023713Z" level=info msg="connecting to shim ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611" address="unix:///run/containerd/s/fbb9f69deb992cbcefabeca289900d3ba57b77189e8076009f6dfb8c41d58af5" protocol=ttrpc version=3
May 16 05:24:26.136029 systemd[1]: Started cri-containerd-ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611.scope - libcontainer container ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611.
May 16 05:24:26.176361 containerd[1566]: time="2025-05-16T05:24:26.176316795Z" level=info msg="StartContainer for \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" returns successfully"
May 16 05:24:26.244675 containerd[1566]: time="2025-05-16T05:24:26.244602376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" id:\"724643f31a39177c2d5f808a1e4745c93a2f36c4c9a8cdde22a71d028f9f5c91\" pid:3344 exited_at:{seconds:1747373066 nanos:244247508}"
May 16 05:24:26.292277 kubelet[2648]: I0516 05:24:26.292245 2648 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 16 05:24:26.344418 systemd[1]: Created slice kubepods-burstable-pod67d123d6_f405_48c0_bd25_cc64ee6e2b1c.slice - libcontainer container kubepods-burstable-pod67d123d6_f405_48c0_bd25_cc64ee6e2b1c.slice.
May 16 05:24:26.356346 systemd[1]: Created slice kubepods-burstable-pod351bfa21_d079_40f3_8f78_f73c573b2302.slice - libcontainer container kubepods-burstable-pod351bfa21_d079_40f3_8f78_f73c573b2302.slice.
May 16 05:24:26.373303 kubelet[2648]: I0516 05:24:26.373251 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9hp\" (UniqueName: \"kubernetes.io/projected/67d123d6-f405-48c0-bd25-cc64ee6e2b1c-kube-api-access-gq9hp\") pod \"coredns-674b8bbfcf-nkw6t\" (UID: \"67d123d6-f405-48c0-bd25-cc64ee6e2b1c\") " pod="kube-system/coredns-674b8bbfcf-nkw6t"
May 16 05:24:26.373303 kubelet[2648]: I0516 05:24:26.373293 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/351bfa21-d079-40f3-8f78-f73c573b2302-config-volume\") pod \"coredns-674b8bbfcf-7vw6r\" (UID: \"351bfa21-d079-40f3-8f78-f73c573b2302\") " pod="kube-system/coredns-674b8bbfcf-7vw6r"
May 16 05:24:26.373303 kubelet[2648]: I0516 05:24:26.373313 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92sgx\" (UniqueName: \"kubernetes.io/projected/351bfa21-d079-40f3-8f78-f73c573b2302-kube-api-access-92sgx\") pod \"coredns-674b8bbfcf-7vw6r\" (UID: \"351bfa21-d079-40f3-8f78-f73c573b2302\") " pod="kube-system/coredns-674b8bbfcf-7vw6r"
May 16 05:24:26.373523 kubelet[2648]: I0516 05:24:26.373361 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67d123d6-f405-48c0-bd25-cc64ee6e2b1c-config-volume\") pod \"coredns-674b8bbfcf-nkw6t\" (UID: \"67d123d6-f405-48c0-bd25-cc64ee6e2b1c\") " pod="kube-system/coredns-674b8bbfcf-nkw6t"
May 16 05:24:26.470011 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:44916.service - OpenSSH per-connection server daemon (10.0.0.1:44916).
May 16 05:24:26.527947 sshd[3376]: Accepted publickey for core from 10.0.0.1 port 44916 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:26.529417 sshd-session[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:26.534607 systemd-logind[1550]: New session 8 of user core.
May 16 05:24:26.541070 systemd[1]: Started session-8.scope - Session 8 of User core.
May 16 05:24:26.654824 kubelet[2648]: E0516 05:24:26.653661 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:26.657861 containerd[1566]: time="2025-05-16T05:24:26.655712397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nkw6t,Uid:67d123d6-f405-48c0-bd25-cc64ee6e2b1c,Namespace:kube-system,Attempt:0,}"
May 16 05:24:26.660834 kubelet[2648]: E0516 05:24:26.660795 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:26.663369 containerd[1566]: time="2025-05-16T05:24:26.663164165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7vw6r,Uid:351bfa21-d079-40f3-8f78-f73c573b2302,Namespace:kube-system,Attempt:0,}"
May 16 05:24:26.684946 sshd[3380]: Connection closed by 10.0.0.1 port 44916
May 16 05:24:26.685143 sshd-session[3376]: pam_unix(sshd:session): session closed for user core
May 16 05:24:26.693859 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:44916.service: Deactivated successfully.
May 16 05:24:26.696986 systemd[1]: session-8.scope: Deactivated successfully.
May 16 05:24:26.699047 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit.
May 16 05:24:26.703361 systemd-logind[1550]: Removed session 8.
May 16 05:24:27.094834 kubelet[2648]: E0516 05:24:27.094790 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:28.096926 kubelet[2648]: E0516 05:24:28.096874 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:28.459567 systemd-networkd[1493]: cilium_host: Link UP
May 16 05:24:28.459733 systemd-networkd[1493]: cilium_net: Link UP
May 16 05:24:28.459948 systemd-networkd[1493]: cilium_net: Gained carrier
May 16 05:24:28.460155 systemd-networkd[1493]: cilium_host: Gained carrier
May 16 05:24:28.569224 systemd-networkd[1493]: cilium_vxlan: Link UP
May 16 05:24:28.569239 systemd-networkd[1493]: cilium_vxlan: Gained carrier
May 16 05:24:28.788934 kernel: NET: Registered PF_ALG protocol family
May 16 05:24:29.010183 systemd-networkd[1493]: cilium_host: Gained IPv6LL
May 16 05:24:29.098870 kubelet[2648]: E0516 05:24:29.098821 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:29.266143 systemd-networkd[1493]: cilium_net: Gained IPv6LL
May 16 05:24:29.475512 systemd-networkd[1493]: lxc_health: Link UP
May 16 05:24:29.475808 systemd-networkd[1493]: lxc_health: Gained carrier
May 16 05:24:29.749572 systemd-networkd[1493]: lxc753478ecad1c: Link UP
May 16 05:24:29.755926 kernel: eth0: renamed from tmp86eea
May 16 05:24:29.757573 systemd-networkd[1493]: lxc753478ecad1c: Gained carrier
May 16 05:24:29.796788 systemd-networkd[1493]: lxcf2d5b4f600e2: Link UP
May 16 05:24:29.798934 kernel: eth0: renamed from tmp128d2
May 16 05:24:29.803268 systemd-networkd[1493]: lxcf2d5b4f600e2: Gained carrier
May 16 05:24:30.227131 systemd-networkd[1493]: cilium_vxlan: Gained IPv6LL
May 16 05:24:30.280147 kubelet[2648]: E0516 05:24:30.280110 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:30.356659 kubelet[2648]: I0516 05:24:30.356476 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l624s" podStartSLOduration=9.454827548 podStartE2EDuration="23.356453122s" podCreationTimestamp="2025-05-16 05:24:07 +0000 UTC" firstStartedPulling="2025-05-16 05:24:08.412284504 +0000 UTC m=+7.476590957" lastFinishedPulling="2025-05-16 05:24:22.313910078 +0000 UTC m=+21.378216531" observedRunningTime="2025-05-16 05:24:27.108366815 +0000 UTC m=+26.172673268" watchObservedRunningTime="2025-05-16 05:24:30.356453122 +0000 UTC m=+29.420759575"
May 16 05:24:31.122173 systemd-networkd[1493]: lxc_health: Gained IPv6LL
May 16 05:24:31.442126 systemd-networkd[1493]: lxc753478ecad1c: Gained IPv6LL
May 16 05:24:31.703078 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:48376.service - OpenSSH per-connection server daemon (10.0.0.1:48376).
May 16 05:24:31.745857 sshd[3833]: Accepted publickey for core from 10.0.0.1 port 48376 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:31.747704 sshd-session[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:31.752378 systemd-logind[1550]: New session 9 of user core.
May 16 05:24:31.762084 systemd-networkd[1493]: lxcf2d5b4f600e2: Gained IPv6LL
May 16 05:24:31.762202 systemd[1]: Started session-9.scope - Session 9 of User core.
May 16 05:24:31.890330 sshd[3835]: Connection closed by 10.0.0.1 port 48376
May 16 05:24:31.891154 sshd-session[3833]: pam_unix(sshd:session): session closed for user core
May 16 05:24:31.895995 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit.
May 16 05:24:31.896737 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:48376.service: Deactivated successfully.
May 16 05:24:31.899087 systemd[1]: session-9.scope: Deactivated successfully.
May 16 05:24:31.901445 systemd-logind[1550]: Removed session 9.
May 16 05:24:33.136193 containerd[1566]: time="2025-05-16T05:24:33.136127736Z" level=info msg="connecting to shim 86eeafbb598a87d4891c04d968061b3b26dcd78296bc08edfc776ff60f1912e6" address="unix:///run/containerd/s/915b6e05002868dcdcb1bb206ee883573ba07a0f92491b8a2a27f67be22d65d9" namespace=k8s.io protocol=ttrpc version=3
May 16 05:24:33.152638 containerd[1566]: time="2025-05-16T05:24:33.152568061Z" level=info msg="connecting to shim 128d28f8601539ee421da264d95e8f0e616e3489e7e1c4d1518f7aad3cd6bb10" address="unix:///run/containerd/s/ad9debd0f1b10d82c660b8070e4f3f85403ff178a35d3fd208a0f01e41428071" namespace=k8s.io protocol=ttrpc version=3
May 16 05:24:33.175124 systemd[1]: Started cri-containerd-86eeafbb598a87d4891c04d968061b3b26dcd78296bc08edfc776ff60f1912e6.scope - libcontainer container 86eeafbb598a87d4891c04d968061b3b26dcd78296bc08edfc776ff60f1912e6.
May 16 05:24:33.179952 systemd[1]: Started cri-containerd-128d28f8601539ee421da264d95e8f0e616e3489e7e1c4d1518f7aad3cd6bb10.scope - libcontainer container 128d28f8601539ee421da264d95e8f0e616e3489e7e1c4d1518f7aad3cd6bb10.
May 16 05:24:33.192848 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 05:24:33.197553 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 05:24:33.232222 containerd[1566]: time="2025-05-16T05:24:33.232166569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nkw6t,Uid:67d123d6-f405-48c0-bd25-cc64ee6e2b1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"86eeafbb598a87d4891c04d968061b3b26dcd78296bc08edfc776ff60f1912e6\""
May 16 05:24:33.236214 containerd[1566]: time="2025-05-16T05:24:33.236172329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7vw6r,Uid:351bfa21-d079-40f3-8f78-f73c573b2302,Namespace:kube-system,Attempt:0,} returns sandbox id \"128d28f8601539ee421da264d95e8f0e616e3489e7e1c4d1518f7aad3cd6bb10\""
May 16 05:24:33.236988 kubelet[2648]: E0516 05:24:33.236953 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:33.237561 kubelet[2648]: E0516 05:24:33.237136 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:33.242292 containerd[1566]: time="2025-05-16T05:24:33.242218976Z" level=info msg="CreateContainer within sandbox \"128d28f8601539ee421da264d95e8f0e616e3489e7e1c4d1518f7aad3cd6bb10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 05:24:33.245385 containerd[1566]: time="2025-05-16T05:24:33.245345397Z" level=info msg="CreateContainer within sandbox \"86eeafbb598a87d4891c04d968061b3b26dcd78296bc08edfc776ff60f1912e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 05:24:33.288940 containerd[1566]: time="2025-05-16T05:24:33.288870192Z" level=info msg="Container 93dca391245d2ec872f639056698dbceecd7a40e229ef5f0621fec6a49b94a19: CDI devices from CRI Config.CDIDevices: []"
May 16 05:24:33.296608 containerd[1566]: time="2025-05-16T05:24:33.296575912Z" level=info msg="CreateContainer within sandbox \"128d28f8601539ee421da264d95e8f0e616e3489e7e1c4d1518f7aad3cd6bb10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93dca391245d2ec872f639056698dbceecd7a40e229ef5f0621fec6a49b94a19\""
May 16 05:24:33.297356 containerd[1566]: time="2025-05-16T05:24:33.297319374Z" level=info msg="StartContainer for \"93dca391245d2ec872f639056698dbceecd7a40e229ef5f0621fec6a49b94a19\""
May 16 05:24:33.297539 containerd[1566]: time="2025-05-16T05:24:33.297518797Z" level=info msg="Container 38ff158726e94276c20a9e3a7c645555911b5dc4905cfc8487927ad8e47fc282: CDI devices from CRI Config.CDIDevices: []"
May 16 05:24:33.298242 containerd[1566]: time="2025-05-16T05:24:33.298212521Z" level=info msg="connecting to shim 93dca391245d2ec872f639056698dbceecd7a40e229ef5f0621fec6a49b94a19" address="unix:///run/containerd/s/ad9debd0f1b10d82c660b8070e4f3f85403ff178a35d3fd208a0f01e41428071" protocol=ttrpc version=3
May 16 05:24:33.305718 containerd[1566]: time="2025-05-16T05:24:33.305674132Z" level=info msg="CreateContainer within sandbox \"86eeafbb598a87d4891c04d968061b3b26dcd78296bc08edfc776ff60f1912e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38ff158726e94276c20a9e3a7c645555911b5dc4905cfc8487927ad8e47fc282\""
May 16 05:24:33.307159 containerd[1566]: time="2025-05-16T05:24:33.306955654Z" level=info msg="StartContainer for \"38ff158726e94276c20a9e3a7c645555911b5dc4905cfc8487927ad8e47fc282\""
May 16 05:24:33.318041 systemd[1]: Started cri-containerd-93dca391245d2ec872f639056698dbceecd7a40e229ef5f0621fec6a49b94a19.scope - libcontainer container 93dca391245d2ec872f639056698dbceecd7a40e229ef5f0621fec6a49b94a19.
May 16 05:24:33.324145 containerd[1566]: time="2025-05-16T05:24:33.324103000Z" level=info msg="connecting to shim 38ff158726e94276c20a9e3a7c645555911b5dc4905cfc8487927ad8e47fc282" address="unix:///run/containerd/s/915b6e05002868dcdcb1bb206ee883573ba07a0f92491b8a2a27f67be22d65d9" protocol=ttrpc version=3
May 16 05:24:33.343217 systemd[1]: Started cri-containerd-38ff158726e94276c20a9e3a7c645555911b5dc4905cfc8487927ad8e47fc282.scope - libcontainer container 38ff158726e94276c20a9e3a7c645555911b5dc4905cfc8487927ad8e47fc282.
May 16 05:24:33.358844 containerd[1566]: time="2025-05-16T05:24:33.358797972Z" level=info msg="StartContainer for \"93dca391245d2ec872f639056698dbceecd7a40e229ef5f0621fec6a49b94a19\" returns successfully"
May 16 05:24:33.374768 containerd[1566]: time="2025-05-16T05:24:33.374709146Z" level=info msg="StartContainer for \"38ff158726e94276c20a9e3a7c645555911b5dc4905cfc8487927ad8e47fc282\" returns successfully"
May 16 05:24:34.109634 kubelet[2648]: E0516 05:24:34.109594 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:34.112752 kubelet[2648]: E0516 05:24:34.112299 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:34.127844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569874782.mount: Deactivated successfully.
May 16 05:24:34.247235 kubelet[2648]: I0516 05:24:34.246987 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nkw6t" podStartSLOduration=26.246971132 podStartE2EDuration="26.246971132s" podCreationTimestamp="2025-05-16 05:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:24:34.246410511 +0000 UTC m=+33.310716964" watchObservedRunningTime="2025-05-16 05:24:34.246971132 +0000 UTC m=+33.311277615"
May 16 05:24:35.124591 kubelet[2648]: E0516 05:24:35.124555 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:35.124914 kubelet[2648]: E0516 05:24:35.124708 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:35.591590 kubelet[2648]: I0516 05:24:35.591555 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 16 05:24:35.592091 kubelet[2648]: E0516 05:24:35.592059 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:35.606547 kubelet[2648]: I0516 05:24:35.606481 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7vw6r" podStartSLOduration=27.606460308 podStartE2EDuration="27.606460308s" podCreationTimestamp="2025-05-16 05:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:24:34.29180916 +0000 UTC m=+33.356115633" watchObservedRunningTime="2025-05-16 05:24:35.606460308 +0000 UTC m=+34.670766761"
May 16 05:24:36.128051 kubelet[2648]: E0516 05:24:36.128013 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:36.128051 kubelet[2648]: E0516 05:24:36.128178 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:36.128051 kubelet[2648]: E0516 05:24:36.128358 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:24:36.916432 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:48382.service - OpenSSH per-connection server daemon (10.0.0.1:48382).
May 16 05:24:36.978273 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 48382 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:36.980071 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:36.985020 systemd-logind[1550]: New session 10 of user core.
May 16 05:24:36.992067 systemd[1]: Started session-10.scope - Session 10 of User core.
May 16 05:24:37.111299 sshd[4027]: Connection closed by 10.0.0.1 port 48382
May 16 05:24:37.111622 sshd-session[4025]: pam_unix(sshd:session): session closed for user core
May 16 05:24:37.115698 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:48382.service: Deactivated successfully.
May 16 05:24:37.117823 systemd[1]: session-10.scope: Deactivated successfully.
May 16 05:24:37.118621 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit.
May 16 05:24:37.119906 systemd-logind[1550]: Removed session 10.
May 16 05:24:42.127085 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:58972.service - OpenSSH per-connection server daemon (10.0.0.1:58972).
May 16 05:24:42.179448 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 58972 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:42.181090 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:42.185821 systemd-logind[1550]: New session 11 of user core.
May 16 05:24:42.193011 systemd[1]: Started session-11.scope - Session 11 of User core.
May 16 05:24:42.301401 sshd[4048]: Connection closed by 10.0.0.1 port 58972
May 16 05:24:42.301734 sshd-session[4046]: pam_unix(sshd:session): session closed for user core
May 16 05:24:42.310791 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:58972.service: Deactivated successfully.
May 16 05:24:42.312978 systemd[1]: session-11.scope: Deactivated successfully.
May 16 05:24:42.313951 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit.
May 16 05:24:42.317597 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:58986.service - OpenSSH per-connection server daemon (10.0.0.1:58986).
May 16 05:24:42.318516 systemd-logind[1550]: Removed session 11.
May 16 05:24:42.364722 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 58986 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:42.366232 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:42.370885 systemd-logind[1550]: New session 12 of user core.
May 16 05:24:42.385037 systemd[1]: Started session-12.scope - Session 12 of User core.
May 16 05:24:42.524934 sshd[4064]: Connection closed by 10.0.0.1 port 58986
May 16 05:24:42.525579 sshd-session[4062]: pam_unix(sshd:session): session closed for user core
May 16 05:24:42.536922 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:58986.service: Deactivated successfully.
May 16 05:24:42.544220 systemd[1]: session-12.scope: Deactivated successfully.
May 16 05:24:42.546198 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit.
May 16 05:24:42.551529 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:58996.service - OpenSSH per-connection server daemon (10.0.0.1:58996).
May 16 05:24:42.554722 systemd-logind[1550]: Removed session 12.
May 16 05:24:42.604926 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 58996 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:42.606311 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:42.610987 systemd-logind[1550]: New session 13 of user core.
May 16 05:24:42.621063 systemd[1]: Started session-13.scope - Session 13 of User core.
May 16 05:24:42.729792 sshd[4077]: Connection closed by 10.0.0.1 port 58996
May 16 05:24:42.730112 sshd-session[4075]: pam_unix(sshd:session): session closed for user core
May 16 05:24:42.734611 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:58996.service: Deactivated successfully.
May 16 05:24:42.736564 systemd[1]: session-13.scope: Deactivated successfully.
May 16 05:24:42.737403 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit.
May 16 05:24:42.738991 systemd-logind[1550]: Removed session 13.
May 16 05:24:47.743849 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:59006.service - OpenSSH per-connection server daemon (10.0.0.1:59006).
May 16 05:24:47.793746 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 59006 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:47.795456 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:47.800627 systemd-logind[1550]: New session 14 of user core.
May 16 05:24:47.809053 systemd[1]: Started session-14.scope - Session 14 of User core.
May 16 05:24:47.927608 sshd[4092]: Connection closed by 10.0.0.1 port 59006
May 16 05:24:47.928037 sshd-session[4090]: pam_unix(sshd:session): session closed for user core
May 16 05:24:47.932568 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:59006.service: Deactivated successfully.
May 16 05:24:47.934755 systemd[1]: session-14.scope: Deactivated successfully.
May 16 05:24:47.935702 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit.
May 16 05:24:47.937149 systemd-logind[1550]: Removed session 14.
May 16 05:24:52.940949 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:37378.service - OpenSSH per-connection server daemon (10.0.0.1:37378).
May 16 05:24:52.993560 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 37378 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:52.995244 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:52.999436 systemd-logind[1550]: New session 15 of user core.
May 16 05:24:53.012045 systemd[1]: Started session-15.scope - Session 15 of User core.
May 16 05:24:53.123118 sshd[4107]: Connection closed by 10.0.0.1 port 37378
May 16 05:24:53.123501 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
May 16 05:24:53.137951 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:37378.service: Deactivated successfully.
May 16 05:24:53.139984 systemd[1]: session-15.scope: Deactivated successfully.
May 16 05:24:53.140865 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit.
May 16 05:24:53.143855 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:37384.service - OpenSSH per-connection server daemon (10.0.0.1:37384).
May 16 05:24:53.144752 systemd-logind[1550]: Removed session 15.
May 16 05:24:53.193492 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 37384 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:53.194971 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:53.199376 systemd-logind[1550]: New session 16 of user core.
May 16 05:24:53.210023 systemd[1]: Started session-16.scope - Session 16 of User core.
May 16 05:24:53.418672 sshd[4123]: Connection closed by 10.0.0.1 port 37384
May 16 05:24:53.419191 sshd-session[4121]: pam_unix(sshd:session): session closed for user core
May 16 05:24:53.433648 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:37384.service: Deactivated successfully.
May 16 05:24:53.436104 systemd[1]: session-16.scope: Deactivated successfully.
May 16 05:24:53.437181 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit.
May 16 05:24:53.441447 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:37390.service - OpenSSH per-connection server daemon (10.0.0.1:37390).
May 16 05:24:53.442278 systemd-logind[1550]: Removed session 16.
May 16 05:24:53.506059 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 37390 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:53.508093 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:53.514447 systemd-logind[1550]: New session 17 of user core.
May 16 05:24:53.534089 systemd[1]: Started session-17.scope - Session 17 of User core.
May 16 05:24:54.376378 sshd[4136]: Connection closed by 10.0.0.1 port 37390
May 16 05:24:54.377224 sshd-session[4134]: pam_unix(sshd:session): session closed for user core
May 16 05:24:54.393141 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:37390.service: Deactivated successfully.
May 16 05:24:54.397126 systemd[1]: session-17.scope: Deactivated successfully.
May 16 05:24:54.399402 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit.
May 16 05:24:54.407369 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:37402.service - OpenSSH per-connection server daemon (10.0.0.1:37402).
May 16 05:24:54.408511 systemd-logind[1550]: Removed session 17.
May 16 05:24:54.464137 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 37402 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:54.466336 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:54.473513 systemd-logind[1550]: New session 18 of user core.
May 16 05:24:54.487149 systemd[1]: Started session-18.scope - Session 18 of User core.
May 16 05:24:54.770037 sshd[4158]: Connection closed by 10.0.0.1 port 37402
May 16 05:24:54.770515 sshd-session[4156]: pam_unix(sshd:session): session closed for user core
May 16 05:24:54.782284 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:37402.service: Deactivated successfully.
May 16 05:24:54.785288 systemd[1]: session-18.scope: Deactivated successfully.
May 16 05:24:54.786721 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit.
May 16 05:24:54.791290 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:37410.service - OpenSSH per-connection server daemon (10.0.0.1:37410).
May 16 05:24:54.792201 systemd-logind[1550]: Removed session 18.
May 16 05:24:54.839564 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 37410 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:24:54.841463 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:24:54.846359 systemd-logind[1550]: New session 19 of user core.
May 16 05:24:54.865033 systemd[1]: Started session-19.scope - Session 19 of User core.
May 16 05:24:54.982253 sshd[4171]: Connection closed by 10.0.0.1 port 37410
May 16 05:24:54.982598 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
May 16 05:24:54.987328 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:37410.service: Deactivated successfully.
May 16 05:24:54.989594 systemd[1]: session-19.scope: Deactivated successfully.
May 16 05:24:54.990388 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit.
May 16 05:24:54.991679 systemd-logind[1550]: Removed session 19.
May 16 05:24:59.999773 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:46260.service - OpenSSH per-connection server daemon (10.0.0.1:46260).
May 16 05:25:00.048880 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 46260 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:25:00.050187 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:25:00.054498 systemd-logind[1550]: New session 20 of user core.
May 16 05:25:00.065023 systemd[1]: Started session-20.scope - Session 20 of User core.
May 16 05:25:00.173151 sshd[4189]: Connection closed by 10.0.0.1 port 46260
May 16 05:25:00.173472 sshd-session[4187]: pam_unix(sshd:session): session closed for user core
May 16 05:25:00.177391 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:46260.service: Deactivated successfully.
May 16 05:25:00.179564 systemd[1]: session-20.scope: Deactivated successfully.
May 16 05:25:00.180385 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit.
May 16 05:25:00.181689 systemd-logind[1550]: Removed session 20.
May 16 05:25:05.187639 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:46270.service - OpenSSH per-connection server daemon (10.0.0.1:46270).
May 16 05:25:05.236259 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 46270 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:25:05.237585 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:25:05.241557 systemd-logind[1550]: New session 21 of user core.
May 16 05:25:05.248015 systemd[1]: Started session-21.scope - Session 21 of User core.
May 16 05:25:05.357296 sshd[4207]: Connection closed by 10.0.0.1 port 46270
May 16 05:25:05.357759 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
May 16 05:25:05.362281 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:46270.service: Deactivated successfully.
May 16 05:25:05.364147 systemd[1]: session-21.scope: Deactivated successfully.
May 16 05:25:05.365003 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit.
May 16 05:25:05.366205 systemd-logind[1550]: Removed session 21.
May 16 05:25:10.373838 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:54924.service - OpenSSH per-connection server daemon (10.0.0.1:54924).
May 16 05:25:10.422175 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 54924 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:25:10.423656 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:25:10.427976 systemd-logind[1550]: New session 22 of user core.
May 16 05:25:10.437054 systemd[1]: Started session-22.scope - Session 22 of User core.
May 16 05:25:10.548683 sshd[4224]: Connection closed by 10.0.0.1 port 54924
May 16 05:25:10.549193 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
May 16 05:25:10.557793 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:54924.service: Deactivated successfully.
May 16 05:25:10.559890 systemd[1]: session-22.scope: Deactivated successfully.
May 16 05:25:10.560807 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit.
May 16 05:25:10.564398 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:54932.service - OpenSSH per-connection server daemon (10.0.0.1:54932).
May 16 05:25:10.565058 systemd-logind[1550]: Removed session 22.
May 16 05:25:10.621492 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 54932 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g
May 16 05:25:10.622794 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 05:25:10.627303 systemd-logind[1550]: New session 23 of user core.
May 16 05:25:10.634025 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 05:25:11.973871 containerd[1566]: time="2025-05-16T05:25:11.973812366Z" level=info msg="StopContainer for \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" with timeout 30 (s)"
May 16 05:25:11.974883 containerd[1566]: time="2025-05-16T05:25:11.974324003Z" level=info msg="Stop container \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" with signal terminated"
May 16 05:25:11.989213 systemd[1]: cri-containerd-0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d.scope: Deactivated successfully.
May 16 05:25:11.991221 containerd[1566]: time="2025-05-16T05:25:11.991166517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" id:\"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" pid:3239 exited_at:{seconds:1747373111 nanos:990751457}"
May 16 05:25:11.991221 containerd[1566]: time="2025-05-16T05:25:11.991167148Z" level=info msg="received exit event container_id:\"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" id:\"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" pid:3239 exited_at:{seconds:1747373111 nanos:990751457}"
May 16 05:25:12.011365 containerd[1566]: time="2025-05-16T05:25:12.011307103Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 05:25:12.013116 containerd[1566]: time="2025-05-16T05:25:12.013051570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" id:\"c0f52aaf44b93e0ecc6d1a872e39392b7a38b62e076fc1f476ca0be4edc1ba14\" pid:4266 exited_at:{seconds:1747373112 nanos:12670804}"
May 16 05:25:12.014871 containerd[1566]: time="2025-05-16T05:25:12.014832194Z" level=info msg="StopContainer for \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" with timeout 2 (s)"
May 16 05:25:12.015244 containerd[1566]: time="2025-05-16T05:25:12.015201059Z" level=info msg="Stop container \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" with signal terminated"
May 16 05:25:12.016188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d-rootfs.mount: Deactivated successfully.
May 16 05:25:12.022835 systemd-networkd[1493]: lxc_health: Link DOWN
May 16 05:25:12.023120 systemd-networkd[1493]: lxc_health: Lost carrier
May 16 05:25:12.024795 kubelet[2648]: E0516 05:25:12.024760 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:12.044423 systemd[1]: cri-containerd-ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611.scope: Deactivated successfully.
May 16 05:25:12.044825 systemd[1]: cri-containerd-ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611.scope: Consumed 6.487s CPU time, 125.4M memory peak, 476K read from disk, 13.3M written to disk.
May 16 05:25:12.046507 containerd[1566]: time="2025-05-16T05:25:12.046461618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" id:\"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" pid:3310 exited_at:{seconds:1747373112 nanos:45612242}"
May 16 05:25:12.047450 containerd[1566]: time="2025-05-16T05:25:12.047208323Z" level=info msg="received exit event container_id:\"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" id:\"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" pid:3310 exited_at:{seconds:1747373112 nanos:45612242}"
May 16 05:25:12.049828 containerd[1566]: time="2025-05-16T05:25:12.049794362Z" level=info msg="StopContainer for \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" returns successfully"
May 16 05:25:12.050917 containerd[1566]: time="2025-05-16T05:25:12.050797092Z" level=info msg="StopPodSandbox for \"e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1\""
May 16 05:25:12.051161 containerd[1566]: time="2025-05-16T05:25:12.051140499Z" level=info msg="Container to stop \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 05:25:12.059386 systemd[1]: cri-containerd-e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1.scope: Deactivated successfully.
May 16 05:25:12.061461 containerd[1566]: time="2025-05-16T05:25:12.061423692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1\" id:\"e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1\" pid:2865 exit_status:137 exited_at:{seconds:1747373112 nanos:60880484}"
May 16 05:25:12.072514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611-rootfs.mount: Deactivated successfully.
May 16 05:25:12.090853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1-rootfs.mount: Deactivated successfully.
May 16 05:25:12.096544 containerd[1566]: time="2025-05-16T05:25:12.096503087Z" level=info msg="shim disconnected" id=e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1 namespace=k8s.io
May 16 05:25:12.096815 containerd[1566]: time="2025-05-16T05:25:12.096786433Z" level=warning msg="cleaning up after shim disconnected" id=e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1 namespace=k8s.io
May 16 05:25:12.104097 containerd[1566]: time="2025-05-16T05:25:12.096804346Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 05:25:12.104192 containerd[1566]: time="2025-05-16T05:25:12.102086848Z" level=info msg="StopContainer for \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" returns successfully"
May 16 05:25:12.104811 containerd[1566]: time="2025-05-16T05:25:12.104741915Z" level=info msg="StopPodSandbox for \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\""
May 16 05:25:12.104966 containerd[1566]: time="2025-05-16T05:25:12.104828876Z" level=info msg="Container to stop \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 05:25:12.104966 containerd[1566]: time="2025-05-16T05:25:12.104842923Z" level=info msg="Container to stop \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 05:25:12.104966 containerd[1566]: time="2025-05-16T05:25:12.104851689Z" level=info msg="Container to stop \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 05:25:12.104966 containerd[1566]: time="2025-05-16T05:25:12.104859874Z" level=info msg="Container to stop \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 05:25:12.104966 containerd[1566]: time="2025-05-16T05:25:12.104867838Z" level=info msg="Container to stop \"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 05:25:12.111278 systemd[1]: cri-containerd-2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f.scope: Deactivated successfully.
May 16 05:25:12.128576 containerd[1566]: time="2025-05-16T05:25:12.128503638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" id:\"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" pid:2796 exit_status:137 exited_at:{seconds:1747373112 nanos:117189834}"
May 16 05:25:12.129016 containerd[1566]: time="2025-05-16T05:25:12.128543482Z" level=info msg="received exit event sandbox_id:\"e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1\" exit_status:137 exited_at:{seconds:1747373112 nanos:60880484}"
May 16 05:25:12.130786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1-shm.mount: Deactivated successfully.
May 16 05:25:12.143657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f-rootfs.mount: Deactivated successfully.
May 16 05:25:12.144854 containerd[1566]: time="2025-05-16T05:25:12.144761988Z" level=info msg="TearDown network for sandbox \"e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1\" successfully"
May 16 05:25:12.144854 containerd[1566]: time="2025-05-16T05:25:12.144814295Z" level=info msg="StopPodSandbox for \"e2887ed1b46feceed224e4bd0d7c727e03b65f6bc4b5208a1c8f54df50f940d1\" returns successfully"
May 16 05:25:12.149256 containerd[1566]: time="2025-05-16T05:25:12.149207277Z" level=info msg="shim disconnected" id=2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f namespace=k8s.io
May 16 05:25:12.149256 containerd[1566]: time="2025-05-16T05:25:12.149234307Z" level=warning msg="cleaning up after shim disconnected" id=2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f namespace=k8s.io
May 16 05:25:12.151070 containerd[1566]: time="2025-05-16T05:25:12.149241400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 05:25:12.151312 containerd[1566]: time="2025-05-16T05:25:12.150720095Z" level=info msg="received exit event sandbox_id:\"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" exit_status:137 exited_at:{seconds:1747373112 nanos:117189834}"
May 16 05:25:12.154701 containerd[1566]: time="2025-05-16T05:25:12.154660316Z" level=info msg="TearDown network for sandbox \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" successfully"
May 16 05:25:12.154701 containerd[1566]: time="2025-05-16T05:25:12.154693718Z" level=info msg="StopPodSandbox for \"2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f\" returns successfully"
May 16 05:25:12.196615 kubelet[2648]: I0516 05:25:12.196567 2648 scope.go:117] "RemoveContainer" containerID="0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d"
May 16 05:25:12.198255 containerd[1566]: time="2025-05-16T05:25:12.198199409Z" level=info msg="RemoveContainer for \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\""
May 16 05:25:12.202877 containerd[1566]: time="2025-05-16T05:25:12.202850601Z" level=info msg="RemoveContainer for \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" returns successfully"
May 16 05:25:12.203578 kubelet[2648]: I0516 05:25:12.203523 2648 scope.go:117] "RemoveContainer" containerID="0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d"
May 16 05:25:12.203781 containerd[1566]: time="2025-05-16T05:25:12.203686281Z" level=error msg="ContainerStatus for \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\": not found"
May 16 05:25:12.207271 kubelet[2648]: E0516 05:25:12.207215 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\": not found" containerID="0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d"
May 16 05:25:12.207412 kubelet[2648]: I0516 05:25:12.207279 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d"} err="failed to get container status \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b6de79cd28dd77d541eb1d67ac6ad9b34249d6b3775a649c322afdfd675717d\": not found"
May 16 05:25:12.207412 kubelet[2648]: I0516 05:25:12.207316 2648 scope.go:117] "RemoveContainer" containerID="ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611"
May 16 05:25:12.208738 containerd[1566]: time="2025-05-16T05:25:12.208717918Z" level=info msg="RemoveContainer for \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\""
May 16 05:25:12.213243 containerd[1566]: time="2025-05-16T05:25:12.213200917Z" level=info msg="RemoveContainer for \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" returns successfully"
May 16 05:25:12.213389 kubelet[2648]: I0516 05:25:12.213357 2648 scope.go:117] "RemoveContainer" containerID="f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9"
May 16 05:25:12.214541 containerd[1566]: time="2025-05-16T05:25:12.214504726Z" level=info msg="RemoveContainer for \"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\""
May 16 05:25:12.219096 containerd[1566]: time="2025-05-16T05:25:12.219056743Z" level=info msg="RemoveContainer for \"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\" returns successfully"
May 16 05:25:12.219283 kubelet[2648]: I0516 05:25:12.219248 2648 scope.go:117] "RemoveContainer" containerID="7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636"
May 16 05:25:12.222112 containerd[1566]: time="2025-05-16T05:25:12.221309314Z" level=info msg="RemoveContainer for \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\""
May 16 05:25:12.225481 containerd[1566]: time="2025-05-16T05:25:12.225409852Z" level=info msg="RemoveContainer for \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\" returns successfully"
May 16 05:25:12.225568 kubelet[2648]: I0516 05:25:12.225547 2648 scope.go:117] "RemoveContainer" containerID="53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc"
May 16 05:25:12.227354 containerd[1566]: time="2025-05-16T05:25:12.227315238Z" level=info msg="RemoveContainer for \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\""
May 16 05:25:12.230730 containerd[1566]: time="2025-05-16T05:25:12.230699098Z" level=info msg="RemoveContainer for \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\" returns successfully"
May 16 05:25:12.230848 kubelet[2648]: I0516 05:25:12.230829 2648 scope.go:117] "RemoveContainer" containerID="1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a"
May 16 05:25:12.232612 containerd[1566]: time="2025-05-16T05:25:12.232092674Z" level=info msg="RemoveContainer for \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\""
May 16 05:25:12.233378 kubelet[2648]: I0516 05:25:12.233337 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4462\" (UniqueName: \"kubernetes.io/projected/b970f8ae-a417-4783-88a7-907fedd6bf99-kube-api-access-r4462\") pod \"b970f8ae-a417-4783-88a7-907fedd6bf99\" (UID: \"b970f8ae-a417-4783-88a7-907fedd6bf99\") "
May 16 05:25:12.233378 kubelet[2648]: I0516 05:25:12.233374 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-run\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.233461 kubelet[2648]: I0516 05:25:12.233389 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-bpf-maps\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.233461 kubelet[2648]: I0516 05:25:12.233403 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-lib-modules\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.233461 kubelet[2648]: I0516 05:25:12.233418 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/980596e2-010b-4b7b-ac96-02194acc7e8a-hubble-tls\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.233461 kubelet[2648]: I0516 05:25:12.233433 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-hostproc\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.233461 kubelet[2648]: I0516 05:25:12.233447 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cni-path\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.233587 kubelet[2648]: I0516 05:25:12.233462 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbhgf\" (UniqueName: \"kubernetes.io/projected/980596e2-010b-4b7b-ac96-02194acc7e8a-kube-api-access-bbhgf\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.233587 kubelet[2648]: I0516 05:25:12.233545 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 05:25:12.233654 kubelet[2648]: I0516 05:25:12.233599 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 05:25:12.233733 kubelet[2648]: I0516 05:25:12.233712 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-hostproc" (OuterVolumeSpecName: "hostproc") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 05:25:12.234369 kubelet[2648]: I0516 05:25:12.234337 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cni-path" (OuterVolumeSpecName: "cni-path") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 05:25:12.234369 kubelet[2648]: I0516 05:25:12.234370 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 05:25:12.234525 kubelet[2648]: I0516 05:25:12.234500 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-etc-cni-netd\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.234565 kubelet[2648]: I0516 05:25:12.234538 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/980596e2-010b-4b7b-ac96-02194acc7e8a-clustermesh-secrets\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.234565 kubelet[2648]: I0516 05:25:12.234557 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b970f8ae-a417-4783-88a7-907fedd6bf99-cilium-config-path\") pod \"b970f8ae-a417-4783-88a7-907fedd6bf99\" (UID: \"b970f8ae-a417-4783-88a7-907fedd6bf99\") "
May 16 05:25:12.234614 kubelet[2648]: I0516 05:25:12.234571 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-host-proc-sys-net\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.234614 kubelet[2648]: I0516 05:25:12.234589 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-config-path\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.234614 kubelet[2648]: I0516 05:25:12.234602 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-cgroup\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.234690 kubelet[2648]: I0516 05:25:12.234619 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-host-proc-sys-kernel\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.234690 kubelet[2648]: I0516 05:25:12.234636 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-xtables-lock\") pod \"980596e2-010b-4b7b-ac96-02194acc7e8a\" (UID: \"980596e2-010b-4b7b-ac96-02194acc7e8a\") "
May 16 05:25:12.234690 kubelet[2648]: I0516 05:25:12.234669 2648 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-run\") on node \"localhost\" DevicePath \"\""
May 16 05:25:12.234690 kubelet[2648]: I0516 05:25:12.234680 2648 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 16 05:25:12.234690 kubelet[2648]: I0516 05:25:12.234687 2648 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-lib-modules\") on node \"localhost\" DevicePath \"\""
May 16 05:25:12.234690 kubelet[2648]: I0516 05:25:12.234695 2648 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-hostproc\") on node \"localhost\" DevicePath \"\""
May 16 05:25:12.234836 kubelet[2648]: I0516 05:25:12.234703 2648 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cni-path\") on node \"localhost\" DevicePath \"\""
May 16 05:25:12.234836 kubelet[2648]: I0516 05:25:12.234724 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 05:25:12.234836 kubelet[2648]: I0516 05:25:12.234739 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 05:25:12.235001 kubelet[2648]: I0516 05:25:12.234977 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "cilium-cgroup".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:25:12.235088 kubelet[2648]: I0516 05:25:12.235064 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:25:12.235162 kubelet[2648]: I0516 05:25:12.235149 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 05:25:12.237233 kubelet[2648]: I0516 05:25:12.237196 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b970f8ae-a417-4783-88a7-907fedd6bf99-kube-api-access-r4462" (OuterVolumeSpecName: "kube-api-access-r4462") pod "b970f8ae-a417-4783-88a7-907fedd6bf99" (UID: "b970f8ae-a417-4783-88a7-907fedd6bf99"). InnerVolumeSpecName "kube-api-access-r4462". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 05:25:12.237313 kubelet[2648]: I0516 05:25:12.237283 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/980596e2-010b-4b7b-ac96-02194acc7e8a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 05:25:12.238219 kubelet[2648]: I0516 05:25:12.238186 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980596e2-010b-4b7b-ac96-02194acc7e8a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 05:25:12.240320 kubelet[2648]: I0516 05:25:12.240279 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980596e2-010b-4b7b-ac96-02194acc7e8a-kube-api-access-bbhgf" (OuterVolumeSpecName: "kube-api-access-bbhgf") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "kube-api-access-bbhgf". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 05:25:12.243528 kubelet[2648]: I0516 05:25:12.243490 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b970f8ae-a417-4783-88a7-907fedd6bf99-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b970f8ae-a417-4783-88a7-907fedd6bf99" (UID: "b970f8ae-a417-4783-88a7-907fedd6bf99"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 05:25:12.243612 kubelet[2648]: I0516 05:25:12.243556 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "980596e2-010b-4b7b-ac96-02194acc7e8a" (UID: "980596e2-010b-4b7b-ac96-02194acc7e8a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 05:25:12.249036 containerd[1566]: time="2025-05-16T05:25:12.248997423Z" level=info msg="RemoveContainer for \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\" returns successfully" May 16 05:25:12.249276 kubelet[2648]: I0516 05:25:12.249251 2648 scope.go:117] "RemoveContainer" containerID="ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611" May 16 05:25:12.249527 containerd[1566]: time="2025-05-16T05:25:12.249486199Z" level=error msg="ContainerStatus for \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\": not found" May 16 05:25:12.249677 kubelet[2648]: E0516 05:25:12.249639 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\": not found" containerID="ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611" May 16 05:25:12.249714 kubelet[2648]: I0516 05:25:12.249675 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611"} err="failed to get container status \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad5c523746cd97634150df881da8a04abb5fb7bb0de4c4f5330ad9c997fb0611\": not found" May 16 05:25:12.249714 kubelet[2648]: I0516 05:25:12.249701 2648 scope.go:117] "RemoveContainer" containerID="f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9" May 16 05:25:12.250050 containerd[1566]: time="2025-05-16T05:25:12.250005714Z" level=error msg="ContainerStatus for 
\"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\": not found" May 16 05:25:12.250223 kubelet[2648]: E0516 05:25:12.250188 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\": not found" containerID="f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9" May 16 05:25:12.250261 kubelet[2648]: I0516 05:25:12.250219 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9"} err="failed to get container status \"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"f77edb1083eb1c0feaf1521c9232eb277cd996c33db6bb58a8ddb6bed4fbe7a9\": not found" May 16 05:25:12.250261 kubelet[2648]: I0516 05:25:12.250243 2648 scope.go:117] "RemoveContainer" containerID="7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636" May 16 05:25:12.250813 containerd[1566]: time="2025-05-16T05:25:12.250778457Z" level=error msg="ContainerStatus for \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\": not found" May 16 05:25:12.252003 kubelet[2648]: E0516 05:25:12.251977 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\": not found" 
containerID="7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636" May 16 05:25:12.252064 kubelet[2648]: I0516 05:25:12.252004 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636"} err="failed to get container status \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c94a8523ddaaa67a3031fe275df36a042cc2892824b82e285d4da6dc76d8636\": not found" May 16 05:25:12.252064 kubelet[2648]: I0516 05:25:12.252019 2648 scope.go:117] "RemoveContainer" containerID="53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc" May 16 05:25:12.252321 containerd[1566]: time="2025-05-16T05:25:12.252280595Z" level=error msg="ContainerStatus for \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\": not found" May 16 05:25:12.255773 kubelet[2648]: E0516 05:25:12.255717 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\": not found" containerID="53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc" May 16 05:25:12.256717 kubelet[2648]: I0516 05:25:12.255789 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc"} err="failed to get container status \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"53c0eedc8039e8716ae09dfb3a585e1407513f9301e3a5086b48fe847eb26cbc\": not found" May 16 
05:25:12.256717 kubelet[2648]: I0516 05:25:12.255805 2648 scope.go:117] "RemoveContainer" containerID="1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a" May 16 05:25:12.256717 kubelet[2648]: E0516 05:25:12.256145 2648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\": not found" containerID="1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a" May 16 05:25:12.256717 kubelet[2648]: I0516 05:25:12.256196 2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a"} err="failed to get container status \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\": not found" May 16 05:25:12.256825 containerd[1566]: time="2025-05-16T05:25:12.256012610Z" level=error msg="ContainerStatus for \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d9f8c11718970058ac38d30341baaa7aebfa898588fa06c7943e47b375dca0a\": not found" May 16 05:25:12.335800 kubelet[2648]: I0516 05:25:12.335754 2648 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.335800 kubelet[2648]: I0516 05:25:12.335789 2648 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r4462\" (UniqueName: \"kubernetes.io/projected/b970f8ae-a417-4783-88a7-907fedd6bf99-kube-api-access-r4462\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.335800 kubelet[2648]: I0516 
05:25:12.335802 2648 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/980596e2-010b-4b7b-ac96-02194acc7e8a-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.335934 kubelet[2648]: I0516 05:25:12.335811 2648 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bbhgf\" (UniqueName: \"kubernetes.io/projected/980596e2-010b-4b7b-ac96-02194acc7e8a-kube-api-access-bbhgf\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.335934 kubelet[2648]: I0516 05:25:12.335820 2648 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.335934 kubelet[2648]: I0516 05:25:12.335828 2648 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/980596e2-010b-4b7b-ac96-02194acc7e8a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.335934 kubelet[2648]: I0516 05:25:12.335836 2648 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b970f8ae-a417-4783-88a7-907fedd6bf99-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.335934 kubelet[2648]: I0516 05:25:12.335844 2648 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.335934 kubelet[2648]: I0516 05:25:12.335853 2648 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.335934 kubelet[2648]: I0516 05:25:12.335863 2648 reconciler_common.go:299] "Volume detached for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.335934 kubelet[2648]: I0516 05:25:12.335872 2648 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/980596e2-010b-4b7b-ac96-02194acc7e8a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 05:25:12.507545 systemd[1]: Removed slice kubepods-besteffort-podb970f8ae_a417_4783_88a7_907fedd6bf99.slice - libcontainer container kubepods-besteffort-podb970f8ae_a417_4783_88a7_907fedd6bf99.slice. May 16 05:25:12.509519 systemd[1]: Removed slice kubepods-burstable-pod980596e2_010b_4b7b_ac96_02194acc7e8a.slice - libcontainer container kubepods-burstable-pod980596e2_010b_4b7b_ac96_02194acc7e8a.slice. May 16 05:25:12.509724 systemd[1]: kubepods-burstable-pod980596e2_010b_4b7b_ac96_02194acc7e8a.slice: Consumed 6.597s CPU time, 125.8M memory peak, 704K read from disk, 13.3M written to disk. May 16 05:25:13.015523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d27845eced64e42093b297066ce6d78150ae391b6d863f9a8893c5dac1cb95f-shm.mount: Deactivated successfully. May 16 05:25:13.015632 systemd[1]: var-lib-kubelet-pods-b970f8ae\x2da417\x2d4783\x2d88a7\x2d907fedd6bf99-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr4462.mount: Deactivated successfully. May 16 05:25:13.015717 systemd[1]: var-lib-kubelet-pods-980596e2\x2d010b\x2d4b7b\x2dac96\x2d02194acc7e8a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbbhgf.mount: Deactivated successfully. May 16 05:25:13.015793 systemd[1]: var-lib-kubelet-pods-980596e2\x2d010b\x2d4b7b\x2dac96\x2d02194acc7e8a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 16 05:25:13.015869 systemd[1]: var-lib-kubelet-pods-980596e2\x2d010b\x2d4b7b\x2dac96\x2d02194acc7e8a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 05:25:13.027308 kubelet[2648]: I0516 05:25:13.027256 2648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="980596e2-010b-4b7b-ac96-02194acc7e8a" path="/var/lib/kubelet/pods/980596e2-010b-4b7b-ac96-02194acc7e8a/volumes" May 16 05:25:13.028089 kubelet[2648]: I0516 05:25:13.028059 2648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b970f8ae-a417-4783-88a7-907fedd6bf99" path="/var/lib/kubelet/pods/b970f8ae-a417-4783-88a7-907fedd6bf99/volumes" May 16 05:25:13.936156 sshd[4239]: Connection closed by 10.0.0.1 port 54932 May 16 05:25:13.936550 sshd-session[4237]: pam_unix(sshd:session): session closed for user core May 16 05:25:13.949008 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:54932.service: Deactivated successfully. May 16 05:25:13.951051 systemd[1]: session-23.scope: Deactivated successfully. May 16 05:25:13.951891 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit. May 16 05:25:13.955301 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:54934.service - OpenSSH per-connection server daemon (10.0.0.1:54934). May 16 05:25:13.956129 systemd-logind[1550]: Removed session 23. May 16 05:25:14.008921 sshd[4389]: Accepted publickey for core from 10.0.0.1 port 54934 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g May 16 05:25:14.010579 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:25:14.015326 systemd-logind[1550]: New session 24 of user core. May 16 05:25:14.025047 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 16 05:25:14.442545 sshd[4391]: Connection closed by 10.0.0.1 port 54934 May 16 05:25:14.442985 sshd-session[4389]: pam_unix(sshd:session): session closed for user core May 16 05:25:14.459563 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:54934.service: Deactivated successfully. May 16 05:25:14.466602 systemd[1]: session-24.scope: Deactivated successfully. May 16 05:25:14.470526 systemd-logind[1550]: Session 24 logged out. Waiting for processes to exit. May 16 05:25:14.476642 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:54944.service - OpenSSH per-connection server daemon (10.0.0.1:54944). May 16 05:25:14.478563 systemd-logind[1550]: Removed session 24. May 16 05:25:14.495942 systemd[1]: Created slice kubepods-burstable-pod7b4853fc_e2d5_4a7e_b341_0a0a5ba9c4fa.slice - libcontainer container kubepods-burstable-pod7b4853fc_e2d5_4a7e_b341_0a0a5ba9c4fa.slice. May 16 05:25:14.537439 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 54944 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g May 16 05:25:14.539296 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:25:14.543976 systemd-logind[1550]: New session 25 of user core. 
May 16 05:25:14.551320 kubelet[2648]: I0516 05:25:14.551280 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-clustermesh-secrets\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551320 kubelet[2648]: I0516 05:25:14.551318 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-hubble-tls\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551609 kubelet[2648]: I0516 05:25:14.551338 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-cni-path\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551609 kubelet[2648]: I0516 05:25:14.551351 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-etc-cni-netd\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551609 kubelet[2648]: I0516 05:25:14.551373 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-bpf-maps\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551609 kubelet[2648]: I0516 05:25:14.551389 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-host-proc-sys-kernel\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551609 kubelet[2648]: I0516 05:25:14.551407 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-cilium-cgroup\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551609 kubelet[2648]: I0516 05:25:14.551436 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-lib-modules\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551769 kubelet[2648]: I0516 05:25:14.551458 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-cilium-config-path\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551769 kubelet[2648]: I0516 05:25:14.551472 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsmck\" (UniqueName: \"kubernetes.io/projected/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-kube-api-access-nsmck\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551769 kubelet[2648]: I0516 05:25:14.551488 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-xtables-lock\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551769 kubelet[2648]: I0516 05:25:14.551503 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-cilium-ipsec-secrets\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551769 kubelet[2648]: I0516 05:25:14.551516 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-cilium-run\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551769 kubelet[2648]: I0516 05:25:14.551533 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-hostproc\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.551926 kubelet[2648]: I0516 05:25:14.551547 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa-host-proc-sys-net\") pod \"cilium-sf84t\" (UID: \"7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa\") " pod="kube-system/cilium-sf84t" May 16 05:25:14.558080 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 16 05:25:14.609852 sshd[4405]: Connection closed by 10.0.0.1 port 54944 May 16 05:25:14.610287 sshd-session[4403]: pam_unix(sshd:session): session closed for user core May 16 05:25:14.623838 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:54944.service: Deactivated successfully. May 16 05:25:14.626433 systemd[1]: session-25.scope: Deactivated successfully. May 16 05:25:14.627439 systemd-logind[1550]: Session 25 logged out. Waiting for processes to exit. May 16 05:25:14.631533 systemd[1]: Started sshd@25-10.0.0.92:22-10.0.0.1:54950.service - OpenSSH per-connection server daemon (10.0.0.1:54950). May 16 05:25:14.633363 systemd-logind[1550]: Removed session 25. May 16 05:25:14.686864 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 54950 ssh2: RSA SHA256:U2kfJK+RE33T9caF7rBjeX4GXyfwg+2fKp8Ub9xQ77g May 16 05:25:14.688358 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:25:14.692884 systemd-logind[1550]: New session 26 of user core. May 16 05:25:14.700045 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 16 05:25:14.807475 kubelet[2648]: E0516 05:25:14.807414 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 05:25:14.808632 containerd[1566]: time="2025-05-16T05:25:14.808060896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sf84t,Uid:7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa,Namespace:kube-system,Attempt:0,}" May 16 05:25:14.824611 containerd[1566]: time="2025-05-16T05:25:14.824550512Z" level=info msg="connecting to shim 407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e" address="unix:///run/containerd/s/ad22ade4c6f29745c5d5bc0ac2155edfaf5e1b42d636eac0e9072eab3dd90b64" namespace=k8s.io protocol=ttrpc version=3 May 16 05:25:14.862095 systemd[1]: Started cri-containerd-407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e.scope - libcontainer container 407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e. 
May 16 05:25:14.891193 containerd[1566]: time="2025-05-16T05:25:14.891135624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sf84t,Uid:7b4853fc-e2d5-4a7e-b341-0a0a5ba9c4fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\""
May 16 05:25:14.892157 kubelet[2648]: E0516 05:25:14.892133 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:14.898078 containerd[1566]: time="2025-05-16T05:25:14.898037776Z" level=info msg="CreateContainer within sandbox \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 05:25:14.918951 containerd[1566]: time="2025-05-16T05:25:14.918888330Z" level=info msg="Container 7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8: CDI devices from CRI Config.CDIDevices: []"
May 16 05:25:14.925234 containerd[1566]: time="2025-05-16T05:25:14.925192220Z" level=info msg="CreateContainer within sandbox \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8\""
May 16 05:25:14.925766 containerd[1566]: time="2025-05-16T05:25:14.925697029Z" level=info msg="StartContainer for \"7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8\""
May 16 05:25:14.926584 containerd[1566]: time="2025-05-16T05:25:14.926556005Z" level=info msg="connecting to shim 7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8" address="unix:///run/containerd/s/ad22ade4c6f29745c5d5bc0ac2155edfaf5e1b42d636eac0e9072eab3dd90b64" protocol=ttrpc version=3
May 16 05:25:14.954075 systemd[1]: Started cri-containerd-7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8.scope - libcontainer container 7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8.
May 16 05:25:14.983324 containerd[1566]: time="2025-05-16T05:25:14.983274684Z" level=info msg="StartContainer for \"7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8\" returns successfully"
May 16 05:25:14.991926 systemd[1]: cri-containerd-7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8.scope: Deactivated successfully.
May 16 05:25:14.993023 containerd[1566]: time="2025-05-16T05:25:14.992960761Z" level=info msg="received exit event container_id:\"7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8\" id:\"7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8\" pid:4484 exited_at:{seconds:1747373114 nanos:992623213}"
May 16 05:25:14.993113 containerd[1566]: time="2025-05-16T05:25:14.993087337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8\" id:\"7e79bdf6437e977d5e9c1ce6781df4afcb699251666a60e6c6b41e7ce8d61af8\" pid:4484 exited_at:{seconds:1747373114 nanos:992623213}"
May 16 05:25:15.213306 kubelet[2648]: E0516 05:25:15.212969 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:15.218193 containerd[1566]: time="2025-05-16T05:25:15.218133791Z" level=info msg="CreateContainer within sandbox \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 05:25:15.225129 containerd[1566]: time="2025-05-16T05:25:15.225076850Z" level=info msg="Container 524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0: CDI devices from CRI Config.CDIDevices: []"
May 16 05:25:15.232621 containerd[1566]: time="2025-05-16T05:25:15.232575904Z" level=info msg="CreateContainer within sandbox \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0\""
May 16 05:25:15.233157 containerd[1566]: time="2025-05-16T05:25:15.233118254Z" level=info msg="StartContainer for \"524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0\""
May 16 05:25:15.234022 containerd[1566]: time="2025-05-16T05:25:15.233980037Z" level=info msg="connecting to shim 524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0" address="unix:///run/containerd/s/ad22ade4c6f29745c5d5bc0ac2155edfaf5e1b42d636eac0e9072eab3dd90b64" protocol=ttrpc version=3
May 16 05:25:15.260077 systemd[1]: Started cri-containerd-524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0.scope - libcontainer container 524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0.
May 16 05:25:15.290004 containerd[1566]: time="2025-05-16T05:25:15.289954956Z" level=info msg="StartContainer for \"524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0\" returns successfully"
May 16 05:25:15.296782 systemd[1]: cri-containerd-524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0.scope: Deactivated successfully.
May 16 05:25:15.297105 containerd[1566]: time="2025-05-16T05:25:15.297066710Z" level=info msg="received exit event container_id:\"524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0\" id:\"524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0\" pid:4528 exited_at:{seconds:1747373115 nanos:296778303}"
May 16 05:25:15.297159 containerd[1566]: time="2025-05-16T05:25:15.297139555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0\" id:\"524d6bc3580399cf8fa83ab40bfb15e82b2d82b94071ce136e428b09b8aa4ed0\" pid:4528 exited_at:{seconds:1747373115 nanos:296778303}"
May 16 05:25:16.024432 kubelet[2648]: E0516 05:25:16.024387 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:16.071223 kubelet[2648]: E0516 05:25:16.071150 2648 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 05:25:16.216721 kubelet[2648]: E0516 05:25:16.216684 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:16.221470 containerd[1566]: time="2025-05-16T05:25:16.221423513Z" level=info msg="CreateContainer within sandbox \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 05:25:16.238276 containerd[1566]: time="2025-05-16T05:25:16.238207963Z" level=info msg="Container f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57: CDI devices from CRI Config.CDIDevices: []"
May 16 05:25:16.247006 containerd[1566]: time="2025-05-16T05:25:16.246505266Z" level=info msg="CreateContainer within sandbox \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57\""
May 16 05:25:16.248171 containerd[1566]: time="2025-05-16T05:25:16.248146322Z" level=info msg="StartContainer for \"f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57\""
May 16 05:25:16.250651 containerd[1566]: time="2025-05-16T05:25:16.250609359Z" level=info msg="connecting to shim f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57" address="unix:///run/containerd/s/ad22ade4c6f29745c5d5bc0ac2155edfaf5e1b42d636eac0e9072eab3dd90b64" protocol=ttrpc version=3
May 16 05:25:16.278065 systemd[1]: Started cri-containerd-f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57.scope - libcontainer container f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57.
May 16 05:25:16.320048 containerd[1566]: time="2025-05-16T05:25:16.320001697Z" level=info msg="StartContainer for \"f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57\" returns successfully"
May 16 05:25:16.321086 systemd[1]: cri-containerd-f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57.scope: Deactivated successfully.
May 16 05:25:16.323664 containerd[1566]: time="2025-05-16T05:25:16.323622250Z" level=info msg="received exit event container_id:\"f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57\" id:\"f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57\" pid:4572 exited_at:{seconds:1747373116 nanos:323378636}"
May 16 05:25:16.324144 containerd[1566]: time="2025-05-16T05:25:16.324043264Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57\" id:\"f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57\" pid:4572 exited_at:{seconds:1747373116 nanos:323378636}"
May 16 05:25:16.346111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2cac1f35c8c42460299609f253a9d15a04663e15ecc0284551ad27c813e4b57-rootfs.mount: Deactivated successfully.
May 16 05:25:17.221208 kubelet[2648]: E0516 05:25:17.221171 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:17.225883 containerd[1566]: time="2025-05-16T05:25:17.225834035Z" level=info msg="CreateContainer within sandbox \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 05:25:17.233200 containerd[1566]: time="2025-05-16T05:25:17.233143260Z" level=info msg="Container bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7: CDI devices from CRI Config.CDIDevices: []"
May 16 05:25:17.238688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684563512.mount: Deactivated successfully.
May 16 05:25:17.240559 containerd[1566]: time="2025-05-16T05:25:17.240525741Z" level=info msg="CreateContainer within sandbox \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7\""
May 16 05:25:17.241055 containerd[1566]: time="2025-05-16T05:25:17.241027076Z" level=info msg="StartContainer for \"bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7\""
May 16 05:25:17.241861 containerd[1566]: time="2025-05-16T05:25:17.241831424Z" level=info msg="connecting to shim bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7" address="unix:///run/containerd/s/ad22ade4c6f29745c5d5bc0ac2155edfaf5e1b42d636eac0e9072eab3dd90b64" protocol=ttrpc version=3
May 16 05:25:17.263042 systemd[1]: Started cri-containerd-bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7.scope - libcontainer container bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7.
May 16 05:25:17.291695 systemd[1]: cri-containerd-bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7.scope: Deactivated successfully.
May 16 05:25:17.292857 containerd[1566]: time="2025-05-16T05:25:17.292821229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7\" id:\"bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7\" pid:4613 exited_at:{seconds:1747373117 nanos:292505069}"
May 16 05:25:17.292988 containerd[1566]: time="2025-05-16T05:25:17.292928959Z" level=info msg="received exit event container_id:\"bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7\" id:\"bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7\" pid:4613 exited_at:{seconds:1747373117 nanos:292505069}"
May 16 05:25:17.300593 containerd[1566]: time="2025-05-16T05:25:17.300542381Z" level=info msg="StartContainer for \"bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7\" returns successfully"
May 16 05:25:17.314572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf72a8c68e495f4288b5722c427d7c3fd326dc7974b0acf13f3d7ecabea4e1b7-rootfs.mount: Deactivated successfully.
May 16 05:25:18.226341 kubelet[2648]: E0516 05:25:18.226303 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:18.231289 containerd[1566]: time="2025-05-16T05:25:18.231245331Z" level=info msg="CreateContainer within sandbox \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 05:25:18.252870 containerd[1566]: time="2025-05-16T05:25:18.252816051Z" level=info msg="Container 404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0: CDI devices from CRI Config.CDIDevices: []"
May 16 05:25:18.256567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397917242.mount: Deactivated successfully.
May 16 05:25:18.260082 containerd[1566]: time="2025-05-16T05:25:18.260046169Z" level=info msg="CreateContainer within sandbox \"407e4c32eb6dddb19fb4613dfb455036af87a4ef638a4460adf9db084163c24e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0\""
May 16 05:25:18.260579 containerd[1566]: time="2025-05-16T05:25:18.260543447Z" level=info msg="StartContainer for \"404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0\""
May 16 05:25:18.262164 containerd[1566]: time="2025-05-16T05:25:18.262130738Z" level=info msg="connecting to shim 404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0" address="unix:///run/containerd/s/ad22ade4c6f29745c5d5bc0ac2155edfaf5e1b42d636eac0e9072eab3dd90b64" protocol=ttrpc version=3
May 16 05:25:18.284027 systemd[1]: Started cri-containerd-404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0.scope - libcontainer container 404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0.
May 16 05:25:18.317854 containerd[1566]: time="2025-05-16T05:25:18.317809807Z" level=info msg="StartContainer for \"404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0\" returns successfully"
May 16 05:25:18.388632 containerd[1566]: time="2025-05-16T05:25:18.388590598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0\" id:\"337ac5f2eacbb2f365cf4dde0879e7b0d8d649fa6c31a81a4aa49d1126e1ac3b\" pid:4683 exited_at:{seconds:1747373118 nanos:388277594}"
May 16 05:25:18.741957 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 16 05:25:19.024832 kubelet[2648]: E0516 05:25:19.024710 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:19.236121 kubelet[2648]: E0516 05:25:19.236080 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:19.250202 kubelet[2648]: I0516 05:25:19.250098 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sf84t" podStartSLOduration=5.25007719 podStartE2EDuration="5.25007719s" podCreationTimestamp="2025-05-16 05:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:25:19.249543294 +0000 UTC m=+78.313849747" watchObservedRunningTime="2025-05-16 05:25:19.25007719 +0000 UTC m=+78.314383643"
May 16 05:25:20.808638 kubelet[2648]: E0516 05:25:20.808564 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:20.993963 containerd[1566]: time="2025-05-16T05:25:20.993842780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0\" id:\"901799778e0eb80401c1ee96e823cf6cd8f9406c0abff699fb6ed3c93825edd3\" pid:4980 exit_status:1 exited_at:{seconds:1747373120 nanos:993430440}"
May 16 05:25:21.812235 systemd-networkd[1493]: lxc_health: Link UP
May 16 05:25:21.824753 systemd-networkd[1493]: lxc_health: Gained carrier
May 16 05:25:22.809538 kubelet[2648]: E0516 05:25:22.809491 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:23.090057 systemd-networkd[1493]: lxc_health: Gained IPv6LL
May 16 05:25:23.101326 containerd[1566]: time="2025-05-16T05:25:23.101267486Z" level=info msg="TaskExit event in podsandbox handler container_id:\"404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0\" id:\"96e7a0b24432906fd21983017be037c251d7e1d604cbbfafbd6f689148fcc076\" pid:5221 exited_at:{seconds:1747373123 nanos:96641895}"
May 16 05:25:23.244132 kubelet[2648]: E0516 05:25:23.244100 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:24.245824 kubelet[2648]: E0516 05:25:24.245767 2648 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 05:25:25.188228 containerd[1566]: time="2025-05-16T05:25:25.188144073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0\" id:\"7d766ffa7e19015aaf4c8e03f8a5804c153da5dbb2a3881992080e8393361d81\" pid:5256 exited_at:{seconds:1747373125 nanos:187744104}"
May 16 05:25:27.288869 containerd[1566]: time="2025-05-16T05:25:27.288754186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0\" id:\"e660e29e675fcd9a9fda277e0e64ead2551840985df8679a3eb26956cc67b65e\" pid:5280 exited_at:{seconds:1747373127 nanos:288006292}"
May 16 05:25:29.380675 containerd[1566]: time="2025-05-16T05:25:29.380568789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"404b5b6fb313dd07449ebe1e0184f86047d5f69fd1b555ffa51d2f43fcacedd0\" id:\"55aded5b2afb764d4844363d4a920f00a446f1866650a0b90adc9ddf2ad45fff\" pid:5304 exited_at:{seconds:1747373129 nanos:380042250}"
May 16 05:25:29.386869 sshd[4418]: Connection closed by 10.0.0.1 port 54950
May 16 05:25:29.387261 sshd-session[4412]: pam_unix(sshd:session): session closed for user core
May 16 05:25:29.391721 systemd[1]: sshd@25-10.0.0.92:22-10.0.0.1:54950.service: Deactivated successfully.
May 16 05:25:29.393702 systemd[1]: session-26.scope: Deactivated successfully.
May 16 05:25:29.394665 systemd-logind[1550]: Session 26 logged out. Waiting for processes to exit.
May 16 05:25:29.395970 systemd-logind[1550]: Removed session 26.