May 16 16:36:54.809417 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 14:52:24 -00 2025 May 16 16:36:54.809440 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137 May 16 16:36:54.809449 kernel: BIOS-provided physical RAM map: May 16 16:36:54.809456 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 16 16:36:54.809462 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 16 16:36:54.809469 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 16 16:36:54.809476 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 16 16:36:54.809485 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 16 16:36:54.809491 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 16 16:36:54.809498 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 16 16:36:54.809504 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 16 16:36:54.809511 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 16 16:36:54.809517 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 16 16:36:54.809524 kernel: NX (Execute Disable) protection: active May 16 16:36:54.809534 kernel: APIC: Static calls initialized May 16 16:36:54.809541 kernel: SMBIOS 2.8 present. 
May 16 16:36:54.809548 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 16 16:36:54.809555 kernel: DMI: Memory slots populated: 1/1 May 16 16:36:54.809561 kernel: Hypervisor detected: KVM May 16 16:36:54.809568 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 16 16:36:54.809575 kernel: kvm-clock: using sched offset of 3266663998 cycles May 16 16:36:54.809582 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 16 16:36:54.809590 kernel: tsc: Detected 2794.746 MHz processor May 16 16:36:54.809597 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 16 16:36:54.809607 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 16 16:36:54.809614 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 16 16:36:54.809621 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 16 16:36:54.809628 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 16 16:36:54.809635 kernel: Using GB pages for direct mapping May 16 16:36:54.809643 kernel: ACPI: Early table checksum verification disabled May 16 16:36:54.809650 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 16 16:36:54.809657 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 16:36:54.809666 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 16:36:54.809695 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 16:36:54.809702 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 16 16:36:54.809709 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 16:36:54.809716 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 16:36:54.809723 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 
16:36:54.809730 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 16:36:54.809737 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 16 16:36:54.809750 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 16 16:36:54.809757 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 16 16:36:54.809765 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 16 16:36:54.809772 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 16 16:36:54.809780 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 16 16:36:54.809787 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 16 16:36:54.809796 kernel: No NUMA configuration found May 16 16:36:54.809803 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 16 16:36:54.809811 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] May 16 16:36:54.809818 kernel: Zone ranges: May 16 16:36:54.809835 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 16 16:36:54.809842 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 16 16:36:54.809858 kernel: Normal empty May 16 16:36:54.809875 kernel: Device empty May 16 16:36:54.809884 kernel: Movable zone start for each node May 16 16:36:54.809892 kernel: Early memory node ranges May 16 16:36:54.809902 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 16 16:36:54.809910 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 16 16:36:54.809917 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 16 16:36:54.809924 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 16 16:36:54.809932 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 16 16:36:54.809939 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 16 16:36:54.809946 kernel: ACPI: PM-Timer IO Port: 0x608 May 
16 16:36:54.809958 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 16 16:36:54.809965 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 16 16:36:54.809975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 16 16:36:54.809982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 16 16:36:54.809989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 16 16:36:54.809997 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 16 16:36:54.810004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 16 16:36:54.810011 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 16 16:36:54.810019 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 16 16:36:54.810026 kernel: TSC deadline timer available May 16 16:36:54.810033 kernel: CPU topo: Max. logical packages: 1 May 16 16:36:54.810047 kernel: CPU topo: Max. logical dies: 1 May 16 16:36:54.810057 kernel: CPU topo: Max. dies per package: 1 May 16 16:36:54.810064 kernel: CPU topo: Max. threads per core: 1 May 16 16:36:54.810072 kernel: CPU topo: Num. cores per package: 4 May 16 16:36:54.810079 kernel: CPU topo: Num. 
threads per package: 4 May 16 16:36:54.810086 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs May 16 16:36:54.810094 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 16 16:36:54.810101 kernel: kvm-guest: KVM setup pv remote TLB flush May 16 16:36:54.810108 kernel: kvm-guest: setup PV sched yield May 16 16:36:54.810115 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 16 16:36:54.810125 kernel: Booting paravirtualized kernel on KVM May 16 16:36:54.810132 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 16 16:36:54.810140 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 16 16:36:54.810147 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 May 16 16:36:54.810155 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 May 16 16:36:54.810162 kernel: pcpu-alloc: [0] 0 1 2 3 May 16 16:36:54.810169 kernel: kvm-guest: PV spinlocks enabled May 16 16:36:54.810177 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 16 16:36:54.810185 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137 May 16 16:36:54.810196 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 16 16:36:54.810203 kernel: random: crng init done May 16 16:36:54.810210 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 16:36:54.810218 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 16:36:54.810225 kernel: Fallback order for Node 0: 0 May 16 16:36:54.810232 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 May 16 16:36:54.810239 kernel: Policy zone: DMA32 May 16 16:36:54.810247 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 16:36:54.810257 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 16 16:36:54.810264 kernel: ftrace: allocating 40065 entries in 157 pages May 16 16:36:54.810271 kernel: ftrace: allocated 157 pages with 5 groups May 16 16:36:54.810279 kernel: Dynamic Preempt: voluntary May 16 16:36:54.810286 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 16:36:54.810294 kernel: rcu: RCU event tracing is enabled. May 16 16:36:54.810301 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 16 16:36:54.810309 kernel: Trampoline variant of Tasks RCU enabled. May 16 16:36:54.810316 kernel: Rude variant of Tasks RCU enabled. May 16 16:36:54.810324 kernel: Tracing variant of Tasks RCU enabled. May 16 16:36:54.810334 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 16 16:36:54.810341 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 16 16:36:54.810349 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 16 16:36:54.810356 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 16 16:36:54.810364 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
May 16 16:36:54.810371 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 16 16:36:54.810379 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 16 16:36:54.810395 kernel: Console: colour VGA+ 80x25 May 16 16:36:54.810403 kernel: printk: legacy console [ttyS0] enabled May 16 16:36:54.810410 kernel: ACPI: Core revision 20240827 May 16 16:36:54.810418 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 16 16:36:54.810428 kernel: APIC: Switch to symmetric I/O mode setup May 16 16:36:54.810436 kernel: x2apic enabled May 16 16:36:54.810444 kernel: APIC: Switched APIC routing to: physical x2apic May 16 16:36:54.810451 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 16 16:36:54.810459 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 16 16:36:54.810469 kernel: kvm-guest: setup PV IPIs May 16 16:36:54.810477 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 16 16:36:54.810484 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns May 16 16:36:54.810492 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) May 16 16:36:54.810500 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 16 16:36:54.810508 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 16 16:36:54.810515 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 16 16:36:54.810523 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 16 16:36:54.810531 kernel: Spectre V2 : Mitigation: Retpolines May 16 16:36:54.810541 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 16 16:36:54.810549 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 16 16:36:54.810556 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 16 16:36:54.810564 kernel: RETBleed: Mitigation: untrained return thunk May 16 16:36:54.810572 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 16 16:36:54.810579 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 16 16:36:54.810587 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 16 16:36:54.810595 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 16 16:36:54.810605 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 16 16:36:54.810613 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 16 16:36:54.810621 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 16 16:36:54.810628 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 16 16:36:54.810636 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 16 16:36:54.810644 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
May 16 16:36:54.810651 kernel: Freeing SMP alternatives memory: 32K May 16 16:36:54.810659 kernel: pid_max: default: 32768 minimum: 301 May 16 16:36:54.810666 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 16 16:36:54.810688 kernel: landlock: Up and running. May 16 16:36:54.810695 kernel: SELinux: Initializing. May 16 16:36:54.810713 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 16:36:54.810721 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 16:36:54.810729 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 16 16:36:54.810736 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 16 16:36:54.810744 kernel: ... version: 0 May 16 16:36:54.810751 kernel: ... bit width: 48 May 16 16:36:54.810759 kernel: ... generic registers: 6 May 16 16:36:54.810769 kernel: ... value mask: 0000ffffffffffff May 16 16:36:54.810776 kernel: ... max period: 00007fffffffffff May 16 16:36:54.810784 kernel: ... fixed-purpose events: 0 May 16 16:36:54.810791 kernel: ... event mask: 000000000000003f May 16 16:36:54.810799 kernel: signal: max sigframe size: 1776 May 16 16:36:54.810807 kernel: rcu: Hierarchical SRCU implementation. May 16 16:36:54.810814 kernel: rcu: Max phase no-delay instances is 400. May 16 16:36:54.810822 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 16 16:36:54.810830 kernel: smp: Bringing up secondary CPUs ... May 16 16:36:54.810840 kernel: smpboot: x86: Booting SMP configuration: May 16 16:36:54.810848 kernel: .... 
node #0, CPUs: #1 #2 #3 May 16 16:36:54.810855 kernel: smp: Brought up 1 node, 4 CPUs May 16 16:36:54.810864 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) May 16 16:36:54.810873 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 136904K reserved, 0K cma-reserved) May 16 16:36:54.810882 kernel: devtmpfs: initialized May 16 16:36:54.810891 kernel: x86/mm: Memory block size: 128MB May 16 16:36:54.810899 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 16:36:54.810907 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 16 16:36:54.810917 kernel: pinctrl core: initialized pinctrl subsystem May 16 16:36:54.810925 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 16:36:54.810940 kernel: audit: initializing netlink subsys (disabled) May 16 16:36:54.810949 kernel: audit: type=2000 audit(1747413411.690:1): state=initialized audit_enabled=0 res=1 May 16 16:36:54.810964 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 16:36:54.810972 kernel: thermal_sys: Registered thermal governor 'user_space' May 16 16:36:54.810979 kernel: cpuidle: using governor menu May 16 16:36:54.810987 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 16:36:54.810994 kernel: dca service started, version 1.12.1 May 16 16:36:54.811005 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] May 16 16:36:54.811013 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 16 16:36:54.811020 kernel: PCI: Using configuration type 1 for base access May 16 16:36:54.811032 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 16 16:36:54.811046 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 16 16:36:54.811054 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 16 16:36:54.811062 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 16 16:36:54.811070 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 16 16:36:54.811078 kernel: ACPI: Added _OSI(Module Device) May 16 16:36:54.811088 kernel: ACPI: Added _OSI(Processor Device) May 16 16:36:54.811095 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 16:36:54.811103 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 16:36:54.811110 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 16 16:36:54.811118 kernel: ACPI: Interpreter enabled May 16 16:36:54.811126 kernel: ACPI: PM: (supports S0 S3 S5) May 16 16:36:54.811133 kernel: ACPI: Using IOAPIC for interrupt routing May 16 16:36:54.811141 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 16 16:36:54.811149 kernel: PCI: Using E820 reservations for host bridge windows May 16 16:36:54.811158 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 16 16:36:54.811166 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 16:36:54.811331 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 16:36:54.811449 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 16 16:36:54.811570 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 16 16:36:54.811582 kernel: PCI host bridge to bus 0000:00 May 16 16:36:54.811728 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 16 16:36:54.811846 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 16 16:36:54.811959 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 16 16:36:54.812079 kernel: pci_bus 0000:00: root bus 
resource [mem 0x9d000000-0xafffffff window] May 16 16:36:54.812190 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 16 16:36:54.812303 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 16 16:36:54.812448 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 16:36:54.812594 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint May 16 16:36:54.812744 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint May 16 16:36:54.812867 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] May 16 16:36:54.812986 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] May 16 16:36:54.813117 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] May 16 16:36:54.813237 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 16 16:36:54.813368 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 16 16:36:54.813493 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] May 16 16:36:54.813615 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] May 16 16:36:54.813751 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] May 16 16:36:54.813909 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 16 16:36:54.814033 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] May 16 16:36:54.814168 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] May 16 16:36:54.814289 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] May 16 16:36:54.814422 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 16 16:36:54.814545 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] May 16 16:36:54.814666 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] May 16 16:36:54.814811 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] 
May 16 16:36:54.814934 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] May 16 16:36:54.815073 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint May 16 16:36:54.815194 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 16 16:36:54.815329 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint May 16 16:36:54.815450 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] May 16 16:36:54.815569 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] May 16 16:36:54.815711 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint May 16 16:36:54.815834 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] May 16 16:36:54.815846 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 16 16:36:54.815855 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 16 16:36:54.815868 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 16 16:36:54.815877 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 16 16:36:54.815886 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 16 16:36:54.815896 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 16 16:36:54.815907 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 16 16:36:54.815916 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 16 16:36:54.815927 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 16 16:36:54.815936 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 16 16:36:54.815945 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 16 16:36:54.815956 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 16 16:36:54.815965 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 16 16:36:54.815974 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 16 16:36:54.815983 kernel: ACPI: PCI: Interrupt link GSIG 
configured for IRQ 22 May 16 16:36:54.815992 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 16 16:36:54.816001 kernel: iommu: Default domain type: Translated May 16 16:36:54.816010 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 16 16:36:54.816019 kernel: PCI: Using ACPI for IRQ routing May 16 16:36:54.816028 kernel: PCI: pci_cache_line_size set to 64 bytes May 16 16:36:54.816048 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 16 16:36:54.816057 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 16 16:36:54.816179 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 16 16:36:54.816298 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 16 16:36:54.816417 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 16 16:36:54.816429 kernel: vgaarb: loaded May 16 16:36:54.816438 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 16 16:36:54.816447 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 16 16:36:54.816459 kernel: clocksource: Switched to clocksource kvm-clock May 16 16:36:54.816468 kernel: VFS: Disk quotas dquot_6.6.0 May 16 16:36:54.816477 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 16:36:54.816486 kernel: pnp: PnP ACPI init May 16 16:36:54.816621 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 16 16:36:54.816634 kernel: pnp: PnP ACPI: found 6 devices May 16 16:36:54.816643 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 16 16:36:54.816652 kernel: NET: Registered PF_INET protocol family May 16 16:36:54.816664 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 16 16:36:54.816700 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 16 16:36:54.816710 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 
16:36:54.816719 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 16:36:54.816728 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 16 16:36:54.816737 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 16 16:36:54.816746 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 16:36:54.816755 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 16:36:54.816764 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 16:36:54.816782 kernel: NET: Registered PF_XDP protocol family May 16 16:36:54.816897 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 16 16:36:54.817009 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 16 16:36:54.817129 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 16 16:36:54.817239 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 16 16:36:54.817353 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 16 16:36:54.817462 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 16 16:36:54.817473 kernel: PCI: CLS 0 bytes, default 64 May 16 16:36:54.817486 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns May 16 16:36:54.817495 kernel: Initialise system trusted keyrings May 16 16:36:54.817504 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 16 16:36:54.817513 kernel: Key type asymmetric registered May 16 16:36:54.817522 kernel: Asymmetric key parser 'x509' registered May 16 16:36:54.817531 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 16 16:36:54.817540 kernel: io scheduler mq-deadline registered May 16 16:36:54.817549 kernel: io scheduler kyber registered May 16 16:36:54.817558 kernel: io scheduler bfq registered May 16 16:36:54.817569 kernel: ioatdma: Intel(R) QuickData 
Technology Driver 5.00 May 16 16:36:54.817578 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 16 16:36:54.817587 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 16 16:36:54.817597 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 16 16:36:54.817605 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 16:36:54.817614 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 16 16:36:54.817624 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 16 16:36:54.817633 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 16 16:36:54.817642 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 16 16:36:54.817789 kernel: rtc_cmos 00:04: RTC can wake from S4 May 16 16:36:54.817803 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 16 16:36:54.817914 kernel: rtc_cmos 00:04: registered as rtc0 May 16 16:36:54.818027 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T16:36:54 UTC (1747413414) May 16 16:36:54.818197 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 16 16:36:54.818210 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 16 16:36:54.818219 kernel: NET: Registered PF_INET6 protocol family May 16 16:36:54.818228 kernel: Segment Routing with IPv6 May 16 16:36:54.818240 kernel: In-situ OAM (IOAM) with IPv6 May 16 16:36:54.818249 kernel: NET: Registered PF_PACKET protocol family May 16 16:36:54.818258 kernel: Key type dns_resolver registered May 16 16:36:54.818267 kernel: IPI shorthand broadcast: enabled May 16 16:36:54.818276 kernel: sched_clock: Marking stable (2727002710, 111643880)->(2854956541, -16309951) May 16 16:36:54.818285 kernel: registered taskstats version 1 May 16 16:36:54.818294 kernel: Loading compiled-in X.509 certificates May 16 16:36:54.818303 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 310304ddc2cf6c43796c9bf79d11c0543afdf71f' May 
16 16:36:54.818312 kernel: Demotion targets for Node 0: null May 16 16:36:54.818323 kernel: Key type .fscrypt registered May 16 16:36:54.818332 kernel: Key type fscrypt-provisioning registered May 16 16:36:54.818341 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 16:36:54.818350 kernel: ima: Allocated hash algorithm: sha1 May 16 16:36:54.818359 kernel: ima: No architecture policies found May 16 16:36:54.818368 kernel: clk: Disabling unused clocks May 16 16:36:54.818377 kernel: Warning: unable to open an initial console. May 16 16:36:54.818386 kernel: Freeing unused kernel image (initmem) memory: 54416K May 16 16:36:54.818395 kernel: Write protecting the kernel read-only data: 24576k May 16 16:36:54.818406 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 16 16:36:54.818415 kernel: Run /init as init process May 16 16:36:54.818424 kernel: with arguments: May 16 16:36:54.818433 kernel: /init May 16 16:36:54.818442 kernel: with environment: May 16 16:36:54.818450 kernel: HOME=/ May 16 16:36:54.818459 kernel: TERM=linux May 16 16:36:54.818468 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 16:36:54.818478 systemd[1]: Successfully made /usr/ read-only. May 16 16:36:54.818501 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 16:36:54.818514 systemd[1]: Detected virtualization kvm. May 16 16:36:54.818523 systemd[1]: Detected architecture x86-64. May 16 16:36:54.818533 systemd[1]: Running in initrd. May 16 16:36:54.818542 systemd[1]: No hostname configured, using default hostname. May 16 16:36:54.818555 systemd[1]: Hostname set to . May 16 16:36:54.818564 systemd[1]: Initializing machine ID from VM UUID. 
May 16 16:36:54.818574 systemd[1]: Queued start job for default target initrd.target. May 16 16:36:54.818584 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 16:36:54.818594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 16:36:54.818604 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 16:36:54.818614 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 16:36:54.818625 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 16:36:54.818638 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 16:36:54.818649 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 16:36:54.818659 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 16:36:54.818683 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 16:36:54.818703 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 16:36:54.818713 systemd[1]: Reached target paths.target - Path Units. May 16 16:36:54.818723 systemd[1]: Reached target slices.target - Slice Units. May 16 16:36:54.818736 systemd[1]: Reached target swap.target - Swaps. May 16 16:36:54.818746 systemd[1]: Reached target timers.target - Timer Units. May 16 16:36:54.818756 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 16:36:54.818766 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 16:36:54.818776 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
May 16 16:36:54.818786 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 16:36:54.818796 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:36:54.818808 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 16:36:54.818820 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:36:54.818829 systemd[1]: Reached target sockets.target - Socket Units.
May 16 16:36:54.818839 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 16:36:54.818850 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 16:36:54.818861 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 16:36:54.818872 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 16 16:36:54.818884 systemd[1]: Starting systemd-fsck-usr.service...
May 16 16:36:54.818894 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 16:36:54.818904 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 16:36:54.818914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:36:54.818924 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 16:36:54.818937 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:36:54.818947 systemd[1]: Finished systemd-fsck-usr.service.
May 16 16:36:54.818957 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 16:36:54.818984 systemd-journald[220]: Collecting audit messages is disabled.
May 16 16:36:54.819009 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 16:36:54.819020 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 16:36:54.819030 systemd-journald[220]: Journal started
May 16 16:36:54.819061 systemd-journald[220]: Runtime Journal (/run/log/journal/144efd78acdf44ad949c74c7c13e9753) is 6M, max 48.6M, 42.5M free.
May 16 16:36:54.808664 systemd-modules-load[222]: Inserted module 'overlay'
May 16 16:36:54.859100 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 16:36:54.859125 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 16:36:54.859141 kernel: Bridge firewalling registered
May 16 16:36:54.839715 systemd-modules-load[222]: Inserted module 'br_netfilter'
May 16 16:36:54.857970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 16:36:54.859731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:36:54.873387 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 16:36:54.874940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 16:36:54.878407 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 16:36:54.886255 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:36:54.888214 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 16 16:36:54.890002 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 16:36:54.893448 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:36:54.895566 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 16:36:54.909819 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 16:36:54.911305 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 16:36:54.944016 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137
May 16 16:36:54.953565 systemd-resolved[256]: Positive Trust Anchors:
May 16 16:36:54.953584 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 16:36:54.953625 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 16:36:54.956163 systemd-resolved[256]: Defaulting to hostname 'linux'.
May 16 16:36:54.957169 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 16:36:54.962485 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 16:36:55.053697 kernel: SCSI subsystem initialized
May 16 16:36:55.062693 kernel: Loading iSCSI transport class v2.0-870.
May 16 16:36:55.073702 kernel: iscsi: registered transport (tcp)
May 16 16:36:55.095695 kernel: iscsi: registered transport (qla4xxx)
May 16 16:36:55.095712 kernel: QLogic iSCSI HBA Driver
May 16 16:36:55.115424 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 16:36:55.137898 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:36:55.139945 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 16:36:55.199582 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 16:36:55.203263 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 16:36:55.267696 kernel: raid6: avx2x4 gen() 26493 MB/s
May 16 16:36:55.284694 kernel: raid6: avx2x2 gen() 24329 MB/s
May 16 16:36:55.301811 kernel: raid6: avx2x1 gen() 23211 MB/s
May 16 16:36:55.301825 kernel: raid6: using algorithm avx2x4 gen() 26493 MB/s
May 16 16:36:55.319780 kernel: raid6: .... xor() 7464 MB/s, rmw enabled
May 16 16:36:55.319800 kernel: raid6: using avx2x2 recovery algorithm
May 16 16:36:55.339702 kernel: xor: automatically using best checksumming function avx
May 16 16:36:55.501706 kernel: Btrfs loaded, zoned=no, fsverity=no
May 16 16:36:55.509913 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 16 16:36:55.513644 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:36:55.550509 systemd-udevd[473]: Using default interface naming scheme 'v255'.
May 16 16:36:55.556281 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:36:55.560734 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 16 16:36:55.585524 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
May 16 16:36:55.616665 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 16:36:55.619608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 16:36:55.702039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:36:55.704519 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 16 16:36:55.741275 kernel: cryptd: max_cpu_qlen set to 1000
May 16 16:36:55.742690 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 16 16:36:55.765824 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 16:36:55.765969 kernel: AES CTR mode by8 optimization enabled
May 16 16:36:55.765981 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 16:36:55.765992 kernel: GPT:9289727 != 19775487
May 16 16:36:55.766001 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 16:36:55.766016 kernel: GPT:9289727 != 19775487
May 16 16:36:55.766035 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 16:36:55.766044 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:36:55.766054 kernel: libata version 3.00 loaded.
May 16 16:36:55.766065 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 16 16:36:55.782905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:36:55.785601 kernel: ahci 0000:00:1f.2: version 3.0
May 16 16:36:55.814725 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 16 16:36:55.814742 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 16 16:36:55.814890 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 16 16:36:55.815030 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 16 16:36:55.815162 kernel: scsi host0: ahci
May 16 16:36:55.815302 kernel: scsi host1: ahci
May 16 16:36:55.815441 kernel: scsi host2: ahci
May 16 16:36:55.815574 kernel: scsi host3: ahci
May 16 16:36:55.815722 kernel: scsi host4: ahci
May 16 16:36:55.815858 kernel: scsi host5: ahci
May 16 16:36:55.815996 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
May 16 16:36:55.816008 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
May 16 16:36:55.816028 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
May 16 16:36:55.816039 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
May 16 16:36:55.816049 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
May 16 16:36:55.816059 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
May 16 16:36:55.783034 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:36:55.785799 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:36:55.788764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:36:55.793866 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 16:36:55.818110 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 16 16:36:55.850545 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:36:55.866549 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 16:36:55.882238 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 16 16:36:55.889315 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 16 16:36:55.890591 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 16 16:36:55.894561 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 16 16:36:55.927501 disk-uuid[634]: Primary Header is updated.
May 16 16:36:55.927501 disk-uuid[634]: Secondary Entries is updated.
May 16 16:36:55.927501 disk-uuid[634]: Secondary Header is updated.
May 16 16:36:55.930796 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:36:56.129602 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 16 16:36:56.129712 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 16 16:36:56.129738 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 16 16:36:56.129749 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 16 16:36:56.129759 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 16 16:36:56.130703 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 16 16:36:56.131702 kernel: ata3.00: applying bridge limits
May 16 16:36:56.131714 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 16 16:36:56.132704 kernel: ata3.00: configured for UDMA/100
May 16 16:36:56.133705 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 16 16:36:56.199709 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 16 16:36:56.225638 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 16 16:36:56.225662 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 16 16:36:56.618577 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 16 16:36:56.619828 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 16:36:56.621023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:36:56.621350 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 16:36:56.627472 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 16 16:36:56.652516 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 16 16:36:56.944562 disk-uuid[635]: The operation has completed successfully.
May 16 16:36:56.945746 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:36:56.977128 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 16:36:56.977261 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 16 16:36:57.009569 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 16 16:36:57.032831 sh[665]: Success
May 16 16:36:57.051463 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 16:36:57.051517 kernel: device-mapper: uevent: version 1.0.3
May 16 16:36:57.051529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 16 16:36:57.059707 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 16 16:36:57.090264 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 16 16:36:57.093365 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 16 16:36:57.104980 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 16 16:36:57.111648 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 16 16:36:57.111693 kernel: BTRFS: device fsid 85b2a34c-237f-4a0a-87d0-0a783de0f256 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (677)
May 16 16:36:57.113862 kernel: BTRFS info (device dm-0): first mount of filesystem 85b2a34c-237f-4a0a-87d0-0a783de0f256
May 16 16:36:57.113879 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 16 16:36:57.113889 kernel: BTRFS info (device dm-0): using free-space-tree
May 16 16:36:57.118842 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 16 16:36:57.121187 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 16 16:36:57.123448 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 16 16:36:57.126145 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 16 16:36:57.128947 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 16 16:36:57.151664 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (710)
May 16 16:36:57.151774 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:36:57.151791 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 16:36:57.153178 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:36:57.159692 kernel: BTRFS info (device vda6): last unmount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:36:57.161167 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 16 16:36:57.164100 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 16 16:36:57.240822 ignition[751]: Ignition 2.21.0
May 16 16:36:57.240836 ignition[751]: Stage: fetch-offline
May 16 16:36:57.240876 ignition[751]: no configs at "/usr/lib/ignition/base.d"
May 16 16:36:57.240885 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:36:57.240969 ignition[751]: parsed url from cmdline: ""
May 16 16:36:57.240973 ignition[751]: no config URL provided
May 16 16:36:57.240978 ignition[751]: reading system config file "/usr/lib/ignition/user.ign"
May 16 16:36:57.240986 ignition[751]: no config at "/usr/lib/ignition/user.ign"
May 16 16:36:57.241016 ignition[751]: op(1): [started] loading QEMU firmware config module
May 16 16:36:57.241022 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 16:36:57.249834 ignition[751]: op(1): [finished] loading QEMU firmware config module
May 16 16:36:57.264073 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 16:36:57.284600 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 16:36:57.308024 ignition[751]: parsing config with SHA512: 04f1864ae6565f6da4b8c52c69a5590235de78281db8c2c59454da64dc88f529209c50c32c5aa5bf8a8d40064de2cb56e155bde3784a0417957a3b0d7fab4807
May 16 16:36:57.312170 unknown[751]: fetched base config from "system"
May 16 16:36:57.312183 unknown[751]: fetched user config from "qemu"
May 16 16:36:57.312622 ignition[751]: fetch-offline: fetch-offline passed
May 16 16:36:57.326877 systemd-networkd[855]: lo: Link UP
May 16 16:36:57.312687 ignition[751]: Ignition finished successfully
May 16 16:36:57.326881 systemd-networkd[855]: lo: Gained carrier
May 16 16:36:57.328301 systemd-networkd[855]: Enumeration completed
May 16 16:36:57.328370 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 16:36:57.339592 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:36:57.339596 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 16:36:57.340019 systemd-networkd[855]: eth0: Link UP
May 16 16:36:57.340023 systemd-networkd[855]: eth0: Gained carrier
May 16 16:36:57.340030 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:36:57.341325 systemd[1]: Reached target network.target - Network.
May 16 16:36:57.347335 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 16:36:57.348328 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 16:36:57.349130 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 16 16:36:57.380741 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 16:36:57.391757 ignition[859]: Ignition 2.21.0
May 16 16:36:57.391770 ignition[859]: Stage: kargs
May 16 16:36:57.391935 ignition[859]: no configs at "/usr/lib/ignition/base.d"
May 16 16:36:57.391945 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:36:57.394643 ignition[859]: kargs: kargs passed
May 16 16:36:57.394734 ignition[859]: Ignition finished successfully
May 16 16:36:57.399270 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 16 16:36:57.400546 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 16 16:36:57.426376 ignition[868]: Ignition 2.21.0
May 16 16:36:57.426390 ignition[868]: Stage: disks
May 16 16:36:57.426515 ignition[868]: no configs at "/usr/lib/ignition/base.d"
May 16 16:36:57.426526 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:36:57.428424 ignition[868]: disks: disks passed
May 16 16:36:57.428492 ignition[868]: Ignition finished successfully
May 16 16:36:57.431360 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 16 16:36:57.432609 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 16 16:36:57.433239 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 16 16:36:57.433564 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 16:36:57.438540 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 16:36:57.439097 systemd[1]: Reached target basic.target - Basic System.
May 16 16:36:57.443094 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 16 16:36:57.477774 systemd-resolved[256]: Detected conflict on linux IN A 10.0.0.36
May 16 16:36:57.477789 systemd-resolved[256]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
May 16 16:36:57.480120 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 16 16:36:57.488506 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 16 16:36:57.492833 systemd[1]: Mounting sysroot.mount - /sysroot...
May 16 16:36:57.616713 kernel: EXT4-fs (vda9): mounted filesystem 07293137-138a-42a3-a962-d767034e11a7 r/w with ordered data mode. Quota mode: none.
May 16 16:36:57.617573 systemd[1]: Mounted sysroot.mount - /sysroot.
May 16 16:36:57.618879 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 16 16:36:57.621209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 16:36:57.623634 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 16 16:36:57.626734 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 16 16:36:57.626785 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 16:36:57.626815 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 16:36:57.639240 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 16 16:36:57.641494 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 16 16:36:57.645699 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (886)
May 16 16:36:57.645722 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:36:57.647712 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 16:36:57.648703 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:36:57.653648 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 16:36:57.680832 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
May 16 16:36:57.686029 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
May 16 16:36:57.690028 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
May 16 16:36:57.693692 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 16:36:57.786813 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 16 16:36:57.789530 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 16 16:36:57.791388 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 16 16:36:57.812744 kernel: BTRFS info (device vda6): last unmount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:36:57.826814 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 16 16:36:57.842999 ignition[1001]: INFO : Ignition 2.21.0
May 16 16:36:57.842999 ignition[1001]: INFO : Stage: mount
May 16 16:36:57.845379 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:36:57.845379 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:36:57.845379 ignition[1001]: INFO : mount: mount passed
May 16 16:36:57.845379 ignition[1001]: INFO : Ignition finished successfully
May 16 16:36:57.848390 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 16 16:36:57.851149 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 16 16:36:58.110827 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 16 16:36:58.112428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 16:36:58.131468 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1013)
May 16 16:36:58.131506 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:36:58.131527 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 16:36:58.132337 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:36:58.135997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 16:36:58.170797 ignition[1030]: INFO : Ignition 2.21.0
May 16 16:36:58.170797 ignition[1030]: INFO : Stage: files
May 16 16:36:58.172559 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:36:58.172559 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:36:58.174900 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
May 16 16:36:58.174900 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 16:36:58.174900 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 16:36:58.178989 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 16:36:58.178989 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 16:36:58.178989 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 16:36:58.178989 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 16 16:36:58.178989 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 16 16:36:58.177178 unknown[1030]: wrote ssh authorized keys file for user: core
May 16 16:36:58.224731 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 16 16:36:58.432882 systemd-networkd[855]: eth0: Gained IPv6LL
May 16 16:36:58.622376 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 16 16:36:58.622376 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 16:36:58.627464 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 16 16:36:59.127620 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 16 16:36:59.224295 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 16:36:59.226420 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 16 16:36:59.226420 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 16 16:36:59.226420 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 16:36:59.226420 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 16:36:59.226420 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 16:36:59.234893 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 16:36:59.234893 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 16:36:59.234893 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 16:36:59.243645 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 16:36:59.245615 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 16:36:59.245615 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 16:36:59.250162 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 16:36:59.250162 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 16:36:59.250162 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 16 16:36:59.915858 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 16 16:37:00.286612 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 16:37:00.286612 ignition[1030]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 16 16:37:00.290957 ignition[1030]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 16:37:00.293084 ignition[1030]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 16:37:00.293084 ignition[1030]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 16 16:37:00.293084 ignition[1030]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 16 16:37:00.298458 ignition[1030]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 16:37:00.298458 ignition[1030]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 16:37:00.298458 ignition[1030]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 16 16:37:00.298458 ignition[1030]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 16 16:37:00.314465 ignition[1030]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 16:37:00.319083 ignition[1030]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 16:37:00.320864 ignition[1030]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 16:37:00.320864 ignition[1030]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 16 16:37:00.320864 ignition[1030]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 16 16:37:00.320864 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 16:37:00.320864 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 16:37:00.320864 ignition[1030]: INFO : files: files passed
May 16 16:37:00.320864 ignition[1030]: INFO : Ignition finished successfully
May 16 16:37:00.326958 systemd[1]: Finished ignition-files.service - Ignition (files).
May 16 16:37:00.332830 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 16:37:00.336169 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 16:37:00.352836 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 16:37:00.352976 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 16:37:00.358096 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
May 16 16:37:00.362575 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:37:00.364349 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:37:00.366366 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:37:00.367977 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 16:37:00.369148 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 16:37:00.371936 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 16:37:00.434299 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 16:37:00.434434 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 16:37:00.436162 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 16:37:00.438005 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 16:37:00.440123 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 16:37:00.441717 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 16:37:00.484441 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 16:37:00.486366 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 16:37:00.505619 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 16:37:00.506170 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:37:00.506692 systemd[1]: Stopped target timers.target - Timer Units.
May 16 16:37:00.507188 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 16:37:00.507294 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 16:37:00.512403 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 16:37:00.512749 systemd[1]: Stopped target basic.target - Basic System.
May 16 16:37:00.513245 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 16:37:00.513579 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 16:37:00.514100 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 16:37:00.514438 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 16 16:37:00.514957 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 16:37:00.515293 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 16:37:00.515643 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 16:37:00.516178 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 16:37:00.516502 systemd[1]: Stopped target swap.target - Swaps.
May 16 16:37:00.517001 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 16:37:00.517104 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 16:37:00.536929 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 16:37:00.537267 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:37:00.537569 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 16:37:00.542431 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:37:00.543144 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 16:37:00.543249 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 16:37:00.546641 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 16:37:00.546803 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 16:37:00.547267 systemd[1]: Stopped target paths.target - Path Units.
May 16 16:37:00.551209 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 16:37:00.555758 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:37:00.556297 systemd[1]: Stopped target slices.target - Slice Units.
May 16 16:37:00.556613 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 16:37:00.557131 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 16:37:00.557218 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 16:37:00.562127 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 16:37:00.562207 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 16:37:00.563705 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 16:37:00.563815 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 16:37:00.565549 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 16:37:00.565646 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 16:37:00.571601 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 16:37:00.572062 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 16:37:00.572163 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:37:00.573129 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 16:37:00.576515 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 16:37:00.576632 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:37:00.577097 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 16:37:00.577188 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 16:37:00.585552 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 16:37:00.585655 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 16:37:00.602015 ignition[1086]: INFO : Ignition 2.21.0
May 16 16:37:00.602015 ignition[1086]: INFO : Stage: umount
May 16 16:37:00.602015 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:37:00.602015 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:37:00.606288 ignition[1086]: INFO : umount: umount passed
May 16 16:37:00.606288 ignition[1086]: INFO : Ignition finished successfully
May 16 16:37:00.604445 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 16:37:00.620900 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 16:37:00.621048 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 16:37:00.623386 systemd[1]: Stopped target network.target - Network.
May 16 16:37:00.624515 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 16:37:00.624570 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 16:37:00.626247 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 16:37:00.626294 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 16:37:00.628311 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 16:37:00.628364 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 16:37:00.630194 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 16:37:00.630239 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 16:37:00.632152 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 16:37:00.634094 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 16:37:00.644247 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 16:37:00.644408 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 16:37:00.648697 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 16 16:37:00.648967 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 16:37:00.649076 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 16:37:00.653389 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 16 16:37:00.654021 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 16 16:37:00.655169 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 16:37:00.655219 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:37:00.657101 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 16:37:00.661698 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 16:37:00.661793 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 16:37:00.662190 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 16:37:00.662231 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 16:37:00.667016 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 16:37:00.667091 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 16:37:00.667480 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 16:37:00.667521 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:37:00.671937 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:37:00.673461 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 16:37:00.673522 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 16 16:37:00.688035 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 16:37:00.688160 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 16:37:00.702456 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 16:37:00.702633 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:37:00.703333 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 16:37:00.703376 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 16:37:00.706246 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 16:37:00.706282 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:37:00.706557 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 16:37:00.706599 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 16:37:00.707430 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 16:37:00.707473 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 16:37:00.715055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 16:37:00.715101 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 16:37:00.720464 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 16:37:00.720905 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 16 16:37:00.720962 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:37:00.724819 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 16:37:00.724866 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:37:00.728286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:37:00.728331 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:37:00.732588 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 16 16:37:00.732639 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 16 16:37:00.732700 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 16:37:00.748857 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 16:37:00.748984 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 16:37:00.788302 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 16:37:00.788431 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 16:37:00.789306 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 16:37:00.791322 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 16:37:00.791379 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 16:37:00.794110 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 16:37:00.816405 systemd[1]: Switching root.
May 16 16:37:00.848226 systemd-journald[220]: Journal stopped
May 16 16:37:02.071393 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 16 16:37:02.071476 kernel: SELinux: policy capability network_peer_controls=1
May 16 16:37:02.071495 kernel: SELinux: policy capability open_perms=1
May 16 16:37:02.071516 kernel: SELinux: policy capability extended_socket_class=1
May 16 16:37:02.071532 kernel: SELinux: policy capability always_check_network=0
May 16 16:37:02.071546 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 16:37:02.071560 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 16:37:02.071574 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 16:37:02.071593 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 16:37:02.071607 kernel: SELinux: policy capability userspace_initial_context=0
May 16 16:37:02.071621 kernel: audit: type=1403 audit(1747413421.253:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 16:37:02.071647 systemd[1]: Successfully loaded SELinux policy in 45.972ms.
May 16 16:37:02.071686 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.237ms.
May 16 16:37:02.071707 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 16:37:02.071723 systemd[1]: Detected virtualization kvm.
May 16 16:37:02.071737 systemd[1]: Detected architecture x86-64.
May 16 16:37:02.071754 systemd[1]: Detected first boot.
May 16 16:37:02.071769 systemd[1]: Initializing machine ID from VM UUID.
May 16 16:37:02.071785 zram_generator::config[1131]: No configuration found.
May 16 16:37:02.071800 kernel: Guest personality initialized and is inactive
May 16 16:37:02.071815 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 16 16:37:02.071833 kernel: Initialized host personality
May 16 16:37:02.071847 kernel: NET: Registered PF_VSOCK protocol family
May 16 16:37:02.071863 systemd[1]: Populated /etc with preset unit settings.
May 16 16:37:02.071879 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 16 16:37:02.071902 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 16:37:02.071929 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 16:37:02.071945 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 16:37:02.071961 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 16:37:02.071977 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 16:37:02.071995 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 16:37:02.072010 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 16:37:02.072025 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 16:37:02.072041 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 16:37:02.072056 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 16:37:02.072071 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 16:37:02.072086 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:37:02.072104 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:37:02.072119 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 16:37:02.072137 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 16:37:02.072153 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 16:37:02.072168 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 16:37:02.072183 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 16 16:37:02.072198 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:37:02.072212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 16:37:02.072227 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 16:37:02.072245 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 16:37:02.072259 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 16:37:02.072274 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 16:37:02.072289 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:37:02.072305 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 16:37:02.072321 systemd[1]: Reached target slices.target - Slice Units.
May 16 16:37:02.072337 systemd[1]: Reached target swap.target - Swaps.
May 16 16:37:02.072352 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 16:37:02.072367 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 16:37:02.072386 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 16 16:37:02.072402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:37:02.072418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 16:37:02.072436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:37:02.072452 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 16:37:02.072466 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 16:37:02.072481 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 16:37:02.072496 systemd[1]: Mounting media.mount - External Media Directory...
May 16 16:37:02.072511 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:37:02.072529 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 16:37:02.072543 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 16:37:02.072559 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 16:37:02.072574 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 16:37:02.072589 systemd[1]: Reached target machines.target - Containers.
May 16 16:37:02.072604 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 16:37:02.072620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:37:02.072635 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 16:37:02.072650 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 16:37:02.072667 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:37:02.072702 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 16:37:02.072717 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:37:02.072732 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 16:37:02.072746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:37:02.072762 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 16:37:02.072778 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 16:37:02.072796 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 16:37:02.072815 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 16:37:02.072830 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 16:37:02.072847 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:37:02.072862 kernel: fuse: init (API version 7.41)
May 16 16:37:02.072877 kernel: loop: module loaded
May 16 16:37:02.072888 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 16:37:02.072907 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 16:37:02.072919 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 16:37:02.072932 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 16:37:02.072947 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 16 16:37:02.072960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 16:37:02.072972 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 16:37:02.073008 systemd-journald[1199]: Collecting audit messages is disabled.
May 16 16:37:02.073033 systemd[1]: Stopped verity-setup.service.
May 16 16:37:02.073046 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:37:02.073059 systemd-journald[1199]: Journal started
May 16 16:37:02.073083 systemd-journald[1199]: Runtime Journal (/run/log/journal/144efd78acdf44ad949c74c7c13e9753) is 6M, max 48.6M, 42.5M free.
May 16 16:37:01.793399 systemd[1]: Queued start job for default target multi-user.target.
May 16 16:37:01.815498 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 16:37:01.815982 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 16:37:02.074719 kernel: ACPI: bus type drm_connector registered
May 16 16:37:02.085729 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 16:37:02.088159 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 16:37:02.089422 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 16:37:02.090743 systemd[1]: Mounted media.mount - External Media Directory.
May 16 16:37:02.091957 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 16:37:02.093243 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 16:37:02.094549 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 16:37:02.096014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:37:02.098747 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 16:37:02.098976 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 16:37:02.100591 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:37:02.100823 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:37:02.102269 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 16:37:02.102478 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 16:37:02.103860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:37:02.104085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:37:02.106100 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 16:37:02.106339 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 16:37:02.107788 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:37:02.108007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:37:02.109586 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 16:37:02.111120 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:37:02.113062 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 16:37:02.114936 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 16:37:02.131827 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 16:37:02.135435 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 16:37:02.138743 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 16:37:02.139913 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 16:37:02.139951 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 16:37:02.142043 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 16:37:02.147754 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 16:37:02.149011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:37:02.163580 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 16:37:02.167499 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 16:37:02.168750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 16:37:02.174425 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 16:37:02.177502 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 16:37:02.181421 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 16:37:02.183696 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 16:37:02.187871 systemd-journald[1199]: Time spent on flushing to /var/log/journal/144efd78acdf44ad949c74c7c13e9753 is 12.544ms for 981 entries.
May 16 16:37:02.187871 systemd-journald[1199]: System Journal (/var/log/journal/144efd78acdf44ad949c74c7c13e9753) is 8M, max 195.6M, 187.6M free.
May 16 16:37:02.276527 systemd-journald[1199]: Received client request to flush runtime journal.
May 16 16:37:02.276559 kernel: loop0: detected capacity change from 0 to 113872
May 16 16:37:02.276572 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 16:37:02.276584 kernel: loop1: detected capacity change from 0 to 146240
May 16 16:37:02.189642 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 16:37:02.191023 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 16:37:02.192626 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 16:37:02.198981 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 16:37:02.206047 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:37:02.231169 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 16:37:02.279108 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 16:37:02.286972 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 16:37:02.289488 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 16:37:02.332955 kernel: loop2: detected capacity change from 0 to 224512
May 16 16:37:02.333872 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 16 16:37:02.333902 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 16 16:37:02.336499 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 16:37:02.338526 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 16:37:02.342527 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 16:37:02.344498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:37:02.386704 kernel: loop3: detected capacity change from 0 to 113872
May 16 16:37:02.431708 kernel: loop4: detected capacity change from 0 to 146240
May 16 16:37:02.460698 kernel: loop5: detected capacity change from 0 to 224512
May 16 16:37:02.467733 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 16:37:02.469190 (sd-merge)[1272]: Merged extensions into '/usr'.
May 16 16:37:02.473269 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 16:37:02.473377 systemd[1]: Reloading...
May 16 16:37:02.527716 zram_generator::config[1301]: No configuration found.
May 16 16:37:02.630237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:37:02.635771 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 16:37:02.710516 systemd[1]: Reloading finished in 236 ms.
May 16 16:37:02.736424 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 16:37:02.745793 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 16:37:02.747400 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 16:37:02.762167 systemd[1]: Starting ensure-sysext.service...
May 16 16:37:02.764849 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 16:37:02.775780 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)...
May 16 16:37:02.775795 systemd[1]: Reloading...
May 16 16:37:02.783836 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 16 16:37:02.784195 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 16 16:37:02.784488 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 16:37:02.784770 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 16:37:02.785644 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 16:37:02.785930 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
May 16 16:37:02.786010 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
May 16 16:37:02.790056 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. May 16 16:37:02.790069 systemd-tmpfiles[1339]: Skipping /boot May 16 16:37:02.802255 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. May 16 16:37:02.802314 systemd-tmpfiles[1339]: Skipping /boot May 16 16:37:02.824699 zram_generator::config[1366]: No configuration found. May 16 16:37:02.921462 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 16:37:03.001600 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 16:37:03.002131 systemd[1]: Reloading finished in 226 ms. May 16 16:37:03.043529 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 16:37:03.052104 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 16:37:03.078728 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 16:37:03.081097 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 16:37:03.101760 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 16:37:03.104826 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 16:37:03.108421 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 16:37:03.108601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 16:37:03.109692 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 16:37:03.112820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
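The run of `Duplicate line for path ..., ignoring.` messages above reflects how systemd-tmpfiles resolves conflicts: fragments are processed in order, the first line claiming a path wins, and later claims are warned about and dropped. A hedged sketch of that first-claim-wins rule (fragment names and the function are illustrative, not systemd's implementation):

```python
def first_claim_wins(fragments):
    """fragments: list of (filename, [(path, line_no), ...]) in processing
    order. Returns (winners, warnings) where warnings mimic the journal's
    'Duplicate line for path' style."""
    winners, warnings = {}, []
    for fname, lines in fragments:
        for path, line_no in lines:
            if path in winners:
                warnings.append(
                    f'{fname}:{line_no}: Duplicate line for path "{path}", ignoring.'
                )
            else:
                winners[path] = (fname, line_no)
    return winners, warnings
```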
May 16 16:37:03.115015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 16:37:03.116155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 16:37:03.116316 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 16:37:03.116452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 16:37:03.117554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 16:37:03.117806 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 16:37:03.119500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 16:37:03.119726 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 16:37:03.121497 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 16:37:03.121709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 16:37:03.128302 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 16:37:03.128472 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 16:37:03.129756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 16:37:03.131839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 16:37:03.145999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 16:37:03.147501 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 16 16:37:03.147618 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 16:37:03.149719 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 16:37:03.150954 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 16:37:03.152862 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 16:37:03.155035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 16:37:03.155338 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 16:37:03.157262 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 16:37:03.160952 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 16:37:03.181788 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 16:37:03.182018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 16:37:03.188455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 16:37:03.188777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 16:37:03.190228 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 16:37:03.205540 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 16:37:03.208022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 16:37:03.210896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 16 16:37:03.212182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 16:37:03.212297 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 16:37:03.212444 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 16:37:03.214932 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 16:37:03.215641 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 16:37:03.218285 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 16:37:03.218515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 16:37:03.220525 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 16:37:03.221099 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 16:37:03.223230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 16:37:03.223442 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 16:37:03.225226 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 16:37:03.225444 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 16:37:03.237243 systemd[1]: Finished ensure-sysext.service. May 16 16:37:03.245445 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 16:37:03.245519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 16 16:37:03.248076 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 16:37:03.249531 augenrules[1456]: No rules May 16 16:37:03.249855 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 16:37:03.251571 systemd[1]: audit-rules.service: Deactivated successfully. May 16 16:37:03.251946 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 16:37:03.254314 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 16:37:03.267998 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 16:37:03.271977 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 16:37:03.274882 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 16:37:03.293099 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 16:37:03.298501 systemd-resolved[1413]: Positive Trust Anchors: May 16 16:37:03.298516 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 16:37:03.298547 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 16:37:03.302060 systemd-resolved[1413]: Defaulting to hostname 'linux'. 
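systemd-resolved's negative trust anchors listed above mark domains (private ranges, `home.arpa`, `local`, `test`, ...) where DNSSEC validation is skipped; a name is covered when it equals an anchor or sits below one. A minimal sketch of that label-wise suffix match (the function is mine, not resolved's code):

```python
def under_anchor(name: str, anchors: set) -> bool:
    """True if `name` equals an anchor or is a subdomain of one
    (label-wise suffix match, case-insensitive)."""
    labels = name.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in anchors for i in range(len(labels)))

# A few anchors taken from the journal entry above.
NEGATIVE = {"home.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa", "local", "test"}
```

Note the match is per label boundary: `printer.local` is covered, but `mylocal` would not be.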
May 16 16:37:03.303931 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 16:37:03.305234 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 16:37:03.316515 systemd-udevd[1466]: Using default interface naming scheme 'v255'. May 16 16:37:03.330644 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 16:37:03.332054 systemd[1]: Reached target time-set.target - System Time Set. May 16 16:37:03.335118 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 16:37:03.337256 systemd[1]: Reached target sysinit.target - System Initialization. May 16 16:37:03.338537 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 16:37:03.339902 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 16:37:03.341213 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 16 16:37:03.342560 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 16:37:03.345190 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 16:37:03.346733 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 16:37:03.348254 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 16:37:03.348289 systemd[1]: Reached target paths.target - Path Units. May 16 16:37:03.349428 systemd[1]: Reached target timers.target - Timer Units. May 16 16:37:03.351627 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 16:37:03.354579 systemd[1]: Starting docker.socket - Docker Socket for the API... 
May 16 16:37:03.358233 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 16 16:37:03.360929 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 16 16:37:03.363539 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 16 16:37:03.372480 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 16:37:03.373917 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 16 16:37:03.378217 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 16:37:03.381315 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 16:37:03.387386 systemd[1]: Reached target sockets.target - Socket Units. May 16 16:37:03.389734 systemd[1]: Reached target basic.target - Basic System. May 16 16:37:03.390773 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 16:37:03.390804 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 16:37:03.394851 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 16:37:03.398477 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 16:37:03.401821 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 16:37:03.406597 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 16:37:03.407637 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 16:37:03.414372 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 16 16:37:03.417824 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
May 16 16:37:03.420855 jq[1503]: false May 16 16:37:03.421820 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 16:37:03.426902 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 16:37:03.435890 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 16:37:03.446456 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Refreshing passwd entry cache May 16 16:37:03.446467 oslogin_cache_refresh[1505]: Refreshing passwd entry cache May 16 16:37:03.449904 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 16:37:03.451261 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Failure getting users, quitting May 16 16:37:03.451261 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 16 16:37:03.451253 oslogin_cache_refresh[1505]: Failure getting users, quitting May 16 16:37:03.451339 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Refreshing group entry cache May 16 16:37:03.451268 oslogin_cache_refresh[1505]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 16 16:37:03.451306 oslogin_cache_refresh[1505]: Refreshing group entry cache May 16 16:37:03.451790 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Failure getting groups, quitting May 16 16:37:03.451790 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 16 16:37:03.451781 oslogin_cache_refresh[1505]: Failure getting groups, quitting May 16 16:37:03.451790 oslogin_cache_refresh[1505]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 16 16:37:03.452541 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 16 16:37:03.453160 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 16:37:03.454434 systemd[1]: Starting update-engine.service - Update Engine... May 16 16:37:03.457834 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 16:37:03.471500 jq[1519]: true May 16 16:37:03.481815 update_engine[1518]: I20250516 16:37:03.481467 1518 main.cc:92] Flatcar Update Engine starting May 16 16:37:03.494650 systemd-networkd[1499]: lo: Link UP May 16 16:37:03.494663 systemd-networkd[1499]: lo: Gained carrier May 16 16:37:03.498586 systemd-networkd[1499]: Enumeration completed May 16 16:37:03.503010 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 16:37:03.503018 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 16:37:03.504145 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 16:37:03.504174 systemd-networkd[1499]: eth0: Link UP May 16 16:37:03.504396 systemd-networkd[1499]: eth0: Gained carrier May 16 16:37:03.504409 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 16 16:37:03.507312 extend-filesystems[1504]: Found loop3 May 16 16:37:03.508750 extend-filesystems[1504]: Found loop4 May 16 16:37:03.508750 extend-filesystems[1504]: Found loop5 May 16 16:37:03.508750 extend-filesystems[1504]: Found sr0 May 16 16:37:03.508750 extend-filesystems[1504]: Found vda May 16 16:37:03.508750 extend-filesystems[1504]: Found vda1 May 16 16:37:03.508750 extend-filesystems[1504]: Found vda2 May 16 16:37:03.508750 extend-filesystems[1504]: Found vda3 May 16 16:37:03.508750 extend-filesystems[1504]: Found usr May 16 16:37:03.508750 extend-filesystems[1504]: Found vda4 May 16 16:37:03.508750 extend-filesystems[1504]: Found vda6 May 16 16:37:03.508750 extend-filesystems[1504]: Found vda7 May 16 16:37:03.508750 extend-filesystems[1504]: Found vda9 May 16 16:37:03.519050 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 16:37:03.521114 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 16:37:03.524010 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 16:37:03.524744 systemd-networkd[1499]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 16:37:03.524803 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 16:37:03.525121 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 16:37:03.528318 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 16:37:03.528666 systemd-timesyncd[1461]: Network configuration changed, trying to establish connection. May 16 16:37:03.529651 systemd-timesyncd[1461]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 16:37:03.529728 systemd-timesyncd[1461]: Initial clock synchronization to Fri 2025-05-16 16:37:03.460447 UTC. May 16 16:37:03.531221 systemd-logind[1513]: New seat seat0. May 16 16:37:03.532806 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
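The DHCPv4 line above reports `10.0.0.36/16, gateway 10.0.0.1`; for the lease to be usable the gateway must be on-link, i.e. inside the interface's subnet. A quick sanity check with the standard-library `ipaddress` module (the helper name is illustrative):

```python
import ipaddress

def gateway_on_link(addr_cidr: str, gw: str) -> bool:
    """Check that a DHCP-supplied gateway falls inside the
    interface's own subnet, as in the lease logged above."""
    return ipaddress.ip_address(gw) in ipaddress.ip_interface(addr_cidr).network
```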
May 16 16:37:03.533043 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 16 16:37:03.538873 systemd[1]: motdgen.service: Deactivated successfully. May 16 16:37:03.539445 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 16:37:03.540945 systemd[1]: Started systemd-logind.service - User Login Management. May 16 16:37:03.546115 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 16:37:03.546393 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 16:37:03.562179 jq[1531]: true May 16 16:37:03.566468 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 16 16:37:03.574698 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 16 16:37:03.579651 tar[1530]: linux-amd64/LICENSE May 16 16:37:03.580885 kernel: ACPI: button: Power Button [PWRF] May 16 16:37:03.580927 tar[1530]: linux-amd64/helm May 16 16:37:03.585942 dbus-daemon[1500]: [system] SELinux support is enabled May 16 16:37:03.588910 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 16:37:03.600826 update_engine[1518]: I20250516 16:37:03.600773 1518 update_check_scheduler.cc:74] Next update check in 3m6s May 16 16:37:03.601537 systemd[1]: Reached target network.target - Network. May 16 16:37:03.602482 dbus-daemon[1500]: [system] Successfully activated service 'org.freedesktop.systemd1' May 16 16:37:03.608589 systemd[1]: Starting containerd.service - containerd container runtime... May 16 16:37:03.609853 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 16:37:03.609890 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
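update_engine above schedules its next check as `3m6s`. A hedged sketch of turning such compact h/m/s durations into seconds (my own parser, not update_engine's):

```python
import re

def parse_duration(s: str) -> int:
    """Convert a short duration like '3m6s' into seconds.
    Supports any combination of h, m, and s components."""
    units = {"h": 3600, "m": 60, "s": 1}
    return sum(int(value) * units[unit]
               for value, unit in re.findall(r"(\d+)([hms])", s))
```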
May 16 16:37:03.613104 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 16 16:37:03.614706 kernel: mousedev: PS/2 mouse device common for all mice May 16 16:37:03.617165 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 16:37:03.618755 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 16:37:03.618784 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 16:37:03.620169 systemd[1]: Started update-engine.service - Update Engine. May 16 16:37:03.625148 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 16:37:03.633431 bash[1558]: Updated "/home/core/.ssh/authorized_keys" May 16 16:37:03.637117 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 16:37:03.639685 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 16:37:03.643203 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 16 16:37:03.644468 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 16:37:03.661581 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 16 16:37:03.661845 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 16 16:37:03.681921 (ntainerd)[1573]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 16:37:03.682094 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
May 16 16:37:03.685583 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 16:37:03.814956 sshd_keygen[1523]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 16:37:03.843443 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 16:37:03.843792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 16:37:03.865476 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 16:37:03.871130 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 16:37:03.880191 kernel: kvm_amd: TSC scaling supported May 16 16:37:03.880228 kernel: kvm_amd: Nested Virtualization enabled May 16 16:37:03.880241 kernel: kvm_amd: Nested Paging enabled May 16 16:37:03.880252 kernel: kvm_amd: LBR virtualization supported May 16 16:37:03.882020 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 16 16:37:03.882065 kernel: kvm_amd: Virtual GIF supported May 16 16:37:03.900982 systemd[1]: issuegen.service: Deactivated successfully. May 16 16:37:03.901251 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 16:37:03.908997 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 16:37:03.911660 systemd-logind[1513]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 16:37:03.932705 kernel: EDAC MC: Ver: 3.0.0 May 16 16:37:03.933110 containerd[1573]: time="2025-05-16T16:37:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 16:37:03.933809 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
May 16 16:37:03.935764 containerd[1573]: time="2025-05-16T16:37:03.935070493Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 16 16:37:03.936107 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 16:37:03.938920 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 16:37:03.939288 systemd[1]: Reached target getty.target - Login Prompts. May 16 16:37:03.941881 systemd-logind[1513]: Watching system buttons on /dev/input/event2 (Power Button) May 16 16:37:03.951371 containerd[1573]: time="2025-05-16T16:37:03.951326256Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.621µs" May 16 16:37:03.951371 containerd[1573]: time="2025-05-16T16:37:03.951367553Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 16:37:03.951425 containerd[1573]: time="2025-05-16T16:37:03.951387882Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 16:37:03.951588 containerd[1573]: time="2025-05-16T16:37:03.951564753Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 16:37:03.951588 containerd[1573]: time="2025-05-16T16:37:03.951586114Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 16:37:03.951640 containerd[1573]: time="2025-05-16T16:37:03.951609558Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 16:37:03.951715 containerd[1573]: time="2025-05-16T16:37:03.951693535Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 16:37:03.951715 containerd[1573]: time="2025-05-16T16:37:03.951712070Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 16:37:03.952001 containerd[1573]: time="2025-05-16T16:37:03.951977969Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 16:37:03.952001 containerd[1573]: time="2025-05-16T16:37:03.951997055Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 16:37:03.952046 containerd[1573]: time="2025-05-16T16:37:03.952007324Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 16:37:03.952046 containerd[1573]: time="2025-05-16T16:37:03.952016090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 16:37:03.952120 containerd[1573]: time="2025-05-16T16:37:03.952101541Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 16:37:03.952342 containerd[1573]: time="2025-05-16T16:37:03.952319900Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 16:37:03.952369 containerd[1573]: time="2025-05-16T16:37:03.952357431Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 16:37:03.952369 containerd[1573]: time="2025-05-16T16:37:03.952367229Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 16:37:03.952406 containerd[1573]: time="2025-05-16T16:37:03.952396825Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 16:37:03.952634 containerd[1573]: time="2025-05-16T16:37:03.952612930Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 16:37:03.952757 containerd[1573]: time="2025-05-16T16:37:03.952736342Z" level=info msg="metadata content store policy set" policy=shared May 16 16:37:03.959037 containerd[1573]: time="2025-05-16T16:37:03.959009703Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 16:37:03.959075 containerd[1573]: time="2025-05-16T16:37:03.959049167Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 16:37:03.959075 containerd[1573]: time="2025-05-16T16:37:03.959062282Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 16:37:03.959111 containerd[1573]: time="2025-05-16T16:37:03.959074655Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 16:37:03.959111 containerd[1573]: time="2025-05-16T16:37:03.959092900Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 16:37:03.959111 containerd[1573]: time="2025-05-16T16:37:03.959104201Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 16:37:03.959173 containerd[1573]: time="2025-05-16T16:37:03.959115562Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 16:37:03.959173 containerd[1573]: time="2025-05-16T16:37:03.959128025Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 16:37:03.959173 containerd[1573]: time="2025-05-16T16:37:03.959139206Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 16 16:37:03.959173 containerd[1573]: time="2025-05-16T16:37:03.959149055Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 16 16:37:03.959173 containerd[1573]: time="2025-05-16T16:37:03.959159565Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 16 16:37:03.959173 containerd[1573]: time="2025-05-16T16:37:03.959172339Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 16 16:37:03.959295 containerd[1573]: time="2025-05-16T16:37:03.959273849Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 16 16:37:03.959320 containerd[1573]: time="2025-05-16T16:37:03.959299147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 16 16:37:03.959320 containerd[1573]: time="2025-05-16T16:37:03.959314165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 16 16:37:03.959355 containerd[1573]: time="2025-05-16T16:37:03.959326187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 16 16:37:03.959355 containerd[1573]: time="2025-05-16T16:37:03.959336146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 16 16:37:03.959355 containerd[1573]: time="2025-05-16T16:37:03.959346265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 16 16:37:03.959460 containerd[1573]: time="2025-05-16T16:37:03.959357005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 16 16:37:03.959460 containerd[1573]: time="2025-05-16T16:37:03.959367735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 16 16:37:03.959460 containerd[1573]: time="2025-05-16T16:37:03.959379117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 16 16:37:03.959460 containerd[1573]: time="2025-05-16T16:37:03.959388765Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 16 16:37:03.959460 containerd[1573]: time="2025-05-16T16:37:03.959398783Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 16 16:37:03.959460 containerd[1573]: time="2025-05-16T16:37:03.959458085Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 16 16:37:03.959573 containerd[1573]: time="2025-05-16T16:37:03.959470969Z" level=info msg="Start snapshots syncer"
May 16 16:37:03.959573 containerd[1573]: time="2025-05-16T16:37:03.959493782Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 16 16:37:03.961159 containerd[1573]: time="2025-05-16T16:37:03.961111457Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 16 16:37:03.961262 containerd[1573]: time="2025-05-16T16:37:03.961179715Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 16 16:37:03.961285 containerd[1573]: time="2025-05-16T16:37:03.961264524Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 16 16:37:03.961393 containerd[1573]: time="2025-05-16T16:37:03.961366606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 16 16:37:03.961417 containerd[1573]: time="2025-05-16T16:37:03.961399047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 16 16:37:03.961417 containerd[1573]: time="2025-05-16T16:37:03.961414025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 16 16:37:03.961461 containerd[1573]: time="2025-05-16T16:37:03.961428362Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 16 16:37:03.961461 containerd[1573]: time="2025-05-16T16:37:03.961443140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 16 16:37:03.961461 containerd[1573]: time="2025-05-16T16:37:03.961455483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 16 16:37:03.961514 containerd[1573]: time="2025-05-16T16:37:03.961470190Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 16 16:37:03.961567 containerd[1573]: time="2025-05-16T16:37:03.961502892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 16 16:37:03.961628 containerd[1573]: time="2025-05-16T16:37:03.961597589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 16 16:37:03.961657 containerd[1573]: time="2025-05-16T16:37:03.961639708Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 16 16:37:03.961734 containerd[1573]: time="2025-05-16T16:37:03.961711513Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 16:37:03.961758 containerd[1573]: time="2025-05-16T16:37:03.961738985Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 16:37:03.961758 containerd[1573]: time="2025-05-16T16:37:03.961752841Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 16:37:03.961795 containerd[1573]: time="2025-05-16T16:37:03.961767288Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 16:37:03.961795 containerd[1573]: time="2025-05-16T16:37:03.961775994Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 16 16:37:03.961795 containerd[1573]: time="2025-05-16T16:37:03.961789610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 16 16:37:03.961862 containerd[1573]: time="2025-05-16T16:37:03.961804187Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 16 16:37:03.961862 containerd[1573]: time="2025-05-16T16:37:03.961828813Z" level=info msg="runtime interface created"
May 16 16:37:03.961862 containerd[1573]: time="2025-05-16T16:37:03.961834925Z" level=info msg="created NRI interface"
May 16 16:37:03.961862 containerd[1573]: time="2025-05-16T16:37:03.961846867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 16 16:37:03.961931 containerd[1573]: time="2025-05-16T16:37:03.961874259Z" level=info msg="Connect containerd service"
May 16 16:37:03.961931 containerd[1573]: time="2025-05-16T16:37:03.961915957Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 16 16:37:03.962954 containerd[1573]: time="2025-05-16T16:37:03.962924980Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 16:37:04.045780 containerd[1573]: time="2025-05-16T16:37:04.045718169Z" level=info msg="Start subscribing containerd event"
May 16 16:37:04.045895 containerd[1573]: time="2025-05-16T16:37:04.045792526Z" level=info msg="Start recovering state"
May 16 16:37:04.045942 containerd[1573]: time="2025-05-16T16:37:04.045921161Z" level=info msg="Start event monitor"
May 16 16:37:04.045981 containerd[1573]: time="2025-05-16T16:37:04.045923086Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 16 16:37:04.046005 containerd[1573]: time="2025-05-16T16:37:04.045947935Z" level=info msg="Start cni network conf syncer for default"
May 16 16:37:04.046005 containerd[1573]: time="2025-05-16T16:37:04.045994930Z" level=info msg="Start streaming server"
May 16 16:37:04.046059 containerd[1573]: time="2025-05-16T16:37:04.046014881Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 16 16:37:04.046059 containerd[1573]: time="2025-05-16T16:37:04.046018074Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 16 16:37:04.046095 containerd[1573]: time="2025-05-16T16:37:04.046065417Z" level=info msg="runtime interface starting up..."
May 16 16:37:04.046095 containerd[1573]: time="2025-05-16T16:37:04.046071782Z" level=info msg="starting plugins..."
May 16 16:37:04.046095 containerd[1573]: time="2025-05-16T16:37:04.046094157Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 16 16:37:04.046343 systemd[1]: Started containerd.service - containerd container runtime.
May 16 16:37:04.046687 containerd[1573]: time="2025-05-16T16:37:04.046423608Z" level=info msg="containerd successfully booted in 0.114091s"
May 16 16:37:04.070329 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:37:04.179848 tar[1530]: linux-amd64/README.md
May 16 16:37:04.201841 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 16:37:05.536839 systemd-networkd[1499]: eth0: Gained IPv6LL
May 16 16:37:05.539726 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 16 16:37:05.541557 systemd[1]: Reached target network-online.target - Network is Online.
May 16 16:37:05.544163 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 16 16:37:05.546491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:37:05.548767 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 16 16:37:05.573526 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 16 16:37:05.575206 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 16 16:37:05.575457 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 16 16:37:05.577760 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 16:37:06.232736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:37:06.234302 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 16:37:06.235697 systemd[1]: Startup finished in 2.802s (kernel) + 6.623s (initrd) + 5.026s (userspace) = 14.453s.
May 16 16:37:06.266022 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:37:06.655661 kubelet[1662]: E0516 16:37:06.655539 1662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:37:06.659498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:37:06.659730 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:37:06.660107 systemd[1]: kubelet.service: Consumed 951ms CPU time, 263.8M memory peak.
May 16 16:37:07.600106 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 16:37:07.601449 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:36288.service - OpenSSH per-connection server daemon (10.0.0.1:36288).
May 16 16:37:07.668277 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 36288 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM
May 16 16:37:07.669917 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:37:07.676695 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 16:37:07.677804 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 16:37:07.685098 systemd-logind[1513]: New session 1 of user core.
May 16 16:37:07.705272 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 16:37:07.709406 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 16:37:07.724923 (systemd)[1679]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 16:37:07.727294 systemd-logind[1513]: New session c1 of user core.
May 16 16:37:07.874943 systemd[1679]: Queued start job for default target default.target.
May 16 16:37:07.890856 systemd[1679]: Created slice app.slice - User Application Slice.
May 16 16:37:07.890880 systemd[1679]: Reached target paths.target - Paths.
May 16 16:37:07.890919 systemd[1679]: Reached target timers.target - Timers.
May 16 16:37:07.892411 systemd[1679]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 16:37:07.903315 systemd[1679]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 16:37:07.903454 systemd[1679]: Reached target sockets.target - Sockets.
May 16 16:37:07.903497 systemd[1679]: Reached target basic.target - Basic System.
May 16 16:37:07.903540 systemd[1679]: Reached target default.target - Main User Target.
May 16 16:37:07.903574 systemd[1679]: Startup finished in 167ms.
May 16 16:37:07.903986 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 16:37:07.905522 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 16:37:07.967342 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:36296.service - OpenSSH per-connection server daemon (10.0.0.1:36296).
May 16 16:37:08.016689 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 36296 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM
May 16 16:37:08.018323 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:37:08.022704 systemd-logind[1513]: New session 2 of user core.
May 16 16:37:08.033803 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 16:37:08.086285 sshd[1692]: Connection closed by 10.0.0.1 port 36296
May 16 16:37:08.086645 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
May 16 16:37:08.096176 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:36296.service: Deactivated successfully.
May 16 16:37:08.098570 systemd[1]: session-2.scope: Deactivated successfully.
May 16 16:37:08.099389 systemd-logind[1513]: Session 2 logged out. Waiting for processes to exit.
May 16 16:37:08.102961 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:36302.service - OpenSSH per-connection server daemon (10.0.0.1:36302).
May 16 16:37:08.103603 systemd-logind[1513]: Removed session 2.
May 16 16:37:08.156133 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 36302 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM
May 16 16:37:08.157414 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:37:08.161560 systemd-logind[1513]: New session 3 of user core.
May 16 16:37:08.172804 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 16:37:08.221500 sshd[1700]: Connection closed by 10.0.0.1 port 36302
May 16 16:37:08.221830 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
May 16 16:37:08.237266 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:36302.service: Deactivated successfully.
May 16 16:37:08.238904 systemd[1]: session-3.scope: Deactivated successfully.
May 16 16:37:08.239654 systemd-logind[1513]: Session 3 logged out. Waiting for processes to exit.
May 16 16:37:08.242513 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:36318.service - OpenSSH per-connection server daemon (10.0.0.1:36318).
May 16 16:37:08.243046 systemd-logind[1513]: Removed session 3.
May 16 16:37:08.292811 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 36318 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM
May 16 16:37:08.294209 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:37:08.298386 systemd-logind[1513]: New session 4 of user core.
May 16 16:37:08.308794 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 16:37:08.361960 sshd[1708]: Connection closed by 10.0.0.1 port 36318
May 16 16:37:08.362273 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
May 16 16:37:08.374252 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:36318.service: Deactivated successfully.
May 16 16:37:08.376057 systemd[1]: session-4.scope: Deactivated successfully.
May 16 16:37:08.376762 systemd-logind[1513]: Session 4 logged out. Waiting for processes to exit.
May 16 16:37:08.380153 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:36320.service - OpenSSH per-connection server daemon (10.0.0.1:36320).
May 16 16:37:08.380770 systemd-logind[1513]: Removed session 4.
May 16 16:37:08.441298 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 36320 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM
May 16 16:37:08.442695 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:37:08.447040 systemd-logind[1513]: New session 5 of user core.
May 16 16:37:08.457809 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 16:37:08.515603 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 16:37:08.515937 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:37:08.529905 sudo[1717]: pam_unix(sudo:session): session closed for user root
May 16 16:37:08.531628 sshd[1716]: Connection closed by 10.0.0.1 port 36320
May 16 16:37:08.531967 sshd-session[1714]: pam_unix(sshd:session): session closed for user core
May 16 16:37:08.544377 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:36320.service: Deactivated successfully.
May 16 16:37:08.546024 systemd[1]: session-5.scope: Deactivated successfully.
May 16 16:37:08.546869 systemd-logind[1513]: Session 5 logged out. Waiting for processes to exit.
May 16 16:37:08.549860 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:36334.service - OpenSSH per-connection server daemon (10.0.0.1:36334).
May 16 16:37:08.550656 systemd-logind[1513]: Removed session 5.
May 16 16:37:08.600659 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 36334 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM
May 16 16:37:08.602083 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:37:08.606323 systemd-logind[1513]: New session 6 of user core.
May 16 16:37:08.615790 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 16:37:08.668445 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 16:37:08.668754 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:37:08.694861 sudo[1728]: pam_unix(sudo:session): session closed for user root
May 16 16:37:08.701432 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 16:37:08.701840 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:37:08.711722 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:37:08.752045 augenrules[1750]: No rules
May 16 16:37:08.753800 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:37:08.754065 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:37:08.755078 sudo[1727]: pam_unix(sudo:session): session closed for user root
May 16 16:37:08.756384 sshd[1726]: Connection closed by 10.0.0.1 port 36334
May 16 16:37:08.756664 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
May 16 16:37:08.771963 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:36334.service: Deactivated successfully.
May 16 16:37:08.773540 systemd[1]: session-6.scope: Deactivated successfully.
May 16 16:37:08.774277 systemd-logind[1513]: Session 6 logged out. Waiting for processes to exit.
May 16 16:37:08.777068 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:36348.service - OpenSSH per-connection server daemon (10.0.0.1:36348).
May 16 16:37:08.777557 systemd-logind[1513]: Removed session 6.
May 16 16:37:08.826382 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 36348 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM
May 16 16:37:08.827605 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:37:08.831745 systemd-logind[1513]: New session 7 of user core.
May 16 16:37:08.841781 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 16:37:08.893142 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 16:37:08.893494 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:37:09.173227 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 16:37:09.194963 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 16:37:09.403528 dockerd[1783]: time="2025-05-16T16:37:09.403456909Z" level=info msg="Starting up"
May 16 16:37:09.405405 dockerd[1783]: time="2025-05-16T16:37:09.405366940Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 16 16:37:09.789331 dockerd[1783]: time="2025-05-16T16:37:09.789273036Z" level=info msg="Loading containers: start."
May 16 16:37:09.798717 kernel: Initializing XFRM netlink socket
May 16 16:37:10.036087 systemd-networkd[1499]: docker0: Link UP
May 16 16:37:10.040790 dockerd[1783]: time="2025-05-16T16:37:10.040707829Z" level=info msg="Loading containers: done."
May 16 16:37:10.053901 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3697215451-merged.mount: Deactivated successfully.
May 16 16:37:10.056986 dockerd[1783]: time="2025-05-16T16:37:10.056938871Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 16:37:10.057061 dockerd[1783]: time="2025-05-16T16:37:10.057032052Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 16 16:37:10.057166 dockerd[1783]: time="2025-05-16T16:37:10.057146930Z" level=info msg="Initializing buildkit"
May 16 16:37:10.086349 dockerd[1783]: time="2025-05-16T16:37:10.086306074Z" level=info msg="Completed buildkit initialization"
May 16 16:37:10.092515 dockerd[1783]: time="2025-05-16T16:37:10.092468874Z" level=info msg="Daemon has completed initialization"
May 16 16:37:10.092632 dockerd[1783]: time="2025-05-16T16:37:10.092570426Z" level=info msg="API listen on /run/docker.sock"
May 16 16:37:10.092713 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 16:37:10.909896 containerd[1573]: time="2025-05-16T16:37:10.909853389Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 16 16:37:11.572771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087341350.mount: Deactivated successfully.
May 16 16:37:12.408989 containerd[1573]: time="2025-05-16T16:37:12.408932280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:12.409648 containerd[1573]: time="2025-05-16T16:37:12.409625893Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 16 16:37:12.411039 containerd[1573]: time="2025-05-16T16:37:12.410997728Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:12.414900 containerd[1573]: time="2025-05-16T16:37:12.414856895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:12.415830 containerd[1573]: time="2025-05-16T16:37:12.415783543Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.505889943s"
May 16 16:37:12.415830 containerd[1573]: time="2025-05-16T16:37:12.415828062Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 16 16:37:12.416408 containerd[1573]: time="2025-05-16T16:37:12.416362696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 16 16:37:13.469378 containerd[1573]: time="2025-05-16T16:37:13.469317937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:13.470256 containerd[1573]: time="2025-05-16T16:37:13.470192843Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 16 16:37:13.471400 containerd[1573]: time="2025-05-16T16:37:13.471360971Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:13.474032 containerd[1573]: time="2025-05-16T16:37:13.474000232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:13.474868 containerd[1573]: time="2025-05-16T16:37:13.474833171Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.058444181s"
May 16 16:37:13.474868 containerd[1573]: time="2025-05-16T16:37:13.474862575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 16 16:37:13.475389 containerd[1573]: time="2025-05-16T16:37:13.475339366Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 16 16:37:14.685862 containerd[1573]: time="2025-05-16T16:37:14.685804372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:14.686708 containerd[1573]: time="2025-05-16T16:37:14.686668305Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063"
May 16 16:37:14.687960 containerd[1573]: time="2025-05-16T16:37:14.687912790Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:14.690340 containerd[1573]: time="2025-05-16T16:37:14.690278047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:14.691173 containerd[1573]: time="2025-05-16T16:37:14.691138851Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.215764642s"
May 16 16:37:14.691173 containerd[1573]: time="2025-05-16T16:37:14.691168609Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 16 16:37:14.691646 containerd[1573]: time="2025-05-16T16:37:14.691598702Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 16 16:37:15.571839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1839111619.mount: Deactivated successfully.
May 16 16:37:16.131035 containerd[1573]: time="2025-05-16T16:37:16.130957083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:16.131862 containerd[1573]: time="2025-05-16T16:37:16.131831903Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872"
May 16 16:37:16.134340 containerd[1573]: time="2025-05-16T16:37:16.133487752Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:16.135616 containerd[1573]: time="2025-05-16T16:37:16.135557043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:16.136021 containerd[1573]: time="2025-05-16T16:37:16.135984474Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.444347115s"
May 16 16:37:16.136021 containerd[1573]: time="2025-05-16T16:37:16.136014921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 16 16:37:16.136567 containerd[1573]: time="2025-05-16T16:37:16.136539890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 16 16:37:16.728432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 16:37:16.730807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:37:16.738130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792783662.mount: Deactivated successfully.
May 16 16:37:16.934863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:37:16.938410 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:37:16.992806 kubelet[2080]: E0516 16:37:16.992603 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:37:16.999070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:37:16.999288 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:37:16.999717 systemd[1]: kubelet.service: Consumed 225ms CPU time, 111.1M memory peak.
May 16 16:37:17.578872 containerd[1573]: time="2025-05-16T16:37:17.578814884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:17.580334 containerd[1573]: time="2025-05-16T16:37:17.580244528Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 16 16:37:17.582067 containerd[1573]: time="2025-05-16T16:37:17.581979824Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:17.584619 containerd[1573]: time="2025-05-16T16:37:17.584582430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:17.585552 containerd[1573]: time="2025-05-16T16:37:17.585520038Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.448950811s" May 16 16:37:17.585593 containerd[1573]: time="2025-05-16T16:37:17.585553529Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 16:37:17.586009 containerd[1573]: time="2025-05-16T16:37:17.585972733Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 16:37:18.085329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354195927.mount: Deactivated successfully. 
May 16 16:37:18.090643 containerd[1573]: time="2025-05-16T16:37:18.090609652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 16:37:18.091283 containerd[1573]: time="2025-05-16T16:37:18.091262004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 16 16:37:18.092315 containerd[1573]: time="2025-05-16T16:37:18.092263961Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 16:37:18.094111 containerd[1573]: time="2025-05-16T16:37:18.094071350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 16:37:18.094682 containerd[1573]: time="2025-05-16T16:37:18.094638671Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.543217ms" May 16 16:37:18.094721 containerd[1573]: time="2025-05-16T16:37:18.094686848Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 16:37:18.095209 containerd[1573]: time="2025-05-16T16:37:18.095177960Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 16 16:37:18.596554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2828719992.mount: 
Deactivated successfully. May 16 16:37:20.086542 containerd[1573]: time="2025-05-16T16:37:20.086480210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:20.087438 containerd[1573]: time="2025-05-16T16:37:20.087384226Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 16 16:37:20.088903 containerd[1573]: time="2025-05-16T16:37:20.088869419Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:20.091417 containerd[1573]: time="2025-05-16T16:37:20.091377149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:20.092427 containerd[1573]: time="2025-05-16T16:37:20.092387530Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.997176105s" May 16 16:37:20.092427 containerd[1573]: time="2025-05-16T16:37:20.092415450Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 16 16:37:23.253473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:37:23.253649 systemd[1]: kubelet.service: Consumed 225ms CPU time, 111.1M memory peak. May 16 16:37:23.255804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 16 16:37:23.282750 systemd[1]: Reload requested from client PID 2221 ('systemctl') (unit session-7.scope)... May 16 16:37:23.282759 systemd[1]: Reloading... May 16 16:37:23.363701 zram_generator::config[2266]: No configuration found. May 16 16:37:23.500505 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 16:37:23.614150 systemd[1]: Reloading finished in 331 ms. May 16 16:37:23.686343 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 16:37:23.686441 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 16:37:23.686780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:37:23.686822 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98.2M memory peak. May 16 16:37:23.688374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:37:23.854585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:37:23.860112 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 16:37:23.904071 kubelet[2311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:37:23.904071 kubelet[2311]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 16:37:23.904071 kubelet[2311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:37:23.904453 kubelet[2311]: I0516 16:37:23.904071 2311 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 16:37:24.250969 kubelet[2311]: I0516 16:37:24.250855 2311 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 16:37:24.250969 kubelet[2311]: I0516 16:37:24.250884 2311 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 16:37:24.251168 kubelet[2311]: I0516 16:37:24.251143 2311 server.go:954] "Client rotation is on, will bootstrap in background" May 16 16:37:24.276910 kubelet[2311]: I0516 16:37:24.276795 2311 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 16:37:24.276954 kubelet[2311]: E0516 16:37:24.276903 2311 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:24.283692 kubelet[2311]: I0516 16:37:24.283656 2311 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 16:37:24.288469 kubelet[2311]: I0516 16:37:24.288442 2311 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 16:37:24.290017 kubelet[2311]: I0516 16:37:24.289978 2311 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 16:37:24.290157 kubelet[2311]: I0516 16:37:24.290006 2311 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 16:37:24.290265 kubelet[2311]: I0516 16:37:24.290161 2311 topology_manager.go:138] "Creating topology manager with none policy" 
May 16 16:37:24.290265 kubelet[2311]: I0516 16:37:24.290170 2311 container_manager_linux.go:304] "Creating device plugin manager" May 16 16:37:24.290307 kubelet[2311]: I0516 16:37:24.290298 2311 state_mem.go:36] "Initialized new in-memory state store" May 16 16:37:24.292731 kubelet[2311]: I0516 16:37:24.292705 2311 kubelet.go:446] "Attempting to sync node with API server" May 16 16:37:24.292761 kubelet[2311]: I0516 16:37:24.292749 2311 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 16:37:24.292792 kubelet[2311]: I0516 16:37:24.292770 2311 kubelet.go:352] "Adding apiserver pod source" May 16 16:37:24.292792 kubelet[2311]: I0516 16:37:24.292781 2311 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 16:37:24.294548 kubelet[2311]: W0516 16:37:24.294394 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 16 16:37:24.294548 kubelet[2311]: E0516 16:37:24.294440 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:24.294704 kubelet[2311]: W0516 16:37:24.294533 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 16 16:37:24.294704 kubelet[2311]: E0516 16:37:24.294591 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:24.295302 kubelet[2311]: I0516 16:37:24.295198 2311 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 16:37:24.296173 kubelet[2311]: I0516 16:37:24.295529 2311 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 16:37:24.296216 kubelet[2311]: W0516 16:37:24.296176 2311 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 16:37:24.298856 kubelet[2311]: I0516 16:37:24.298825 2311 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 16:37:24.298898 kubelet[2311]: I0516 16:37:24.298880 2311 server.go:1287] "Started kubelet" May 16 16:37:24.300403 kubelet[2311]: I0516 16:37:24.300375 2311 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 16:37:24.302025 kubelet[2311]: I0516 16:37:24.302005 2311 server.go:479] "Adding debug handlers to kubelet server" May 16 16:37:24.302219 kubelet[2311]: I0516 16:37:24.302173 2311 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 16:37:24.302577 kubelet[2311]: I0516 16:37:24.302548 2311 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 16:37:24.303402 kubelet[2311]: I0516 16:37:24.303107 2311 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 16:37:24.304115 kubelet[2311]: I0516 16:37:24.304069 2311 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 16:37:24.305022 kubelet[2311]: E0516 16:37:24.303631 2311 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18400f44f72cbf41 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 16:37:24.298850113 +0000 UTC m=+0.434624738,LastTimestamp:2025-05-16 16:37:24.298850113 +0000 UTC m=+0.434624738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 16:37:24.305160 kubelet[2311]: E0516 16:37:24.305147 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:24.305231 kubelet[2311]: I0516 16:37:24.305220 2311 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 16:37:24.305537 kubelet[2311]: I0516 16:37:24.305524 2311 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 16:37:24.305641 kubelet[2311]: I0516 16:37:24.305631 2311 reconciler.go:26] "Reconciler: start to sync state" May 16 16:37:24.306046 kubelet[2311]: E0516 16:37:24.305809 2311 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 16:37:24.306046 kubelet[2311]: W0516 16:37:24.306002 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 16 16:37:24.306135 kubelet[2311]: E0516 16:37:24.306057 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" May 16 16:37:24.306216 kubelet[2311]: E0516 16:37:24.306177 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:24.307203 kubelet[2311]: I0516 16:37:24.307181 2311 factory.go:221] Registration of the containerd container factory successfully May 16 16:37:24.307203 kubelet[2311]: I0516 16:37:24.307198 2311 factory.go:221] Registration of the systemd container factory successfully May 16 16:37:24.307308 kubelet[2311]: I0516 16:37:24.307289 2311 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 16:37:24.318585 kubelet[2311]: I0516 16:37:24.318539 2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 16:37:24.319776 kubelet[2311]: I0516 16:37:24.319737 2311 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 16:37:24.319776 kubelet[2311]: I0516 16:37:24.319763 2311 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 16:37:24.319850 kubelet[2311]: I0516 16:37:24.319786 2311 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 16:37:24.319850 kubelet[2311]: I0516 16:37:24.319796 2311 kubelet.go:2382] "Starting kubelet main sync loop" May 16 16:37:24.319897 kubelet[2311]: E0516 16:37:24.319849 2311 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 16:37:24.322949 kubelet[2311]: I0516 16:37:24.322922 2311 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 16:37:24.322949 kubelet[2311]: I0516 16:37:24.322941 2311 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 16:37:24.323017 kubelet[2311]: I0516 16:37:24.322956 2311 state_mem.go:36] "Initialized new in-memory state store" May 16 16:37:24.323017 kubelet[2311]: W0516 16:37:24.322976 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 16 16:37:24.323059 kubelet[2311]: E0516 16:37:24.323023 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:24.405573 kubelet[2311]: E0516 16:37:24.405531 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:24.420833 kubelet[2311]: E0516 16:37:24.420792 2311 kubelet.go:2406] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 16:37:24.506259 kubelet[2311]: E0516 16:37:24.506120 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:24.506724 kubelet[2311]: E0516 16:37:24.506630 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" May 16 16:37:24.607086 kubelet[2311]: E0516 16:37:24.607030 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:24.621378 kubelet[2311]: E0516 16:37:24.621277 2311 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 16:37:24.642932 kubelet[2311]: I0516 16:37:24.642879 2311 policy_none.go:49] "None policy: Start" May 16 16:37:24.642932 kubelet[2311]: I0516 16:37:24.642901 2311 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 16:37:24.642932 kubelet[2311]: I0516 16:37:24.642913 2311 state_mem.go:35] "Initializing new in-memory state store" May 16 16:37:24.649813 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 16:37:24.664178 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 16:37:24.667810 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 16 16:37:24.679654 kubelet[2311]: I0516 16:37:24.679617 2311 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 16:37:24.680097 kubelet[2311]: I0516 16:37:24.679897 2311 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 16:37:24.680097 kubelet[2311]: I0516 16:37:24.679917 2311 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 16:37:24.680166 kubelet[2311]: I0516 16:37:24.680159 2311 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 16:37:24.681028 kubelet[2311]: E0516 16:37:24.680964 2311 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 16:37:24.681096 kubelet[2311]: E0516 16:37:24.681052 2311 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 16:37:24.782265 kubelet[2311]: I0516 16:37:24.782152 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:37:24.782651 kubelet[2311]: E0516 16:37:24.782603 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" May 16 16:37:24.908231 kubelet[2311]: E0516 16:37:24.908188 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" May 16 16:37:24.984361 kubelet[2311]: I0516 16:37:24.984319 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:37:24.984798 kubelet[2311]: E0516 16:37:24.984747 2311 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" May 16 16:37:25.030597 systemd[1]: Created slice kubepods-burstable-pod997fdc0727583263628b95615d4474fe.slice - libcontainer container kubepods-burstable-pod997fdc0727583263628b95615d4474fe.slice. May 16 16:37:25.059638 kubelet[2311]: E0516 16:37:25.059528 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:37:25.063328 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 16 16:37:25.065177 kubelet[2311]: E0516 16:37:25.065142 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:37:25.067648 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. 
May 16 16:37:25.069291 kubelet[2311]: E0516 16:37:25.069259 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:37:25.110707 kubelet[2311]: I0516 16:37:25.110629 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:25.110707 kubelet[2311]: I0516 16:37:25.110665 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:25.110707 kubelet[2311]: I0516 16:37:25.110712 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 16:37:25.110923 kubelet[2311]: I0516 16:37:25.110728 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/997fdc0727583263628b95615d4474fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"997fdc0727583263628b95615d4474fe\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:25.110923 kubelet[2311]: I0516 16:37:25.110804 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:25.110923 kubelet[2311]: I0516 16:37:25.110896 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:25.110923 kubelet[2311]: I0516 16:37:25.110919 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:25.111017 kubelet[2311]: I0516 16:37:25.110971 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/997fdc0727583263628b95615d4474fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"997fdc0727583263628b95615d4474fe\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:25.111047 kubelet[2311]: I0516 16:37:25.111005 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/997fdc0727583263628b95615d4474fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"997fdc0727583263628b95615d4474fe\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:25.245789 kubelet[2311]: W0516 16:37:25.245720 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 16 16:37:25.245789 kubelet[2311]: E0516 16:37:25.245777 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:25.360940 kubelet[2311]: E0516 16:37:25.360793 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:25.361640 containerd[1573]: time="2025-05-16T16:37:25.361599437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:997fdc0727583263628b95615d4474fe,Namespace:kube-system,Attempt:0,}" May 16 16:37:25.365816 kubelet[2311]: E0516 16:37:25.365787 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:25.366336 containerd[1573]: time="2025-05-16T16:37:25.366282033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 16 16:37:25.370603 kubelet[2311]: E0516 16:37:25.370570 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:25.371106 containerd[1573]: time="2025-05-16T16:37:25.371064713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 16 16:37:25.388277 
kubelet[2311]: I0516 16:37:25.387912 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:37:25.388622 kubelet[2311]: E0516 16:37:25.388553 2311 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" May 16 16:37:25.392691 containerd[1573]: time="2025-05-16T16:37:25.392386279Z" level=info msg="connecting to shim cbd7aec48bb1b722ca587777914f374471edb3c1c4d0e86d1fb944f5d3c9f81e" address="unix:///run/containerd/s/02995d8d83f02e77acf2eb0266ac19f5a35b6219d924da2e7951aa0d489c9210" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:25.405863 containerd[1573]: time="2025-05-16T16:37:25.405802010Z" level=info msg="connecting to shim c55aeb15553c40f0850ee0618aa867731ce2c7af4522faabd4037b8f99801c83" address="unix:///run/containerd/s/b4425c24af9a2157313e3e7de59b7790cf47b954af34b6e0126781ea8d8cb584" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:25.417218 containerd[1573]: time="2025-05-16T16:37:25.416816237Z" level=info msg="connecting to shim 5c8e6563a3cbafb181b8a06902a08f784fbd71826b1781cbc43e06fd830a3fec" address="unix:///run/containerd/s/3ad6fc842e6d3386f2ec7780cb281a92e07114dd3ec01e0b3405be8b8a07256a" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:25.432069 systemd[1]: Started cri-containerd-cbd7aec48bb1b722ca587777914f374471edb3c1c4d0e86d1fb944f5d3c9f81e.scope - libcontainer container cbd7aec48bb1b722ca587777914f374471edb3c1c4d0e86d1fb944f5d3c9f81e. May 16 16:37:25.435529 systemd[1]: Started cri-containerd-c55aeb15553c40f0850ee0618aa867731ce2c7af4522faabd4037b8f99801c83.scope - libcontainer container c55aeb15553c40f0850ee0618aa867731ce2c7af4522faabd4037b8f99801c83. May 16 16:37:25.442082 systemd[1]: Started cri-containerd-5c8e6563a3cbafb181b8a06902a08f784fbd71826b1781cbc43e06fd830a3fec.scope - libcontainer container 5c8e6563a3cbafb181b8a06902a08f784fbd71826b1781cbc43e06fd830a3fec. 
May 16 16:37:25.484015 containerd[1573]: time="2025-05-16T16:37:25.483962553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c55aeb15553c40f0850ee0618aa867731ce2c7af4522faabd4037b8f99801c83\"" May 16 16:37:25.485541 kubelet[2311]: E0516 16:37:25.485516 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:25.486093 containerd[1573]: time="2025-05-16T16:37:25.486057132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:997fdc0727583263628b95615d4474fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbd7aec48bb1b722ca587777914f374471edb3c1c4d0e86d1fb944f5d3c9f81e\"" May 16 16:37:25.486869 kubelet[2311]: E0516 16:37:25.486842 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:25.487772 containerd[1573]: time="2025-05-16T16:37:25.487643132Z" level=info msg="CreateContainer within sandbox \"c55aeb15553c40f0850ee0618aa867731ce2c7af4522faabd4037b8f99801c83\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 16:37:25.490886 containerd[1573]: time="2025-05-16T16:37:25.490737224Z" level=info msg="CreateContainer within sandbox \"cbd7aec48bb1b722ca587777914f374471edb3c1c4d0e86d1fb944f5d3c9f81e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 16:37:25.492407 containerd[1573]: time="2025-05-16T16:37:25.492357100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c8e6563a3cbafb181b8a06902a08f784fbd71826b1781cbc43e06fd830a3fec\"" May 16 
16:37:25.493233 kubelet[2311]: E0516 16:37:25.493201 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:25.494549 containerd[1573]: time="2025-05-16T16:37:25.494527037Z" level=info msg="CreateContainer within sandbox \"5c8e6563a3cbafb181b8a06902a08f784fbd71826b1781cbc43e06fd830a3fec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 16:37:25.504045 containerd[1573]: time="2025-05-16T16:37:25.504015300Z" level=info msg="Container 063521489523a97c4b7de9d682c893e32e7f49e6a5e06f9e23c9c191c042a6cc: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:25.504182 kubelet[2311]: W0516 16:37:25.504046 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 16 16:37:25.504182 kubelet[2311]: E0516 16:37:25.504103 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:25.508074 containerd[1573]: time="2025-05-16T16:37:25.508039701Z" level=info msg="Container 30bc5e66c41442fcbcc3b5d261e1467da0d53d80ddebbbff3743964f32b66af3: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:25.513734 containerd[1573]: time="2025-05-16T16:37:25.513708298Z" level=info msg="CreateContainer within sandbox \"c55aeb15553c40f0850ee0618aa867731ce2c7af4522faabd4037b8f99801c83\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"063521489523a97c4b7de9d682c893e32e7f49e6a5e06f9e23c9c191c042a6cc\"" May 16 
16:37:25.514414 containerd[1573]: time="2025-05-16T16:37:25.514342073Z" level=info msg="StartContainer for \"063521489523a97c4b7de9d682c893e32e7f49e6a5e06f9e23c9c191c042a6cc\"" May 16 16:37:25.514735 containerd[1573]: time="2025-05-16T16:37:25.514713005Z" level=info msg="Container 3a641ba5f2c594a284d3998d3d916ad6b096b2a69cdb2de7cb3a32ddc431271d: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:25.515404 containerd[1573]: time="2025-05-16T16:37:25.515370017Z" level=info msg="connecting to shim 063521489523a97c4b7de9d682c893e32e7f49e6a5e06f9e23c9c191c042a6cc" address="unix:///run/containerd/s/b4425c24af9a2157313e3e7de59b7790cf47b954af34b6e0126781ea8d8cb584" protocol=ttrpc version=3 May 16 16:37:25.522431 containerd[1573]: time="2025-05-16T16:37:25.522387264Z" level=info msg="CreateContainer within sandbox \"cbd7aec48bb1b722ca587777914f374471edb3c1c4d0e86d1fb944f5d3c9f81e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a641ba5f2c594a284d3998d3d916ad6b096b2a69cdb2de7cb3a32ddc431271d\"" May 16 16:37:25.522936 containerd[1573]: time="2025-05-16T16:37:25.522911103Z" level=info msg="StartContainer for \"3a641ba5f2c594a284d3998d3d916ad6b096b2a69cdb2de7cb3a32ddc431271d\"" May 16 16:37:25.523953 containerd[1573]: time="2025-05-16T16:37:25.523929891Z" level=info msg="connecting to shim 3a641ba5f2c594a284d3998d3d916ad6b096b2a69cdb2de7cb3a32ddc431271d" address="unix:///run/containerd/s/02995d8d83f02e77acf2eb0266ac19f5a35b6219d924da2e7951aa0d489c9210" protocol=ttrpc version=3 May 16 16:37:25.524323 containerd[1573]: time="2025-05-16T16:37:25.524287174Z" level=info msg="CreateContainer within sandbox \"5c8e6563a3cbafb181b8a06902a08f784fbd71826b1781cbc43e06fd830a3fec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"30bc5e66c41442fcbcc3b5d261e1467da0d53d80ddebbbff3743964f32b66af3\"" May 16 16:37:25.524834 containerd[1573]: time="2025-05-16T16:37:25.524800475Z" level=info msg="StartContainer for 
\"30bc5e66c41442fcbcc3b5d261e1467da0d53d80ddebbbff3743964f32b66af3\"" May 16 16:37:25.525718 containerd[1573]: time="2025-05-16T16:37:25.525693147Z" level=info msg="connecting to shim 30bc5e66c41442fcbcc3b5d261e1467da0d53d80ddebbbff3743964f32b66af3" address="unix:///run/containerd/s/3ad6fc842e6d3386f2ec7780cb281a92e07114dd3ec01e0b3405be8b8a07256a" protocol=ttrpc version=3 May 16 16:37:25.536872 systemd[1]: Started cri-containerd-063521489523a97c4b7de9d682c893e32e7f49e6a5e06f9e23c9c191c042a6cc.scope - libcontainer container 063521489523a97c4b7de9d682c893e32e7f49e6a5e06f9e23c9c191c042a6cc. May 16 16:37:25.542383 systemd[1]: Started cri-containerd-30bc5e66c41442fcbcc3b5d261e1467da0d53d80ddebbbff3743964f32b66af3.scope - libcontainer container 30bc5e66c41442fcbcc3b5d261e1467da0d53d80ddebbbff3743964f32b66af3. May 16 16:37:25.545609 systemd[1]: Started cri-containerd-3a641ba5f2c594a284d3998d3d916ad6b096b2a69cdb2de7cb3a32ddc431271d.scope - libcontainer container 3a641ba5f2c594a284d3998d3d916ad6b096b2a69cdb2de7cb3a32ddc431271d. 
May 16 16:37:25.651241 kubelet[2311]: W0516 16:37:25.651170 2311 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 16 16:37:25.651241 kubelet[2311]: E0516 16:37:25.651255 2311 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:26.010525 containerd[1573]: time="2025-05-16T16:37:26.010292876Z" level=info msg="StartContainer for \"30bc5e66c41442fcbcc3b5d261e1467da0d53d80ddebbbff3743964f32b66af3\" returns successfully" May 16 16:37:26.010525 containerd[1573]: time="2025-05-16T16:37:26.010425907Z" level=info msg="StartContainer for \"3a641ba5f2c594a284d3998d3d916ad6b096b2a69cdb2de7cb3a32ddc431271d\" returns successfully" May 16 16:37:26.010895 containerd[1573]: time="2025-05-16T16:37:26.010770254Z" level=info msg="StartContainer for \"063521489523a97c4b7de9d682c893e32e7f49e6a5e06f9e23c9c191c042a6cc\" returns successfully" May 16 16:37:26.192884 kubelet[2311]: I0516 16:37:26.192843 2311 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:37:26.331556 kubelet[2311]: E0516 16:37:26.331112 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:37:26.331835 kubelet[2311]: E0516 16:37:26.331786 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:26.334463 kubelet[2311]: E0516 16:37:26.334268 2311 kubelet.go:3190] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:37:26.334463 kubelet[2311]: E0516 16:37:26.334413 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:26.338225 kubelet[2311]: E0516 16:37:26.338199 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:37:26.338321 kubelet[2311]: E0516 16:37:26.338295 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:26.421178 kubelet[2311]: E0516 16:37:26.421107 2311 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 16:37:26.513283 kubelet[2311]: I0516 16:37:26.513249 2311 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 16:37:26.513283 kubelet[2311]: E0516 16:37:26.513278 2311 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 16:37:26.521077 kubelet[2311]: E0516 16:37:26.521046 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:26.621456 kubelet[2311]: E0516 16:37:26.621325 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:26.722048 kubelet[2311]: E0516 16:37:26.722001 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:26.822999 kubelet[2311]: E0516 16:37:26.822964 2311 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" May 16 16:37:26.923424 kubelet[2311]: E0516 16:37:26.923373 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:27.023850 kubelet[2311]: E0516 16:37:27.023805 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:27.124250 kubelet[2311]: E0516 16:37:27.124202 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:27.224781 kubelet[2311]: E0516 16:37:27.224643 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:27.325541 kubelet[2311]: E0516 16:37:27.325497 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:27.339636 kubelet[2311]: E0516 16:37:27.339605 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:37:27.339793 kubelet[2311]: E0516 16:37:27.339768 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:27.340371 kubelet[2311]: E0516 16:37:27.340344 2311 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:37:27.340480 kubelet[2311]: E0516 16:37:27.340465 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:27.426164 kubelet[2311]: E0516 16:37:27.426118 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 
16:37:27.526713 kubelet[2311]: E0516 16:37:27.526582 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:27.627751 kubelet[2311]: E0516 16:37:27.627709 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:27.728373 kubelet[2311]: E0516 16:37:27.728335 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:27.829240 kubelet[2311]: E0516 16:37:27.829132 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:27.929626 kubelet[2311]: E0516 16:37:27.929579 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:28.030154 kubelet[2311]: E0516 16:37:28.030100 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:28.130585 kubelet[2311]: E0516 16:37:28.130552 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:28.206969 kubelet[2311]: I0516 16:37:28.206932 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 16:37:28.216009 kubelet[2311]: I0516 16:37:28.214194 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 16:37:28.219474 kubelet[2311]: I0516 16:37:28.219451 2311 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 16:37:28.296002 kubelet[2311]: I0516 16:37:28.295962 2311 apiserver.go:52] "Watching apiserver" May 16 16:37:28.297831 kubelet[2311]: E0516 16:37:28.297788 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:28.300363 systemd[1]: Reload requested from client PID 2581 ('systemctl') (unit session-7.scope)... May 16 16:37:28.300380 systemd[1]: Reloading... May 16 16:37:28.306342 kubelet[2311]: I0516 16:37:28.306323 2311 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 16:37:28.342203 kubelet[2311]: E0516 16:37:28.342176 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:28.344538 kubelet[2311]: E0516 16:37:28.344513 2311 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:28.390818 zram_generator::config[2630]: No configuration found. May 16 16:37:28.473617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 16:37:28.600849 systemd[1]: Reloading finished in 300 ms. May 16 16:37:28.631459 kubelet[2311]: I0516 16:37:28.631420 2311 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 16:37:28.631553 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:37:28.644198 systemd[1]: kubelet.service: Deactivated successfully. May 16 16:37:28.644501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:37:28.644554 systemd[1]: kubelet.service: Consumed 850ms CPU time, 131.8M memory peak. May 16 16:37:28.647503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:37:28.904722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
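During the reload above, systemd warns that `docker.socket` line 6 references a path under the legacy `/var/run/` directory and rewrites it to `/run/docker.sock` at runtime. The persistent fix is to update the unit file itself; a sketch of just the relevant stanza (the rest of the unit is omitted, not elided from this log):

```ini
[Socket]
# /var/run is a legacy symlink to /run; reference the real path
# directly so systemd stops rewriting it on every reload.
ListenStream=/run/docker.sock
```

A drop-in under `/etc/systemd/system/docker.socket.d/` overriding `ListenStream=` works as well and survives package updates.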
May 16 16:37:28.914068 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 16:37:28.957558 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:37:28.957558 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 16:37:28.957558 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:37:28.957901 kubelet[2669]: I0516 16:37:28.957738 2669 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 16:37:28.965539 kubelet[2669]: I0516 16:37:28.965498 2669 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 16:37:28.965539 kubelet[2669]: I0516 16:37:28.965531 2669 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 16:37:28.965886 kubelet[2669]: I0516 16:37:28.965855 2669 server.go:954] "Client rotation is on, will bootstrap in background" May 16 16:37:28.967333 kubelet[2669]: I0516 16:37:28.967307 2669 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
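The deprecation warnings above ask for `--container-runtime-endpoint` and `--volume-plugin-dir` to move into the file passed via `--config`. A sketch of the equivalent KubeletConfiguration fragment — field names are taken from the upstream v1beta1 schema and should be verified against this kubelet version; `--pod-infra-container-image` has no config-file equivalent and is instead configured on the runtime side (containerd's sandbox image setting):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint:
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Replaces --volume-plugin-dir:
volumePluginDir: /var/lib/kubelet/volumeplugins
```

With the fields in the config file, the corresponding flags can be dropped from `KUBELET_EXTRA_ARGS` (which the first entry above notes is currently unset anyway).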
May 16 16:37:28.970023 kubelet[2669]: I0516 16:37:28.969972 2669 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 16:37:28.975148 kubelet[2669]: I0516 16:37:28.975059 2669 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 16:37:28.980684 kubelet[2669]: I0516 16:37:28.980644 2669 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 16:37:28.980949 kubelet[2669]: I0516 16:37:28.980906 2669 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 16:37:28.981105 kubelet[2669]: I0516 16:37:28.980937 2669 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 16:37:28.981177 kubelet[2669]: I0516 16:37:28.981106 2669 topology_manager.go:138] "Creating topology manager with none policy" May 16 16:37:28.981177 kubelet[2669]: I0516 16:37:28.981114 2669 container_manager_linux.go:304] "Creating device plugin manager" May 16 16:37:28.981177 kubelet[2669]: I0516 16:37:28.981163 2669 state_mem.go:36] "Initialized new in-memory state store" May 16 16:37:28.981343 kubelet[2669]: I0516 16:37:28.981310 2669 kubelet.go:446] "Attempting to sync node with API server" May 16 16:37:28.981343 kubelet[2669]: I0516 16:37:28.981339 2669 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 16:37:28.981893 kubelet[2669]: I0516 16:37:28.981373 2669 kubelet.go:352] "Adding apiserver pod source" May 16 16:37:28.981893 kubelet[2669]: I0516 16:37:28.981391 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 16:37:28.982110 kubelet[2669]: I0516 16:37:28.982077 2669 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 16:37:28.983352 kubelet[2669]: I0516 16:37:28.982428 2669 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 16:37:28.983352 kubelet[2669]: I0516 16:37:28.982926 2669 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 16:37:28.983352 kubelet[2669]: I0516 16:37:28.982983 2669 server.go:1287] "Started kubelet" May 16 16:37:28.983436 kubelet[2669]: I0516 16:37:28.983371 2669 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 16:37:28.983748 kubelet[2669]: I0516 16:37:28.983697 
2669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 16:37:28.984058 kubelet[2669]: I0516 16:37:28.984035 2669 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 16:37:28.984538 kubelet[2669]: I0516 16:37:28.984510 2669 server.go:479] "Adding debug handlers to kubelet server" May 16 16:37:28.986360 kubelet[2669]: I0516 16:37:28.986333 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 16:37:28.987987 kubelet[2669]: I0516 16:37:28.987959 2669 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 16:37:28.989790 kubelet[2669]: I0516 16:37:28.989222 2669 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 16:37:28.989790 kubelet[2669]: E0516 16:37:28.989553 2669 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:28.990689 kubelet[2669]: I0516 16:37:28.990666 2669 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 16:37:28.991153 kubelet[2669]: I0516 16:37:28.991140 2669 reconciler.go:26] "Reconciler: start to sync state" May 16 16:37:28.993737 kubelet[2669]: I0516 16:37:28.993709 2669 factory.go:221] Registration of the systemd container factory successfully May 16 16:37:28.993827 kubelet[2669]: I0516 16:37:28.993800 2669 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 16:37:28.997685 kubelet[2669]: I0516 16:37:28.996599 2669 factory.go:221] Registration of the containerd container factory successfully May 16 16:37:29.005810 kubelet[2669]: E0516 16:37:29.005775 2669 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 16:37:29.008651 kubelet[2669]: I0516 16:37:29.008605 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 16:37:29.010081 kubelet[2669]: I0516 16:37:29.010053 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 16:37:29.010127 kubelet[2669]: I0516 16:37:29.010109 2669 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 16:37:29.010160 kubelet[2669]: I0516 16:37:29.010129 2669 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 16:37:29.010160 kubelet[2669]: I0516 16:37:29.010138 2669 kubelet.go:2382] "Starting kubelet main sync loop" May 16 16:37:29.010233 kubelet[2669]: E0516 16:37:29.010212 2669 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 16:37:29.030628 kubelet[2669]: I0516 16:37:29.030605 2669 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 16:37:29.030628 kubelet[2669]: I0516 16:37:29.030619 2669 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 16:37:29.030720 kubelet[2669]: I0516 16:37:29.030636 2669 state_mem.go:36] "Initialized new in-memory state store" May 16 16:37:29.030793 kubelet[2669]: I0516 16:37:29.030775 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 16:37:29.030829 kubelet[2669]: I0516 16:37:29.030788 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 16:37:29.030829 kubelet[2669]: I0516 16:37:29.030805 2669 policy_none.go:49] "None policy: Start" May 16 16:37:29.030829 kubelet[2669]: I0516 16:37:29.030822 2669 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 16:37:29.030829 kubelet[2669]: I0516 16:37:29.030831 2669 state_mem.go:35] "Initializing new in-memory state 
store" May 16 16:37:29.030936 kubelet[2669]: I0516 16:37:29.030921 2669 state_mem.go:75] "Updated machine memory state" May 16 16:37:29.034850 kubelet[2669]: I0516 16:37:29.034830 2669 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 16:37:29.035010 kubelet[2669]: I0516 16:37:29.034996 2669 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 16:37:29.035044 kubelet[2669]: I0516 16:37:29.035010 2669 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 16:37:29.035281 kubelet[2669]: I0516 16:37:29.035267 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 16:37:29.035984 kubelet[2669]: E0516 16:37:29.035929 2669 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 16:37:29.111298 kubelet[2669]: I0516 16:37:29.111260 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 16:37:29.111416 kubelet[2669]: I0516 16:37:29.111333 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 16:37:29.111416 kubelet[2669]: I0516 16:37:29.111377 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 16:37:29.116823 kubelet[2669]: E0516 16:37:29.116774 2669 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 16 16:37:29.117286 kubelet[2669]: E0516 16:37:29.117258 2669 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 16:37:29.118685 kubelet[2669]: E0516 16:37:29.117517 2669 kubelet.go:3196] "Failed creating a 
mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 16:37:29.139177 kubelet[2669]: I0516 16:37:29.139155 2669 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:37:29.143962 kubelet[2669]: I0516 16:37:29.143910 2669 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 16 16:37:29.144117 kubelet[2669]: I0516 16:37:29.143981 2669 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 16:37:29.192617 kubelet[2669]: I0516 16:37:29.192479 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:29.192617 kubelet[2669]: I0516 16:37:29.192512 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:29.192617 kubelet[2669]: I0516 16:37:29.192532 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/997fdc0727583263628b95615d4474fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"997fdc0727583263628b95615d4474fe\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:29.192617 kubelet[2669]: I0516 16:37:29.192551 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/997fdc0727583263628b95615d4474fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"997fdc0727583263628b95615d4474fe\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:29.192617 kubelet[2669]: I0516 16:37:29.192566 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:29.192845 kubelet[2669]: I0516 16:37:29.192582 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:29.192845 kubelet[2669]: I0516 16:37:29.192596 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 16:37:29.192845 kubelet[2669]: I0516 16:37:29.192611 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/997fdc0727583263628b95615d4474fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"997fdc0727583263628b95615d4474fe\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:29.192845 kubelet[2669]: I0516 16:37:29.192629 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:29.298707 sudo[2706]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 16:37:29.299012 sudo[2706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 16 16:37:29.417686 kubelet[2669]: E0516 16:37:29.417637 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:29.417812 kubelet[2669]: E0516 16:37:29.417641 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:29.417834 kubelet[2669]: E0516 16:37:29.417803 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:29.748716 sudo[2706]: pam_unix(sudo:session): session closed for user root May 16 16:37:29.982518 kubelet[2669]: I0516 16:37:29.982473 2669 apiserver.go:52] "Watching apiserver" May 16 16:37:29.991268 kubelet[2669]: I0516 16:37:29.991237 2669 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 16:37:30.004702 kubelet[2669]: I0516 16:37:30.004556 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.004522288 podStartE2EDuration="2.004522288s" podCreationTimestamp="2025-05-16 16:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:37:30.003993191 +0000 UTC m=+1.085579723" 
watchObservedRunningTime="2025-05-16 16:37:30.004522288 +0000 UTC m=+1.086108810" May 16 16:37:30.011210 kubelet[2669]: I0516 16:37:30.011166 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.011157401 podStartE2EDuration="2.011157401s" podCreationTimestamp="2025-05-16 16:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:37:30.01095943 +0000 UTC m=+1.092545952" watchObservedRunningTime="2025-05-16 16:37:30.011157401 +0000 UTC m=+1.092743913" May 16 16:37:30.017791 kubelet[2669]: I0516 16:37:30.017547 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.017530074 podStartE2EDuration="2.017530074s" podCreationTimestamp="2025-05-16 16:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:37:30.017420586 +0000 UTC m=+1.099007108" watchObservedRunningTime="2025-05-16 16:37:30.017530074 +0000 UTC m=+1.099116586" May 16 16:37:30.018252 kubelet[2669]: I0516 16:37:30.018235 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 16:37:30.018479 kubelet[2669]: I0516 16:37:30.018374 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 16:37:30.018681 kubelet[2669]: E0516 16:37:30.018564 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:30.023740 kubelet[2669]: E0516 16:37:30.023705 2669 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 
16:37:30.023902 kubelet[2669]: E0516 16:37:30.023874 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:30.026060 kubelet[2669]: E0516 16:37:30.026037 2669 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 16:37:30.026191 kubelet[2669]: E0516 16:37:30.026149 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:30.976415 sudo[1763]: pam_unix(sudo:session): session closed for user root May 16 16:37:30.977648 sshd[1762]: Connection closed by 10.0.0.1 port 36348 May 16 16:37:30.978040 sshd-session[1759]: pam_unix(sshd:session): session closed for user core May 16 16:37:30.982019 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:36348.service: Deactivated successfully. May 16 16:37:30.984368 systemd[1]: session-7.scope: Deactivated successfully. May 16 16:37:30.984576 systemd[1]: session-7.scope: Consumed 4.866s CPU time, 264M memory peak. May 16 16:37:30.985799 systemd-logind[1513]: Session 7 logged out. Waiting for processes to exit. May 16 16:37:30.987360 systemd-logind[1513]: Removed session 7. 
May 16 16:37:31.019881 kubelet[2669]: E0516 16:37:31.019851 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:31.020210 kubelet[2669]: E0516 16:37:31.019982 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:33.134858 kubelet[2669]: I0516 16:37:33.134812 2669 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 16:37:33.136023 containerd[1573]: time="2025-05-16T16:37:33.135201050Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 16:37:33.136343 kubelet[2669]: I0516 16:37:33.136093 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 16:37:34.277913 systemd[1]: Created slice kubepods-besteffort-podefaaf9ea_afda_42f2_bdb2_56421e3feb7c.slice - libcontainer container kubepods-besteffort-podefaaf9ea_afda_42f2_bdb2_56421e3feb7c.slice. May 16 16:37:34.312693 systemd[1]: Created slice kubepods-burstable-pod14fc5109_adcd_450e_98b0_b75f3d15ff78.slice - libcontainer container kubepods-burstable-pod14fc5109_adcd_450e_98b0_b75f3d15ff78.slice. 
May 16 16:37:34.328543 kubelet[2669]: I0516 16:37:34.328500 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-run\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.328543 kubelet[2669]: I0516 16:37:34.328533 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-cgroup\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329008 kubelet[2669]: I0516 16:37:34.328552 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-etc-cni-netd\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329008 kubelet[2669]: I0516 16:37:34.328567 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-host-proc-sys-kernel\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329008 kubelet[2669]: I0516 16:37:34.328584 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/efaaf9ea-afda-42f2-bdb2-56421e3feb7c-kube-proxy\") pod \"kube-proxy-r4mfv\" (UID: \"efaaf9ea-afda-42f2-bdb2-56421e3feb7c\") " pod="kube-system/kube-proxy-r4mfv" May 16 16:37:34.329008 kubelet[2669]: I0516 16:37:34.328617 2669 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-xtables-lock\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329008 kubelet[2669]: I0516 16:37:34.328636 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-bpf-maps\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329008 kubelet[2669]: I0516 16:37:34.328659 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cni-path\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329178 kubelet[2669]: I0516 16:37:34.328695 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-host-proc-sys-net\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329178 kubelet[2669]: I0516 16:37:34.328715 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efaaf9ea-afda-42f2-bdb2-56421e3feb7c-xtables-lock\") pod \"kube-proxy-r4mfv\" (UID: \"efaaf9ea-afda-42f2-bdb2-56421e3feb7c\") " pod="kube-system/kube-proxy-r4mfv" May 16 16:37:34.329178 kubelet[2669]: I0516 16:37:34.328738 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mc56\" (UniqueName: 
\"kubernetes.io/projected/efaaf9ea-afda-42f2-bdb2-56421e3feb7c-kube-api-access-5mc56\") pod \"kube-proxy-r4mfv\" (UID: \"efaaf9ea-afda-42f2-bdb2-56421e3feb7c\") " pod="kube-system/kube-proxy-r4mfv" May 16 16:37:34.329178 kubelet[2669]: I0516 16:37:34.328753 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-hostproc\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329178 kubelet[2669]: I0516 16:37:34.328774 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5r8k\" (UniqueName: \"kubernetes.io/projected/14fc5109-adcd-450e-98b0-b75f3d15ff78-kube-api-access-n5r8k\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329288 kubelet[2669]: I0516 16:37:34.328819 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efaaf9ea-afda-42f2-bdb2-56421e3feb7c-lib-modules\") pod \"kube-proxy-r4mfv\" (UID: \"efaaf9ea-afda-42f2-bdb2-56421e3feb7c\") " pod="kube-system/kube-proxy-r4mfv" May 16 16:37:34.329288 kubelet[2669]: I0516 16:37:34.328847 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-lib-modules\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329288 kubelet[2669]: I0516 16:37:34.328866 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14fc5109-adcd-450e-98b0-b75f3d15ff78-hubble-tls\") pod \"cilium-q2qck\" (UID: 
\"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329288 kubelet[2669]: I0516 16:37:34.328881 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14fc5109-adcd-450e-98b0-b75f3d15ff78-clustermesh-secrets\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.329288 kubelet[2669]: I0516 16:37:34.328899 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-config-path\") pod \"cilium-q2qck\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " pod="kube-system/cilium-q2qck" May 16 16:37:34.343398 systemd[1]: Created slice kubepods-besteffort-pod9bed3429_0144_4199_a882_8a29811d275c.slice - libcontainer container kubepods-besteffort-pod9bed3429_0144_4199_a882_8a29811d275c.slice. 
May 16 16:37:34.429540 kubelet[2669]: I0516 16:37:34.429497 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr78s\" (UniqueName: \"kubernetes.io/projected/9bed3429-0144-4199-a882-8a29811d275c-kube-api-access-gr78s\") pod \"cilium-operator-6c4d7847fc-bh54c\" (UID: \"9bed3429-0144-4199-a882-8a29811d275c\") " pod="kube-system/cilium-operator-6c4d7847fc-bh54c" May 16 16:37:34.429540 kubelet[2669]: I0516 16:37:34.429555 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bed3429-0144-4199-a882-8a29811d275c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bh54c\" (UID: \"9bed3429-0144-4199-a882-8a29811d275c\") " pod="kube-system/cilium-operator-6c4d7847fc-bh54c" May 16 16:37:34.591909 kubelet[2669]: E0516 16:37:34.591802 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:34.592452 containerd[1573]: time="2025-05-16T16:37:34.592386806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r4mfv,Uid:efaaf9ea-afda-42f2-bdb2-56421e3feb7c,Namespace:kube-system,Attempt:0,}" May 16 16:37:34.616238 kubelet[2669]: E0516 16:37:34.616187 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:34.616901 containerd[1573]: time="2025-05-16T16:37:34.616864192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2qck,Uid:14fc5109-adcd-450e-98b0-b75f3d15ff78,Namespace:kube-system,Attempt:0,}" May 16 16:37:34.647867 kubelet[2669]: E0516 16:37:34.647807 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:34.648464 containerd[1573]: time="2025-05-16T16:37:34.648402893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bh54c,Uid:9bed3429-0144-4199-a882-8a29811d275c,Namespace:kube-system,Attempt:0,}" May 16 16:37:34.825700 containerd[1573]: time="2025-05-16T16:37:34.824861068Z" level=info msg="connecting to shim ce6e9ae4c9175217bbe73c8743667beb9fa7ae791661610f2e7cf54205ca9845" address="unix:///run/containerd/s/c04aee3984824214871a41b625724db608f1781ee1073d87a27b829b49dd09a2" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:34.834238 containerd[1573]: time="2025-05-16T16:37:34.834202302Z" level=info msg="connecting to shim 452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8" address="unix:///run/containerd/s/60a818ad81d57451b3f6e207c690c3de99199d657e73a1039e930be88cee8780" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:34.835309 containerd[1573]: time="2025-05-16T16:37:34.835260790Z" level=info msg="connecting to shim 2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d" address="unix:///run/containerd/s/cb613cc26b4773e11f7cb865450738c02747db3c0e07a712829356f25d1d087c" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:34.862824 systemd[1]: Started cri-containerd-ce6e9ae4c9175217bbe73c8743667beb9fa7ae791661610f2e7cf54205ca9845.scope - libcontainer container ce6e9ae4c9175217bbe73c8743667beb9fa7ae791661610f2e7cf54205ca9845. May 16 16:37:34.868823 systemd[1]: Started cri-containerd-2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d.scope - libcontainer container 2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d. May 16 16:37:34.870512 systemd[1]: Started cri-containerd-452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8.scope - libcontainer container 452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8. 
May 16 16:37:34.912775 containerd[1573]: time="2025-05-16T16:37:34.912648377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2qck,Uid:14fc5109-adcd-450e-98b0-b75f3d15ff78,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\"" May 16 16:37:34.913416 kubelet[2669]: E0516 16:37:34.913367 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:34.915070 containerd[1573]: time="2025-05-16T16:37:34.915044822Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 16:37:34.917223 containerd[1573]: time="2025-05-16T16:37:34.917198593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r4mfv,Uid:efaaf9ea-afda-42f2-bdb2-56421e3feb7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce6e9ae4c9175217bbe73c8743667beb9fa7ae791661610f2e7cf54205ca9845\"" May 16 16:37:34.917653 kubelet[2669]: E0516 16:37:34.917628 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:34.920612 containerd[1573]: time="2025-05-16T16:37:34.920569610Z" level=info msg="CreateContainer within sandbox \"ce6e9ae4c9175217bbe73c8743667beb9fa7ae791661610f2e7cf54205ca9845\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 16:37:34.930444 containerd[1573]: time="2025-05-16T16:37:34.930397615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bh54c,Uid:9bed3429-0144-4199-a882-8a29811d275c,Namespace:kube-system,Attempt:0,} returns sandbox id \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\"" May 16 16:37:34.934370 kubelet[2669]: E0516 16:37:34.934350 2669 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:34.935543 containerd[1573]: time="2025-05-16T16:37:34.935128418Z" level=info msg="Container 715f1991ffde03b705cc0729331f9bbbcf9582ae3ca4fbd00a5345d2906fa00c: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:34.943467 containerd[1573]: time="2025-05-16T16:37:34.943425422Z" level=info msg="CreateContainer within sandbox \"ce6e9ae4c9175217bbe73c8743667beb9fa7ae791661610f2e7cf54205ca9845\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"715f1991ffde03b705cc0729331f9bbbcf9582ae3ca4fbd00a5345d2906fa00c\"" May 16 16:37:34.944076 containerd[1573]: time="2025-05-16T16:37:34.943988569Z" level=info msg="StartContainer for \"715f1991ffde03b705cc0729331f9bbbcf9582ae3ca4fbd00a5345d2906fa00c\"" May 16 16:37:34.945318 containerd[1573]: time="2025-05-16T16:37:34.945286646Z" level=info msg="connecting to shim 715f1991ffde03b705cc0729331f9bbbcf9582ae3ca4fbd00a5345d2906fa00c" address="unix:///run/containerd/s/c04aee3984824214871a41b625724db608f1781ee1073d87a27b829b49dd09a2" protocol=ttrpc version=3 May 16 16:37:34.970804 systemd[1]: Started cri-containerd-715f1991ffde03b705cc0729331f9bbbcf9582ae3ca4fbd00a5345d2906fa00c.scope - libcontainer container 715f1991ffde03b705cc0729331f9bbbcf9582ae3ca4fbd00a5345d2906fa00c. 
May 16 16:37:35.012321 containerd[1573]: time="2025-05-16T16:37:35.012271390Z" level=info msg="StartContainer for \"715f1991ffde03b705cc0729331f9bbbcf9582ae3ca4fbd00a5345d2906fa00c\" returns successfully" May 16 16:37:35.029535 kubelet[2669]: E0516 16:37:35.029505 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:35.039503 kubelet[2669]: I0516 16:37:35.038666 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r4mfv" podStartSLOduration=1.038649545 podStartE2EDuration="1.038649545s" podCreationTimestamp="2025-05-16 16:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:37:35.038059292 +0000 UTC m=+6.119645814" watchObservedRunningTime="2025-05-16 16:37:35.038649545 +0000 UTC m=+6.120236067" May 16 16:37:35.106127 kubelet[2669]: E0516 16:37:35.106059 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:36.034548 kubelet[2669]: E0516 16:37:36.034518 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:38.358621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500590736.mount: Deactivated successfully. 
May 16 16:37:39.695796 kubelet[2669]: E0516 16:37:39.695709 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:39.787174 kubelet[2669]: E0516 16:37:39.787096 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:40.040388 kubelet[2669]: E0516 16:37:40.040267 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:41.451843 containerd[1573]: time="2025-05-16T16:37:41.451752668Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:41.452507 containerd[1573]: time="2025-05-16T16:37:41.452452173Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 16:37:41.453547 containerd[1573]: time="2025-05-16T16:37:41.453525963Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:41.454862 containerd[1573]: time="2025-05-16T16:37:41.454828051Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.539599636s" May 16 16:37:41.454862 containerd[1573]: 
time="2025-05-16T16:37:41.454857785Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 16:37:41.456010 containerd[1573]: time="2025-05-16T16:37:41.455967958Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 16:37:41.457165 containerd[1573]: time="2025-05-16T16:37:41.457132372Z" level=info msg="CreateContainer within sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 16:37:41.465787 containerd[1573]: time="2025-05-16T16:37:41.465751291Z" level=info msg="Container 7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:41.469696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2306134061.mount: Deactivated successfully. 
May 16 16:37:41.471667 containerd[1573]: time="2025-05-16T16:37:41.471627040Z" level=info msg="CreateContainer within sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\"" May 16 16:37:41.472105 containerd[1573]: time="2025-05-16T16:37:41.472067712Z" level=info msg="StartContainer for \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\"" May 16 16:37:41.472798 containerd[1573]: time="2025-05-16T16:37:41.472777973Z" level=info msg="connecting to shim 7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe" address="unix:///run/containerd/s/cb613cc26b4773e11f7cb865450738c02747db3c0e07a712829356f25d1d087c" protocol=ttrpc version=3 May 16 16:37:41.502805 systemd[1]: Started cri-containerd-7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe.scope - libcontainer container 7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe. May 16 16:37:41.532710 containerd[1573]: time="2025-05-16T16:37:41.532647112Z" level=info msg="StartContainer for \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\" returns successfully" May 16 16:37:41.543721 systemd[1]: cri-containerd-7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe.scope: Deactivated successfully. 
May 16 16:37:41.544826 containerd[1573]: time="2025-05-16T16:37:41.544798410Z" level=info msg="received exit event container_id:\"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\" id:\"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\" pid:3091 exited_at:{seconds:1747413461 nanos:544399399}" May 16 16:37:41.544899 containerd[1573]: time="2025-05-16T16:37:41.544849326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\" id:\"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\" pid:3091 exited_at:{seconds:1747413461 nanos:544399399}" May 16 16:37:41.566431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe-rootfs.mount: Deactivated successfully. May 16 16:37:42.045146 kubelet[2669]: E0516 16:37:42.045111 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:42.047906 containerd[1573]: time="2025-05-16T16:37:42.047846302Z" level=info msg="CreateContainer within sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 16:37:42.058929 containerd[1573]: time="2025-05-16T16:37:42.058877726Z" level=info msg="Container a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:42.066065 containerd[1573]: time="2025-05-16T16:37:42.066012064Z" level=info msg="CreateContainer within sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\"" May 16 16:37:42.066748 containerd[1573]: 
time="2025-05-16T16:37:42.066468210Z" level=info msg="StartContainer for \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\"" May 16 16:37:42.067286 containerd[1573]: time="2025-05-16T16:37:42.067257087Z" level=info msg="connecting to shim a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3" address="unix:///run/containerd/s/cb613cc26b4773e11f7cb865450738c02747db3c0e07a712829356f25d1d087c" protocol=ttrpc version=3 May 16 16:37:42.090812 systemd[1]: Started cri-containerd-a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3.scope - libcontainer container a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3. May 16 16:37:42.119244 containerd[1573]: time="2025-05-16T16:37:42.119200740Z" level=info msg="StartContainer for \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\" returns successfully" May 16 16:37:42.132104 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 16:37:42.132394 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 16:37:42.132843 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 16:37:42.134438 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 16:37:42.134809 systemd[1]: cri-containerd-a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3.scope: Deactivated successfully. 
May 16 16:37:42.135799 containerd[1573]: time="2025-05-16T16:37:42.135751721Z" level=info msg="received exit event container_id:\"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\" id:\"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\" pid:3135 exited_at:{seconds:1747413462 nanos:135465430}" May 16 16:37:42.136368 containerd[1573]: time="2025-05-16T16:37:42.136335970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\" id:\"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\" pid:3135 exited_at:{seconds:1747413462 nanos:135465430}" May 16 16:37:42.167365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 16:37:43.023643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232424859.mount: Deactivated successfully. May 16 16:37:43.060364 kubelet[2669]: E0516 16:37:43.059367 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:43.065440 containerd[1573]: time="2025-05-16T16:37:43.065334966Z" level=info msg="CreateContainer within sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 16:37:43.120188 containerd[1573]: time="2025-05-16T16:37:43.119249856Z" level=info msg="Container 702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:43.123623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3606914929.mount: Deactivated successfully. 
May 16 16:37:43.132856 containerd[1573]: time="2025-05-16T16:37:43.132810592Z" level=info msg="CreateContainer within sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\"" May 16 16:37:43.133567 containerd[1573]: time="2025-05-16T16:37:43.133340802Z" level=info msg="StartContainer for \"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\"" May 16 16:37:43.135364 containerd[1573]: time="2025-05-16T16:37:43.135306787Z" level=info msg="connecting to shim 702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763" address="unix:///run/containerd/s/cb613cc26b4773e11f7cb865450738c02747db3c0e07a712829356f25d1d087c" protocol=ttrpc version=3 May 16 16:37:43.157890 systemd[1]: Started cri-containerd-702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763.scope - libcontainer container 702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763. May 16 16:37:43.201165 systemd[1]: cri-containerd-702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763.scope: Deactivated successfully. 
May 16 16:37:43.202442 containerd[1573]: time="2025-05-16T16:37:43.202404756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\" id:\"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\" pid:3199 exited_at:{seconds:1747413463 nanos:202142094}"
May 16 16:37:43.213703 containerd[1573]: time="2025-05-16T16:37:43.213648620Z" level=info msg="received exit event container_id:\"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\" id:\"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\" pid:3199 exited_at:{seconds:1747413463 nanos:202142094}"
May 16 16:37:43.223942 containerd[1573]: time="2025-05-16T16:37:43.223894719Z" level=info msg="StartContainer for \"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\" returns successfully"
May 16 16:37:43.722643 containerd[1573]: time="2025-05-16T16:37:43.722567684Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:43.723256 containerd[1573]: time="2025-05-16T16:37:43.723231478Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 16 16:37:43.724406 containerd[1573]: time="2025-05-16T16:37:43.724371511Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:43.725513 containerd[1573]: time="2025-05-16T16:37:43.725467256Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.269463623s"
May 16 16:37:43.725513 containerd[1573]: time="2025-05-16T16:37:43.725511703Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 16 16:37:43.727572 containerd[1573]: time="2025-05-16T16:37:43.727537009Z" level=info msg="CreateContainer within sandbox \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 16 16:37:43.734907 containerd[1573]: time="2025-05-16T16:37:43.734852980Z" level=info msg="Container bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0: CDI devices from CRI Config.CDIDevices: []"
May 16 16:37:43.743698 containerd[1573]: time="2025-05-16T16:37:43.743626960Z" level=info msg="CreateContainer within sandbox \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\""
May 16 16:37:43.744152 containerd[1573]: time="2025-05-16T16:37:43.744124450Z" level=info msg="StartContainer for \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\""
May 16 16:37:43.744970 containerd[1573]: time="2025-05-16T16:37:43.744947958Z" level=info msg="connecting to shim bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0" address="unix:///run/containerd/s/60a818ad81d57451b3f6e207c690c3de99199d657e73a1039e930be88cee8780" protocol=ttrpc version=3
May 16 16:37:43.765799 systemd[1]: Started cri-containerd-bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0.scope - libcontainer container bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0.scope.
May 16 16:37:43.796594 containerd[1573]: time="2025-05-16T16:37:43.796535002Z" level=info msg="StartContainer for \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" returns successfully"
May 16 16:37:44.065808 kubelet[2669]: E0516 16:37:44.065579 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:44.073694 kubelet[2669]: E0516 16:37:44.073314 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:44.076952 containerd[1573]: time="2025-05-16T16:37:44.076911450Z" level=info msg="CreateContainer within sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 16:37:44.288493 containerd[1573]: time="2025-05-16T16:37:44.288436447Z" level=info msg="Container 78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209: CDI devices from CRI Config.CDIDevices: []"
May 16 16:37:44.323137 kubelet[2669]: I0516 16:37:44.322770 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bh54c" podStartSLOduration=1.531302012 podStartE2EDuration="10.322660718s" podCreationTimestamp="2025-05-16 16:37:34 +0000 UTC" firstStartedPulling="2025-05-16 16:37:34.934799005 +0000 UTC m=+6.016385527" lastFinishedPulling="2025-05-16 16:37:43.726157711 +0000 UTC m=+14.807744233" observedRunningTime="2025-05-16 16:37:44.200063159 +0000 UTC m=+15.281649681" watchObservedRunningTime="2025-05-16 16:37:44.322660718 +0000 UTC m=+15.404247240"
May 16 16:37:44.325318 containerd[1573]: time="2025-05-16T16:37:44.325270343Z" level=info msg="CreateContainer within sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\""
May 16 16:37:44.326093 containerd[1573]: time="2025-05-16T16:37:44.326066608Z" level=info msg="StartContainer for \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\""
May 16 16:37:44.326920 containerd[1573]: time="2025-05-16T16:37:44.326891647Z" level=info msg="connecting to shim 78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209" address="unix:///run/containerd/s/cb613cc26b4773e11f7cb865450738c02747db3c0e07a712829356f25d1d087c" protocol=ttrpc version=3
May 16 16:37:44.372907 systemd[1]: Started cri-containerd-78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209.scope - libcontainer container 78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209.scope.
May 16 16:37:44.410803 systemd[1]: cri-containerd-78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209.scope: Deactivated successfully.
May 16 16:37:44.416054 containerd[1573]: time="2025-05-16T16:37:44.416014977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\" id:\"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\" pid:3279 exited_at:{seconds:1747413464 nanos:412637051}" May 16 16:37:44.417191 containerd[1573]: time="2025-05-16T16:37:44.412690423Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14fc5109_adcd_450e_98b0_b75f3d15ff78.slice/cri-containerd-78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209.scope/memory.events\": no such file or directory" May 16 16:37:44.459019 containerd[1573]: time="2025-05-16T16:37:44.458956130Z" level=info msg="received exit event container_id:\"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\" id:\"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\" pid:3279 exited_at:{seconds:1747413464 nanos:412637051}" May 16 16:37:44.467822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3064120454.mount: Deactivated successfully. May 16 16:37:44.468061 containerd[1573]: time="2025-05-16T16:37:44.468023125Z" level=info msg="StartContainer for \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\" returns successfully" May 16 16:37:44.483923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209-rootfs.mount: Deactivated successfully. 
May 16 16:37:45.079415 kubelet[2669]: E0516 16:37:45.079324 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:45.079918 kubelet[2669]: E0516 16:37:45.079482 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:45.081404 containerd[1573]: time="2025-05-16T16:37:45.081357985Z" level=info msg="CreateContainer within sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 16:37:45.096114 containerd[1573]: time="2025-05-16T16:37:45.096071844Z" level=info msg="Container 1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:45.099750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056953521.mount: Deactivated successfully. 
May 16 16:37:45.105663 containerd[1573]: time="2025-05-16T16:37:45.105620811Z" level=info msg="CreateContainer within sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\"" May 16 16:37:45.106243 containerd[1573]: time="2025-05-16T16:37:45.106196124Z" level=info msg="StartContainer for \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\"" May 16 16:37:45.107209 containerd[1573]: time="2025-05-16T16:37:45.107173679Z" level=info msg="connecting to shim 1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c" address="unix:///run/containerd/s/cb613cc26b4773e11f7cb865450738c02747db3c0e07a712829356f25d1d087c" protocol=ttrpc version=3 May 16 16:37:45.134807 systemd[1]: Started cri-containerd-1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c.scope - libcontainer container 1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c. May 16 16:37:45.170942 containerd[1573]: time="2025-05-16T16:37:45.170891669Z" level=info msg="StartContainer for \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" returns successfully" May 16 16:37:45.239808 containerd[1573]: time="2025-05-16T16:37:45.239766257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" id:\"59faa08d052f6334e35421199cd8dffbf7c05f6800f3bac5e96dcc75c5fe5a4d\" pid:3344 exited_at:{seconds:1747413465 nanos:239192105}" May 16 16:37:45.339034 kubelet[2669]: I0516 16:37:45.338617 2669 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 16:37:45.373215 systemd[1]: Created slice kubepods-burstable-pod60341790_b4cf_4095_96d9_deebe76b387c.slice - libcontainer container kubepods-burstable-pod60341790_b4cf_4095_96d9_deebe76b387c.slice. 
May 16 16:37:45.381342 systemd[1]: Created slice kubepods-burstable-pod881a35c4_36d3_4eb4_8d7c_bbb3f276049a.slice - libcontainer container kubepods-burstable-pod881a35c4_36d3_4eb4_8d7c_bbb3f276049a.slice.
May 16 16:37:45.402198 kubelet[2669]: I0516 16:37:45.402155 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/881a35c4-36d3-4eb4-8d7c-bbb3f276049a-config-volume\") pod \"coredns-668d6bf9bc-bc5s9\" (UID: \"881a35c4-36d3-4eb4-8d7c-bbb3f276049a\") " pod="kube-system/coredns-668d6bf9bc-bc5s9"
May 16 16:37:45.402198 kubelet[2669]: I0516 16:37:45.402203 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60341790-b4cf-4095-96d9-deebe76b387c-config-volume\") pod \"coredns-668d6bf9bc-hnvrt\" (UID: \"60341790-b4cf-4095-96d9-deebe76b387c\") " pod="kube-system/coredns-668d6bf9bc-hnvrt"
May 16 16:37:45.402412 kubelet[2669]: I0516 16:37:45.402248 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2j48\" (UniqueName: \"kubernetes.io/projected/60341790-b4cf-4095-96d9-deebe76b387c-kube-api-access-v2j48\") pod \"coredns-668d6bf9bc-hnvrt\" (UID: \"60341790-b4cf-4095-96d9-deebe76b387c\") " pod="kube-system/coredns-668d6bf9bc-hnvrt"
May 16 16:37:45.402412 kubelet[2669]: I0516 16:37:45.402273 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st84c\" (UniqueName: \"kubernetes.io/projected/881a35c4-36d3-4eb4-8d7c-bbb3f276049a-kube-api-access-st84c\") pod \"coredns-668d6bf9bc-bc5s9\" (UID: \"881a35c4-36d3-4eb4-8d7c-bbb3f276049a\") " pod="kube-system/coredns-668d6bf9bc-bc5s9"
May 16 16:37:45.677595 kubelet[2669]: E0516 16:37:45.677559 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:45.678206 containerd[1573]: time="2025-05-16T16:37:45.678154720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnvrt,Uid:60341790-b4cf-4095-96d9-deebe76b387c,Namespace:kube-system,Attempt:0,}"
May 16 16:37:45.684429 kubelet[2669]: E0516 16:37:45.684393 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:45.684830 containerd[1573]: time="2025-05-16T16:37:45.684807167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bc5s9,Uid:881a35c4-36d3-4eb4-8d7c-bbb3f276049a,Namespace:kube-system,Attempt:0,}"
May 16 16:37:46.097466 kubelet[2669]: E0516 16:37:46.097267 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:46.110805 kubelet[2669]: I0516 16:37:46.110715 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q2qck" podStartSLOduration=5.569297387 podStartE2EDuration="12.110695372s" podCreationTimestamp="2025-05-16 16:37:34 +0000 UTC" firstStartedPulling="2025-05-16 16:37:34.914316859 +0000 UTC m=+5.995903381" lastFinishedPulling="2025-05-16 16:37:41.455714854 +0000 UTC m=+12.537301366" observedRunningTime="2025-05-16 16:37:46.110135662 +0000 UTC m=+17.191722185" watchObservedRunningTime="2025-05-16 16:37:46.110695372 +0000 UTC m=+17.192281894"
May 16 16:37:47.098844 kubelet[2669]: E0516 16:37:47.098791 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:47.364383 systemd-networkd[1499]: cilium_host: Link UP
May 16 16:37:47.368127 systemd-networkd[1499]: cilium_net: Link UP
May 16 16:37:47.372164 systemd-networkd[1499]: cilium_net: Gained carrier
May 16 16:37:47.372617 systemd-networkd[1499]: cilium_host: Gained carrier
May 16 16:37:47.472275 systemd-networkd[1499]: cilium_vxlan: Link UP
May 16 16:37:47.472285 systemd-networkd[1499]: cilium_vxlan: Gained carrier
May 16 16:37:47.674706 kernel: NET: Registered PF_ALG protocol family
May 16 16:37:47.808858 systemd-networkd[1499]: cilium_host: Gained IPv6LL
May 16 16:37:47.968850 systemd-networkd[1499]: cilium_net: Gained IPv6LL
May 16 16:37:48.101595 kubelet[2669]: E0516 16:37:48.101560 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:48.275617 systemd-networkd[1499]: lxc_health: Link UP
May 16 16:37:48.277497 systemd-networkd[1499]: lxc_health: Gained carrier
May 16 16:37:48.728534 systemd-networkd[1499]: lxcd109cd2ccd41: Link UP
May 16 16:37:48.729701 kernel: eth0: renamed from tmp5f99c
May 16 16:37:48.741905 kernel: eth0: renamed from tmpb7186
May 16 16:37:48.743751 systemd-networkd[1499]: lxcd109cd2ccd41: Gained carrier
May 16 16:37:48.744334 systemd-networkd[1499]: lxc9c12c6a5005d: Link UP
May 16 16:37:48.746135 systemd-networkd[1499]: lxc9c12c6a5005d: Gained carrier
May 16 16:37:48.786307 update_engine[1518]: I20250516 16:37:48.785730 1518 update_attempter.cc:509] Updating boot flags...
May 16 16:37:49.098219 kubelet[2669]: I0516 16:37:49.098112 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 16 16:37:49.098219 kubelet[2669]: I0516 16:37:49.098154 2669 container_gc.go:86] "Attempting to delete unused containers"
May 16 16:37:49.101104 kubelet[2669]: I0516 16:37:49.101087 2669 image_gc_manager.go:431] "Attempting to delete unused images"
May 16 16:37:49.103029 kubelet[2669]: E0516 16:37:49.103007 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:49.116976 kubelet[2669]: I0516 16:37:49.116938 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 16 16:37:49.117110 kubelet[2669]: I0516 16:37:49.117051 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-bc5s9","kube-system/coredns-668d6bf9bc-hnvrt","kube-system/cilium-operator-6c4d7847fc-bh54c","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-r4mfv","kube-system/kube-apiserver-localhost","kube-system/cilium-q2qck","kube-system/kube-scheduler-localhost"]
May 16 16:37:49.117110 kubelet[2669]: E0516 16:37:49.117093 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-bc5s9"
May 16 16:37:49.117110 kubelet[2669]: E0516 16:37:49.117102 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hnvrt"
May 16 16:37:49.117170 kubelet[2669]: E0516 16:37:49.117117 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-bh54c"
May 16 16:37:49.117170 kubelet[2669]: E0516 16:37:49.117127 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 16 16:37:49.117170 kubelet[2669]: E0516 16:37:49.117134 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-r4mfv"
May 16 16:37:49.117170 kubelet[2669]: E0516 16:37:49.117142 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost"
May 16 16:37:49.117170 kubelet[2669]: E0516 16:37:49.117150 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-q2qck"
May 16 16:37:49.117170 kubelet[2669]: E0516 16:37:49.117161 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost"
May 16 16:37:49.117170 kubelet[2669]: I0516 16:37:49.117170 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 16 16:37:49.376847 systemd-networkd[1499]: cilium_vxlan: Gained IPv6LL
May 16 16:37:49.568894 systemd-networkd[1499]: lxc_health: Gained IPv6LL
May 16 16:37:50.080842 systemd-networkd[1499]: lxc9c12c6a5005d: Gained IPv6LL
May 16 16:37:50.336878 systemd-networkd[1499]: lxcd109cd2ccd41: Gained IPv6LL
May 16 16:37:51.992696 containerd[1573]: time="2025-05-16T16:37:51.992613358Z" level=info msg="connecting to shim b7186af0b689fdfa43788b6514f7ba45a514be3c63233dd7fe731cab2419e0a6" address="unix:///run/containerd/s/129317cf37fdbdb737cbb1950af2e543158520c5e0001177b859ab7d611651fe" namespace=k8s.io protocol=ttrpc version=3
May 16 16:37:51.994067 containerd[1573]: time="2025-05-16T16:37:51.993998790Z" level=info msg="connecting to shim 5f99c53a128e690948e736669a1cfe18cef28149299c9278e242bde9b876e7ed" address="unix:///run/containerd/s/4dd16668cc42da73f55a2ca44a5d6501795557b09c60f0b68cf1973e428dbde8" namespace=k8s.io protocol=ttrpc version=3
May 16 16:37:52.022816 systemd[1]: Started cri-containerd-5f99c53a128e690948e736669a1cfe18cef28149299c9278e242bde9b876e7ed.scope - libcontainer container 5f99c53a128e690948e736669a1cfe18cef28149299c9278e242bde9b876e7ed.scope.
May 16 16:37:52.025584 systemd[1]: Started cri-containerd-b7186af0b689fdfa43788b6514f7ba45a514be3c63233dd7fe731cab2419e0a6.scope - libcontainer container b7186af0b689fdfa43788b6514f7ba45a514be3c63233dd7fe731cab2419e0a6.scope.
May 16 16:37:52.036626 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 16:37:52.039259 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 16:37:52.068545 containerd[1573]: time="2025-05-16T16:37:52.068460685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bc5s9,Uid:881a35c4-36d3-4eb4-8d7c-bbb3f276049a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f99c53a128e690948e736669a1cfe18cef28149299c9278e242bde9b876e7ed\""
May 16 16:37:52.071698 kubelet[2669]: E0516 16:37:52.071653 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:52.073266 containerd[1573]: time="2025-05-16T16:37:52.073219088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnvrt,Uid:60341790-b4cf-4095-96d9-deebe76b387c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7186af0b689fdfa43788b6514f7ba45a514be3c63233dd7fe731cab2419e0a6\""
May 16 16:37:52.074802 containerd[1573]: time="2025-05-16T16:37:52.074268042Z" level=info msg="CreateContainer within sandbox \"5f99c53a128e690948e736669a1cfe18cef28149299c9278e242bde9b876e7ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 16:37:52.074859 kubelet[2669]: E0516 16:37:52.074276 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:52.075977 containerd[1573]: time="2025-05-16T16:37:52.075948848Z" level=info msg="CreateContainer within sandbox \"b7186af0b689fdfa43788b6514f7ba45a514be3c63233dd7fe731cab2419e0a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 16:37:52.088945 containerd[1573]: time="2025-05-16T16:37:52.088896634Z" level=info msg="Container d4042c586737762ee367e2c8e22a87571aa9969d4bea594a64f0c7f9f495550e: CDI devices from CRI Config.CDIDevices: []"
May 16 16:37:52.096201 containerd[1573]: time="2025-05-16T16:37:52.096116615Z" level=info msg="CreateContainer within sandbox \"5f99c53a128e690948e736669a1cfe18cef28149299c9278e242bde9b876e7ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4042c586737762ee367e2c8e22a87571aa9969d4bea594a64f0c7f9f495550e\""
May 16 16:37:52.096925 containerd[1573]: time="2025-05-16T16:37:52.096666670Z" level=info msg="StartContainer for \"d4042c586737762ee367e2c8e22a87571aa9969d4bea594a64f0c7f9f495550e\""
May 16 16:37:52.097827 containerd[1573]: time="2025-05-16T16:37:52.097791152Z" level=info msg="connecting to shim d4042c586737762ee367e2c8e22a87571aa9969d4bea594a64f0c7f9f495550e" address="unix:///run/containerd/s/4dd16668cc42da73f55a2ca44a5d6501795557b09c60f0b68cf1973e428dbde8" protocol=ttrpc version=3
May 16 16:37:52.120822 systemd[1]: Started cri-containerd-d4042c586737762ee367e2c8e22a87571aa9969d4bea594a64f0c7f9f495550e.scope - libcontainer container d4042c586737762ee367e2c8e22a87571aa9969d4bea594a64f0c7f9f495550e.scope.
May 16 16:37:52.128237 containerd[1573]: time="2025-05-16T16:37:52.128182421Z" level=info msg="Container 3dd9a23f7bf963c092b9cc32794beea412e65501575d8b2e84f0aa6c00c9208c: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:52.137339 containerd[1573]: time="2025-05-16T16:37:52.137296249Z" level=info msg="CreateContainer within sandbox \"b7186af0b689fdfa43788b6514f7ba45a514be3c63233dd7fe731cab2419e0a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3dd9a23f7bf963c092b9cc32794beea412e65501575d8b2e84f0aa6c00c9208c\"" May 16 16:37:52.138195 containerd[1573]: time="2025-05-16T16:37:52.138163847Z" level=info msg="StartContainer for \"3dd9a23f7bf963c092b9cc32794beea412e65501575d8b2e84f0aa6c00c9208c\"" May 16 16:37:52.140618 containerd[1573]: time="2025-05-16T16:37:52.140570635Z" level=info msg="connecting to shim 3dd9a23f7bf963c092b9cc32794beea412e65501575d8b2e84f0aa6c00c9208c" address="unix:///run/containerd/s/129317cf37fdbdb737cbb1950af2e543158520c5e0001177b859ab7d611651fe" protocol=ttrpc version=3 May 16 16:37:52.159379 containerd[1573]: time="2025-05-16T16:37:52.159301916Z" level=info msg="StartContainer for \"d4042c586737762ee367e2c8e22a87571aa9969d4bea594a64f0c7f9f495550e\" returns successfully" May 16 16:37:52.169851 systemd[1]: Started cri-containerd-3dd9a23f7bf963c092b9cc32794beea412e65501575d8b2e84f0aa6c00c9208c.scope - libcontainer container 3dd9a23f7bf963c092b9cc32794beea412e65501575d8b2e84f0aa6c00c9208c. May 16 16:37:52.206807 containerd[1573]: time="2025-05-16T16:37:52.206759787Z" level=info msg="StartContainer for \"3dd9a23f7bf963c092b9cc32794beea412e65501575d8b2e84f0aa6c00c9208c\" returns successfully" May 16 16:37:52.974193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42176268.mount: Deactivated successfully. 
May 16 16:37:53.114146 kubelet[2669]: E0516 16:37:53.113902 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:53.116631 kubelet[2669]: E0516 16:37:53.116613 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:53.125361 kubelet[2669]: I0516 16:37:53.125293 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bc5s9" podStartSLOduration=19.12403201 podStartE2EDuration="19.12403201s" podCreationTimestamp="2025-05-16 16:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:37:53.123542961 +0000 UTC m=+24.205129493" watchObservedRunningTime="2025-05-16 16:37:53.12403201 +0000 UTC m=+24.205618532" May 16 16:37:53.147586 kubelet[2669]: I0516 16:37:53.147500 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hnvrt" podStartSLOduration=19.147470212 podStartE2EDuration="19.147470212s" podCreationTimestamp="2025-05-16 16:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:37:53.146860619 +0000 UTC m=+24.228447141" watchObservedRunningTime="2025-05-16 16:37:53.147470212 +0000 UTC m=+24.229056734" May 16 16:37:54.068716 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:37384.service - OpenSSH per-connection server daemon (10.0.0.1:37384). 
May 16 16:37:54.118090 kubelet[2669]: E0516 16:37:54.118058 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:54.118476 kubelet[2669]: E0516 16:37:54.118224 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:54.121530 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 37384 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:37:54.123170 sshd-session[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:37:54.127858 systemd-logind[1513]: New session 8 of user core. May 16 16:37:54.138805 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 16:37:54.140931 kubelet[2669]: I0516 16:37:54.140748 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:37:54.141220 kubelet[2669]: E0516 16:37:54.141187 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:54.257461 sshd[4004]: Connection closed by 10.0.0.1 port 37384 May 16 16:37:54.258177 sshd-session[4002]: pam_unix(sshd:session): session closed for user core May 16 16:37:54.262749 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:37384.service: Deactivated successfully. May 16 16:37:54.264751 systemd[1]: session-8.scope: Deactivated successfully. May 16 16:37:54.265464 systemd-logind[1513]: Session 8 logged out. Waiting for processes to exit. May 16 16:37:54.266782 systemd-logind[1513]: Removed session 8. 
May 16 16:37:55.119756 kubelet[2669]: E0516 16:37:55.119718 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:55.120153 kubelet[2669]: E0516 16:37:55.119792 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:55.120153 kubelet[2669]: E0516 16:37:55.119935 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:59.145902 kubelet[2669]: I0516 16:37:59.145859 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 16 16:37:59.145902 kubelet[2669]: I0516 16:37:59.145892 2669 container_gc.go:86] "Attempting to delete unused containers"
May 16 16:37:59.147245 kubelet[2669]: I0516 16:37:59.147221 2669 image_gc_manager.go:431] "Attempting to delete unused images"
May 16 16:37:59.159015 kubelet[2669]: I0516 16:37:59.158984 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 16 16:37:59.159127 kubelet[2669]: I0516 16:37:59.159091 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-bh54c","kube-system/coredns-668d6bf9bc-hnvrt","kube-system/coredns-668d6bf9bc-bc5s9","kube-system/cilium-q2qck","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-r4mfv","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"]
May 16 16:37:59.159127 kubelet[2669]: E0516 16:37:59.159121 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-bh54c"
May 16 16:37:59.159187 kubelet[2669]: E0516 16:37:59.159131 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hnvrt"
May 16 16:37:59.159187 kubelet[2669]: E0516 16:37:59.159139 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-bc5s9"
May 16 16:37:59.159187 kubelet[2669]: E0516 16:37:59.159148 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-q2qck"
May 16 16:37:59.159187 kubelet[2669]: E0516 16:37:59.159155 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 16 16:37:59.159187 kubelet[2669]: E0516 16:37:59.159163 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-r4mfv"
May 16 16:37:59.159187 kubelet[2669]: E0516 16:37:59.159170 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost"
May 16 16:37:59.159187 kubelet[2669]: E0516 16:37:59.159177 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost"
May 16 16:37:59.159187 kubelet[2669]: I0516 16:37:59.159187 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 16 16:37:59.276148 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:52940.service - OpenSSH per-connection server daemon (10.0.0.1:52940).
May 16 16:37:59.330332 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 52940 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM
May 16 16:37:59.332093 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:37:59.336784 systemd-logind[1513]: New session 9 of user core.
May 16 16:37:59.344795 systemd[1]: Started session-9.scope - Session 9 of User core.
May 16 16:37:59.458681 sshd[4023]: Connection closed by 10.0.0.1 port 52940 May 16 16:37:59.458942 sshd-session[4021]: pam_unix(sshd:session): session closed for user core May 16 16:37:59.463913 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:52940.service: Deactivated successfully. May 16 16:37:59.466107 systemd[1]: session-9.scope: Deactivated successfully. May 16 16:37:59.466821 systemd-logind[1513]: Session 9 logged out. Waiting for processes to exit. May 16 16:37:59.468253 systemd-logind[1513]: Removed session 9. May 16 16:38:04.477418 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:52946.service - OpenSSH per-connection server daemon (10.0.0.1:52946). May 16 16:38:04.519930 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 52946 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:04.521272 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:04.525614 systemd-logind[1513]: New session 10 of user core. May 16 16:38:04.533817 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 16:38:04.643402 sshd[4039]: Connection closed by 10.0.0.1 port 52946 May 16 16:38:04.643693 sshd-session[4037]: pam_unix(sshd:session): session closed for user core May 16 16:38:04.647313 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:52946.service: Deactivated successfully. May 16 16:38:04.649284 systemd[1]: session-10.scope: Deactivated successfully. May 16 16:38:04.650015 systemd-logind[1513]: Session 10 logged out. Waiting for processes to exit. May 16 16:38:04.651184 systemd-logind[1513]: Removed session 10. 
May 16 16:38:09.178393 kubelet[2669]: I0516 16:38:09.178333 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 16 16:38:09.178393 kubelet[2669]: I0516 16:38:09.178369 2669 container_gc.go:86] "Attempting to delete unused containers" May 16 16:38:09.180274 kubelet[2669]: I0516 16:38:09.180245 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 16 16:38:09.197227 kubelet[2669]: I0516 16:38:09.197186 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 16 16:38:09.197387 kubelet[2669]: I0516 16:38:09.197310 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-bh54c","kube-system/coredns-668d6bf9bc-hnvrt","kube-system/coredns-668d6bf9bc-bc5s9","kube-system/cilium-q2qck","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-r4mfv","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 16 16:38:09.197387 kubelet[2669]: E0516 16:38:09.197346 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-bh54c" May 16 16:38:09.197387 kubelet[2669]: E0516 16:38:09.197357 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hnvrt" May 16 16:38:09.197387 kubelet[2669]: E0516 16:38:09.197366 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-bc5s9" May 16 16:38:09.197387 kubelet[2669]: E0516 16:38:09.197383 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-q2qck" May 16 16:38:09.197387 kubelet[2669]: E0516 16:38:09.197392 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 16 16:38:09.197526 kubelet[2669]: E0516 16:38:09.197401 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-r4mfv" May 16 16:38:09.197526 kubelet[2669]: E0516 16:38:09.197410 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 16 16:38:09.197526 kubelet[2669]: E0516 16:38:09.197418 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 16 16:38:09.197526 kubelet[2669]: I0516 16:38:09.197428 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 16 16:38:09.659522 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:34992.service - OpenSSH per-connection server daemon (10.0.0.1:34992). May 16 16:38:09.718798 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 34992 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:09.720520 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:09.724913 systemd-logind[1513]: New session 11 of user core. May 16 16:38:09.735814 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 16:38:09.859869 sshd[4057]: Connection closed by 10.0.0.1 port 34992 May 16 16:38:09.860193 sshd-session[4055]: pam_unix(sshd:session): session closed for user core May 16 16:38:09.877027 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:34992.service: Deactivated successfully. May 16 16:38:09.879129 systemd[1]: session-11.scope: Deactivated successfully. May 16 16:38:09.880140 systemd-logind[1513]: Session 11 logged out. Waiting for processes to exit. May 16 16:38:09.883198 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:35008.service - OpenSSH per-connection server daemon (10.0.0.1:35008). May 16 16:38:09.883877 systemd-logind[1513]: Removed session 11.
May 16 16:38:09.938062 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 35008 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:09.939927 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:09.945415 systemd-logind[1513]: New session 12 of user core. May 16 16:38:09.957925 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 16:38:10.150526 sshd[4073]: Connection closed by 10.0.0.1 port 35008 May 16 16:38:10.151757 sshd-session[4071]: pam_unix(sshd:session): session closed for user core May 16 16:38:10.171106 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:35008.service: Deactivated successfully. May 16 16:38:10.174565 systemd[1]: session-12.scope: Deactivated successfully. May 16 16:38:10.177522 systemd-logind[1513]: Session 12 logged out. Waiting for processes to exit. May 16 16:38:10.181175 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:35010.service - OpenSSH per-connection server daemon (10.0.0.1:35010). May 16 16:38:10.182240 systemd-logind[1513]: Removed session 12. May 16 16:38:10.231527 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 35010 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:10.233274 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:10.239457 systemd-logind[1513]: New session 13 of user core. May 16 16:38:10.251849 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 16:38:10.362802 sshd[4088]: Connection closed by 10.0.0.1 port 35010 May 16 16:38:10.363180 sshd-session[4086]: pam_unix(sshd:session): session closed for user core May 16 16:38:10.367808 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:35010.service: Deactivated successfully. May 16 16:38:10.369789 systemd[1]: session-13.scope: Deactivated successfully. May 16 16:38:10.370606 systemd-logind[1513]: Session 13 logged out. Waiting for processes to exit. 
May 16 16:38:10.372209 systemd-logind[1513]: Removed session 13. May 16 16:38:15.375548 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:35024.service - OpenSSH per-connection server daemon (10.0.0.1:35024). May 16 16:38:15.434110 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 35024 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:15.435879 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:15.440461 systemd-logind[1513]: New session 14 of user core. May 16 16:38:15.449855 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 16:38:15.561210 sshd[4103]: Connection closed by 10.0.0.1 port 35024 May 16 16:38:15.561553 sshd-session[4101]: pam_unix(sshd:session): session closed for user core May 16 16:38:15.566048 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:35024.service: Deactivated successfully. May 16 16:38:15.568345 systemd[1]: session-14.scope: Deactivated successfully. May 16 16:38:15.569339 systemd-logind[1513]: Session 14 logged out. Waiting for processes to exit. May 16 16:38:15.570786 systemd-logind[1513]: Removed session 14. 
May 16 16:38:19.215853 kubelet[2669]: I0516 16:38:19.215814 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 16 16:38:19.215853 kubelet[2669]: I0516 16:38:19.215852 2669 container_gc.go:86] "Attempting to delete unused containers" May 16 16:38:19.217157 kubelet[2669]: I0516 16:38:19.217131 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 16 16:38:19.228643 kubelet[2669]: I0516 16:38:19.228614 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 16 16:38:19.228770 kubelet[2669]: I0516 16:38:19.228736 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-bh54c","kube-system/coredns-668d6bf9bc-hnvrt","kube-system/coredns-668d6bf9bc-bc5s9","kube-system/cilium-q2qck","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-r4mfv","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 16 16:38:19.228770 kubelet[2669]: E0516 16:38:19.228768 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-bh54c" May 16 16:38:19.228820 kubelet[2669]: E0516 16:38:19.228777 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hnvrt" May 16 16:38:19.228820 kubelet[2669]: E0516 16:38:19.228786 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-bc5s9" May 16 16:38:19.228820 kubelet[2669]: E0516 16:38:19.228795 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-q2qck" May 16 16:38:19.228820 kubelet[2669]: E0516 16:38:19.228804 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 16 16:38:19.228820 kubelet[2669]: E0516 16:38:19.228812 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-r4mfv" May 16 16:38:19.228820 kubelet[2669]: E0516 16:38:19.228822 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 16 16:38:19.228941 kubelet[2669]: E0516 16:38:19.228830 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 16 16:38:19.228941 kubelet[2669]: I0516 16:38:19.228839 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 16 16:38:20.577635 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:58176.service - OpenSSH per-connection server daemon (10.0.0.1:58176). May 16 16:38:20.642641 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 58176 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:20.644192 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:20.648720 systemd-logind[1513]: New session 15 of user core. May 16 16:38:20.660801 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 16:38:20.770738 sshd[4118]: Connection closed by 10.0.0.1 port 58176 May 16 16:38:20.771046 sshd-session[4116]: pam_unix(sshd:session): session closed for user core May 16 16:38:20.782592 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:58176.service: Deactivated successfully. May 16 16:38:20.784782 systemd[1]: session-15.scope: Deactivated successfully. May 16 16:38:20.785645 systemd-logind[1513]: Session 15 logged out. Waiting for processes to exit. May 16 16:38:20.788783 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:58182.service - OpenSSH per-connection server daemon (10.0.0.1:58182). May 16 16:38:20.790175 systemd-logind[1513]: Removed session 15.
May 16 16:38:20.833476 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 58182 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:20.835085 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:20.839424 systemd-logind[1513]: New session 16 of user core. May 16 16:38:20.844809 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 16:38:21.084360 sshd[4134]: Connection closed by 10.0.0.1 port 58182 May 16 16:38:21.084740 sshd-session[4132]: pam_unix(sshd:session): session closed for user core May 16 16:38:21.098074 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:58182.service: Deactivated successfully. May 16 16:38:21.100210 systemd[1]: session-16.scope: Deactivated successfully. May 16 16:38:21.101178 systemd-logind[1513]: Session 16 logged out. Waiting for processes to exit. May 16 16:38:21.104883 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:58194.service - OpenSSH per-connection server daemon (10.0.0.1:58194). May 16 16:38:21.105712 systemd-logind[1513]: Removed session 16. May 16 16:38:21.170934 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 58194 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:21.172664 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:21.177308 systemd-logind[1513]: New session 17 of user core. May 16 16:38:21.192862 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 16:38:22.031617 sshd[4147]: Connection closed by 10.0.0.1 port 58194 May 16 16:38:22.031918 sshd-session[4145]: pam_unix(sshd:session): session closed for user core May 16 16:38:22.042881 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:58194.service: Deactivated successfully. May 16 16:38:22.045557 systemd[1]: session-17.scope: Deactivated successfully. May 16 16:38:22.046907 systemd-logind[1513]: Session 17 logged out. Waiting for processes to exit. 
May 16 16:38:22.052816 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:58202.service - OpenSSH per-connection server daemon (10.0.0.1:58202). May 16 16:38:22.053605 systemd-logind[1513]: Removed session 17. May 16 16:38:22.100158 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 58202 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:22.101584 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:22.105867 systemd-logind[1513]: New session 18 of user core. May 16 16:38:22.114785 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 16:38:22.320885 sshd[4168]: Connection closed by 10.0.0.1 port 58202 May 16 16:38:22.321138 sshd-session[4166]: pam_unix(sshd:session): session closed for user core May 16 16:38:22.330346 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:58202.service: Deactivated successfully. May 16 16:38:22.332262 systemd[1]: session-18.scope: Deactivated successfully. May 16 16:38:22.332960 systemd-logind[1513]: Session 18 logged out. Waiting for processes to exit. May 16 16:38:22.336371 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:58216.service - OpenSSH per-connection server daemon (10.0.0.1:58216). May 16 16:38:22.336993 systemd-logind[1513]: Removed session 18. May 16 16:38:22.387316 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 58216 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:22.388857 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:22.393138 systemd-logind[1513]: New session 19 of user core. May 16 16:38:22.411076 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 16 16:38:22.516070 sshd[4181]: Connection closed by 10.0.0.1 port 58216 May 16 16:38:22.516361 sshd-session[4179]: pam_unix(sshd:session): session closed for user core May 16 16:38:22.519961 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:58216.service: Deactivated successfully. May 16 16:38:22.522156 systemd[1]: session-19.scope: Deactivated successfully. May 16 16:38:22.523875 systemd-logind[1513]: Session 19 logged out. Waiting for processes to exit. May 16 16:38:22.525627 systemd-logind[1513]: Removed session 19. May 16 16:38:27.543078 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:55658.service - OpenSSH per-connection server daemon (10.0.0.1:55658). May 16 16:38:27.588167 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 55658 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:27.589448 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:27.593141 systemd-logind[1513]: New session 20 of user core. May 16 16:38:27.606783 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 16:38:27.704362 sshd[4198]: Connection closed by 10.0.0.1 port 55658 May 16 16:38:27.704627 sshd-session[4196]: pam_unix(sshd:session): session closed for user core May 16 16:38:27.708409 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:55658.service: Deactivated successfully. May 16 16:38:27.710140 systemd[1]: session-20.scope: Deactivated successfully. May 16 16:38:27.710897 systemd-logind[1513]: Session 20 logged out. Waiting for processes to exit. May 16 16:38:27.711927 systemd-logind[1513]: Removed session 20. 
May 16 16:38:29.246227 kubelet[2669]: I0516 16:38:29.246190 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 16 16:38:29.246227 kubelet[2669]: I0516 16:38:29.246224 2669 container_gc.go:86] "Attempting to delete unused containers" May 16 16:38:29.248002 kubelet[2669]: I0516 16:38:29.247985 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 16 16:38:29.261917 kubelet[2669]: I0516 16:38:29.261892 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 16 16:38:29.262040 kubelet[2669]: I0516 16:38:29.262016 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-bh54c","kube-system/coredns-668d6bf9bc-hnvrt","kube-system/coredns-668d6bf9bc-bc5s9","kube-system/cilium-q2qck","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-r4mfv","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 16 16:38:29.262075 kubelet[2669]: E0516 16:38:29.262047 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-bh54c" May 16 16:38:29.262075 kubelet[2669]: E0516 16:38:29.262058 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hnvrt" May 16 16:38:29.262075 kubelet[2669]: E0516 16:38:29.262066 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-bc5s9" May 16 16:38:29.262075 kubelet[2669]: E0516 16:38:29.262075 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-q2qck" May 16 16:38:29.262168 kubelet[2669]: E0516 16:38:29.262083 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 16 16:38:29.262168 kubelet[2669]: E0516 16:38:29.262101 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-r4mfv" May 16 16:38:29.262168 kubelet[2669]: E0516 16:38:29.262110 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 16 16:38:29.262168 kubelet[2669]: E0516 16:38:29.262117 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 16 16:38:29.262168 kubelet[2669]: I0516 16:38:29.262128 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 16 16:38:32.721395 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:55670.service - OpenSSH per-connection server daemon (10.0.0.1:55670). May 16 16:38:32.765102 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 55670 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:32.766317 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:32.770384 systemd-logind[1513]: New session 21 of user core. May 16 16:38:32.778798 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 16:38:32.880486 sshd[4215]: Connection closed by 10.0.0.1 port 55670 May 16 16:38:32.880807 sshd-session[4213]: pam_unix(sshd:session): session closed for user core May 16 16:38:32.885352 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:55670.service: Deactivated successfully. May 16 16:38:32.887396 systemd[1]: session-21.scope: Deactivated successfully. May 16 16:38:32.888168 systemd-logind[1513]: Session 21 logged out. Waiting for processes to exit. May 16 16:38:32.889400 systemd-logind[1513]: Removed session 21.
May 16 16:38:37.011141 kubelet[2669]: E0516 16:38:37.011105 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:37.896601 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:47430.service - OpenSSH per-connection server daemon (10.0.0.1:47430). May 16 16:38:37.939410 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 47430 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:37.941045 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:37.945220 systemd-logind[1513]: New session 22 of user core. May 16 16:38:37.952804 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 16:38:38.062247 sshd[4233]: Connection closed by 10.0.0.1 port 47430 May 16 16:38:38.062566 sshd-session[4231]: pam_unix(sshd:session): session closed for user core May 16 16:38:38.067043 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:47430.service: Deactivated successfully. May 16 16:38:38.069323 systemd[1]: session-22.scope: Deactivated successfully. May 16 16:38:38.070517 systemd-logind[1513]: Session 22 logged out. Waiting for processes to exit. May 16 16:38:38.072301 systemd-logind[1513]: Removed session 22. 
May 16 16:38:39.279571 kubelet[2669]: I0516 16:38:39.279531 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 16 16:38:39.279571 kubelet[2669]: I0516 16:38:39.279568 2669 container_gc.go:86] "Attempting to delete unused containers" May 16 16:38:39.280952 kubelet[2669]: I0516 16:38:39.280923 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 16 16:38:39.293043 kubelet[2669]: I0516 16:38:39.293014 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 16 16:38:39.293154 kubelet[2669]: I0516 16:38:39.293128 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-bh54c","kube-system/coredns-668d6bf9bc-hnvrt","kube-system/coredns-668d6bf9bc-bc5s9","kube-system/cilium-q2qck","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-r4mfv","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 16 16:38:39.293186 kubelet[2669]: E0516 16:38:39.293163 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-bh54c" May 16 16:38:39.293186 kubelet[2669]: E0516 16:38:39.293176 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hnvrt" May 16 16:38:39.293186 kubelet[2669]: E0516 16:38:39.293184 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-bc5s9" May 16 16:38:39.293248 kubelet[2669]: E0516 16:38:39.293194 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-q2qck" May 16 16:38:39.293248 kubelet[2669]: E0516 16:38:39.293202 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 16 16:38:39.293248 kubelet[2669]: E0516 16:38:39.293211 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-r4mfv" May 16 16:38:39.293248 kubelet[2669]: E0516 16:38:39.293220 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 16 16:38:39.293248 kubelet[2669]: E0516 16:38:39.293228 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 16 16:38:39.293248 kubelet[2669]: I0516 16:38:39.293237 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 16 16:38:40.011201 kubelet[2669]: E0516 16:38:40.011165 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:43.074345 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:47434.service - OpenSSH per-connection server daemon (10.0.0.1:47434). May 16 16:38:43.130149 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 47434 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:43.131700 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:43.135913 systemd-logind[1513]: New session 23 of user core. May 16 16:38:43.145810 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 16:38:43.248033 sshd[4249]: Connection closed by 10.0.0.1 port 47434 May 16 16:38:43.248348 sshd-session[4247]: pam_unix(sshd:session): session closed for user core May 16 16:38:43.262325 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:47434.service: Deactivated successfully. May 16 16:38:43.264095 systemd[1]: session-23.scope: Deactivated successfully. May 16 16:38:43.264937 systemd-logind[1513]: Session 23 logged out. Waiting for processes to exit.
May 16 16:38:43.267953 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:47438.service - OpenSSH per-connection server daemon (10.0.0.1:47438). May 16 16:38:43.268649 systemd-logind[1513]: Removed session 23. May 16 16:38:43.317036 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 47438 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:43.318277 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:43.322365 systemd-logind[1513]: New session 24 of user core. May 16 16:38:43.331782 systemd[1]: Started session-24.scope - Session 24 of User core. May 16 16:38:44.756276 containerd[1573]: time="2025-05-16T16:38:44.756227562Z" level=info msg="StopContainer for \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" with timeout 30 (s)" May 16 16:38:44.771694 containerd[1573]: time="2025-05-16T16:38:44.771653960Z" level=info msg="Stop container \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" with signal terminated" May 16 16:38:44.781907 systemd[1]: cri-containerd-bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0.scope: Deactivated successfully. 
May 16 16:38:44.783933 containerd[1573]: time="2025-05-16T16:38:44.783892564Z" level=info msg="received exit event container_id:\"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" id:\"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" pid:3242 exited_at:{seconds:1747413524 nanos:783619000}" May 16 16:38:44.784438 containerd[1573]: time="2025-05-16T16:38:44.784142615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" id:\"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" pid:3242 exited_at:{seconds:1747413524 nanos:783619000}" May 16 16:38:44.806520 containerd[1573]: time="2025-05-16T16:38:44.806467258Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 16:38:44.808030 containerd[1573]: time="2025-05-16T16:38:44.807939746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" id:\"7643b195d8019fbb1219261e748cc3a8ca31b0de15583a607b6d594c5166f7c4\" pid:4296 exited_at:{seconds:1747413524 nanos:806849567}" May 16 16:38:44.810892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0-rootfs.mount: Deactivated successfully. 
May 16 16:38:44.824530 containerd[1573]: time="2025-05-16T16:38:44.824491148Z" level=info msg="StopContainer for \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" with timeout 2 (s)" May 16 16:38:44.824903 containerd[1573]: time="2025-05-16T16:38:44.824881121Z" level=info msg="Stop container \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" with signal terminated" May 16 16:38:44.825921 containerd[1573]: time="2025-05-16T16:38:44.825890097Z" level=info msg="StopContainer for \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" returns successfully" May 16 16:38:44.826482 containerd[1573]: time="2025-05-16T16:38:44.826454709Z" level=info msg="StopPodSandbox for \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\"" May 16 16:38:44.826546 containerd[1573]: time="2025-05-16T16:38:44.826524962Z" level=info msg="Container to stop \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:38:44.832187 systemd-networkd[1499]: lxc_health: Link DOWN May 16 16:38:44.832197 systemd-networkd[1499]: lxc_health: Lost carrier May 16 16:38:44.834023 systemd[1]: cri-containerd-452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8.scope: Deactivated successfully. May 16 16:38:44.834819 containerd[1573]: time="2025-05-16T16:38:44.834783756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" id:\"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" pid:2862 exit_status:137 exited_at:{seconds:1747413524 nanos:834471699}" May 16 16:38:44.856181 systemd[1]: cri-containerd-1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c.scope: Deactivated successfully. 
May 16 16:38:44.856541 systemd[1]: cri-containerd-1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c.scope: Consumed 6.269s CPU time, 124.9M memory peak, 316K read from disk, 13.3M written to disk. May 16 16:38:44.858027 containerd[1573]: time="2025-05-16T16:38:44.857996449Z" level=info msg="received exit event container_id:\"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" id:\"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" pid:3314 exited_at:{seconds:1747413524 nanos:857611736}" May 16 16:38:44.864389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8-rootfs.mount: Deactivated successfully. May 16 16:38:44.879019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c-rootfs.mount: Deactivated successfully. May 16 16:38:44.894874 containerd[1573]: time="2025-05-16T16:38:44.894722944Z" level=info msg="shim disconnected" id=452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8 namespace=k8s.io May 16 16:38:44.894874 containerd[1573]: time="2025-05-16T16:38:44.894759092Z" level=warning msg="cleaning up after shim disconnected" id=452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8 namespace=k8s.io May 16 16:38:44.914861 containerd[1573]: time="2025-05-16T16:38:44.894767097Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 16:38:44.914932 containerd[1573]: time="2025-05-16T16:38:44.900691564Z" level=info msg="StopContainer for \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" returns successfully" May 16 16:38:44.915501 containerd[1573]: time="2025-05-16T16:38:44.915450956Z" level=info msg="StopPodSandbox for \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\"" May 16 16:38:44.915637 containerd[1573]: time="2025-05-16T16:38:44.915548290Z" level=info msg="Container to stop 
\"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:38:44.915637 containerd[1573]: time="2025-05-16T16:38:44.915563499Z" level=info msg="Container to stop \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:38:44.915637 containerd[1573]: time="2025-05-16T16:38:44.915573698Z" level=info msg="Container to stop \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:38:44.915637 containerd[1573]: time="2025-05-16T16:38:44.915584959Z" level=info msg="Container to stop \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:38:44.915637 containerd[1573]: time="2025-05-16T16:38:44.915596000Z" level=info msg="Container to stop \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:38:44.922270 systemd[1]: cri-containerd-2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d.scope: Deactivated successfully. 
May 16 16:38:44.942638 containerd[1573]: time="2025-05-16T16:38:44.942587526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" id:\"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" pid:3314 exited_at:{seconds:1747413524 nanos:857611736}" May 16 16:38:44.942937 containerd[1573]: time="2025-05-16T16:38:44.942905253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" id:\"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" pid:2850 exit_status:137 exited_at:{seconds:1747413524 nanos:928919614}" May 16 16:38:44.944607 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8-shm.mount: Deactivated successfully. May 16 16:38:44.950926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d-rootfs.mount: Deactivated successfully. 
May 16 16:38:44.954997 containerd[1573]: time="2025-05-16T16:38:44.954878910Z" level=info msg="shim disconnected" id=2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d namespace=k8s.io May 16 16:38:44.954997 containerd[1573]: time="2025-05-16T16:38:44.954915169Z" level=warning msg="cleaning up after shim disconnected" id=2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d namespace=k8s.io May 16 16:38:44.954997 containerd[1573]: time="2025-05-16T16:38:44.954922302Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 16:38:44.954997 containerd[1573]: time="2025-05-16T16:38:44.954962748Z" level=info msg="received exit event sandbox_id:\"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" exit_status:137 exited_at:{seconds:1747413524 nanos:928919614}" May 16 16:38:44.956061 containerd[1573]: time="2025-05-16T16:38:44.955992814Z" level=info msg="TearDown network for sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" successfully" May 16 16:38:44.956061 containerd[1573]: time="2025-05-16T16:38:44.956038039Z" level=info msg="StopPodSandbox for \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" returns successfully" May 16 16:38:44.956275 containerd[1573]: time="2025-05-16T16:38:44.954933744Z" level=info msg="received exit event sandbox_id:\"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" exit_status:137 exited_at:{seconds:1747413524 nanos:834471699}" May 16 16:38:44.959870 containerd[1573]: time="2025-05-16T16:38:44.959840507Z" level=info msg="TearDown network for sandbox \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" successfully" May 16 16:38:44.959870 containerd[1573]: time="2025-05-16T16:38:44.959864973Z" level=info msg="StopPodSandbox for \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" returns successfully" May 16 16:38:45.051375 kubelet[2669]: I0516 16:38:45.050618 2669 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-hostproc\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.051375 kubelet[2669]: I0516 16:38:45.050698 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5r8k\" (UniqueName: \"kubernetes.io/projected/14fc5109-adcd-450e-98b0-b75f3d15ff78-kube-api-access-n5r8k\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.051375 kubelet[2669]: I0516 16:38:45.050721 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-run\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.051375 kubelet[2669]: I0516 16:38:45.050740 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-hostproc" (OuterVolumeSpecName: "hostproc") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:38:45.051375 kubelet[2669]: I0516 16:38:45.050877 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:38:45.052140 kubelet[2669]: I0516 16:38:45.050898 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:38:45.052140 kubelet[2669]: I0516 16:38:45.050898 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-bpf-maps\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052140 kubelet[2669]: I0516 16:38:45.050944 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-xtables-lock\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052140 kubelet[2669]: I0516 16:38:45.050959 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-host-proc-sys-net\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052140 kubelet[2669]: I0516 16:38:45.050985 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-config-path\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052140 kubelet[2669]: I0516 16:38:45.051000 2669 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cni-path\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052328 kubelet[2669]: I0516 16:38:45.051015 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-lib-modules\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052328 kubelet[2669]: I0516 16:38:45.051030 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bed3429-0144-4199-a882-8a29811d275c-cilium-config-path\") pod \"9bed3429-0144-4199-a882-8a29811d275c\" (UID: \"9bed3429-0144-4199-a882-8a29811d275c\") " May 16 16:38:45.052328 kubelet[2669]: I0516 16:38:45.051049 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14fc5109-adcd-450e-98b0-b75f3d15ff78-clustermesh-secrets\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052328 kubelet[2669]: I0516 16:38:45.051063 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-host-proc-sys-kernel\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052328 kubelet[2669]: I0516 16:38:45.051076 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-etc-cni-netd\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: 
\"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052328 kubelet[2669]: I0516 16:38:45.051092 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr78s\" (UniqueName: \"kubernetes.io/projected/9bed3429-0144-4199-a882-8a29811d275c-kube-api-access-gr78s\") pod \"9bed3429-0144-4199-a882-8a29811d275c\" (UID: \"9bed3429-0144-4199-a882-8a29811d275c\") " May 16 16:38:45.052526 kubelet[2669]: I0516 16:38:45.051106 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-cgroup\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052526 kubelet[2669]: I0516 16:38:45.051122 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14fc5109-adcd-450e-98b0-b75f3d15ff78-hubble-tls\") pod \"14fc5109-adcd-450e-98b0-b75f3d15ff78\" (UID: \"14fc5109-adcd-450e-98b0-b75f3d15ff78\") " May 16 16:38:45.052526 kubelet[2669]: I0516 16:38:45.051164 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.052526 kubelet[2669]: I0516 16:38:45.051173 2669 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.052526 kubelet[2669]: I0516 16:38:45.051183 2669 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.053896 kubelet[2669]: I0516 16:38:45.052808 2669 operation_generator.go:780] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cni-path" (OuterVolumeSpecName: "cni-path") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:38:45.053896 kubelet[2669]: I0516 16:38:45.053126 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:38:45.055505 kubelet[2669]: I0516 16:38:45.055465 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:38:45.055783 kubelet[2669]: I0516 16:38:45.055758 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:38:45.056110 kubelet[2669]: I0516 16:38:45.056091 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:38:45.056266 kubelet[2669]: I0516 16:38:45.056210 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 16:38:45.056479 kubelet[2669]: I0516 16:38:45.056269 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:38:45.056717 kubelet[2669]: I0516 16:38:45.056654 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fc5109-adcd-450e-98b0-b75f3d15ff78-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 16:38:45.057820 kubelet[2669]: I0516 16:38:45.056857 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bed3429-0144-4199-a882-8a29811d275c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9bed3429-0144-4199-a882-8a29811d275c" (UID: "9bed3429-0144-4199-a882-8a29811d275c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 16:38:45.057820 kubelet[2669]: I0516 16:38:45.056880 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:38:45.057820 kubelet[2669]: I0516 16:38:45.057289 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fc5109-adcd-450e-98b0-b75f3d15ff78-kube-api-access-n5r8k" (OuterVolumeSpecName: "kube-api-access-n5r8k") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "kube-api-access-n5r8k". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 16:38:45.059273 kubelet[2669]: I0516 16:38:45.059246 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14fc5109-adcd-450e-98b0-b75f3d15ff78-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "14fc5109-adcd-450e-98b0-b75f3d15ff78" (UID: "14fc5109-adcd-450e-98b0-b75f3d15ff78"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 16:38:45.059567 kubelet[2669]: I0516 16:38:45.059523 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bed3429-0144-4199-a882-8a29811d275c-kube-api-access-gr78s" (OuterVolumeSpecName: "kube-api-access-gr78s") pod "9bed3429-0144-4199-a882-8a29811d275c" (UID: "9bed3429-0144-4199-a882-8a29811d275c"). InnerVolumeSpecName "kube-api-access-gr78s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 16:38:45.152312 kubelet[2669]: I0516 16:38:45.152270 2669 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152312 kubelet[2669]: I0516 16:38:45.152294 2669 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152312 kubelet[2669]: I0516 16:38:45.152305 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152312 kubelet[2669]: I0516 16:38:45.152315 2669 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152312 kubelet[2669]: I0516 16:38:45.152323 2669 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152312 kubelet[2669]: I0516 16:38:45.152332 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bed3429-0144-4199-a882-8a29811d275c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152590 kubelet[2669]: I0516 16:38:45.152341 2669 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14fc5109-adcd-450e-98b0-b75f3d15ff78-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152590 kubelet[2669]: 
I0516 16:38:45.152349 2669 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152590 kubelet[2669]: I0516 16:38:45.152356 2669 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152590 kubelet[2669]: I0516 16:38:45.152364 2669 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gr78s\" (UniqueName: \"kubernetes.io/projected/9bed3429-0144-4199-a882-8a29811d275c-kube-api-access-gr78s\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152590 kubelet[2669]: I0516 16:38:45.152372 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14fc5109-adcd-450e-98b0-b75f3d15ff78-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152590 kubelet[2669]: I0516 16:38:45.152380 2669 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14fc5109-adcd-450e-98b0-b75f3d15ff78-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.152590 kubelet[2669]: I0516 16:38:45.152388 2669 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n5r8k\" (UniqueName: \"kubernetes.io/projected/14fc5109-adcd-450e-98b0-b75f3d15ff78-kube-api-access-n5r8k\") on node \"localhost\" DevicePath \"\"" May 16 16:38:45.207143 kubelet[2669]: I0516 16:38:45.207070 2669 scope.go:117] "RemoveContainer" containerID="bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0" May 16 16:38:45.208712 containerd[1573]: time="2025-05-16T16:38:45.208652547Z" level=info msg="RemoveContainer for \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\"" May 16 16:38:45.215862 
containerd[1573]: time="2025-05-16T16:38:45.215785294Z" level=info msg="RemoveContainer for \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" returns successfully" May 16 16:38:45.216073 kubelet[2669]: I0516 16:38:45.216042 2669 scope.go:117] "RemoveContainer" containerID="bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0" May 16 16:38:45.216063 systemd[1]: Removed slice kubepods-besteffort-pod9bed3429_0144_4199_a882_8a29811d275c.slice - libcontainer container kubepods-besteffort-pod9bed3429_0144_4199_a882_8a29811d275c.slice. May 16 16:38:45.218112 systemd[1]: Removed slice kubepods-burstable-pod14fc5109_adcd_450e_98b0_b75f3d15ff78.slice - libcontainer container kubepods-burstable-pod14fc5109_adcd_450e_98b0_b75f3d15ff78.slice. May 16 16:38:45.218215 systemd[1]: kubepods-burstable-pod14fc5109_adcd_450e_98b0_b75f3d15ff78.slice: Consumed 6.372s CPU time, 125.2M memory peak, 324K read from disk, 13.3M written to disk. May 16 16:38:45.219239 containerd[1573]: time="2025-05-16T16:38:45.216235782Z" level=error msg="ContainerStatus for \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\": not found" May 16 16:38:45.222266 kubelet[2669]: E0516 16:38:45.222206 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\": not found" containerID="bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0" May 16 16:38:45.222375 kubelet[2669]: I0516 16:38:45.222249 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0"} err="failed to get container status 
\"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcc8e793a1678b300c1f4813154978acdd731c35e4bcd8dcf566670999bfb3f0\": not found" May 16 16:38:45.222375 kubelet[2669]: I0516 16:38:45.222329 2669 scope.go:117] "RemoveContainer" containerID="1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c" May 16 16:38:45.223607 containerd[1573]: time="2025-05-16T16:38:45.223582440Z" level=info msg="RemoveContainer for \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\"" May 16 16:38:45.228315 containerd[1573]: time="2025-05-16T16:38:45.228263569Z" level=info msg="RemoveContainer for \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" returns successfully" May 16 16:38:45.228583 kubelet[2669]: I0516 16:38:45.228489 2669 scope.go:117] "RemoveContainer" containerID="78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209" May 16 16:38:45.230412 containerd[1573]: time="2025-05-16T16:38:45.230383033Z" level=info msg="RemoveContainer for \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\"" May 16 16:38:45.239257 containerd[1573]: time="2025-05-16T16:38:45.239212971Z" level=info msg="RemoveContainer for \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\" returns successfully" May 16 16:38:45.239444 kubelet[2669]: I0516 16:38:45.239403 2669 scope.go:117] "RemoveContainer" containerID="702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763" May 16 16:38:45.241687 containerd[1573]: time="2025-05-16T16:38:45.241639621Z" level=info msg="RemoveContainer for \"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\"" May 16 16:38:45.245879 containerd[1573]: time="2025-05-16T16:38:45.245851538Z" level=info msg="RemoveContainer for \"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\" returns successfully" May 16 16:38:45.245987 kubelet[2669]: I0516 16:38:45.245970 2669 
scope.go:117] "RemoveContainer" containerID="a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3" May 16 16:38:45.247099 containerd[1573]: time="2025-05-16T16:38:45.247074978Z" level=info msg="RemoveContainer for \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\"" May 16 16:38:45.250505 containerd[1573]: time="2025-05-16T16:38:45.250474999Z" level=info msg="RemoveContainer for \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\" returns successfully" May 16 16:38:45.250618 kubelet[2669]: I0516 16:38:45.250598 2669 scope.go:117] "RemoveContainer" containerID="7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe" May 16 16:38:45.251695 containerd[1573]: time="2025-05-16T16:38:45.251634108Z" level=info msg="RemoveContainer for \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\"" May 16 16:38:45.254852 containerd[1573]: time="2025-05-16T16:38:45.254815688Z" level=info msg="RemoveContainer for \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\" returns successfully" May 16 16:38:45.254963 kubelet[2669]: I0516 16:38:45.254945 2669 scope.go:117] "RemoveContainer" containerID="1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c" May 16 16:38:45.255088 containerd[1573]: time="2025-05-16T16:38:45.255061640Z" level=error msg="ContainerStatus for \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\": not found" May 16 16:38:45.255192 kubelet[2669]: E0516 16:38:45.255168 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\": not found" containerID="1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c" May 16 16:38:45.255223 
kubelet[2669]: I0516 16:38:45.255199 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c"} err="failed to get container status \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\": rpc error: code = NotFound desc = an error occurred when try to find container \"1edf31e05592b0ceb520fa53c0245685aeb67d936f0aafb2975665eae232a39c\": not found" May 16 16:38:45.255223 kubelet[2669]: I0516 16:38:45.255220 2669 scope.go:117] "RemoveContainer" containerID="78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209" May 16 16:38:45.255406 containerd[1573]: time="2025-05-16T16:38:45.255380820Z" level=error msg="ContainerStatus for \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\": not found" May 16 16:38:45.255512 kubelet[2669]: E0516 16:38:45.255492 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\": not found" containerID="78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209" May 16 16:38:45.255544 kubelet[2669]: I0516 16:38:45.255510 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209"} err="failed to get container status \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\": rpc error: code = NotFound desc = an error occurred when try to find container \"78af73b98436905eb4876dbd13f9e02fa471b02e586aa2aa79fe86c1a9e4a209\": not found" May 16 16:38:45.255544 kubelet[2669]: I0516 16:38:45.255522 2669 scope.go:117] "RemoveContainer" 
containerID="702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763" May 16 16:38:45.255705 containerd[1573]: time="2025-05-16T16:38:45.255662550Z" level=error msg="ContainerStatus for \"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\": not found" May 16 16:38:45.255814 kubelet[2669]: E0516 16:38:45.255792 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\": not found" containerID="702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763" May 16 16:38:45.255855 kubelet[2669]: I0516 16:38:45.255818 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763"} err="failed to get container status \"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\": rpc error: code = NotFound desc = an error occurred when try to find container \"702e3d5a2df59d345ab3f363df40ea47d467637a2c7c1862aefc0849785b0763\": not found" May 16 16:38:45.255855 kubelet[2669]: I0516 16:38:45.255845 2669 scope.go:117] "RemoveContainer" containerID="a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3" May 16 16:38:45.255993 containerd[1573]: time="2025-05-16T16:38:45.255968895Z" level=error msg="ContainerStatus for \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\": not found" May 16 16:38:45.256081 kubelet[2669]: E0516 16:38:45.256064 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\": not found" containerID="a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3" May 16 16:38:45.256114 kubelet[2669]: I0516 16:38:45.256082 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3"} err="failed to get container status \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9d2131636d946bc89e11b81fe96f09b95e22fb2542ac4ca1c1dee2d427a9bf3\": not found" May 16 16:38:45.256114 kubelet[2669]: I0516 16:38:45.256095 2669 scope.go:117] "RemoveContainer" containerID="7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe" May 16 16:38:45.256235 containerd[1573]: time="2025-05-16T16:38:45.256210589Z" level=error msg="ContainerStatus for \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\": not found" May 16 16:38:45.256307 kubelet[2669]: E0516 16:38:45.256289 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\": not found" containerID="7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe" May 16 16:38:45.256344 kubelet[2669]: I0516 16:38:45.256306 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe"} err="failed to get container status \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"7ecff77ec0e6f7389e3a33a3e460002a3e8d9bf6bc10eb1b13fd9883dacdf4fe\": not found" May 16 16:38:45.810630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d-shm.mount: Deactivated successfully. May 16 16:38:45.810749 systemd[1]: var-lib-kubelet-pods-9bed3429\x2d0144\x2d4199\x2da882\x2d8a29811d275c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgr78s.mount: Deactivated successfully. May 16 16:38:45.810832 systemd[1]: var-lib-kubelet-pods-14fc5109\x2dadcd\x2d450e\x2d98b0\x2db75f3d15ff78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn5r8k.mount: Deactivated successfully. May 16 16:38:45.810911 systemd[1]: var-lib-kubelet-pods-14fc5109\x2dadcd\x2d450e\x2d98b0\x2db75f3d15ff78-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 16:38:45.810984 systemd[1]: var-lib-kubelet-pods-14fc5109\x2dadcd\x2d450e\x2d98b0\x2db75f3d15ff78-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 16:38:46.726686 sshd[4264]: Connection closed by 10.0.0.1 port 47438 May 16 16:38:46.727009 sshd-session[4262]: pam_unix(sshd:session): session closed for user core May 16 16:38:46.735388 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:47438.service: Deactivated successfully. May 16 16:38:46.737334 systemd[1]: session-24.scope: Deactivated successfully. May 16 16:38:46.738196 systemd-logind[1513]: Session 24 logged out. Waiting for processes to exit. May 16 16:38:46.741298 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:46308.service - OpenSSH per-connection server daemon (10.0.0.1:46308). May 16 16:38:46.742095 systemd-logind[1513]: Removed session 24. 
May 16 16:38:46.800897 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 46308 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:46.802336 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:46.807213 systemd-logind[1513]: New session 25 of user core. May 16 16:38:46.820844 systemd[1]: Started session-25.scope - Session 25 of User core. May 16 16:38:47.013638 kubelet[2669]: I0516 16:38:47.013508 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fc5109-adcd-450e-98b0-b75f3d15ff78" path="/var/lib/kubelet/pods/14fc5109-adcd-450e-98b0-b75f3d15ff78/volumes" May 16 16:38:47.014530 kubelet[2669]: I0516 16:38:47.014498 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bed3429-0144-4199-a882-8a29811d275c" path="/var/lib/kubelet/pods/9bed3429-0144-4199-a882-8a29811d275c/volumes" May 16 16:38:47.477599 sshd[4416]: Connection closed by 10.0.0.1 port 46308 May 16 16:38:47.478411 sshd-session[4414]: pam_unix(sshd:session): session closed for user core May 16 16:38:47.490346 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:46308.service: Deactivated successfully. May 16 16:38:47.493340 systemd[1]: session-25.scope: Deactivated successfully. May 16 16:38:47.494352 systemd-logind[1513]: Session 25 logged out. Waiting for processes to exit. May 16 16:38:47.497901 kubelet[2669]: I0516 16:38:47.497864 2669 memory_manager.go:355] "RemoveStaleState removing state" podUID="14fc5109-adcd-450e-98b0-b75f3d15ff78" containerName="cilium-agent" May 16 16:38:47.497901 kubelet[2669]: I0516 16:38:47.497891 2669 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bed3429-0144-4199-a882-8a29811d275c" containerName="cilium-operator" May 16 16:38:47.500147 systemd[1]: Started sshd@25-10.0.0.36:22-10.0.0.1:46320.service - OpenSSH per-connection server daemon (10.0.0.1:46320). May 16 16:38:47.502316 systemd-logind[1513]: Removed session 25. 
May 16 16:38:47.514647 systemd[1]: Created slice kubepods-burstable-pode8c1fad8_d3ea_45c7_af85_eb8f656a3a0f.slice - libcontainer container kubepods-burstable-pode8c1fad8_d3ea_45c7_af85_eb8f656a3a0f.slice. May 16 16:38:47.547691 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 46320 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:47.549212 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:47.553336 systemd-logind[1513]: New session 26 of user core. May 16 16:38:47.562828 systemd[1]: Started session-26.scope - Session 26 of User core. May 16 16:38:47.565608 kubelet[2669]: I0516 16:38:47.565570 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-cilium-ipsec-secrets\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565608 kubelet[2669]: I0516 16:38:47.565605 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-bpf-maps\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565716 kubelet[2669]: I0516 16:38:47.565626 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-lib-modules\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565716 kubelet[2669]: I0516 16:38:47.565639 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-xtables-lock\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565716 kubelet[2669]: I0516 16:38:47.565654 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-cilium-config-path\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565716 kubelet[2669]: I0516 16:38:47.565669 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-cilium-cgroup\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565716 kubelet[2669]: I0516 16:38:47.565695 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-clustermesh-secrets\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565716 kubelet[2669]: I0516 16:38:47.565713 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-hubble-tls\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565868 kubelet[2669]: I0516 16:38:47.565727 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-cni-path\") pod \"cilium-7zqx9\" (UID: 
\"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565868 kubelet[2669]: I0516 16:38:47.565740 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-host-proc-sys-net\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565868 kubelet[2669]: I0516 16:38:47.565755 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-hostproc\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565868 kubelet[2669]: I0516 16:38:47.565768 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-etc-cni-netd\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565868 kubelet[2669]: I0516 16:38:47.565783 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-host-proc-sys-kernel\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565868 kubelet[2669]: I0516 16:38:47.565804 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwwqs\" (UniqueName: \"kubernetes.io/projected/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-kube-api-access-nwwqs\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.565993 
kubelet[2669]: I0516 16:38:47.565818 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f-cilium-run\") pod \"cilium-7zqx9\" (UID: \"e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f\") " pod="kube-system/cilium-7zqx9" May 16 16:38:47.612822 sshd[4430]: Connection closed by 10.0.0.1 port 46320 May 16 16:38:47.613136 sshd-session[4428]: pam_unix(sshd:session): session closed for user core May 16 16:38:47.625388 systemd[1]: sshd@25-10.0.0.36:22-10.0.0.1:46320.service: Deactivated successfully. May 16 16:38:47.627282 systemd[1]: session-26.scope: Deactivated successfully. May 16 16:38:47.628105 systemd-logind[1513]: Session 26 logged out. Waiting for processes to exit. May 16 16:38:47.630907 systemd[1]: Started sshd@26-10.0.0.36:22-10.0.0.1:46326.service - OpenSSH per-connection server daemon (10.0.0.1:46326). May 16 16:38:47.631629 systemd-logind[1513]: Removed session 26. May 16 16:38:47.680338 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 46326 ssh2: RSA SHA256:xtDF+SM00BVA4NOIUT0zDz1Cb4IyRmiUgC3yMm9bHIM May 16 16:38:47.681918 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:47.685948 systemd-logind[1513]: New session 27 of user core. May 16 16:38:47.696810 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 16 16:38:47.819420 kubelet[2669]: E0516 16:38:47.819302 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:47.820096 containerd[1573]: time="2025-05-16T16:38:47.819985593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7zqx9,Uid:e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f,Namespace:kube-system,Attempt:0,}" May 16 16:38:47.835455 containerd[1573]: time="2025-05-16T16:38:47.835423559Z" level=info msg="connecting to shim 86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce" address="unix:///run/containerd/s/c9200262e25a7ab60b26ba73d9dc0caeb94e4e9a8640e01870daade74b6bcc58" namespace=k8s.io protocol=ttrpc version=3 May 16 16:38:47.863815 systemd[1]: Started cri-containerd-86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce.scope - libcontainer container 86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce. 
May 16 16:38:47.888421 containerd[1573]: time="2025-05-16T16:38:47.888388337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7zqx9,Uid:e8c1fad8-d3ea-45c7-af85-eb8f656a3a0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\"" May 16 16:38:47.889294 kubelet[2669]: E0516 16:38:47.889268 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:47.891456 containerd[1573]: time="2025-05-16T16:38:47.891421748Z" level=info msg="CreateContainer within sandbox \"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 16:38:47.899300 containerd[1573]: time="2025-05-16T16:38:47.898763136Z" level=info msg="Container 27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f: CDI devices from CRI Config.CDIDevices: []" May 16 16:38:47.921821 containerd[1573]: time="2025-05-16T16:38:47.921770559Z" level=info msg="CreateContainer within sandbox \"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f\"" May 16 16:38:47.922482 containerd[1573]: time="2025-05-16T16:38:47.922438544Z" level=info msg="StartContainer for \"27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f\"" May 16 16:38:47.923373 containerd[1573]: time="2025-05-16T16:38:47.923305783Z" level=info msg="connecting to shim 27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f" address="unix:///run/containerd/s/c9200262e25a7ab60b26ba73d9dc0caeb94e4e9a8640e01870daade74b6bcc58" protocol=ttrpc version=3 May 16 16:38:47.947804 systemd[1]: Started cri-containerd-27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f.scope - libcontainer 
container 27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f. May 16 16:38:47.975319 containerd[1573]: time="2025-05-16T16:38:47.975269259Z" level=info msg="StartContainer for \"27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f\" returns successfully" May 16 16:38:47.985102 systemd[1]: cri-containerd-27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f.scope: Deactivated successfully. May 16 16:38:47.986237 containerd[1573]: time="2025-05-16T16:38:47.986198050Z" level=info msg="received exit event container_id:\"27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f\" id:\"27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f\" pid:4508 exited_at:{seconds:1747413527 nanos:985919907}" May 16 16:38:47.986436 containerd[1573]: time="2025-05-16T16:38:47.986248495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f\" id:\"27955ac3d88b2e93dfaf51e70a87ebf1352983a61eddb1d1c3151cdfb950db7f\" pid:4508 exited_at:{seconds:1747413527 nanos:985919907}" May 16 16:38:48.220617 kubelet[2669]: E0516 16:38:48.220567 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:48.222523 containerd[1573]: time="2025-05-16T16:38:48.222495096Z" level=info msg="CreateContainer within sandbox \"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 16:38:48.229456 containerd[1573]: time="2025-05-16T16:38:48.229407957Z" level=info msg="Container 9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd: CDI devices from CRI Config.CDIDevices: []" May 16 16:38:48.235349 containerd[1573]: time="2025-05-16T16:38:48.235316933Z" level=info msg="CreateContainer within sandbox 
\"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd\"" May 16 16:38:48.235759 containerd[1573]: time="2025-05-16T16:38:48.235700374Z" level=info msg="StartContainer for \"9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd\"" May 16 16:38:48.236470 containerd[1573]: time="2025-05-16T16:38:48.236447639Z" level=info msg="connecting to shim 9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd" address="unix:///run/containerd/s/c9200262e25a7ab60b26ba73d9dc0caeb94e4e9a8640e01870daade74b6bcc58" protocol=ttrpc version=3 May 16 16:38:48.256813 systemd[1]: Started cri-containerd-9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd.scope - libcontainer container 9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd. May 16 16:38:48.285408 containerd[1573]: time="2025-05-16T16:38:48.285371448Z" level=info msg="StartContainer for \"9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd\" returns successfully" May 16 16:38:48.290768 systemd[1]: cri-containerd-9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd.scope: Deactivated successfully. 
May 16 16:38:48.291306 containerd[1573]: time="2025-05-16T16:38:48.291267710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd\" id:\"9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd\" pid:4553 exited_at:{seconds:1747413528 nanos:291040833}" May 16 16:38:48.291306 containerd[1573]: time="2025-05-16T16:38:48.291289340Z" level=info msg="received exit event container_id:\"9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd\" id:\"9cd234262cf5e2b8b7e4aa77a0ec2ba10928523e761982965152099f97fa38fd\" pid:4553 exited_at:{seconds:1747413528 nanos:291040833}" May 16 16:38:48.673700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1729309655.mount: Deactivated successfully. May 16 16:38:49.053731 kubelet[2669]: E0516 16:38:49.053480 2669 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 16:38:49.224041 kubelet[2669]: E0516 16:38:49.224012 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:49.226700 containerd[1573]: time="2025-05-16T16:38:49.225754401Z" level=info msg="CreateContainer within sandbox \"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 16:38:49.235923 containerd[1573]: time="2025-05-16T16:38:49.235818224Z" level=info msg="Container 9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58: CDI devices from CRI Config.CDIDevices: []" May 16 16:38:49.247386 containerd[1573]: time="2025-05-16T16:38:49.247352412Z" level=info msg="CreateContainer within sandbox \"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\" for 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58\"" May 16 16:38:49.247879 containerd[1573]: time="2025-05-16T16:38:49.247821372Z" level=info msg="StartContainer for \"9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58\"" May 16 16:38:49.249140 containerd[1573]: time="2025-05-16T16:38:49.249107610Z" level=info msg="connecting to shim 9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58" address="unix:///run/containerd/s/c9200262e25a7ab60b26ba73d9dc0caeb94e4e9a8640e01870daade74b6bcc58" protocol=ttrpc version=3 May 16 16:38:49.265804 systemd[1]: Started cri-containerd-9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58.scope - libcontainer container 9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58. May 16 16:38:49.309996 kubelet[2669]: I0516 16:38:49.309900 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 16 16:38:49.309996 kubelet[2669]: I0516 16:38:49.309937 2669 container_gc.go:86] "Attempting to delete unused containers" May 16 16:38:49.314504 systemd[1]: cri-containerd-9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58.scope: Deactivated successfully. 
May 16 16:38:49.316349 containerd[1573]: time="2025-05-16T16:38:49.316302244Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58\" id:\"9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58\" pid:4598 exited_at:{seconds:1747413529 nanos:316030813}" May 16 16:38:49.328794 containerd[1573]: time="2025-05-16T16:38:49.328663055Z" level=info msg="received exit event container_id:\"9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58\" id:\"9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58\" pid:4598 exited_at:{seconds:1747413529 nanos:316030813}" May 16 16:38:49.331350 containerd[1573]: time="2025-05-16T16:38:49.331321720Z" level=info msg="StopPodSandbox for \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\"" May 16 16:38:49.331469 containerd[1573]: time="2025-05-16T16:38:49.331447517Z" level=info msg="TearDown network for sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" successfully" May 16 16:38:49.331512 containerd[1573]: time="2025-05-16T16:38:49.331471863Z" level=info msg="StopPodSandbox for \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" returns successfully" May 16 16:38:49.331832 containerd[1573]: time="2025-05-16T16:38:49.331801012Z" level=info msg="RemovePodSandbox for \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\"" May 16 16:38:49.331900 containerd[1573]: time="2025-05-16T16:38:49.331836999Z" level=info msg="Forcibly stopping sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\"" May 16 16:38:49.331927 containerd[1573]: time="2025-05-16T16:38:49.331918442Z" level=info msg="TearDown network for sandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" successfully" May 16 16:38:49.333292 containerd[1573]: time="2025-05-16T16:38:49.333271545Z" level=info msg="Ensure that sandbox 
2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d in task-service has been cleanup successfully" May 16 16:38:49.336363 containerd[1573]: time="2025-05-16T16:38:49.336327819Z" level=info msg="RemovePodSandbox \"2bf74516e447701584a0029403971bf1adcf13b0443756904d0315404358854d\" returns successfully" May 16 16:38:49.336696 containerd[1573]: time="2025-05-16T16:38:49.336612695Z" level=info msg="StopPodSandbox for \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\"" May 16 16:38:49.336895 containerd[1573]: time="2025-05-16T16:38:49.336869067Z" level=info msg="TearDown network for sandbox \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" successfully" May 16 16:38:49.336895 containerd[1573]: time="2025-05-16T16:38:49.336885417Z" level=info msg="StopPodSandbox for \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" returns successfully" May 16 16:38:49.337040 containerd[1573]: time="2025-05-16T16:38:49.337002928Z" level=info msg="StartContainer for \"9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58\" returns successfully" May 16 16:38:49.337126 containerd[1573]: time="2025-05-16T16:38:49.337103767Z" level=info msg="RemovePodSandbox for \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\"" May 16 16:38:49.337126 containerd[1573]: time="2025-05-16T16:38:49.337124347Z" level=info msg="Forcibly stopping sandbox \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\"" May 16 16:38:49.337209 containerd[1573]: time="2025-05-16T16:38:49.337171946Z" level=info msg="TearDown network for sandbox \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" successfully" May 16 16:38:49.338567 containerd[1573]: time="2025-05-16T16:38:49.338541490Z" level=info msg="Ensure that sandbox 452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8 in task-service has been cleanup successfully" May 16 16:38:49.342530 containerd[1573]: 
time="2025-05-16T16:38:49.342452560Z" level=info msg="RemovePodSandbox \"452bcb3a4bdd1eee253c140ca73432f4a837e641d143787fb5e3ef99bcb7c6f8\" returns successfully" May 16 16:38:49.343230 kubelet[2669]: I0516 16:38:49.343211 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 16 16:38:49.349809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9dee62b54025fd33cc8c4a0f28b7548e0a230547386ad1e97b323e1eae0a4d58-rootfs.mount: Deactivated successfully. May 16 16:38:49.355339 kubelet[2669]: I0516 16:38:49.355319 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 16 16:38:49.355447 kubelet[2669]: I0516 16:38:49.355387 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-7zqx9","kube-system/coredns-668d6bf9bc-hnvrt","kube-system/coredns-668d6bf9bc-bc5s9","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-r4mfv","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"] May 16 16:38:49.355447 kubelet[2669]: E0516 16:38:49.355416 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7zqx9" May 16 16:38:49.355447 kubelet[2669]: E0516 16:38:49.355428 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hnvrt" May 16 16:38:49.355447 kubelet[2669]: E0516 16:38:49.355437 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-bc5s9" May 16 16:38:49.355447 kubelet[2669]: E0516 16:38:49.355445 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost" May 16 16:38:49.355566 kubelet[2669]: E0516 16:38:49.355453 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-r4mfv" May 16 16:38:49.355566 kubelet[2669]: E0516 
16:38:49.355461 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost" May 16 16:38:49.355566 kubelet[2669]: E0516 16:38:49.355469 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost" May 16 16:38:49.355566 kubelet[2669]: I0516 16:38:49.355478 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 16 16:38:50.011631 kubelet[2669]: E0516 16:38:50.011590 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:50.228442 kubelet[2669]: E0516 16:38:50.228373 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:50.230793 containerd[1573]: time="2025-05-16T16:38:50.230738012Z" level=info msg="CreateContainer within sandbox \"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 16:38:50.317928 containerd[1573]: time="2025-05-16T16:38:50.317844521Z" level=info msg="Container faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955: CDI devices from CRI Config.CDIDevices: []" May 16 16:38:50.327472 containerd[1573]: time="2025-05-16T16:38:50.327423894Z" level=info msg="CreateContainer within sandbox \"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955\"" May 16 16:38:50.327977 containerd[1573]: time="2025-05-16T16:38:50.327936718Z" level=info msg="StartContainer for \"faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955\"" May 16 16:38:50.328848 containerd[1573]: 
time="2025-05-16T16:38:50.328823444Z" level=info msg="connecting to shim faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955" address="unix:///run/containerd/s/c9200262e25a7ab60b26ba73d9dc0caeb94e4e9a8640e01870daade74b6bcc58" protocol=ttrpc version=3 May 16 16:38:50.348791 systemd[1]: Started cri-containerd-faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955.scope - libcontainer container faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955. May 16 16:38:50.375168 systemd[1]: cri-containerd-faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955.scope: Deactivated successfully. May 16 16:38:50.375932 containerd[1573]: time="2025-05-16T16:38:50.375903533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955\" id:\"faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955\" pid:4636 exited_at:{seconds:1747413530 nanos:375467733}" May 16 16:38:50.376999 containerd[1573]: time="2025-05-16T16:38:50.376979084Z" level=info msg="received exit event container_id:\"faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955\" id:\"faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955\" pid:4636 exited_at:{seconds:1747413530 nanos:375467733}" May 16 16:38:50.384033 containerd[1573]: time="2025-05-16T16:38:50.383994327Z" level=info msg="StartContainer for \"faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955\" returns successfully" May 16 16:38:50.396449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faf2eefa708aada6adcf18425391ebdd2298a9aa8ff21e3a9fe129009aa7c955-rootfs.mount: Deactivated successfully. 
May 16 16:38:50.697630 kubelet[2669]: I0516 16:38:50.697563 2669 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T16:38:50Z","lastTransitionTime":"2025-05-16T16:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 16 16:38:51.234234 kubelet[2669]: E0516 16:38:51.233814 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:38:51.236148 containerd[1573]: time="2025-05-16T16:38:51.236096064Z" level=info msg="CreateContainer within sandbox \"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 16:38:51.245510 containerd[1573]: time="2025-05-16T16:38:51.245453167Z" level=info msg="Container 6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03: CDI devices from CRI Config.CDIDevices: []"
May 16 16:38:51.253612 containerd[1573]: time="2025-05-16T16:38:51.253564751Z" level=info msg="CreateContainer within sandbox \"86f5cb25ca67bd13e77272fc73bbdd7b6c96e275ad41f9e667441f86a9c590ce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03\""
May 16 16:38:51.254071 containerd[1573]: time="2025-05-16T16:38:51.254039825Z" level=info msg="StartContainer for \"6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03\""
May 16 16:38:51.254974 containerd[1573]: time="2025-05-16T16:38:51.254949253Z" level=info msg="connecting to shim 6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03" address="unix:///run/containerd/s/c9200262e25a7ab60b26ba73d9dc0caeb94e4e9a8640e01870daade74b6bcc58" protocol=ttrpc version=3
May 16 16:38:51.273809 systemd[1]: Started cri-containerd-6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03.scope - libcontainer container 6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03.
May 16 16:38:51.306496 containerd[1573]: time="2025-05-16T16:38:51.306453002Z" level=info msg="StartContainer for \"6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03\" returns successfully"
May 16 16:38:51.316619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2513680568.mount: Deactivated successfully.
May 16 16:38:51.373166 containerd[1573]: time="2025-05-16T16:38:51.373120005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03\" id:\"57d8cb4048b858387340f8e43c4725b87bcbeeabae5a9a10055b93c470a56cd5\" pid:4706 exited_at:{seconds:1747413531 nanos:372802008}"
May 16 16:38:51.706700 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 16 16:38:52.240527 kubelet[2669]: E0516 16:38:52.240475 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:38:52.254464 kubelet[2669]: I0516 16:38:52.254396 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7zqx9" podStartSLOduration=5.254375906 podStartE2EDuration="5.254375906s" podCreationTimestamp="2025-05-16 16:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:38:52.254270719 +0000 UTC m=+83.335857241" watchObservedRunningTime="2025-05-16 16:38:52.254375906 +0000 UTC m=+83.335962428"
May 16 16:38:53.820898 kubelet[2669]: E0516 16:38:53.820852 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:38:54.069073 containerd[1573]: time="2025-05-16T16:38:54.069031848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03\" id:\"a1eccbc11423b8ecbec21c255f48e54a795e2f954bc34b7e563dcb287fb6ca64\" pid:5059 exit_status:1 exited_at:{seconds:1747413534 nanos:68524394}"
May 16 16:38:54.720653 systemd-networkd[1499]: lxc_health: Link UP
May 16 16:38:54.723090 systemd-networkd[1499]: lxc_health: Gained carrier
May 16 16:38:55.821374 kubelet[2669]: E0516 16:38:55.821118 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:38:56.167404 containerd[1573]: time="2025-05-16T16:38:56.167358697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03\" id:\"1969d3891d5b7e3a90248f6e35ef17e9848722632d11d4b8fd6de950482f426a\" pid:5246 exited_at:{seconds:1747413536 nanos:166802481}"
May 16 16:38:56.248296 kubelet[2669]: E0516 16:38:56.248265 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:38:56.513914 systemd-networkd[1499]: lxc_health: Gained IPv6LL
May 16 16:38:58.260257 containerd[1573]: time="2025-05-16T16:38:58.260204801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03\" id:\"9a8039ab4a2105edc270270eb520f9e46b29e307b680035be03d0facf69e7f3f\" pid:5279 exited_at:{seconds:1747413538 nanos:259525594}"
May 16 16:38:59.376742 kubelet[2669]: I0516 16:38:59.376705 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 16 16:38:59.378595 kubelet[2669]: I0516 16:38:59.378551 2669 container_gc.go:86] "Attempting to delete unused containers"
May 16 16:38:59.379996 kubelet[2669]: I0516 16:38:59.379965 2669 image_gc_manager.go:431] "Attempting to delete unused images"
May 16 16:38:59.391683 kubelet[2669]: I0516 16:38:59.391648 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 16 16:38:59.391775 kubelet[2669]: I0516 16:38:59.391758 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-hnvrt","kube-system/coredns-668d6bf9bc-bc5s9","kube-system/cilium-7zqx9","kube-system/kube-controller-manager-localhost","kube-system/kube-proxy-r4mfv","kube-system/kube-apiserver-localhost","kube-system/kube-scheduler-localhost"]
May 16 16:38:59.391806 kubelet[2669]: E0516 16:38:59.391790 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hnvrt"
May 16 16:38:59.391806 kubelet[2669]: E0516 16:38:59.391800 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-bc5s9"
May 16 16:38:59.391853 kubelet[2669]: E0516 16:38:59.391811 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7zqx9"
May 16 16:38:59.391853 kubelet[2669]: E0516 16:38:59.391819 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-localhost"
May 16 16:38:59.391853 kubelet[2669]: E0516 16:38:59.391827 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-r4mfv"
May 16 16:38:59.391853 kubelet[2669]: E0516 16:38:59.391837 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-localhost"
May 16 16:38:59.391853 kubelet[2669]: E0516 16:38:59.391844 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-localhost"
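The eviction pass above ranks every pod on the node for ephemeral-storage reclaim, then rejects each one with "cannot evict a critical pod": the kubelet refuses to evict pods it considers node-critical (static control-plane pods and pods in the system critical priority classes), so the pass ends with nothing reclaimed. An illustrative sketch of that control flow, not kubelet's actual implementation (the priority-class check is a simplification of its criticality test):

```python
# Assumption: criticality modeled purely via priority class, as a stand-in
# for kubelet's fuller "critical pod" test (static/mirror pods included).
CRITICAL_PRIORITY_CLASSES = {"system-node-critical", "system-cluster-critical"}

def evict_ranked_pods(ranked_pods: list[dict]) -> list[str]:
    """Walk pods in eviction-rank order; skip critical ones, evict the rest.
    Mirrors the log's outcome when every ranked pod is critical."""
    evicted = []
    for pod in ranked_pods:
        if pod["priorityClassName"] in CRITICAL_PRIORITY_CLASSES:
            print(f'cannot evict a critical pod: {pod["name"]}')
            continue
        evicted.append(pod["name"])
    if not evicted:
        print("unable to evict any pods from the node")
    return evicted

ranked = [{"name": n, "priorityClassName": "system-node-critical"}
          for n in ["kube-system/coredns-668d6bf9bc-hnvrt",
                    "kube-system/cilium-7zqx9",
                    "kube-system/kube-apiserver-localhost"]]
evict_ranked_pods(ranked)
```

Because every candidate is skipped, the pass reports failure, and the eviction manager retries on its next sync, which is why the same block of messages repeats roughly every ten seconds in this log.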
May 16 16:38:59.391853 kubelet[2669]: I0516 16:38:59.391853 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 16 16:39:00.347458 containerd[1573]: time="2025-05-16T16:39:00.347392272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03\" id:\"286e9deaea3715a5d10ca096fcdf476fea9c1f2d3c92da98da1225a159e0f7d0\" pid:5304 exited_at:{seconds:1747413540 nanos:347104341}"
May 16 16:39:01.011077 kubelet[2669]: E0516 16:39:01.011043 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:39:02.438383 containerd[1573]: time="2025-05-16T16:39:02.438343246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d0d2fc48aa4356bf4bd5afee7b8bcfa4e1cb8d5cf14bb381bc70d6e24c89a03\" id:\"d7ea291c4d233e702d895d2bd9360fa31d83e685bdb8c5fd87fd7f1d7d8cf757\" pid:5328 exited_at:{seconds:1747413542 nanos:438071865}"
May 16 16:39:02.444242 sshd[4443]: Connection closed by 10.0.0.1 port 46326
May 16 16:39:02.444590 sshd-session[4437]: pam_unix(sshd:session): session closed for user core
May 16 16:39:02.448431 systemd[1]: sshd@26-10.0.0.36:22-10.0.0.1:46326.service: Deactivated successfully.
May 16 16:39:02.450505 systemd[1]: session-27.scope: Deactivated successfully.
May 16 16:39:02.451269 systemd-logind[1513]: Session 27 logged out. Waiting for processes to exit.
May 16 16:39:02.452455 systemd-logind[1513]: Removed session 27.
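The recurring dns.go:153 "Nameserver limits exceeded" errors throughout this log come from kubelet warning that the node's resolv.conf lists more nameservers than it will pass through: the glibc resolver only uses the first three entries (MAXNS), so kubelet truncates the list and reports the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that truncation under those assumptions (the helper name is illustrative, not kubelet code):

```python
MAX_NAMESERVERS = 3  # glibc resolver limit (MAXNS); kubelet warns past this

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    """Return the nameservers that would actually be applied: the first
    three 'nameserver' entries; any extras are dropped with a warning."""
    servers = [fields[1]
               for line in resolv_conf_text.splitlines()
               if (fields := line.split()) and fields[0] == "nameserver" and len(fields) > 1]
    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits exceeded; omitting:", servers[MAX_NAMESERVERS:])
    return servers[:MAX_NAMESERVERS]

# A hypothetical resolv.conf with one nameserver too many
conf = ("nameserver 1.1.1.1\n"
        "nameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\n"
        "nameserver 9.9.9.9\n")
print(applied_nameservers(conf))
```

The error repeats on each pod-sandbox DNS setup rather than once at startup, which matches the cadence seen above; trimming the node's resolv.conf to three nameservers would silence it.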