May 15 12:16:17.876865 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025 May 15 12:16:17.876896 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 12:16:17.876908 kernel: BIOS-provided physical RAM map: May 15 12:16:17.876917 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 15 12:16:17.876925 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 15 12:16:17.876934 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 15 12:16:17.876944 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 15 12:16:17.876956 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 15 12:16:17.876972 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 15 12:16:17.876981 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 15 12:16:17.876990 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 15 12:16:17.876999 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 15 12:16:17.877007 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 15 12:16:17.877016 kernel: NX (Execute Disable) protection: active May 15 12:16:17.877030 kernel: APIC: Static calls initialized May 15 12:16:17.877040 kernel: SMBIOS 2.8 present. 
May 15 12:16:17.877054 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 15 12:16:17.877063 kernel: DMI: Memory slots populated: 1/1 May 15 12:16:17.877073 kernel: Hypervisor detected: KVM May 15 12:16:17.877082 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 12:16:17.877092 kernel: kvm-clock: using sched offset of 4372245622 cycles May 15 12:16:17.877111 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 12:16:17.877121 kernel: tsc: Detected 2794.748 MHz processor May 15 12:16:17.877135 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 12:16:17.877145 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 12:16:17.877155 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 15 12:16:17.877165 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 15 12:16:17.877175 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 12:16:17.877185 kernel: Using GB pages for direct mapping May 15 12:16:17.877194 kernel: ACPI: Early table checksum verification disabled May 15 12:16:17.877204 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 15 12:16:17.877214 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:16:17.877227 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:16:17.877237 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:16:17.877247 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 15 12:16:17.877256 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:16:17.877266 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:16:17.877277 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:16:17.877286 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:16:17.877296 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 15 12:16:17.877313 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 15 12:16:17.877323 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 15 12:16:17.877334 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 15 12:16:17.877344 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 15 12:16:17.877354 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 15 12:16:17.877364 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 15 12:16:17.877377 kernel: No NUMA configuration found May 15 12:16:17.877387 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 15 12:16:17.877397 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] May 15 12:16:17.877407 kernel: Zone ranges: May 15 12:16:17.877418 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 12:16:17.877428 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 15 12:16:17.877438 kernel: Normal empty May 15 12:16:17.877448 kernel: Device empty May 15 12:16:17.877458 kernel: Movable zone start for each node May 15 12:16:17.877468 kernel: Early memory node ranges May 15 12:16:17.877482 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 15 12:16:17.877492 kernel: node 0: [mem 
0x0000000000100000-0x000000009cfdbfff] May 15 12:16:17.877502 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 15 12:16:17.877512 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 12:16:17.877522 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 15 12:16:17.877532 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 15 12:16:17.877542 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 12:16:17.877557 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 12:16:17.877567 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 12:16:17.877580 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 12:16:17.877590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 12:16:17.877603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 12:16:17.877641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 12:16:17.877653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 12:16:17.877663 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 12:16:17.877673 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 15 12:16:17.877683 kernel: TSC deadline timer available May 15 12:16:17.877693 kernel: CPU topo: Max. logical packages: 1 May 15 12:16:17.877708 kernel: CPU topo: Max. logical dies: 1 May 15 12:16:17.877718 kernel: CPU topo: Max. dies per package: 1 May 15 12:16:17.877740 kernel: CPU topo: Max. threads per core: 1 May 15 12:16:17.877750 kernel: CPU topo: Num. cores per package: 4 May 15 12:16:17.877771 kernel: CPU topo: Num. threads per package: 4 May 15 12:16:17.877781 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs May 15 12:16:17.877791 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 15 12:16:17.877801 kernel: kvm-guest: KVM setup pv remote TLB flush May 15 12:16:17.877812 kernel: kvm-guest: setup PV sched yield May 15 12:16:17.877832 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 15 12:16:17.877842 kernel: Booting paravirtualized kernel on KVM May 15 12:16:17.877853 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 12:16:17.877863 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 15 12:16:17.877873 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 May 15 12:16:17.877883 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 May 15 12:16:17.877893 kernel: pcpu-alloc: [0] 0 1 2 3 May 15 12:16:17.877903 kernel: kvm-guest: PV spinlocks enabled May 15 12:16:17.877913 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 15 12:16:17.877929 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 12:16:17.877940 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 15 12:16:17.877949 kernel: random: crng init done May 15 12:16:17.877959 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 12:16:17.877970 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 12:16:17.877979 kernel: Fallback order for Node 0: 0 May 15 12:16:17.877989 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 May 15 12:16:17.878000 kernel: Policy zone: DMA32 May 15 12:16:17.878010 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 12:16:17.878023 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 12:16:17.878033 kernel: ftrace: allocating 40065 entries in 157 pages May 15 12:16:17.878043 kernel: ftrace: allocated 157 pages with 5 groups May 15 12:16:17.878054 kernel: Dynamic Preempt: voluntary May 15 12:16:17.878064 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 12:16:17.878075 kernel: rcu: RCU event tracing is enabled. May 15 12:16:17.878087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 12:16:17.878098 kernel: Trampoline variant of Tasks RCU enabled. May 15 12:16:17.878126 kernel: Rude variant of Tasks RCU enabled. May 15 12:16:17.878140 kernel: Tracing variant of Tasks RCU enabled. May 15 12:16:17.878150 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 12:16:17.878160 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 12:16:17.878171 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 12:16:17.878181 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 12:16:17.878191 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 12:16:17.878202 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 15 12:16:17.878212 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 12:16:17.878233 kernel: Console: colour VGA+ 80x25 May 15 12:16:17.878244 kernel: printk: legacy console [ttyS0] enabled May 15 12:16:17.878255 kernel: ACPI: Core revision 20240827 May 15 12:16:17.878268 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 15 12:16:17.878279 kernel: APIC: Switch to symmetric I/O mode setup May 15 12:16:17.878289 kernel: x2apic enabled May 15 12:16:17.878303 kernel: APIC: Switched APIC routing to: physical x2apic May 15 12:16:17.878314 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 15 12:16:17.878325 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 15 12:16:17.878339 kernel: kvm-guest: setup PV IPIs May 15 12:16:17.878349 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 12:16:17.878360 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 15 12:16:17.878371 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 15 12:16:17.878382 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 15 12:16:17.878392 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 15 12:16:17.878403 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 15 12:16:17.878414 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 12:16:17.878427 kernel: Spectre V2 : Mitigation: Retpolines May 15 12:16:17.878438 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 15 12:16:17.878448 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 15 12:16:17.878459 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 15 12:16:17.878469 kernel: RETBleed: Mitigation: untrained return thunk May 15 12:16:17.878480 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 12:16:17.878490 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 15 12:16:17.878501 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 15 12:16:17.878512 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 15 12:16:17.878527 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 15 12:16:17.878537 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 12:16:17.878548 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 12:16:17.878559 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 12:16:17.878569 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 12:16:17.878580 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 15 12:16:17.878590 kernel: Freeing SMP alternatives memory: 32K May 15 12:16:17.878601 kernel: pid_max: default: 32768 minimum: 301 May 15 12:16:17.878632 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 15 12:16:17.878655 kernel: landlock: Up and running. May 15 12:16:17.878666 kernel: SELinux: Initializing. May 15 12:16:17.878676 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 12:16:17.878691 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 12:16:17.878702 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 15 12:16:17.878712 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 15 12:16:17.878723 kernel: ... version: 0 May 15 12:16:17.878733 kernel: ... bit width: 48 May 15 12:16:17.878749 kernel: ... generic registers: 6 May 15 12:16:17.878759 kernel: ... value mask: 0000ffffffffffff May 15 12:16:17.878770 kernel: ... max period: 00007fffffffffff May 15 12:16:17.878781 kernel: ... fixed-purpose events: 0 May 15 12:16:17.878791 kernel: ... event mask: 000000000000003f May 15 12:16:17.878802 kernel: signal: max sigframe size: 1776 May 15 12:16:17.878812 kernel: rcu: Hierarchical SRCU implementation. May 15 12:16:17.878823 kernel: rcu: Max phase no-delay instances is 400. May 15 12:16:17.878834 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 15 12:16:17.878860 kernel: smp: Bringing up secondary CPUs ... 
May 15 12:16:17.878871 kernel: smpboot: x86: Booting SMP configuration: May 15 12:16:17.878881 kernel: .... node #0, CPUs: #1 #2 #3 May 15 12:16:17.878891 kernel: smp: Brought up 1 node, 4 CPUs May 15 12:16:17.878902 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 15 12:16:17.878913 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 136904K reserved, 0K cma-reserved) May 15 12:16:17.878924 kernel: devtmpfs: initialized May 15 12:16:17.878934 kernel: x86/mm: Memory block size: 128MB May 15 12:16:17.878945 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 12:16:17.878960 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 12:16:17.878970 kernel: pinctrl core: initialized pinctrl subsystem May 15 12:16:17.878981 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 12:16:17.878992 kernel: audit: initializing netlink subsys (disabled) May 15 12:16:17.879002 kernel: audit: type=2000 audit(1747311374.620:1): state=initialized audit_enabled=0 res=1 May 15 12:16:17.879013 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 12:16:17.879023 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 12:16:17.879033 kernel: cpuidle: using governor menu May 15 12:16:17.879044 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 12:16:17.879058 kernel: dca service started, version 1.12.1 May 15 12:16:17.879069 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] May 15 12:16:17.879079 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 15 12:16:17.879089 kernel: PCI: Using configuration type 1 for base access May 15 12:16:17.879111 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 15 12:16:17.879121 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 12:16:17.879131 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 15 12:16:17.879138 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 12:16:17.879146 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 15 12:16:17.879157 kernel: ACPI: Added _OSI(Module Device) May 15 12:16:17.879165 kernel: ACPI: Added _OSI(Processor Device) May 15 12:16:17.879173 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 12:16:17.879181 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 12:16:17.879188 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 12:16:17.879196 kernel: ACPI: Interpreter enabled May 15 12:16:17.879204 kernel: ACPI: PM: (supports S0 S3 S5) May 15 12:16:17.879211 kernel: ACPI: Using IOAPIC for interrupt routing May 15 12:16:17.879219 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 12:16:17.879229 kernel: PCI: Using E820 reservations for host bridge windows May 15 12:16:17.879237 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 15 12:16:17.879245 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 12:16:17.879458 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 12:16:17.879603 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 15 12:16:17.879790 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 15 12:16:17.879807 kernel: PCI host bridge to bus 0000:00 May 15 12:16:17.879996 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 12:16:17.880160 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 12:16:17.880319 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 12:16:17.880461 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 15 12:16:17.880642 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 12:16:17.880793 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 15 12:16:17.880937 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 12:16:17.881148 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint May 15 12:16:17.881324 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint May 15 12:16:17.881448 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] May 15 12:16:17.881567 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] May 15 12:16:17.881711 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] May 15 12:16:17.881869 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 12:16:17.882049 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 15 12:16:17.882197 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] May 15 12:16:17.882324 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] May 15 12:16:17.882443 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] May 15 12:16:17.882579 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 15 12:16:17.882770 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] May 15 12:16:17.882930 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] May 15 
12:16:17.883090 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] May 15 12:16:17.883291 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 15 12:16:17.883447 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] May 15 12:16:17.883606 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] May 15 12:16:17.883819 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] May 15 12:16:17.883975 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] May 15 12:16:17.884168 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint May 15 12:16:17.884303 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 15 12:16:17.884433 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint May 15 12:16:17.884551 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] May 15 12:16:17.884729 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] May 15 12:16:17.884916 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint May 15 12:16:17.885076 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] May 15 12:16:17.885092 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 12:16:17.885120 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 12:16:17.885132 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 12:16:17.885146 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 12:16:17.885159 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 15 12:16:17.885172 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 15 12:16:17.885185 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 15 12:16:17.885199 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 15 12:16:17.885212 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 15 12:16:17.885225 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 15 12:16:17.885242 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 15 12:16:17.885255 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 15 12:16:17.885269 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 15 12:16:17.885282 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 15 12:16:17.885295 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 15 12:16:17.885308 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 15 12:16:17.885321 kernel: iommu: Default domain type: Translated May 15 12:16:17.885334 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 12:16:17.885347 kernel: PCI: Using ACPI for IRQ routing May 15 12:16:17.885363 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 12:16:17.885374 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 15 12:16:17.885384 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 15 12:16:17.885545 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 15 12:16:17.885728 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 15 12:16:17.885888 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 12:16:17.885904 kernel: vgaarb: loaded May 15 12:16:17.885915 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 15 12:16:17.885930 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 15 12:16:17.885941 kernel: clocksource: Switched to clocksource 
kvm-clock May 15 12:16:17.885952 kernel: VFS: Disk quotas dquot_6.6.0 May 15 12:16:17.885962 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 12:16:17.885973 kernel: pnp: PnP ACPI init May 15 12:16:17.886200 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 15 12:16:17.886219 kernel: pnp: PnP ACPI: found 6 devices May 15 12:16:17.886230 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 12:16:17.886245 kernel: NET: Registered PF_INET protocol family May 15 12:16:17.886255 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 12:16:17.886266 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 12:16:17.886277 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 12:16:17.886288 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 12:16:17.886298 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 12:16:17.886309 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 12:16:17.886319 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 12:16:17.886330 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 12:16:17.886344 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 12:16:17.886355 kernel: NET: Registered PF_XDP protocol family May 15 12:16:17.886500 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 12:16:17.886680 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 12:16:17.886855 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 12:16:17.887000 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 15 12:16:17.887168 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 15 12:16:17.887352 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 15 12:16:17.887375 kernel: PCI: CLS 0 bytes, default 64 May 15 12:16:17.887386 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns May 15 12:16:17.887397 kernel: Initialise system trusted keyrings May 15 12:16:17.887408 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 12:16:17.887419 kernel: Key type asymmetric registered May 15 12:16:17.887429 kernel: Asymmetric key parser 'x509' registered May 15 12:16:17.887440 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 12:16:17.887451 kernel: io scheduler mq-deadline registered May 15 12:16:17.887461 kernel: io scheduler kyber registered May 15 12:16:17.887476 kernel: io scheduler bfq registered May 15 12:16:17.887487 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 12:16:17.887498 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 15 12:16:17.887509 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 15 12:16:17.887519 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 15 12:16:17.887530 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 12:16:17.887541 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 12:16:17.887552 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 12:16:17.887563 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 12:16:17.887577 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 
12:16:17.887783 kernel: rtc_cmos 00:04: RTC can wake from S4 May 15 12:16:17.887801 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 12:16:17.887950 kernel: rtc_cmos 00:04: registered as rtc0 May 15 12:16:17.888114 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T12:16:17 UTC (1747311377) May 15 12:16:17.888267 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 15 12:16:17.888283 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 15 12:16:17.888294 kernel: NET: Registered PF_INET6 protocol family May 15 12:16:17.888310 kernel: Segment Routing with IPv6 May 15 12:16:17.888321 kernel: In-situ OAM (IOAM) with IPv6 May 15 12:16:17.888331 kernel: NET: Registered PF_PACKET protocol family May 15 12:16:17.888341 kernel: Key type dns_resolver registered May 15 12:16:17.888352 kernel: IPI shorthand broadcast: enabled May 15 12:16:17.888363 kernel: sched_clock: Marking stable (3012003165, 176553017)->(3219348017, -30791835) May 15 12:16:17.888374 kernel: registered taskstats version 1 May 15 12:16:17.888384 kernel: Loading compiled-in X.509 certificates May 15 12:16:17.888395 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6' May 15 12:16:17.888409 kernel: Demotion targets for Node 0: null May 15 12:16:17.888420 kernel: Key type .fscrypt registered May 15 12:16:17.888430 kernel: Key type fscrypt-provisioning registered May 15 12:16:17.888440 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 12:16:17.888451 kernel: ima: Allocated hash algorithm: sha1 May 15 12:16:17.888461 kernel: ima: No architecture policies found May 15 12:16:17.888472 kernel: clk: Disabling unused clocks May 15 12:16:17.888482 kernel: Warning: unable to open an initial console. May 15 12:16:17.888493 kernel: Freeing unused kernel image (initmem) memory: 54416K May 15 12:16:17.888507 kernel: Write protecting the kernel read-only data: 24576k May 15 12:16:17.888518 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 15 12:16:17.888529 kernel: Run /init as init process May 15 12:16:17.888539 kernel: with arguments: May 15 12:16:17.888550 kernel: /init May 15 12:16:17.888560 kernel: with environment: May 15 12:16:17.888571 kernel: HOME=/ May 15 12:16:17.888581 kernel: TERM=linux May 15 12:16:17.888591 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 12:16:17.888608 systemd[1]: Successfully made /usr/ read-only. May 15 12:16:17.888670 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 12:16:17.888686 systemd[1]: Detected virtualization kvm. May 15 12:16:17.888698 systemd[1]: Detected architecture x86-64. May 15 12:16:17.888709 systemd[1]: Running in initrd. May 15 12:16:17.888723 systemd[1]: No hostname configured, using default hostname. May 15 12:16:17.888736 systemd[1]: Hostname set to . May 15 12:16:17.888747 systemd[1]: Initializing machine ID from VM UUID. May 15 12:16:17.888758 systemd[1]: Queued start job for default target initrd.target. May 15 12:16:17.888770 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 15 12:16:17.888782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:16:17.888795 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 12:16:17.888807 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 12:16:17.888824 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 12:16:17.888837 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 12:16:17.888850 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 12:16:17.888862 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 12:16:17.888873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:16:17.888885 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 12:16:17.888896 systemd[1]: Reached target paths.target - Path Units. May 15 12:16:17.888911 systemd[1]: Reached target slices.target - Slice Units. May 15 12:16:17.888922 systemd[1]: Reached target swap.target - Swaps. May 15 12:16:17.888934 systemd[1]: Reached target timers.target - Timer Units. May 15 12:16:17.888946 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 12:16:17.888957 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 12:16:17.888969 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 12:16:17.888981 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 12:16:17.888993 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 12:16:17.889004 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 12:16:17.889019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:16:17.889031 systemd[1]: Reached target sockets.target - Socket Units. May 15 12:16:17.889042 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 12:16:17.889054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 12:16:17.889069 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 12:16:17.889084 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 15 12:16:17.889096 systemd[1]: Starting systemd-fsck-usr.service... May 15 12:16:17.889121 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 12:16:17.889133 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 12:16:17.889145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:16:17.889159 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 12:16:17.889179 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:16:17.889194 systemd[1]: Finished systemd-fsck-usr.service. May 15 12:16:17.889209 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 12:16:17.889256 systemd-journald[220]: Collecting audit messages is disabled. 
May 15 12:16:17.889295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 12:16:17.889311 systemd-journald[220]: Journal started May 15 12:16:17.889342 systemd-journald[220]: Runtime Journal (/run/log/journal/b061e6496f9a4fc6a89636a927a77759) is 6M, max 48.6M, 42.5M free. May 15 12:16:17.875006 systemd-modules-load[222]: Inserted module 'overlay' May 15 12:16:17.924226 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 12:16:17.924264 kernel: Bridge firewalling registered May 15 12:16:17.905587 systemd-modules-load[222]: Inserted module 'br_netfilter' May 15 12:16:17.926740 systemd[1]: Started systemd-journald.service - Journal Service. May 15 12:16:17.928527 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 12:16:17.931330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:16:17.938188 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 12:16:17.942357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 12:16:17.947489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 12:16:17.951735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 12:16:17.958146 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 12:16:17.963830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:16:17.968221 systemd-tmpfiles[246]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 15 12:16:17.974334 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:16:17.977635 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 12:16:17.979815 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 12:16:17.983781 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 12:16:18.021754 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 12:16:18.044316 systemd-resolved[262]: Positive Trust Anchors: May 15 12:16:18.044336 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 12:16:18.044380 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 12:16:18.047218 systemd-resolved[262]: Defaulting to hostname 'linux'. 
May 15 12:16:18.053490 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 12:16:18.054056 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 12:16:18.146672 kernel: SCSI subsystem initialized May 15 12:16:18.156652 kernel: Loading iSCSI transport class v2.0-870. May 15 12:16:18.173672 kernel: iscsi: registered transport (tcp) May 15 12:16:18.196668 kernel: iscsi: registered transport (qla4xxx) May 15 12:16:18.196764 kernel: QLogic iSCSI HBA Driver May 15 12:16:18.219200 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 12:16:18.236495 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:16:18.249491 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 12:16:18.312407 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 12:16:18.322282 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 12:16:18.393666 kernel: raid6: avx2x4 gen() 27537 MB/s May 15 12:16:18.410671 kernel: raid6: avx2x2 gen() 16955 MB/s May 15 12:16:18.427994 kernel: raid6: avx2x1 gen() 13700 MB/s May 15 12:16:18.428043 kernel: raid6: using algorithm avx2x4 gen() 27537 MB/s May 15 12:16:18.445963 kernel: raid6: .... xor() 6438 MB/s, rmw enabled May 15 12:16:18.446011 kernel: raid6: using avx2x2 recovery algorithm May 15 12:16:18.467644 kernel: xor: automatically using best checksumming function avx May 15 12:16:18.644666 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 12:16:18.653575 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 12:16:18.656459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:16:18.693936 systemd-udevd[472]: Using default interface naming scheme 'v255'. May 15 12:16:18.699529 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:16:18.703252 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 12:16:18.736523 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation May 15 12:16:18.769925 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 12:16:18.772474 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 12:16:18.847244 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:16:18.849888 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 12:16:18.901648 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 15 12:16:18.913822 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 12:16:18.913978 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 12:16:18.913998 kernel: GPT:9289727 != 19775487 May 15 12:16:18.914008 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 12:16:18.914018 kernel: GPT:9289727 != 19775487 May 15 12:16:18.914027 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 12:16:18.914037 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:16:18.914047 kernel: libata version 3.00 loaded. 
May 15 12:16:18.916688 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 15 12:16:18.930722 kernel: cryptd: max_cpu_qlen set to 1000 May 15 12:16:18.930774 kernel: ahci 0000:00:1f.2: version 3.0 May 15 12:16:18.992343 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 12:16:18.992364 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 15 12:16:18.992523 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 15 12:16:18.992774 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 12:16:18.992914 kernel: scsi host0: ahci May 15 12:16:18.993097 kernel: scsi host1: ahci May 15 12:16:18.993328 kernel: AES CTR mode by8 optimization enabled May 15 12:16:18.993345 kernel: scsi host2: ahci May 15 12:16:18.993495 kernel: scsi host3: ahci May 15 12:16:18.993658 kernel: scsi host4: ahci May 15 12:16:18.993824 kernel: scsi host5: ahci May 15 12:16:18.993976 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 May 15 12:16:18.993988 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 May 15 12:16:18.993999 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 May 15 12:16:18.994014 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 May 15 12:16:18.994025 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 May 15 12:16:18.994035 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 May 15 12:16:18.949707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 12:16:18.949839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:16:18.966413 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:16:18.973569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:16:18.975756 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 12:16:19.019439 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 12:16:19.057948 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 12:16:19.059808 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:16:19.070415 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 12:16:19.071896 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 12:16:19.082221 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 12:16:19.084198 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 12:16:19.125867 disk-uuid[631]: Primary Header is updated. May 15 12:16:19.125867 disk-uuid[631]: Secondary Entries is updated. May 15 12:16:19.125867 disk-uuid[631]: Secondary Header is updated. 
May 15 12:16:19.132656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:16:19.138661 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:16:19.301346 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 12:16:19.301412 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 15 12:16:19.301636 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 12:16:19.302643 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 12:16:19.303639 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 12:16:19.304655 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 15 12:16:19.304680 kernel: ata3.00: applying bridge limits May 15 12:16:19.305639 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 12:16:19.306637 kernel: ata3.00: configured for UDMA/100 May 15 12:16:19.306659 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 12:16:19.370662 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 15 12:16:19.386487 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 12:16:19.386509 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 12:16:19.817425 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 12:16:19.819257 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 12:16:19.821177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:16:19.822563 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 12:16:19.824276 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 12:16:19.870124 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 12:16:20.150644 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:16:20.150831 disk-uuid[632]: The operation has completed successfully. May 15 12:16:20.182875 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 12:16:20.183005 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 12:16:20.232511 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 12:16:20.288022 sh[660]: Success May 15 12:16:20.305787 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 12:16:20.305826 kernel: device-mapper: uevent: version 1.0.3 May 15 12:16:20.306901 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 15 12:16:20.315642 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 15 12:16:20.348900 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 12:16:20.352318 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 12:16:20.385772 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 15 12:16:20.404843 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 15 12:16:20.404931 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (672) May 15 12:16:20.406511 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004 May 15 12:16:20.406539 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 12:16:20.407637 kernel: BTRFS info (device dm-0): using free-space-tree May 15 12:16:20.414550 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 12:16:20.416204 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 15 12:16:20.418042 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 12:16:20.418887 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 12:16:20.421157 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 12:16:20.451683 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (703) May 15 12:16:20.451750 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:16:20.451771 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:16:20.453219 kernel: BTRFS info (device vda6): using free-space-tree May 15 12:16:20.461645 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:16:20.462027 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 12:16:20.465235 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 12:16:20.550796 ignition[746]: Ignition 2.21.0 May 15 12:16:20.551550 ignition[746]: Stage: fetch-offline May 15 12:16:20.551586 ignition[746]: no configs at "/usr/lib/ignition/base.d" May 15 12:16:20.551595 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:16:20.551696 ignition[746]: parsed url from cmdline: "" May 15 12:16:20.551700 ignition[746]: no config URL provided May 15 12:16:20.551705 ignition[746]: reading system config file "/usr/lib/ignition/user.ign" May 15 12:16:20.551714 ignition[746]: no config at "/usr/lib/ignition/user.ign" May 15 12:16:20.551735 ignition[746]: op(1): [started] loading QEMU firmware config module May 15 12:16:20.551741 ignition[746]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 12:16:20.559951 ignition[746]: op(1): [finished] loading QEMU firmware config module May 15 12:16:20.560009 ignition[746]: QEMU firmware config was not found. Ignoring... May 15 12:16:20.574627 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 12:16:20.580014 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 15 12:16:20.606824 ignition[746]: parsing config with SHA512: 875b7c23aea8359e573360642f1fa0bd003b7e2164b8d8e64f722617c922024dbd4d0636f489aeedad7b3d5210ddf5c48e563f8966f2c3cf4dc57dc172c00c3b May 15 12:16:20.614837 unknown[746]: fetched base config from "system" May 15 12:16:20.614857 unknown[746]: fetched user config from "qemu" May 15 12:16:20.615349 ignition[746]: fetch-offline: fetch-offline passed May 15 12:16:20.615425 ignition[746]: Ignition finished successfully May 15 12:16:20.619390 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 12:16:20.634776 systemd-networkd[851]: lo: Link UP May 15 12:16:20.634787 systemd-networkd[851]: lo: Gained carrier May 15 12:16:20.636400 systemd-networkd[851]: Enumeration completed May 15 12:16:20.636599 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 12:16:20.636773 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:16:20.636778 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 12:16:20.639783 systemd-networkd[851]: eth0: Link UP May 15 12:16:20.639787 systemd-networkd[851]: eth0: Gained carrier May 15 12:16:20.639795 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:16:20.651073 systemd[1]: Reached target network.target - Network. May 15 12:16:20.653003 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 12:16:20.654697 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 12:16:20.670680 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 12:16:20.697419 ignition[855]: Ignition 2.21.0 May 15 12:16:20.697437 ignition[855]: Stage: kargs May 15 12:16:20.697594 ignition[855]: no configs at "/usr/lib/ignition/base.d" May 15 12:16:20.697605 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:16:20.698531 ignition[855]: kargs: kargs passed May 15 12:16:20.698584 ignition[855]: Ignition finished successfully May 15 12:16:20.708693 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 12:16:20.712302 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 12:16:20.751517 ignition[864]: Ignition 2.21.0 May 15 12:16:20.751533 ignition[864]: Stage: disks May 15 12:16:20.751723 ignition[864]: no configs at "/usr/lib/ignition/base.d" May 15 12:16:20.751738 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:16:20.752452 ignition[864]: disks: disks passed May 15 12:16:20.752497 ignition[864]: Ignition finished successfully May 15 12:16:20.814675 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 12:16:20.815467 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 12:16:20.817276 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 12:16:20.817648 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 12:16:20.818004 systemd[1]: Reached target sysinit.target - System Initialization. May 15 12:16:20.825410 systemd[1]: Reached target basic.target - Basic System. 
May 15 12:16:20.826936 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 12:16:20.868478 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 15 12:16:21.247376 systemd-resolved[262]: Detected conflict on linux IN A 10.0.0.46 May 15 12:16:21.247397 systemd-resolved[262]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. May 15 12:16:21.391512 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 12:16:21.393461 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 12:16:21.521664 kernel: EXT4-fs (vda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none. May 15 12:16:21.522853 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 12:16:21.525251 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 12:16:21.528987 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 12:16:21.532347 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 12:16:21.534777 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 12:16:21.534841 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 12:16:21.536702 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 12:16:21.547059 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 12:16:21.550735 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 12:16:21.554180 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (882) May 15 12:16:21.556461 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:16:21.556488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:16:21.556502 kernel: BTRFS info (device vda6): using free-space-tree May 15 12:16:21.562153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 12:16:21.594663 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory May 15 12:16:21.600291 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory May 15 12:16:21.603953 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory May 15 12:16:21.607632 initrd-setup-root[927]: cut: /sysroot/etc/gshadow: No such file or directory May 15 12:16:21.708350 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 12:16:21.711708 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 12:16:21.715006 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 12:16:21.745769 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 12:16:21.747302 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:16:21.764933 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 15 12:16:21.779639 ignition[997]: INFO : Ignition 2.21.0 May 15 12:16:21.779639 ignition[997]: INFO : Stage: mount May 15 12:16:21.779639 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:16:21.779639 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:16:21.784186 ignition[997]: INFO : mount: mount passed May 15 12:16:21.784186 ignition[997]: INFO : Ignition finished successfully May 15 12:16:21.785294 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 12:16:21.788536 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 12:16:21.989858 systemd-networkd[851]: eth0: Gained IPv6LL May 15 12:16:22.524788 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 12:16:22.571647 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1009) May 15 12:16:22.573680 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:16:22.573703 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:16:22.573723 kernel: BTRFS info (device vda6): using free-space-tree May 15 12:16:22.578282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 12:16:22.610377 ignition[1026]: INFO : Ignition 2.21.0 May 15 12:16:22.610377 ignition[1026]: INFO : Stage: files May 15 12:16:22.622791 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:16:22.622791 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:16:22.622791 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping May 15 12:16:22.622791 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 12:16:22.622791 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 12:16:22.630046 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 12:16:22.631853 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 12:16:22.631853 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 12:16:22.630768 unknown[1026]: wrote ssh authorized keys file for user: core May 15 12:16:22.636163 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 12:16:22.636163 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 15 12:16:22.672872 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 12:16:22.805899 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 12:16:22.805899 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 12:16:22.817454 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 15 12:16:23.171296 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 12:16:23.258229 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/bin/cilium.tar.gz" May 15 12:16:23.258229 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 12:16:23.262796 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 12:16:23.262796 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 12:16:23.262796 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 12:16:23.262796 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:16:23.262796 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:16:23.262796 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:16:23.262796 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:16:23.277993 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:16:23.280216 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:16:23.280216 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 12:16:23.286099 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 12:16:23.286099 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 12:16:23.291790 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 15 12:16:23.588700 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 12:16:24.001771 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 12:16:24.001771 ignition[1026]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 12:16:24.009027 ignition[1026]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:16:24.263239 ignition[1026]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:16:24.263239 ignition[1026]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 15 12:16:24.263239 ignition[1026]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 15 12:16:24.263239 ignition[1026]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" 
May 15 12:16:24.270582 ignition[1026]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 12:16:24.270582 ignition[1026]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 15 12:16:24.270582 ignition[1026]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 15 12:16:24.587932 ignition[1026]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 12:16:24.593846 ignition[1026]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 12:16:24.595751 ignition[1026]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 15 12:16:24.595751 ignition[1026]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 15 12:16:24.595751 ignition[1026]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 15 12:16:24.600966 ignition[1026]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 12:16:24.600966 ignition[1026]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 12:16:24.600966 ignition[1026]: INFO : files: files passed May 15 12:16:24.600966 ignition[1026]: INFO : Ignition finished successfully May 15 12:16:24.604294 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 12:16:24.608090 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 12:16:24.611763 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 12:16:24.621159 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 12:16:24.621308 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 12:16:24.626680 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory May 15 12:16:24.631634 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:16:24.633573 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 12:16:24.635697 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:16:24.639606 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 12:16:24.640161 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 12:16:24.644435 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 12:16:24.690972 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 12:16:24.691137 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 12:16:24.692164 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 12:16:24.695204 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 12:16:24.697421 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 12:16:24.698493 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
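The files stage above writes the Helm and Cilium archives, several manifests, the kubernetes sysext image and its /etc/extensions link, and the prepare-helm/coreos-metadata unit presets. A rough, hypothetical reconstruction of the kind of Ignition v3 config that produces such operations; the spec version, SSH key, and unit body are placeholders, not the actual user config fetched from "qemu":

    import json

    config = {
        "ignition": {"version": "3.4.0"},                     # placeholder spec version
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"],   # placeholder key
        }]},
        "storage": {
            "files": [   # install.sh, nginx.yaml, nfs-*.yaml, update.conf omitted for brevity
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},   # placeholder body
            {"name": "coreos-metadata.service", "enabled": False},
        ]},
    }

    print(json.dumps(config, indent=2))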
May 15 12:16:24.729833 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 12:16:24.733816 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 12:16:24.758646 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 12:16:24.761060 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:16:24.762430 systemd[1]: Stopped target timers.target - Timer Units. May 15 12:16:24.764060 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 12:16:24.764224 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 12:16:24.767767 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 12:16:24.768368 systemd[1]: Stopped target basic.target - Basic System. May 15 12:16:24.768923 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 12:16:24.773152 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 12:16:24.773513 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 12:16:24.778043 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 15 12:16:24.780093 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 12:16:24.782428 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 12:16:24.784568 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 12:16:24.787164 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 12:16:24.787512 systemd[1]: Stopped target swap.target - Swaps. May 15 12:16:24.791262 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 12:16:24.791425 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 12:16:24.794728 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 12:16:24.795328 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:16:24.795657 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 12:16:24.800087 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:16:24.802302 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 12:16:24.802444 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 12:16:24.805319 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 12:16:24.805464 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 12:16:24.807422 systemd[1]: Stopped target paths.target - Path Units. May 15 12:16:24.809558 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 12:16:24.814674 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:16:24.815192 systemd[1]: Stopped target slices.target - Slice Units. May 15 12:16:24.817960 systemd[1]: Stopped target sockets.target - Socket Units. May 15 12:16:24.819646 systemd[1]: iscsid.socket: Deactivated successfully. May 15 12:16:24.819740 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 12:16:24.821537 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 12:16:24.821642 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
May 15 12:16:24.823651 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 12:16:24.823765 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 12:16:24.825643 systemd[1]: ignition-files.service: Deactivated successfully. May 15 12:16:24.825750 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 12:16:24.830683 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 12:16:24.833465 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 12:16:24.835509 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 12:16:24.835691 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:16:24.838342 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 12:16:24.838532 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 12:16:24.845465 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 12:16:24.848905 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 12:16:24.867528 ignition[1081]: INFO : Ignition 2.21.0 May 15 12:16:24.867528 ignition[1081]: INFO : Stage: umount May 15 12:16:24.869572 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:16:24.869572 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:16:24.872343 ignition[1081]: INFO : umount: umount passed May 15 12:16:24.872343 ignition[1081]: INFO : Ignition finished successfully May 15 12:16:24.871111 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 12:16:24.874829 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 12:16:24.874991 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 12:16:24.876519 systemd[1]: Stopped target network.target - Network. May 15 12:16:24.877955 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 12:16:24.878042 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 12:16:24.879703 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 12:16:24.879761 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 12:16:24.881544 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 12:16:24.881605 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 12:16:24.883938 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 12:16:24.883989 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 12:16:24.884704 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 12:16:24.887325 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 12:16:24.889651 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 12:16:24.889779 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 12:16:24.892218 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 12:16:24.892316 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 12:16:24.893818 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 12:16:24.893956 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 12:16:24.899136 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 12:16:24.899826 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
May 15 12:16:24.899925 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:16:24.904259 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 12:16:24.914215 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 12:16:24.914364 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 12:16:24.918417 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 12:16:24.918755 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 15 12:16:24.919166 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 12:16:24.919207 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 12:16:24.925523 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 12:16:24.926599 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 12:16:24.926774 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 12:16:24.929362 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 12:16:24.929445 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 12:16:24.932696 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 12:16:24.932765 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 12:16:24.933298 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:16:24.934566 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 12:16:24.951297 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 12:16:24.951460 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 12:16:24.961823 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 12:16:24.962045 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:16:24.964293 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 12:16:24.964354 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 12:16:24.966490 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 12:16:24.966540 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:16:24.966962 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 12:16:24.967023 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 12:16:24.967789 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 12:16:24.967841 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 12:16:24.976164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 12:16:24.976293 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 12:16:24.978524 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 12:16:24.981164 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 15 12:16:24.981331 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:16:24.986306 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 12:16:24.986388 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 15 12:16:24.991082 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 12:16:24.991141 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 12:16:24.995457 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 12:16:24.995509 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:16:24.996170 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 12:16:24.996245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:16:25.021424 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 12:16:25.021695 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 12:16:25.022819 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 12:16:25.024646 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 12:16:25.061080 systemd[1]: Switching root. May 15 12:16:25.106788 systemd-journald[220]: Journal stopped May 15 12:16:27.289311 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). May 15 12:16:27.289411 kernel: SELinux: policy capability network_peer_controls=1 May 15 12:16:27.289433 kernel: SELinux: policy capability open_perms=1 May 15 12:16:27.289446 kernel: SELinux: policy capability extended_socket_class=1 May 15 12:16:27.289458 kernel: SELinux: policy capability always_check_network=0 May 15 12:16:27.289469 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 12:16:27.289480 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 12:16:27.289494 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 12:16:27.289505 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 12:16:27.289517 kernel: SELinux: policy capability userspace_initial_context=0 May 15 12:16:27.289533 kernel: audit: type=1403 audit(1747311385.825:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 12:16:27.289553 systemd[1]: Successfully loaded SELinux policy in 55.797ms. May 15 12:16:27.289581 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.445ms. May 15 12:16:27.289596 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 12:16:27.289608 systemd[1]: Detected virtualization kvm. May 15 12:16:27.289638 systemd[1]: Detected architecture x86-64. May 15 12:16:27.289650 systemd[1]: Detected first boot. May 15 12:16:27.289662 systemd[1]: Initializing machine ID from VM UUID. May 15 12:16:27.289675 zram_generator::config[1126]: No configuration found. May 15 12:16:27.289701 kernel: Guest personality initialized and is inactive May 15 12:16:27.289712 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 12:16:27.289724 kernel: Initialized host personality May 15 12:16:27.289735 kernel: NET: Registered PF_VSOCK protocol family May 15 12:16:27.289747 systemd[1]: Populated /etc with preset unit settings. May 15 12:16:27.289760 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 12:16:27.289772 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
May 15 12:16:27.289784 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 12:16:27.289798 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 12:16:27.289811 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 12:16:27.289823 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 12:16:27.289839 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 12:16:27.289879 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 12:16:27.289893 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 12:16:27.289910 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 12:16:27.289922 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 12:16:27.289934 systemd[1]: Created slice user.slice - User and Session Slice. May 15 12:16:27.289951 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:16:27.289963 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:16:27.289975 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 12:16:27.289988 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 12:16:27.290007 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 12:16:27.290020 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 12:16:27.290032 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 12:16:27.290046 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:16:27.290058 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 12:16:27.290072 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 12:16:27.290084 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 12:16:27.290096 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 12:16:27.290108 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 12:16:27.290120 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:16:27.290132 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 12:16:27.290144 systemd[1]: Reached target slices.target - Slice Units. May 15 12:16:27.290159 systemd[1]: Reached target swap.target - Swaps. May 15 12:16:27.290173 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 12:16:27.290194 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 12:16:27.290216 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 12:16:27.290228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 12:16:27.290244 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 12:16:27.290257 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:16:27.290269 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
May 15 12:16:27.290280 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 12:16:27.290292 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 12:16:27.290309 systemd[1]: Mounting media.mount - External Media Directory... May 15 12:16:27.290320 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:16:27.290332 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 12:16:27.290344 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 12:16:27.290358 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 12:16:27.290374 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 12:16:27.290388 systemd[1]: Reached target machines.target - Containers. May 15 12:16:27.290400 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 12:16:27.290415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:16:27.290427 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 12:16:27.290439 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 12:16:27.290451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:16:27.290462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:16:27.290474 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:16:27.290489 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 12:16:27.290501 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:16:27.290513 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 12:16:27.290528 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 12:16:27.290540 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 12:16:27.290552 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 12:16:27.290563 systemd[1]: Stopped systemd-fsck-usr.service. May 15 12:16:27.290576 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:16:27.290588 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 12:16:27.290600 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 12:16:27.290630 kernel: loop: module loaded May 15 12:16:27.290649 kernel: fuse: init (API version 7.41) May 15 12:16:27.290684 systemd-journald[1190]: Collecting audit messages is disabled. May 15 12:16:27.290710 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 12:16:27.290725 systemd-journald[1190]: Journal started May 15 12:16:27.290752 systemd-journald[1190]: Runtime Journal (/run/log/journal/b061e6496f9a4fc6a89636a927a77759) is 6M, max 48.6M, 42.5M free. May 15 12:16:26.538290 systemd[1]: Queued start job for default target multi-user.target. 
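The entries in this log follow a "Mon DD HH:MM:SS.ssssss source[pid]: message" layout, with the [pid] part absent on kernel lines. A small parser sketch under that assumption, handy for pulling out per-unit timelines such as the journald restart above:

    import re
    from datetime import datetime

    # Matches e.g. "May 15 12:16:27.290752 systemd-journald[1190]: Journal started"
    LINE = re.compile(
        r"^(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d{6}) "
        r"(?P<source>[^\s:\[]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)$"
    )

    def parse(line, year=2025):          # the year is not part of the timestamps
        m = LINE.match(line)
        if not m:
            return None
        ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
        pid = int(m["pid"]) if m["pid"] else None
        return ts, m["source"], pid, m["msg"]

    print(parse("May 15 12:16:27.290752 systemd-journald[1190]: Journal started"))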
May 15 12:16:26.563954 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 12:16:26.564509 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 12:16:27.344157 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 12:16:27.386646 kernel: ACPI: bus type drm_connector registered May 15 12:16:27.391325 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 12:16:27.400651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 12:16:27.403759 systemd[1]: verity-setup.service: Deactivated successfully. May 15 12:16:27.403791 systemd[1]: Stopped verity-setup.service. May 15 12:16:27.407663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:16:27.435721 systemd[1]: Started systemd-journald.service - Journal Service. May 15 12:16:27.454877 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 12:16:27.456264 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 12:16:27.457827 systemd[1]: Mounted media.mount - External Media Directory. May 15 12:16:27.459229 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 12:16:27.476333 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 12:16:27.477754 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 12:16:27.481181 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:16:27.482874 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 12:16:27.483105 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 12:16:27.584773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:16:27.585099 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:16:27.586844 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:16:27.587116 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:16:27.588677 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:16:27.588963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:16:27.590600 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 12:16:27.590890 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 12:16:27.592452 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:16:27.592758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:16:27.594388 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 12:16:27.596016 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:16:27.597811 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 12:16:27.599507 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 12:16:27.617988 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 12:16:27.643795 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 12:16:27.646125 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
May 15 12:16:27.647289 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 12:16:27.647328 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 12:16:27.649370 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 12:16:27.656764 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 12:16:27.685908 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:16:27.687878 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 12:16:27.703225 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 12:16:27.705743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:16:27.707425 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 12:16:27.708694 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:16:27.717918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 12:16:27.722761 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 12:16:27.726821 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 12:16:27.748932 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:16:27.750712 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 12:16:27.752109 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 12:16:27.752927 systemd-journald[1190]: Time spent on flushing to /var/log/journal/b061e6496f9a4fc6a89636a927a77759 is 16.566ms for 986 entries. May 15 12:16:27.752927 systemd-journald[1190]: System Journal (/var/log/journal/b061e6496f9a4fc6a89636a927a77759) is 8M, max 195.6M, 187.6M free. May 15 12:16:27.803252 systemd-journald[1190]: Received client request to flush runtime journal. May 15 12:16:27.803290 kernel: loop0: detected capacity change from 0 to 113872 May 15 12:16:27.781269 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 12:16:27.783701 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 12:16:27.789075 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 12:16:27.804171 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 12:16:27.805979 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. May 15 12:16:27.806012 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. May 15 12:16:27.807791 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 12:16:27.811121 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 12:16:27.823840 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 12:16:27.828667 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 12:16:27.828850 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
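The journald flush statistics above (16.566 ms spent flushing 986 entries to the persistent journal) work out to roughly 17 microseconds per entry; a one-line check using only the numbers reported in the log:

    flush_ms, entries = 16.566, 986     # from the systemd-journald flush report above
    print(f"{flush_ms / entries * 1000:.1f} µs per journal entry")   # ~16.8 µs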
May 15 12:16:27.884470 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 12:16:27.890973 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 12:16:27.896658 kernel: loop1: detected capacity change from 0 to 205544 May 15 12:16:28.004662 kernel: loop2: detected capacity change from 0 to 146240 May 15 12:16:28.044704 kernel: loop3: detected capacity change from 0 to 113872 May 15 12:16:28.052022 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 12:16:28.057435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 12:16:28.062840 kernel: loop4: detected capacity change from 0 to 205544 May 15 12:16:28.094640 kernel: loop5: detected capacity change from 0 to 146240 May 15 12:16:28.103932 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. May 15 12:16:28.104330 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. May 15 12:16:28.110300 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:16:28.117017 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 12:16:28.117665 (sd-merge)[1267]: Merged extensions into '/usr'. May 15 12:16:28.122708 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)... May 15 12:16:28.122855 systemd[1]: Reloading... May 15 12:16:28.204673 zram_generator::config[1309]: No configuration found. May 15 12:16:28.258464 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 12:16:28.303339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:16:28.413330 systemd[1]: Reloading finished in 289 ms. May 15 12:16:28.446127 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 12:16:28.447892 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 12:16:28.502594 systemd[1]: Starting ensure-sysext.service... May 15 12:16:28.506286 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 12:16:28.517362 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)... May 15 12:16:28.517383 systemd[1]: Reloading... May 15 12:16:28.541255 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 15 12:16:28.541418 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 15 12:16:28.541762 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 12:16:28.542055 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 12:16:28.542994 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 12:16:28.543261 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. May 15 12:16:28.543336 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. May 15 12:16:28.550543 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. 
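The (sd-merge) lines above show systemd-sysext discovering the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' images and merging them into /usr before the reload. A sketch of the discovery half only, assuming the images are *.raw files in a few extension directories; /etc/extensions is the one the Ignition files stage populated earlier, the other paths are assumptions not taken from this log:

    from pathlib import Path

    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discovered_extensions():
        names = []
        for d in map(Path, SEARCH_PATHS):
            if d.is_dir():
                # e.g. /etc/extensions/kubernetes.raw -> "kubernetes"
                names.extend(p.stem for p in sorted(d.glob("*.raw")))
        return names

    if __name__ == "__main__":
        print("Using extensions", ", ".join(f"'{n}'" for n in discovered_extensions()) + ".")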
May 15 12:16:28.550562 systemd-tmpfiles[1335]: Skipping /boot May 15 12:16:28.602368 zram_generator::config[1364]: No configuration found. May 15 12:16:28.576487 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. May 15 12:16:28.576505 systemd-tmpfiles[1335]: Skipping /boot May 15 12:16:28.708541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:16:28.802122 systemd[1]: Reloading finished in 284 ms. May 15 12:16:28.825778 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 12:16:28.834777 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:16:28.849635 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:16:28.853185 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 12:16:28.857448 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 12:16:28.934014 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 12:16:28.946956 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:16:28.950161 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 12:16:28.961840 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 12:16:28.965152 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:16:28.965322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:16:28.968124 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:16:28.973905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:16:28.991110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:16:29.076331 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:16:29.076769 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:16:29.077038 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:16:29.081776 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 12:16:29.084365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:16:29.084605 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:16:29.090496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:16:29.090773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:16:29.092747 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:16:29.092983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:16:29.098644 systemd-udevd[1407]: Using default interface naming scheme 'v255'. 
May 15 12:16:29.102973 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 12:16:29.106539 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:16:29.118225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:16:29.120029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:16:29.151069 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:16:29.154399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:16:29.155711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:16:29.155900 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:16:29.163594 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 12:16:29.164978 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:16:29.166429 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 12:16:29.168654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:16:29.171717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:16:29.173921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:16:29.174156 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:16:29.178944 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 12:16:29.180986 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:16:29.181205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:16:29.184802 augenrules[1447]: No rules May 15 12:16:29.208309 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:16:29.208603 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:16:29.243952 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:16:29.246641 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 12:16:29.268092 systemd[1]: Finished ensure-sysext.service. May 15 12:16:29.277657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:16:29.279612 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:16:29.310267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:16:29.317821 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:16:29.321721 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:16:29.325928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:16:29.329683 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 15 12:16:29.334586 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:16:29.335279 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:16:29.343902 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 12:16:29.363890 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 12:16:29.365386 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 12:16:29.365440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:16:29.366414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:16:29.366753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:16:29.372475 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:16:29.373071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:16:29.374665 augenrules[1484]: /sbin/augenrules: No change May 15 12:16:29.375170 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:16:29.375873 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:16:29.382453 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:16:29.382844 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:16:29.399011 augenrules[1516]: No rules May 15 12:16:29.404852 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:16:29.405244 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:16:29.420341 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 12:16:29.424455 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 12:16:29.426238 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 12:16:29.427768 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:16:29.427870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:16:29.466446 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 12:16:29.476642 kernel: mousedev: PS/2 mouse device common for all mice May 15 12:16:29.489646 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 15 12:16:29.490749 systemd-resolved[1405]: Positive Trust Anchors: May 15 12:16:29.490772 systemd-resolved[1405]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 12:16:29.490826 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 12:16:29.496286 kernel: ACPI: button: Power Button [PWRF] May 15 12:16:29.496395 systemd-resolved[1405]: Defaulting to hostname 'linux'. May 15 12:16:29.498758 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 12:16:29.500430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 12:16:29.578734 systemd-networkd[1495]: lo: Link UP May 15 12:16:29.579093 systemd-networkd[1495]: lo: Gained carrier May 15 12:16:29.583833 systemd-networkd[1495]: Enumeration completed May 15 12:16:29.583983 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 12:16:29.584237 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:16:29.584243 systemd-networkd[1495]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 12:16:29.585195 systemd-networkd[1495]: eth0: Link UP May 15 12:16:29.585463 systemd-networkd[1495]: eth0: Gained carrier May 15 12:16:29.585528 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:16:29.585543 systemd[1]: Reached target network.target - Network. May 15 12:16:29.593148 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 12:16:29.607569 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 12:16:29.611385 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 12:16:29.611792 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 12:16:29.618729 systemd-networkd[1495]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 12:16:29.656450 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 12:16:29.658710 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 12:16:29.659152 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 12:16:29.659211 systemd-timesyncd[1501]: Initial clock synchronization to Thu 2025-05-15 12:16:29.868539 UTC. May 15 12:16:29.662785 systemd[1]: Reached target sysinit.target - System Initialization. May 15 12:16:29.664187 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 12:16:29.665689 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 12:16:29.667576 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 15 12:16:29.669446 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
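systemd-timesyncd above reports contacting 10.0.0.1:123 and an initial synchronization to 12:16:29.868539 UTC, while the journal entry carrying that message is stamped 12:16:29.659211. A quick computation of the implied clock step, assuming the journal timestamps are also UTC:

    from datetime import datetime

    logged = datetime(2025, 5, 15, 12, 16, 29, 659211)   # journal timestamp of the sync message
    synced = datetime(2025, 5, 15, 12, 16, 29, 868539)   # time the clock was synchronized to
    step_ms = (synced - logged).total_seconds() * 1000
    print(f"initial clock step: about {step_ms:.0f} ms forward")   # ~209 ms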
May 15 12:16:29.671121 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 12:16:29.671160 systemd[1]: Reached target paths.target - Path Units. May 15 12:16:29.672498 systemd[1]: Reached target time-set.target - System Time Set. May 15 12:16:29.673911 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 12:16:29.676121 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 12:16:29.677632 systemd[1]: Reached target timers.target - Timer Units. May 15 12:16:29.684143 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 12:16:29.687422 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 12:16:29.695489 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 12:16:29.698981 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 12:16:29.700593 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 12:16:29.708192 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 12:16:29.710122 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 12:16:29.714963 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 12:16:29.718300 systemd[1]: Reached target sockets.target - Socket Units. May 15 12:16:29.719663 systemd[1]: Reached target basic.target - Basic System. May 15 12:16:29.721118 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 12:16:29.721255 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 12:16:29.735794 systemd[1]: Starting containerd.service - containerd container runtime... May 15 12:16:29.741018 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 12:16:29.743906 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 12:16:29.746808 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 12:16:29.751406 kernel: kvm_amd: TSC scaling supported May 15 12:16:29.751460 kernel: kvm_amd: Nested Virtualization enabled May 15 12:16:29.751478 kernel: kvm_amd: Nested Paging enabled May 15 12:16:29.751516 kernel: kvm_amd: LBR virtualization supported May 15 12:16:29.753071 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 12:16:29.754411 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 12:16:29.761869 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 15 12:16:29.764738 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 12:16:29.769492 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 12:16:29.770084 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 15 12:16:29.770416 kernel: kvm_amd: Virtual GIF supported May 15 12:16:29.772596 jq[1556]: false May 15 12:16:29.774580 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 12:16:29.778008 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 15 12:16:29.787809 extend-filesystems[1557]: Found loop3 May 15 12:16:29.787809 extend-filesystems[1557]: Found loop4 May 15 12:16:29.787809 extend-filesystems[1557]: Found loop5 May 15 12:16:29.787809 extend-filesystems[1557]: Found sr0 May 15 12:16:29.787809 extend-filesystems[1557]: Found vda May 15 12:16:29.787809 extend-filesystems[1557]: Found vda1 May 15 12:16:29.787809 extend-filesystems[1557]: Found vda2 May 15 12:16:29.787809 extend-filesystems[1557]: Found vda3 May 15 12:16:29.787809 extend-filesystems[1557]: Found usr May 15 12:16:29.787809 extend-filesystems[1557]: Found vda4 May 15 12:16:29.787809 extend-filesystems[1557]: Found vda6 May 15 12:16:29.787809 extend-filesystems[1557]: Found vda7 May 15 12:16:29.824318 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Refreshing passwd entry cache May 15 12:16:29.806921 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 12:16:29.816340 oslogin_cache_refresh[1558]: Refreshing passwd entry cache May 15 12:16:29.825380 extend-filesystems[1557]: Found vda9 May 15 12:16:29.825380 extend-filesystems[1557]: Checking size of /dev/vda9 May 15 12:16:29.820555 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 12:16:29.826469 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 12:16:29.830962 systemd[1]: Starting update-engine.service - Update Engine... May 15 12:16:29.833157 extend-filesystems[1557]: Resized partition /dev/vda9 May 15 12:16:29.838898 extend-filesystems[1575]: resize2fs 1.47.2 (1-Jan-2025) May 15 12:16:29.843269 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 12:16:29.849265 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 12:16:29.852559 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 12:16:29.851548 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 12:16:29.851891 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 12:16:29.854330 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 12:16:29.854666 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 12:16:29.892448 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 12:16:29.904410 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Failure getting users, quitting May 15 12:16:29.904410 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 12:16:29.904410 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Refreshing group entry cache May 15 12:16:29.904410 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Failure getting groups, quitting May 15 12:16:29.904410 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 12:16:29.871562 oslogin_cache_refresh[1558]: Failure getting users, quitting May 15 12:16:29.903641 systemd[1]: motdgen.service: Deactivated successfully. May 15 12:16:29.871588 oslogin_cache_refresh[1558]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
May 15 12:16:29.904811 jq[1576]: true May 15 12:16:29.871667 oslogin_cache_refresh[1558]: Refreshing group entry cache May 15 12:16:29.880484 oslogin_cache_refresh[1558]: Failure getting groups, quitting May 15 12:16:29.880497 oslogin_cache_refresh[1558]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 12:16:29.907899 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 12:16:29.911782 extend-filesystems[1575]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 12:16:29.911782 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 12:16:29.911782 extend-filesystems[1575]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 12:16:29.945866 kernel: EDAC MC: Ver: 3.0.0 May 15 12:16:29.911069 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 15 12:16:29.932821 dbus-daemon[1554]: [system] SELinux support is enabled May 15 12:16:29.948575 extend-filesystems[1557]: Resized filesystem in /dev/vda9 May 15 12:16:29.911401 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 15 12:16:29.917495 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 12:16:29.917852 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 12:16:29.953735 update_engine[1571]: I20250515 12:16:29.953570 1571 main.cc:92] Flatcar Update Engine starting May 15 12:16:29.939090 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 12:16:29.955833 update_engine[1571]: I20250515 12:16:29.955636 1571 update_check_scheduler.cc:74] Next update check in 6m10s May 15 12:16:29.958118 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 12:16:29.959252 tar[1578]: linux-amd64/helm May 15 12:16:29.961878 jq[1589]: true May 15 12:16:29.980732 systemd-logind[1565]: Watching system buttons on /dev/input/event2 (Power Button) May 15 12:16:29.980785 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 12:16:29.981205 systemd-logind[1565]: New seat seat0. May 15 12:16:29.985525 systemd[1]: Started systemd-logind.service - User Login Management. May 15 12:16:29.987004 systemd[1]: Started update-engine.service - Update Engine. May 15 12:16:29.989511 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 12:16:29.989749 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 12:16:29.994930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:16:29.996705 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 12:16:29.996862 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 12:16:30.008833 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
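The extend-filesystems entries above record an online ext4 grow of /dev/vda9 from 553472 to 1864699 4k blocks. A minimal sketch of the same operation done by hand, assuming resize2fs is installed and the underlying partition has already been enlarged; this is a generic illustration, not the exact command sequence the service runs.

#!/usr/bin/env python3
# Sketch: grow a mounted ext4 filesystem to fill its (already enlarged)
# partition, as extend-filesystems.service did for /dev/vda9 above.
# Requires root; the device name is taken from the log.
import subprocess

DEVICE = "/dev/vda9"

# resize2fs with no explicit size grows the filesystem to the partition size;
# ext4 supports doing this while mounted ("on-line resizing" in the log).
subprocess.run(["resize2fs", DEVICE], check=True)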
May 15 12:16:30.102906 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 12:16:30.159327 locksmithd[1603]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 12:16:30.171262 bash[1617]: Updated "/home/core/.ssh/authorized_keys" May 15 12:16:30.170985 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 12:16:30.172959 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 12:16:30.180144 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 12:16:30.185526 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 12:16:30.246437 systemd[1]: issuegen.service: Deactivated successfully. May 15 12:16:30.246875 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 12:16:30.256222 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 12:16:30.310838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:16:30.318853 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 12:16:30.323774 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 12:16:30.326660 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 12:16:30.328183 systemd[1]: Reached target getty.target - Login Prompts. May 15 12:16:30.505505 containerd[1592]: time="2025-05-15T12:16:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 12:16:30.507897 containerd[1592]: time="2025-05-15T12:16:30.507805943Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 12:16:30.520505 containerd[1592]: time="2025-05-15T12:16:30.520440179Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.671µs" May 15 12:16:30.520505 containerd[1592]: time="2025-05-15T12:16:30.520488736Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 12:16:30.520682 containerd[1592]: time="2025-05-15T12:16:30.520516311Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 12:16:30.520830 containerd[1592]: time="2025-05-15T12:16:30.520801674Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 12:16:30.520860 containerd[1592]: time="2025-05-15T12:16:30.520829546Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 12:16:30.520911 containerd[1592]: time="2025-05-15T12:16:30.520863525Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:16:30.524481 containerd[1592]: time="2025-05-15T12:16:30.524410658Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:16:30.524481 containerd[1592]: time="2025-05-15T12:16:30.524462445Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:16:30.524891 containerd[1592]: time="2025-05-15T12:16:30.524851021Z" level=info msg="skip loading plugin" error="path 
/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:16:30.524891 containerd[1592]: time="2025-05-15T12:16:30.524873598Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:16:30.524891 containerd[1592]: time="2025-05-15T12:16:30.524886059Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:16:30.524891 containerd[1592]: time="2025-05-15T12:16:30.524895117Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 12:16:30.541372 containerd[1592]: time="2025-05-15T12:16:30.541302228Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 12:16:30.541755 containerd[1592]: time="2025-05-15T12:16:30.541723375Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:16:30.541801 containerd[1592]: time="2025-05-15T12:16:30.541779232Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:16:30.541801 containerd[1592]: time="2025-05-15T12:16:30.541794860Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 12:16:30.542046 containerd[1592]: time="2025-05-15T12:16:30.541834678Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 12:16:30.542111 containerd[1592]: time="2025-05-15T12:16:30.542086413Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 12:16:30.542205 containerd[1592]: time="2025-05-15T12:16:30.542176291Z" level=info msg="metadata content store policy set" policy=shared May 15 12:16:30.548486 containerd[1592]: time="2025-05-15T12:16:30.548421497Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 12:16:30.548672 containerd[1592]: time="2025-05-15T12:16:30.548576023Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 12:16:30.548672 containerd[1592]: time="2025-05-15T12:16:30.548598539Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 12:16:30.548776 containerd[1592]: time="2025-05-15T12:16:30.548629074Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 12:16:30.548857 containerd[1592]: time="2025-05-15T12:16:30.548840557Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 12:16:30.548916 containerd[1592]: time="2025-05-15T12:16:30.548903786Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 12:16:30.548968 containerd[1592]: time="2025-05-15T12:16:30.548956313Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 12:16:30.549023 containerd[1592]: time="2025-05-15T12:16:30.549010957Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 12:16:30.549076 containerd[1592]: time="2025-05-15T12:16:30.549063967Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 12:16:30.549129 containerd[1592]: time="2025-05-15T12:16:30.549113840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 12:16:30.549211 containerd[1592]: time="2025-05-15T12:16:30.549194568Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 12:16:30.549270 containerd[1592]: time="2025-05-15T12:16:30.549258588Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 12:16:30.549485 containerd[1592]: time="2025-05-15T12:16:30.549466248Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 12:16:30.549560 containerd[1592]: time="2025-05-15T12:16:30.549546513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 12:16:30.549617 containerd[1592]: time="2025-05-15T12:16:30.549605218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 12:16:30.549711 containerd[1592]: time="2025-05-15T12:16:30.549696083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 12:16:30.549792 containerd[1592]: time="2025-05-15T12:16:30.549777262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 12:16:30.549850 containerd[1592]: time="2025-05-15T12:16:30.549838169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 12:16:30.549919 containerd[1592]: time="2025-05-15T12:16:30.549906178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 12:16:30.549983 containerd[1592]: time="2025-05-15T12:16:30.549969520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 12:16:30.550051 containerd[1592]: time="2025-05-15T12:16:30.550037993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 12:16:30.550105 containerd[1592]: time="2025-05-15T12:16:30.550093624Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 12:16:30.550174 containerd[1592]: time="2025-05-15T12:16:30.550158334Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 12:16:30.550325 containerd[1592]: time="2025-05-15T12:16:30.550309220Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 12:16:30.550394 containerd[1592]: time="2025-05-15T12:16:30.550381569Z" level=info msg="Start snapshots syncer" May 15 12:16:30.550474 containerd[1592]: time="2025-05-15T12:16:30.550459551Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 12:16:30.550866 containerd[1592]: time="2025-05-15T12:16:30.550785813Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 12:16:30.551186 containerd[1592]: time="2025-05-15T12:16:30.551091975Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 12:16:30.552515 containerd[1592]: time="2025-05-15T12:16:30.552493328Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 12:16:30.552724 containerd[1592]: time="2025-05-15T12:16:30.552704421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 12:16:30.552806 containerd[1592]: time="2025-05-15T12:16:30.552792263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 12:16:30.552892 containerd[1592]: time="2025-05-15T12:16:30.552876713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 12:16:30.552950 containerd[1592]: time="2025-05-15T12:16:30.552938040Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 12:16:30.553003 containerd[1592]: time="2025-05-15T12:16:30.552990464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 12:16:30.553052 containerd[1592]: time="2025-05-15T12:16:30.553040862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 12:16:30.553104 containerd[1592]: time="2025-05-15T12:16:30.553089759Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 12:16:30.553186 containerd[1592]: time="2025-05-15T12:16:30.553171915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 12:16:30.553358 containerd[1592]: 
time="2025-05-15T12:16:30.553230024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553397772Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553458935Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553476052Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553485543Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553496163Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553504203Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553514638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553527726Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553547589Z" level=info msg="runtime interface created" May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553553264Z" level=info msg="created NRI interface" May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553560862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553571667Z" level=info msg="Connect containerd service" May 15 12:16:30.553668 containerd[1592]: time="2025-05-15T12:16:30.553597966Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 12:16:30.554923 containerd[1592]: time="2025-05-15T12:16:30.554853655Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:16:30.600833 tar[1578]: linux-amd64/LICENSE May 15 12:16:30.600999 tar[1578]: linux-amd64/README.md May 15 12:16:30.625568 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
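The cri plugin config dumped a few entries above shows, among other things, SystemdCgroup=true for the runc runtime and the /etc/cni/net.d conf dir that the "failed to load cni during init" message refers to. A small sketch for pulling individual settings back out of a containerd TOML config, assuming the conventional path /etc/containerd/config.toml and Python 3.11+ for tomllib; the recursive search deliberately avoids depending on exact table nesting, which differs between containerd config versions.

#!/usr/bin/env python3
# Sketch: read settings such as SystemdCgroup out of a containerd config file.
# Assumes /etc/containerd/config.toml exists and Python 3.11+ (tomllib).
import tomllib

def find_key(node, key):
    """Yield every value stored under `key` anywhere in the parsed TOML."""
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                yield v
            yield from find_key(v, key)
    elif isinstance(node, list):
        for item in node:
            yield from find_key(item, key)

if __name__ == "__main__":
    with open("/etc/containerd/config.toml", "rb") as f:
        cfg = tomllib.load(f)
    print("SystemdCgroup:", list(find_key(cfg, "SystemdCgroup")))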
May 15 12:16:30.675082 containerd[1592]: time="2025-05-15T12:16:30.675015729Z" level=info msg="Start subscribing containerd event" May 15 12:16:30.675265 containerd[1592]: time="2025-05-15T12:16:30.675080007Z" level=info msg="Start recovering state" May 15 12:16:30.675265 containerd[1592]: time="2025-05-15T12:16:30.675224107Z" level=info msg="Start event monitor" May 15 12:16:30.675265 containerd[1592]: time="2025-05-15T12:16:30.675240033Z" level=info msg="Start cni network conf syncer for default" May 15 12:16:30.675265 containerd[1592]: time="2025-05-15T12:16:30.675247826Z" level=info msg="Start streaming server" May 15 12:16:30.675265 containerd[1592]: time="2025-05-15T12:16:30.675259012Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 12:16:30.675265 containerd[1592]: time="2025-05-15T12:16:30.675266887Z" level=info msg="runtime interface starting up..." May 15 12:16:30.675457 containerd[1592]: time="2025-05-15T12:16:30.675275226Z" level=info msg="starting plugins..." May 15 12:16:30.675457 containerd[1592]: time="2025-05-15T12:16:30.675292354Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 12:16:30.675457 containerd[1592]: time="2025-05-15T12:16:30.675431550Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 12:16:30.675546 containerd[1592]: time="2025-05-15T12:16:30.675498963Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 12:16:30.675617 containerd[1592]: time="2025-05-15T12:16:30.675583105Z" level=info msg="containerd successfully booted in 0.171417s" May 15 12:16:30.675731 systemd[1]: Started containerd.service - containerd container runtime. May 15 12:16:30.950951 systemd-networkd[1495]: eth0: Gained IPv6LL May 15 12:16:30.955389 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 12:16:30.957508 systemd[1]: Reached target network-online.target - Network is Online. May 15 12:16:30.960805 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 12:16:30.964349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:16:30.967375 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 12:16:31.008004 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 12:16:31.010252 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 12:16:31.010557 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 12:16:31.013151 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 12:16:31.205770 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 12:16:31.208879 systemd[1]: Started sshd@0-10.0.0.46:22-10.0.0.1:34640.service - OpenSSH per-connection server daemon (10.0.0.1:34640). May 15 12:16:31.283954 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 34640 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:31.286058 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:31.301942 systemd-logind[1565]: New session 1 of user core. May 15 12:16:31.303516 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 12:16:31.306556 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 12:16:31.342906 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
May 15 12:16:31.347287 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 12:16:31.368460 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 12:16:31.371181 systemd-logind[1565]: New session c1 of user core. May 15 12:16:31.526276 systemd[1690]: Queued start job for default target default.target. May 15 12:16:31.533005 systemd[1690]: Created slice app.slice - User Application Slice. May 15 12:16:31.533034 systemd[1690]: Reached target paths.target - Paths. May 15 12:16:31.533079 systemd[1690]: Reached target timers.target - Timers. May 15 12:16:31.534803 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 12:16:31.550192 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 12:16:31.550363 systemd[1690]: Reached target sockets.target - Sockets. May 15 12:16:31.550420 systemd[1690]: Reached target basic.target - Basic System. May 15 12:16:31.550477 systemd[1690]: Reached target default.target - Main User Target. May 15 12:16:31.550523 systemd[1690]: Startup finished in 171ms. May 15 12:16:31.551248 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 12:16:31.554668 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 12:16:31.623423 systemd[1]: Started sshd@1-10.0.0.46:22-10.0.0.1:34656.service - OpenSSH per-connection server daemon (10.0.0.1:34656). May 15 12:16:31.702120 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 34656 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:31.703814 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:31.708379 systemd-logind[1565]: New session 2 of user core. May 15 12:16:31.715813 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 12:16:31.761522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:16:31.779616 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:16:31.780509 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 12:16:31.783461 systemd[1]: Startup finished in 3.093s (kernel) + 8.178s (initrd) + 6.011s (userspace) = 17.283s. May 15 12:16:31.789511 sshd[1705]: Connection closed by 10.0.0.1 port 34656 May 15 12:16:31.791275 sshd-session[1701]: pam_unix(sshd:session): session closed for user core May 15 12:16:31.803291 systemd[1]: sshd@1-10.0.0.46:22-10.0.0.1:34656.service: Deactivated successfully. May 15 12:16:31.805438 systemd[1]: session-2.scope: Deactivated successfully. May 15 12:16:31.807532 systemd-logind[1565]: Session 2 logged out. Waiting for processes to exit. May 15 12:16:31.812018 systemd[1]: Started sshd@2-10.0.0.46:22-10.0.0.1:34664.service - OpenSSH per-connection server daemon (10.0.0.1:34664). May 15 12:16:31.813289 systemd-logind[1565]: Removed session 2. May 15 12:16:31.854640 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 34664 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:31.856432 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:31.862154 systemd-logind[1565]: New session 3 of user core. May 15 12:16:31.873817 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 15 12:16:31.925768 sshd[1725]: Connection closed by 10.0.0.1 port 34664 May 15 12:16:31.926116 sshd-session[1717]: pam_unix(sshd:session): session closed for user core May 15 12:16:31.939518 systemd[1]: sshd@2-10.0.0.46:22-10.0.0.1:34664.service: Deactivated successfully. May 15 12:16:31.941833 systemd[1]: session-3.scope: Deactivated successfully. May 15 12:16:31.942779 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit. May 15 12:16:31.946272 systemd[1]: Started sshd@3-10.0.0.46:22-10.0.0.1:34676.service - OpenSSH per-connection server daemon (10.0.0.1:34676). May 15 12:16:31.947025 systemd-logind[1565]: Removed session 3. May 15 12:16:31.998811 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 34676 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:32.000413 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:32.005246 systemd-logind[1565]: New session 4 of user core. May 15 12:16:32.014766 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 12:16:32.071882 sshd[1734]: Connection closed by 10.0.0.1 port 34676 May 15 12:16:32.072047 sshd-session[1732]: pam_unix(sshd:session): session closed for user core May 15 12:16:32.081695 systemd[1]: sshd@3-10.0.0.46:22-10.0.0.1:34676.service: Deactivated successfully. May 15 12:16:32.083982 systemd[1]: session-4.scope: Deactivated successfully. May 15 12:16:32.084834 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit. May 15 12:16:32.088253 systemd[1]: Started sshd@4-10.0.0.46:22-10.0.0.1:34680.service - OpenSSH per-connection server daemon (10.0.0.1:34680). May 15 12:16:32.089339 systemd-logind[1565]: Removed session 4. May 15 12:16:32.138419 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 34680 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:32.140435 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:32.147077 systemd-logind[1565]: New session 5 of user core. May 15 12:16:32.153835 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 12:16:32.216310 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 12:16:32.216908 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:16:32.235199 sudo[1745]: pam_unix(sudo:session): session closed for user root May 15 12:16:32.237529 sshd[1744]: Connection closed by 10.0.0.1 port 34680 May 15 12:16:32.240082 sshd-session[1741]: pam_unix(sshd:session): session closed for user core May 15 12:16:32.250094 systemd[1]: sshd@4-10.0.0.46:22-10.0.0.1:34680.service: Deactivated successfully. May 15 12:16:32.252802 systemd[1]: session-5.scope: Deactivated successfully. May 15 12:16:32.253419 kubelet[1709]: E0515 12:16:32.253357 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:16:32.253929 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit. May 15 12:16:32.257929 systemd[1]: Started sshd@5-10.0.0.46:22-10.0.0.1:34682.service - OpenSSH per-connection server daemon (10.0.0.1:34682). 
May 15 12:16:32.258321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:16:32.258505 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:16:32.258875 systemd[1]: kubelet.service: Consumed 1.013s CPU time, 235.5M memory peak. May 15 12:16:32.260430 systemd-logind[1565]: Removed session 5. May 15 12:16:32.314672 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 34682 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:32.316516 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:32.322024 systemd-logind[1565]: New session 6 of user core. May 15 12:16:32.332806 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 12:16:32.389817 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 12:16:32.390277 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:16:32.468891 sudo[1756]: pam_unix(sudo:session): session closed for user root May 15 12:16:32.476287 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 12:16:32.476629 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:16:32.489413 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:16:32.538136 augenrules[1778]: No rules May 15 12:16:32.540285 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:16:32.540589 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:16:32.541974 sudo[1755]: pam_unix(sudo:session): session closed for user root May 15 12:16:32.543872 sshd[1754]: Connection closed by 10.0.0.1 port 34682 May 15 12:16:32.544153 sshd-session[1751]: pam_unix(sshd:session): session closed for user core May 15 12:16:32.556088 systemd[1]: sshd@5-10.0.0.46:22-10.0.0.1:34682.service: Deactivated successfully. May 15 12:16:32.558581 systemd[1]: session-6.scope: Deactivated successfully. May 15 12:16:32.559578 systemd-logind[1565]: Session 6 logged out. Waiting for processes to exit. May 15 12:16:32.563952 systemd[1]: Started sshd@6-10.0.0.46:22-10.0.0.1:34688.service - OpenSSH per-connection server daemon (10.0.0.1:34688). May 15 12:16:32.564795 systemd-logind[1565]: Removed session 6. May 15 12:16:32.617685 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 34688 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:32.619617 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:32.624736 systemd-logind[1565]: New session 7 of user core. May 15 12:16:32.639955 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 12:16:32.696835 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 12:16:32.697237 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:16:33.314587 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 15 12:16:33.330003 (dockerd)[1811]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 12:16:34.077398 dockerd[1811]: time="2025-05-15T12:16:34.077303120Z" level=info msg="Starting up" May 15 12:16:34.078722 dockerd[1811]: time="2025-05-15T12:16:34.078621548Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 12:16:35.507547 dockerd[1811]: time="2025-05-15T12:16:35.507440612Z" level=info msg="Loading containers: start." May 15 12:16:35.519669 kernel: Initializing XFRM netlink socket May 15 12:16:36.080447 systemd-networkd[1495]: docker0: Link UP May 15 12:16:36.088669 dockerd[1811]: time="2025-05-15T12:16:36.088574719Z" level=info msg="Loading containers: done." May 15 12:16:36.108990 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3654276222-merged.mount: Deactivated successfully. May 15 12:16:36.112660 dockerd[1811]: time="2025-05-15T12:16:36.112551579Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 12:16:36.112812 dockerd[1811]: time="2025-05-15T12:16:36.112733439Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 12:16:36.112914 dockerd[1811]: time="2025-05-15T12:16:36.112895502Z" level=info msg="Initializing buildkit" May 15 12:16:36.189310 dockerd[1811]: time="2025-05-15T12:16:36.189231210Z" level=info msg="Completed buildkit initialization" May 15 12:16:36.198420 dockerd[1811]: time="2025-05-15T12:16:36.198340240Z" level=info msg="Daemon has completed initialization" May 15 12:16:36.198722 dockerd[1811]: time="2025-05-15T12:16:36.198546833Z" level=info msg="API listen on /run/docker.sock" May 15 12:16:36.198823 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 12:16:37.268608 containerd[1592]: time="2025-05-15T12:16:37.268537064Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 15 12:16:39.462763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1329722688.mount: Deactivated successfully. 
May 15 12:16:40.910632 containerd[1592]: time="2025-05-15T12:16:40.910550337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:40.912056 containerd[1592]: time="2025-05-15T12:16:40.911987871Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 15 12:16:40.913735 containerd[1592]: time="2025-05-15T12:16:40.913693701Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:40.919446 containerd[1592]: time="2025-05-15T12:16:40.919369317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:40.920634 containerd[1592]: time="2025-05-15T12:16:40.920552617Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 3.651940743s" May 15 12:16:40.920698 containerd[1592]: time="2025-05-15T12:16:40.920605659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 15 12:16:40.922738 containerd[1592]: time="2025-05-15T12:16:40.922689428Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 15 12:16:42.509270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 12:16:42.511359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:16:42.876092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:16:42.899041 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:16:43.207572 kubelet[2084]: E0515 12:16:43.207375 2084 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:16:43.215536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:16:43.215902 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:16:43.216375 systemd[1]: kubelet.service: Consumed 246ms CPU time, 98.1M memory peak. 
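The kubelet failure here and the later identical restarts all trace back to the same missing file, /var/lib/kubelet/config.yaml, a KubeletConfiguration document that kubeadm init or kubeadm join normally writes rather than an administrator; until it exists, systemd keeps re-queuing kubelet.service on its scheduled-restart timer. A tiny sketch of that precondition, with the path taken from the log and everything else generic.

#!/usr/bin/env python3
# Sketch: check the precondition behind the kubelet restart loop in this log.
# The path comes from the error message; kubeadm init/join normally creates it.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_present() -> bool:
    # kubelet exits with status 1 while this file is missing, so systemd
    # keeps landing the unit back in the restart queue (counters 1, 2, 3 ...).
    return KUBELET_CONFIG.is_file()

if __name__ == "__main__":
    state = "present" if kubelet_config_present() else "missing (kubelet will keep exiting)"
    print(f"{KUBELET_CONFIG}: {state}")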
May 15 12:16:44.104594 containerd[1592]: time="2025-05-15T12:16:44.104519671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:44.105422 containerd[1592]: time="2025-05-15T12:16:44.105383921Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 15 12:16:44.106628 containerd[1592]: time="2025-05-15T12:16:44.106553047Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:44.109359 containerd[1592]: time="2025-05-15T12:16:44.109333566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:44.110235 containerd[1592]: time="2025-05-15T12:16:44.110186329Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 3.187463155s" May 15 12:16:44.110235 containerd[1592]: time="2025-05-15T12:16:44.110231917Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 15 12:16:44.110810 containerd[1592]: time="2025-05-15T12:16:44.110770902Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 15 12:16:49.909569 containerd[1592]: time="2025-05-15T12:16:49.909498157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:49.984834 containerd[1592]: time="2025-05-15T12:16:49.984715224Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 15 12:16:50.016025 containerd[1592]: time="2025-05-15T12:16:50.015917840Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:50.085604 containerd[1592]: time="2025-05-15T12:16:50.085506556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:50.087021 containerd[1592]: time="2025-05-15T12:16:50.086938097Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 5.976132106s" May 15 12:16:50.087021 containerd[1592]: time="2025-05-15T12:16:50.086999322Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 15 12:16:50.087837 
containerd[1592]: time="2025-05-15T12:16:50.087791378Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 15 12:16:53.247498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 12:16:53.249423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:16:53.442109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:16:53.466146 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:16:54.275378 kubelet[2110]: E0515 12:16:54.275306 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:16:54.279892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:16:54.280102 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:16:54.280508 systemd[1]: kubelet.service: Consumed 226ms CPU time, 98.2M memory peak. May 15 12:16:55.073223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214720455.mount: Deactivated successfully. May 15 12:16:56.232363 containerd[1592]: time="2025-05-15T12:16:56.232278275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:56.236562 containerd[1592]: time="2025-05-15T12:16:56.236518478Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 15 12:16:56.240953 containerd[1592]: time="2025-05-15T12:16:56.240883316Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:56.248252 containerd[1592]: time="2025-05-15T12:16:56.248190399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:56.248914 containerd[1592]: time="2025-05-15T12:16:56.248860880Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 6.161032306s" May 15 12:16:56.248914 containerd[1592]: time="2025-05-15T12:16:56.248898752Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 15 12:16:56.249443 containerd[1592]: time="2025-05-15T12:16:56.249386533Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 12:16:56.793705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2587688035.mount: Deactivated successfully. 
May 15 12:16:57.621005 containerd[1592]: time="2025-05-15T12:16:57.620935746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:57.623243 containerd[1592]: time="2025-05-15T12:16:57.623173703Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 12:16:57.624403 containerd[1592]: time="2025-05-15T12:16:57.624355137Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:57.626875 containerd[1592]: time="2025-05-15T12:16:57.626820451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:16:57.627968 containerd[1592]: time="2025-05-15T12:16:57.627908104Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.378486076s" May 15 12:16:57.627968 containerd[1592]: time="2025-05-15T12:16:57.627951134Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 12:16:57.628676 containerd[1592]: time="2025-05-15T12:16:57.628644728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 12:16:59.110569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873197932.mount: Deactivated successfully. 
May 15 12:16:59.343725 containerd[1592]: time="2025-05-15T12:16:59.343657945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:16:59.374743 containerd[1592]: time="2025-05-15T12:16:59.374509645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 12:16:59.388909 containerd[1592]: time="2025-05-15T12:16:59.388783878Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:16:59.401028 containerd[1592]: time="2025-05-15T12:16:59.400938384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:16:59.401835 containerd[1592]: time="2025-05-15T12:16:59.401761836Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.773083874s" May 15 12:16:59.401835 containerd[1592]: time="2025-05-15T12:16:59.401803828Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 12:16:59.402452 containerd[1592]: time="2025-05-15T12:16:59.402395339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 12:17:00.115156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665412794.mount: Deactivated successfully. 
May 15 12:17:02.019039 containerd[1592]: time="2025-05-15T12:17:02.018917296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:17:02.020539 containerd[1592]: time="2025-05-15T12:17:02.020495221Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 15 12:17:02.021844 containerd[1592]: time="2025-05-15T12:17:02.021795056Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:17:02.028273 containerd[1592]: time="2025-05-15T12:17:02.028201244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:17:02.029119 containerd[1592]: time="2025-05-15T12:17:02.029077380Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.626650226s" May 15 12:17:02.029190 containerd[1592]: time="2025-05-15T12:17:02.029116391Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 15 12:17:04.497557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 15 12:17:04.500431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:17:04.718222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:17:04.736051 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:17:04.830029 kubelet[2259]: E0515 12:17:04.829741 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:17:04.834071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:17:04.834266 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:17:04.834642 systemd[1]: kubelet.service: Consumed 264ms CPU time, 95.6M memory peak. May 15 12:17:04.844446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:17:04.844664 systemd[1]: kubelet.service: Consumed 264ms CPU time, 95.6M memory peak. May 15 12:17:04.846907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:17:04.876280 systemd[1]: Reload requested from client PID 2275 ('systemctl') (unit session-7.scope)... May 15 12:17:04.876302 systemd[1]: Reloading... May 15 12:17:04.976786 zram_generator::config[2316]: No configuration found. May 15 12:17:07.065953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:17:07.201217 systemd[1]: Reloading finished in 2324 ms. 
May 15 12:17:07.267673 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 12:17:07.267801 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 12:17:07.268137 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:17:07.268197 systemd[1]: kubelet.service: Consumed 137ms CPU time, 83.5M memory peak. May 15 12:17:07.270692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:17:07.459872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:17:07.464118 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:17:07.516397 kubelet[2366]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:17:07.516849 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 12:17:07.516849 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:17:07.516849 kubelet[2366]: I0515 12:17:07.516719 2366 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:17:07.824796 kubelet[2366]: I0515 12:17:07.824721 2366 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 12:17:07.824796 kubelet[2366]: I0515 12:17:07.824768 2366 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:17:07.825104 kubelet[2366]: I0515 12:17:07.825073 2366 server.go:929] "Client rotation is on, will bootstrap in background" May 15 12:17:07.912799 kubelet[2366]: I0515 12:17:07.912738 2366 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:17:07.920997 kubelet[2366]: E0515 12:17:07.920930 2366 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:07.964133 kubelet[2366]: I0515 12:17:07.964091 2366 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:17:07.971575 kubelet[2366]: I0515 12:17:07.971541 2366 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 12:17:07.971701 kubelet[2366]: I0515 12:17:07.971676 2366 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 12:17:07.971857 kubelet[2366]: I0515 12:17:07.971815 2366 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:17:07.972041 kubelet[2366]: I0515 12:17:07.971848 2366 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:17:07.972041 kubelet[2366]: I0515 12:17:07.972036 2366 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:17:07.972181 kubelet[2366]: I0515 12:17:07.972048 2366 container_manager_linux.go:300] "Creating device plugin manager" May 15 12:17:07.972181 kubelet[2366]: I0515 12:17:07.972179 2366 state_mem.go:36] "Initialized new in-memory state store" May 15 12:17:07.977223 kubelet[2366]: I0515 12:17:07.977183 2366 kubelet.go:408] "Attempting to sync node with API server" May 15 12:17:07.977223 kubelet[2366]: I0515 12:17:07.977224 2366 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:17:07.977317 kubelet[2366]: I0515 12:17:07.977268 2366 kubelet.go:314] "Adding apiserver pod source" May 15 12:17:07.977317 kubelet[2366]: I0515 12:17:07.977293 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:17:07.979121 kubelet[2366]: W0515 12:17:07.979029 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:07.979121 kubelet[2366]: E0515 12:17:07.979124 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:07.979296 kubelet[2366]: W0515 12:17:07.979146 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:07.979296 kubelet[2366]: E0515 12:17:07.979206 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:07.998752 kubelet[2366]: I0515 12:17:07.998691 2366 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:17:08.012919 kubelet[2366]: I0515 12:17:08.012883 2366 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:17:08.012967 kubelet[2366]: W0515 12:17:08.012960 2366 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 12:17:08.013593 kubelet[2366]: I0515 12:17:08.013565 2366 server.go:1269] "Started kubelet" May 15 12:17:08.013685 kubelet[2366]: I0515 12:17:08.013650 2366 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:17:08.014631 kubelet[2366]: I0515 12:17:08.014594 2366 server.go:460] "Adding debug handlers to kubelet server" May 15 12:17:08.015043 kubelet[2366]: I0515 12:17:08.015024 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:17:08.015643 kubelet[2366]: I0515 12:17:08.015263 2366 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:17:08.015643 kubelet[2366]: I0515 12:17:08.015539 2366 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:17:08.015643 kubelet[2366]: I0515 12:17:08.015643 2366 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:17:08.016117 kubelet[2366]: I0515 12:17:08.016081 2366 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 12:17:08.016256 kubelet[2366]: I0515 12:17:08.016225 2366 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 12:17:08.016336 kubelet[2366]: I0515 12:17:08.016318 2366 reconciler.go:26] "Reconciler: start to sync state" May 15 12:17:08.016707 kubelet[2366]: W0515 12:17:08.016652 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:08.016799 kubelet[2366]: E0515 12:17:08.016717 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" 
logger="UnhandledError" May 15 12:17:08.017403 kubelet[2366]: I0515 12:17:08.017311 2366 factory.go:221] Registration of the systemd container factory successfully May 15 12:17:08.017403 kubelet[2366]: I0515 12:17:08.017386 2366 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:17:08.018509 kubelet[2366]: I0515 12:17:08.018442 2366 factory.go:221] Registration of the containerd container factory successfully May 15 12:17:08.109282 kubelet[2366]: E0515 12:17:08.109064 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:08.110167 kubelet[2366]: E0515 12:17:08.110109 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="200ms" May 15 12:17:08.126918 kubelet[2366]: I0515 12:17:08.126793 2366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:17:08.128581 kubelet[2366]: I0515 12:17:08.128554 2366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 12:17:08.128676 kubelet[2366]: I0515 12:17:08.128597 2366 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 12:17:08.128676 kubelet[2366]: I0515 12:17:08.128633 2366 kubelet.go:2321] "Starting kubelet main sync loop" May 15 12:17:08.128723 kubelet[2366]: E0515 12:17:08.128675 2366 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:17:08.133290 kubelet[2366]: W0515 12:17:08.133245 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:08.133290 kubelet[2366]: E0515 12:17:08.133282 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:08.145592 kubelet[2366]: I0515 12:17:08.145548 2366 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 12:17:08.145592 kubelet[2366]: I0515 12:17:08.145576 2366 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 12:17:08.145592 kubelet[2366]: I0515 12:17:08.145602 2366 state_mem.go:36] "Initialized new in-memory state store" May 15 12:17:08.146112 kubelet[2366]: E0515 12:17:08.143952 2366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fb27c72ba4c13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 12:17:08.013542419 +0000 UTC 
m=+0.545250476,LastTimestamp:2025-05-15 12:17:08.013542419 +0000 UTC m=+0.545250476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 12:17:08.209972 kubelet[2366]: E0515 12:17:08.209870 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:08.229924 kubelet[2366]: E0515 12:17:08.229811 2366 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 12:17:08.310141 kubelet[2366]: E0515 12:17:08.310024 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:08.311569 kubelet[2366]: E0515 12:17:08.311530 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="400ms" May 15 12:17:08.411372 kubelet[2366]: E0515 12:17:08.411194 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:08.430876 kubelet[2366]: E0515 12:17:08.430832 2366 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 12:17:08.512014 kubelet[2366]: E0515 12:17:08.511918 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:08.612901 kubelet[2366]: E0515 12:17:08.612828 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:08.712284 kubelet[2366]: E0515 12:17:08.712107 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="800ms" May 15 12:17:08.713176 kubelet[2366]: E0515 12:17:08.713118 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:08.813753 kubelet[2366]: E0515 12:17:08.813679 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:08.831476 kubelet[2366]: E0515 12:17:08.831397 2366 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 12:17:08.913918 kubelet[2366]: E0515 12:17:08.913839 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:08.929943 kubelet[2366]: W0515 12:17:08.929864 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:08.929943 kubelet[2366]: E0515 12:17:08.929940 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:09.014083 kubelet[2366]: E0515 12:17:09.014010 
2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:09.115016 kubelet[2366]: E0515 12:17:09.114947 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:09.216041 kubelet[2366]: E0515 12:17:09.215967 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:09.226991 kubelet[2366]: W0515 12:17:09.226923 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:09.226991 kubelet[2366]: E0515 12:17:09.226984 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:09.252010 kubelet[2366]: W0515 12:17:09.251934 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:09.252124 kubelet[2366]: E0515 12:17:09.252004 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:09.316733 kubelet[2366]: E0515 12:17:09.316532 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:09.417753 kubelet[2366]: E0515 12:17:09.417666 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:09.423285 kubelet[2366]: W0515 12:17:09.423209 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:09.423285 kubelet[2366]: E0515 12:17:09.423276 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:09.513484 kubelet[2366]: E0515 12:17:09.513413 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="1.6s" May 15 12:17:09.518732 kubelet[2366]: E0515 12:17:09.518677 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:09.619789 kubelet[2366]: E0515 12:17:09.619647 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" 
not found" May 15 12:17:09.632094 kubelet[2366]: E0515 12:17:09.632019 2366 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 12:17:09.641742 kubelet[2366]: E0515 12:17:09.641583 2366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fb27c72ba4c13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 12:17:08.013542419 +0000 UTC m=+0.545250476,LastTimestamp:2025-05-15 12:17:08.013542419 +0000 UTC m=+0.545250476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 12:17:09.719998 kubelet[2366]: E0515 12:17:09.719942 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:09.820700 kubelet[2366]: E0515 12:17:09.820644 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:09.836141 kubelet[2366]: I0515 12:17:09.836063 2366 policy_none.go:49] "None policy: Start" May 15 12:17:09.837081 kubelet[2366]: I0515 12:17:09.837050 2366 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 12:17:09.837081 kubelet[2366]: I0515 12:17:09.837081 2366 state_mem.go:35] "Initializing new in-memory state store" May 15 12:17:09.893752 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 12:17:09.914046 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 12:17:09.918244 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 15 12:17:09.921309 kubelet[2366]: E0515 12:17:09.921263 2366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:09.938167 kubelet[2366]: I0515 12:17:09.938130 2366 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:17:09.938704 kubelet[2366]: I0515 12:17:09.938387 2366 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:17:09.938704 kubelet[2366]: I0515 12:17:09.938403 2366 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:17:09.938704 kubelet[2366]: I0515 12:17:09.938693 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:17:09.939971 kubelet[2366]: E0515 12:17:09.939926 2366 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 12:17:10.040060 kubelet[2366]: I0515 12:17:10.040014 2366 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:17:10.040441 kubelet[2366]: E0515 12:17:10.040398 2366 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" May 15 12:17:10.055522 kubelet[2366]: E0515 12:17:10.055468 2366 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:10.242691 kubelet[2366]: I0515 12:17:10.242538 2366 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:17:10.242958 kubelet[2366]: E0515 12:17:10.242917 2366 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" May 15 12:17:10.645015 kubelet[2366]: I0515 12:17:10.644955 2366 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:17:10.645455 kubelet[2366]: E0515 12:17:10.645358 2366 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" May 15 12:17:11.114408 kubelet[2366]: E0515 12:17:11.114304 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="3.2s" May 15 12:17:11.242465 systemd[1]: Created slice kubepods-burstable-poddb87c480e1997762f6f89b8aaab17979.slice - libcontainer container kubepods-burstable-poddb87c480e1997762f6f89b8aaab17979.slice. May 15 12:17:11.254456 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 15 12:17:11.264981 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
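The lease-controller retry intervals logged so far double on each failure (200ms, 400ms, 800ms, 1.6s, 3.2s), the signature of a capped exponential backoff. A generic Go sketch of that pattern, not the kubelet's actual implementation (the cap value is chosen for the sketch only):

// backoff_sketch.go - generic illustration, not the kubelet's implementation.
// The lease-controller retries above are logged with intervals that double
// each time (200ms, 400ms, 800ms, 1.6s, 3.2s), i.e. exponential backoff.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond  // first retry interval seen in the log
	const maxInterval = 7 * time.Second // cap chosen for the sketch only

	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: will retry in %s\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}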
May 15 12:17:11.339586 kubelet[2366]: I0515 12:17:11.339494 2366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db87c480e1997762f6f89b8aaab17979-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"db87c480e1997762f6f89b8aaab17979\") " pod="kube-system/kube-apiserver-localhost" May 15 12:17:11.339586 kubelet[2366]: I0515 12:17:11.339557 2366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:17:11.339586 kubelet[2366]: I0515 12:17:11.339578 2366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:17:11.339586 kubelet[2366]: I0515 12:17:11.339601 2366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:17:11.339881 kubelet[2366]: I0515 12:17:11.339646 2366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:17:11.339881 kubelet[2366]: I0515 12:17:11.339687 2366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 12:17:11.339881 kubelet[2366]: I0515 12:17:11.339711 2366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db87c480e1997762f6f89b8aaab17979-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"db87c480e1997762f6f89b8aaab17979\") " pod="kube-system/kube-apiserver-localhost" May 15 12:17:11.339881 kubelet[2366]: I0515 12:17:11.339736 2366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db87c480e1997762f6f89b8aaab17979-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"db87c480e1997762f6f89b8aaab17979\") " pod="kube-system/kube-apiserver-localhost" May 15 12:17:11.339881 kubelet[2366]: I0515 12:17:11.339765 2366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 15 12:17:11.353394 kubelet[2366]: W0515 12:17:11.353325 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:11.353394 kubelet[2366]: E0515 12:17:11.353398 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:11.447541 kubelet[2366]: I0515 12:17:11.447431 2366 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:17:11.447939 kubelet[2366]: E0515 12:17:11.447891 2366 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" May 15 12:17:11.552387 kubelet[2366]: E0515 12:17:11.552323 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:11.553222 containerd[1592]: time="2025-05-15T12:17:11.553165871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:db87c480e1997762f6f89b8aaab17979,Namespace:kube-system,Attempt:0,}" May 15 12:17:11.563591 kubelet[2366]: E0515 12:17:11.563544 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:11.564190 containerd[1592]: time="2025-05-15T12:17:11.564144863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 15 12:17:11.567510 kubelet[2366]: E0515 12:17:11.567453 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:11.568035 containerd[1592]: time="2025-05-15T12:17:11.567983717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 15 12:17:11.674300 kubelet[2366]: W0515 12:17:11.674244 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:11.674819 kubelet[2366]: E0515 12:17:11.674315 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:12.019731 kubelet[2366]: W0515 12:17:12.019676 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: 
connection refused May 15 12:17:12.019731 kubelet[2366]: E0515 12:17:12.019729 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:12.027329 kubelet[2366]: W0515 12:17:12.027284 2366 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 15 12:17:12.027329 kubelet[2366]: E0515 12:17:12.027322 2366 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:12.570399 containerd[1592]: time="2025-05-15T12:17:12.570338928Z" level=info msg="connecting to shim a3a2e2c8acb98a8ff3deddec7220813b76c512221cb22b97e50fe1ed8a17c66a" address="unix:///run/containerd/s/8446e2cc4f42af55bb21a270c426bbef1935fe0ed8986bbd0d493df123958010" namespace=k8s.io protocol=ttrpc version=3 May 15 12:17:12.640806 systemd[1]: Started cri-containerd-a3a2e2c8acb98a8ff3deddec7220813b76c512221cb22b97e50fe1ed8a17c66a.scope - libcontainer container a3a2e2c8acb98a8ff3deddec7220813b76c512221cb22b97e50fe1ed8a17c66a. May 15 12:17:12.722991 containerd[1592]: time="2025-05-15T12:17:12.722936195Z" level=info msg="connecting to shim cda18019f78647c4fb65b81295bfdf99d2f387724398f47107f97ab876720c49" address="unix:///run/containerd/s/46915b15d95d55a2b4ef559e956083774844cd0725889ec8b703b7cc24bbeddf" namespace=k8s.io protocol=ttrpc version=3 May 15 12:17:12.750760 systemd[1]: Started cri-containerd-cda18019f78647c4fb65b81295bfdf99d2f387724398f47107f97ab876720c49.scope - libcontainer container cda18019f78647c4fb65b81295bfdf99d2f387724398f47107f97ab876720c49. 
May 15 12:17:12.786251 containerd[1592]: time="2025-05-15T12:17:12.786196881Z" level=info msg="connecting to shim 80a76b4b6296f068fd7ffa26c27c862cda1bec72754044b4b674ddfd82dda976" address="unix:///run/containerd/s/4bee1928863ccb59d015ce1da1d7744a637b1e05debe0a93c51d0261481f9be0" namespace=k8s.io protocol=ttrpc version=3 May 15 12:17:12.799900 containerd[1592]: time="2025-05-15T12:17:12.799574631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:db87c480e1997762f6f89b8aaab17979,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3a2e2c8acb98a8ff3deddec7220813b76c512221cb22b97e50fe1ed8a17c66a\"" May 15 12:17:12.801405 kubelet[2366]: E0515 12:17:12.801183 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:12.804103 containerd[1592]: time="2025-05-15T12:17:12.804062108Z" level=info msg="CreateContainer within sandbox \"a3a2e2c8acb98a8ff3deddec7220813b76c512221cb22b97e50fe1ed8a17c66a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 12:17:12.822787 systemd[1]: Started cri-containerd-80a76b4b6296f068fd7ffa26c27c862cda1bec72754044b4b674ddfd82dda976.scope - libcontainer container 80a76b4b6296f068fd7ffa26c27c862cda1bec72754044b4b674ddfd82dda976. May 15 12:17:12.940711 containerd[1592]: time="2025-05-15T12:17:12.940648026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cda18019f78647c4fb65b81295bfdf99d2f387724398f47107f97ab876720c49\"" May 15 12:17:12.941663 kubelet[2366]: E0515 12:17:12.941636 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:12.943413 containerd[1592]: time="2025-05-15T12:17:12.943389786Z" level=info msg="CreateContainer within sandbox \"cda18019f78647c4fb65b81295bfdf99d2f387724398f47107f97ab876720c49\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 12:17:12.997374 containerd[1592]: time="2025-05-15T12:17:12.997297577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"80a76b4b6296f068fd7ffa26c27c862cda1bec72754044b4b674ddfd82dda976\"" May 15 12:17:12.998259 kubelet[2366]: E0515 12:17:12.998230 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:12.999968 containerd[1592]: time="2025-05-15T12:17:12.999902003Z" level=info msg="CreateContainer within sandbox \"80a76b4b6296f068fd7ffa26c27c862cda1bec72754044b4b674ddfd82dda976\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 12:17:13.050365 kubelet[2366]: I0515 12:17:13.050294 2366 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:17:13.050761 kubelet[2366]: E0515 12:17:13.050725 2366 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" May 15 12:17:13.433303 containerd[1592]: time="2025-05-15T12:17:13.433230078Z" level=info msg="Container 
01199e482c450a417994f68c1ee8c3ccc5433c6a2cbfa8b7af49853022d161b5: CDI devices from CRI Config.CDIDevices: []" May 15 12:17:13.797890 containerd[1592]: time="2025-05-15T12:17:13.797829654Z" level=info msg="Container 7c211dc751a88223e3db613744d16ce3503451ada6c9647a8ec7b145ff6bd841: CDI devices from CRI Config.CDIDevices: []" May 15 12:17:13.929291 containerd[1592]: time="2025-05-15T12:17:13.929236466Z" level=info msg="Container 82dc7b1a762aa66192d648074ef5123da6de8fcaad1098dd460b1cd53badf3b2: CDI devices from CRI Config.CDIDevices: []" May 15 12:17:14.136851 containerd[1592]: time="2025-05-15T12:17:14.136706886Z" level=info msg="CreateContainer within sandbox \"a3a2e2c8acb98a8ff3deddec7220813b76c512221cb22b97e50fe1ed8a17c66a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"01199e482c450a417994f68c1ee8c3ccc5433c6a2cbfa8b7af49853022d161b5\"" May 15 12:17:14.137507 containerd[1592]: time="2025-05-15T12:17:14.137464890Z" level=info msg="StartContainer for \"01199e482c450a417994f68c1ee8c3ccc5433c6a2cbfa8b7af49853022d161b5\"" May 15 12:17:14.138966 containerd[1592]: time="2025-05-15T12:17:14.138936110Z" level=info msg="connecting to shim 01199e482c450a417994f68c1ee8c3ccc5433c6a2cbfa8b7af49853022d161b5" address="unix:///run/containerd/s/8446e2cc4f42af55bb21a270c426bbef1935fe0ed8986bbd0d493df123958010" protocol=ttrpc version=3 May 15 12:17:14.164871 systemd[1]: Started cri-containerd-01199e482c450a417994f68c1ee8c3ccc5433c6a2cbfa8b7af49853022d161b5.scope - libcontainer container 01199e482c450a417994f68c1ee8c3ccc5433c6a2cbfa8b7af49853022d161b5. May 15 12:17:14.256271 kubelet[2366]: E0515 12:17:14.256224 2366 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" May 15 12:17:14.464483 containerd[1592]: time="2025-05-15T12:17:14.464344532Z" level=info msg="CreateContainer within sandbox \"cda18019f78647c4fb65b81295bfdf99d2f387724398f47107f97ab876720c49\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7c211dc751a88223e3db613744d16ce3503451ada6c9647a8ec7b145ff6bd841\"" May 15 12:17:14.464600 containerd[1592]: time="2025-05-15T12:17:14.464517338Z" level=info msg="CreateContainer within sandbox \"80a76b4b6296f068fd7ffa26c27c862cda1bec72754044b4b674ddfd82dda976\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"82dc7b1a762aa66192d648074ef5123da6de8fcaad1098dd460b1cd53badf3b2\"" May 15 12:17:14.464868 containerd[1592]: time="2025-05-15T12:17:14.464846565Z" level=info msg="StartContainer for \"01199e482c450a417994f68c1ee8c3ccc5433c6a2cbfa8b7af49853022d161b5\" returns successfully" May 15 12:17:14.465149 containerd[1592]: time="2025-05-15T12:17:14.465121246Z" level=info msg="StartContainer for \"7c211dc751a88223e3db613744d16ce3503451ada6c9647a8ec7b145ff6bd841\"" May 15 12:17:14.465756 containerd[1592]: time="2025-05-15T12:17:14.465696653Z" level=info msg="StartContainer for \"82dc7b1a762aa66192d648074ef5123da6de8fcaad1098dd460b1cd53badf3b2\"" May 15 12:17:14.466475 containerd[1592]: time="2025-05-15T12:17:14.466399509Z" level=info msg="connecting to shim 7c211dc751a88223e3db613744d16ce3503451ada6c9647a8ec7b145ff6bd841" 
address="unix:///run/containerd/s/46915b15d95d55a2b4ef559e956083774844cd0725889ec8b703b7cc24bbeddf" protocol=ttrpc version=3 May 15 12:17:14.467246 containerd[1592]: time="2025-05-15T12:17:14.467177896Z" level=info msg="connecting to shim 82dc7b1a762aa66192d648074ef5123da6de8fcaad1098dd460b1cd53badf3b2" address="unix:///run/containerd/s/4bee1928863ccb59d015ce1da1d7744a637b1e05debe0a93c51d0261481f9be0" protocol=ttrpc version=3 May 15 12:17:14.497839 systemd[1]: Started cri-containerd-7c211dc751a88223e3db613744d16ce3503451ada6c9647a8ec7b145ff6bd841.scope - libcontainer container 7c211dc751a88223e3db613744d16ce3503451ada6c9647a8ec7b145ff6bd841. May 15 12:17:14.499928 systemd[1]: Started cri-containerd-82dc7b1a762aa66192d648074ef5123da6de8fcaad1098dd460b1cd53badf3b2.scope - libcontainer container 82dc7b1a762aa66192d648074ef5123da6de8fcaad1098dd460b1cd53badf3b2. May 15 12:17:14.670917 containerd[1592]: time="2025-05-15T12:17:14.670805593Z" level=info msg="StartContainer for \"7c211dc751a88223e3db613744d16ce3503451ada6c9647a8ec7b145ff6bd841\" returns successfully" May 15 12:17:14.671437 containerd[1592]: time="2025-05-15T12:17:14.671364686Z" level=info msg="StartContainer for \"82dc7b1a762aa66192d648074ef5123da6de8fcaad1098dd460b1cd53badf3b2\" returns successfully" May 15 12:17:15.235043 kubelet[2366]: E0515 12:17:15.234993 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:15.238494 kubelet[2366]: E0515 12:17:15.238193 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:15.245823 kubelet[2366]: E0515 12:17:15.245784 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:15.499761 update_engine[1571]: I20250515 12:17:15.499686 1571 update_attempter.cc:509] Updating boot flags... 
May 15 12:17:15.952745 kubelet[2366]: E0515 12:17:15.950534 2366 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 12:17:15.990201 kubelet[2366]: I0515 12:17:15.989920 2366 apiserver.go:52] "Watching apiserver" May 15 12:17:16.017521 kubelet[2366]: I0515 12:17:16.017293 2366 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 12:17:16.248274 kubelet[2366]: E0515 12:17:16.248243 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:16.248423 kubelet[2366]: E0515 12:17:16.248368 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:16.248651 kubelet[2366]: E0515 12:17:16.248604 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:16.252490 kubelet[2366]: I0515 12:17:16.252467 2366 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:17:16.533574 kubelet[2366]: I0515 12:17:16.533440 2366 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 12:17:16.533574 kubelet[2366]: E0515 12:17:16.533488 2366 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 12:17:17.261013 kubelet[2366]: E0515 12:17:17.260965 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:17.261013 kubelet[2366]: E0515 12:17:17.260965 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:18.161556 kubelet[2366]: I0515 12:17:18.161455 2366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.1614277849999999 podStartE2EDuration="1.161427785s" podCreationTimestamp="2025-05-15 12:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:17:18.152186736 +0000 UTC m=+10.683894803" watchObservedRunningTime="2025-05-15 12:17:18.161427785 +0000 UTC m=+10.693135842" May 15 12:17:18.161811 kubelet[2366]: I0515 12:17:18.161634 2366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.161626928 podStartE2EDuration="1.161626928s" podCreationTimestamp="2025-05-15 12:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:17:18.161271752 +0000 UTC m=+10.692979809" watchObservedRunningTime="2025-05-15 12:17:18.161626928 +0000 UTC m=+10.693335005" May 15 12:17:18.251303 kubelet[2366]: E0515 12:17:18.251235 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:18.251667 kubelet[2366]: E0515 
12:17:18.251599 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:19.673207 kubelet[2366]: E0515 12:17:19.673152 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:20.254445 kubelet[2366]: E0515 12:17:20.254398 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:20.474495 systemd[1]: Reload requested from client PID 2665 ('systemctl') (unit session-7.scope)... May 15 12:17:20.474525 systemd[1]: Reloading... May 15 12:17:20.587685 zram_generator::config[2714]: No configuration found. May 15 12:17:20.688698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:17:20.833870 systemd[1]: Reloading finished in 358 ms. May 15 12:17:20.868102 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:17:20.884915 systemd[1]: kubelet.service: Deactivated successfully. May 15 12:17:20.885274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:17:20.885339 systemd[1]: kubelet.service: Consumed 1.149s CPU time, 120.2M memory peak. May 15 12:17:20.887494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:17:21.082024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:17:21.095101 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:17:21.139198 kubelet[2753]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:17:21.139198 kubelet[2753]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 12:17:21.139198 kubelet[2753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:17:21.139695 kubelet[2753]: I0515 12:17:21.139270 2753 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:17:21.146909 kubelet[2753]: I0515 12:17:21.146869 2753 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 12:17:21.146909 kubelet[2753]: I0515 12:17:21.146901 2753 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:17:21.147184 kubelet[2753]: I0515 12:17:21.147165 2753 server.go:929] "Client rotation is on, will bootstrap in background" May 15 12:17:21.151041 kubelet[2753]: I0515 12:17:21.150535 2753 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
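Unlike the earlier kubelet instance, this one finds /var/lib/kubelet/pki/kubelet-client-current.pem and enables client rotation. A hedged Go sketch of loading such a combined cert/key PEM for TLS client authentication (illustrative only; it assumes, as the rotated kubelet file normally does, that the certificate and private key live in the same file):

// load_client_cert.go - illustrative sketch only.
// Shows how a combined cert/key PEM like the one referenced above
// (/var/lib/kubelet/pki/kubelet-client-current.pem) could be loaded for
// TLS client authentication against the API server.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
)

func main() {
	const pemPath = "/var/lib/kubelet/pki/kubelet-client-current.pem" // path from the log above

	// The rotated kubelet client file holds both the certificate and the
	// private key, so the same path is passed for both arguments.
	cert, err := tls.LoadX509KeyPair(pemPath, pemPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "loading client cert/key pair:", err)
		os.Exit(1)
	}

	cfg := &tls.Config{Certificates: []tls.Certificate{cert}}
	fmt.Printf("loaded client certificate chain with %d block(s); ready for use in %T\n",
		len(cert.Certificate), cfg)
}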
May 15 12:17:21.153264 kubelet[2753]: I0515 12:17:21.153233 2753 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:17:21.157634 kubelet[2753]: I0515 12:17:21.157574 2753 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:17:21.163467 kubelet[2753]: I0515 12:17:21.163419 2753 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 12:17:21.163585 kubelet[2753]: I0515 12:17:21.163547 2753 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 12:17:21.163746 kubelet[2753]: I0515 12:17:21.163697 2753 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:17:21.163914 kubelet[2753]: I0515 12:17:21.163732 2753 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:17:21.163914 kubelet[2753]: I0515 12:17:21.163910 2753 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:17:21.163914 kubelet[2753]: I0515 12:17:21.163918 2753 container_manager_linux.go:300] "Creating device plugin manager" May 15 12:17:21.164070 kubelet[2753]: I0515 12:17:21.163948 2753 state_mem.go:36] "Initialized new in-memory state store" May 15 12:17:21.164096 kubelet[2753]: I0515 12:17:21.164086 2753 kubelet.go:408] "Attempting to sync node with API server" May 15 12:17:21.164120 kubelet[2753]: I0515 12:17:21.164097 2753 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:17:21.164149 kubelet[2753]: I0515 12:17:21.164125 2753 kubelet.go:314] "Adding apiserver pod source" May 15 12:17:21.164149 kubelet[2753]: I0515 12:17:21.164144 2753 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:17:21.164786 kubelet[2753]: I0515 12:17:21.164767 2753 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:17:21.165173 kubelet[2753]: I0515 12:17:21.165130 2753 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:17:21.165523 kubelet[2753]: I0515 12:17:21.165505 2753 server.go:1269] "Started kubelet" May 15 12:17:21.168131 kubelet[2753]: I0515 12:17:21.168094 2753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:17:21.175431 kubelet[2753]: I0515 12:17:21.172112 2753 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:17:21.175431 kubelet[2753]: I0515 12:17:21.173576 2753 server.go:460] "Adding debug handlers to kubelet server" May 15 12:17:21.175431 kubelet[2753]: I0515 12:17:21.175104 2753 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:17:21.175431 kubelet[2753]: I0515 12:17:21.175327 2753 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:17:21.175815 kubelet[2753]: I0515 12:17:21.175782 2753 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:17:21.177964 kubelet[2753]: I0515 12:17:21.177854 2753 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 12:17:21.178108 kubelet[2753]: E0515 12:17:21.178072 2753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:17:21.178337 kubelet[2753]: I0515 12:17:21.178310 2753 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 12:17:21.179045 kubelet[2753]: I0515 12:17:21.178478 2753 reconciler.go:26] "Reconciler: start to sync state" May 15 12:17:21.184428 kubelet[2753]: I0515 12:17:21.184363 2753 factory.go:221] Registration of the systemd container factory successfully May 15 12:17:21.184589 kubelet[2753]: I0515 12:17:21.184476 2753 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:17:21.185740 kubelet[2753]: E0515 12:17:21.185684 2753 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:17:21.186420 kubelet[2753]: I0515 12:17:21.186346 2753 factory.go:221] Registration of the containerd container factory successfully May 15 12:17:21.196023 kubelet[2753]: I0515 12:17:21.194759 2753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:17:21.198884 kubelet[2753]: I0515 12:17:21.197967 2753 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 12:17:21.198884 kubelet[2753]: I0515 12:17:21.198028 2753 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 12:17:21.198884 kubelet[2753]: I0515 12:17:21.198052 2753 kubelet.go:2321] "Starting kubelet main sync loop" May 15 12:17:21.198884 kubelet[2753]: E0515 12:17:21.198115 2753 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:17:21.221335 sudo[2784]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 12:17:21.222165 sudo[2784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 12:17:21.235835 kubelet[2753]: I0515 12:17:21.235714 2753 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 12:17:21.235835 kubelet[2753]: I0515 12:17:21.235744 2753 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 12:17:21.235835 kubelet[2753]: I0515 12:17:21.235771 2753 state_mem.go:36] "Initialized new in-memory state store" May 15 12:17:21.236217 kubelet[2753]: I0515 12:17:21.235970 2753 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 12:17:21.236217 kubelet[2753]: I0515 12:17:21.235981 2753 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 12:17:21.236217 kubelet[2753]: I0515 12:17:21.236000 2753 policy_none.go:49] "None policy: Start" May 15 12:17:21.236841 kubelet[2753]: I0515 12:17:21.236811 2753 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 12:17:21.236904 kubelet[2753]: I0515 12:17:21.236859 2753 state_mem.go:35] "Initializing new in-memory state store" May 15 12:17:21.237104 kubelet[2753]: I0515 12:17:21.237067 2753 state_mem.go:75] "Updated machine memory state" May 15 12:17:21.244064 kubelet[2753]: I0515 12:17:21.244022 2753 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:17:21.244522 kubelet[2753]: I0515 12:17:21.244278 2753 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:17:21.244522 kubelet[2753]: I0515 12:17:21.244305 2753 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:17:21.244693 kubelet[2753]: I0515 12:17:21.244572 2753 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:17:21.307642 kubelet[2753]: E0515 12:17:21.307513 2753 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 12:17:21.308456 kubelet[2753]: E0515 12:17:21.308394 2753 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 15 12:17:21.308639 kubelet[2753]: E0515 12:17:21.308506 2753 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 12:17:21.349378 kubelet[2753]: I0515 12:17:21.349256 2753 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 12:17:21.357728 kubelet[2753]: I0515 12:17:21.357681 2753 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 15 12:17:21.357914 kubelet[2753]: I0515 12:17:21.357814 2753 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 12:17:21.480071 kubelet[2753]: 
I0515 12:17:21.480023 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:17:21.480071 kubelet[2753]: I0515 12:17:21.480064 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:17:21.480071 kubelet[2753]: I0515 12:17:21.480084 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:17:21.480071 kubelet[2753]: I0515 12:17:21.480101 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db87c480e1997762f6f89b8aaab17979-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"db87c480e1997762f6f89b8aaab17979\") " pod="kube-system/kube-apiserver-localhost" May 15 12:17:21.480351 kubelet[2753]: I0515 12:17:21.480124 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db87c480e1997762f6f89b8aaab17979-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"db87c480e1997762f6f89b8aaab17979\") " pod="kube-system/kube-apiserver-localhost" May 15 12:17:21.480351 kubelet[2753]: I0515 12:17:21.480142 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:17:21.480351 kubelet[2753]: I0515 12:17:21.480157 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 12:17:21.480351 kubelet[2753]: I0515 12:17:21.480172 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db87c480e1997762f6f89b8aaab17979-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"db87c480e1997762f6f89b8aaab17979\") " pod="kube-system/kube-apiserver-localhost" May 15 12:17:21.480351 kubelet[2753]: I0515 12:17:21.480186 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" 
May 15 12:17:21.608480 kubelet[2753]: E0515 12:17:21.608350 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:21.608796 kubelet[2753]: E0515 12:17:21.608778 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:21.608967 kubelet[2753]: E0515 12:17:21.608947 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:21.779674 sudo[2784]: pam_unix(sudo:session): session closed for user root May 15 12:17:22.164843 kubelet[2753]: I0515 12:17:22.164812 2753 apiserver.go:52] "Watching apiserver" May 15 12:17:22.178747 kubelet[2753]: I0515 12:17:22.178716 2753 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 12:17:22.218421 kubelet[2753]: E0515 12:17:22.218343 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:22.218824 kubelet[2753]: E0515 12:17:22.218788 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:22.280460 kubelet[2753]: E0515 12:17:22.279966 2753 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 12:17:22.280460 kubelet[2753]: E0515 12:17:22.280206 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:22.330771 kubelet[2753]: I0515 12:17:22.330687 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.330662552 podStartE2EDuration="3.330662552s" podCreationTimestamp="2025-05-15 12:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:17:22.330392762 +0000 UTC m=+1.231029771" watchObservedRunningTime="2025-05-15 12:17:22.330662552 +0000 UTC m=+1.231299541" May 15 12:17:23.220995 kubelet[2753]: E0515 12:17:23.220945 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:23.221486 kubelet[2753]: E0515 12:17:23.221059 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:23.340199 sudo[1790]: pam_unix(sudo:session): session closed for user root May 15 12:17:23.341763 sshd[1789]: Connection closed by 10.0.0.1 port 34688 May 15 12:17:23.344824 sshd-session[1787]: pam_unix(sshd:session): session closed for user core May 15 12:17:23.349829 systemd[1]: sshd@6-10.0.0.46:22-10.0.0.1:34688.service: Deactivated successfully. May 15 12:17:23.352607 systemd[1]: session-7.scope: Deactivated successfully. 
May 15 12:17:23.352859 systemd[1]: session-7.scope: Consumed 5.136s CPU time, 264.8M memory peak. May 15 12:17:23.354413 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit. May 15 12:17:23.356175 systemd-logind[1565]: Removed session 7. May 15 12:17:24.527647 kubelet[2753]: E0515 12:17:24.527576 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:24.767251 kubelet[2753]: I0515 12:17:24.767198 2753 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 12:17:24.767772 containerd[1592]: time="2025-05-15T12:17:24.767724509Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 12:17:24.768206 kubelet[2753]: I0515 12:17:24.768060 2753 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 12:17:25.224808 kubelet[2753]: E0515 12:17:25.224716 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:25.428643 systemd[1]: Created slice kubepods-besteffort-pod341b6057_8725_4fb8_844d_5187b77fb254.slice - libcontainer container kubepods-besteffort-pod341b6057_8725_4fb8_844d_5187b77fb254.slice. May 15 12:17:25.458973 systemd[1]: Created slice kubepods-burstable-pod56d520b0_7fa7_48d6_86c2_8fe391e8d14a.slice - libcontainer container kubepods-burstable-pod56d520b0_7fa7_48d6_86c2_8fe391e8d14a.slice. May 15 12:17:25.504340 kubelet[2753]: I0515 12:17:25.504091 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-bpf-maps\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504340 kubelet[2753]: I0515 12:17:25.504164 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-hostproc\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504340 kubelet[2753]: I0515 12:17:25.504196 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-clustermesh-secrets\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504340 kubelet[2753]: I0515 12:17:25.504222 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/341b6057-8725-4fb8-844d-5187b77fb254-kube-proxy\") pod \"kube-proxy-vdmvz\" (UID: \"341b6057-8725-4fb8-844d-5187b77fb254\") " pod="kube-system/kube-proxy-vdmvz" May 15 12:17:25.504340 kubelet[2753]: I0515 12:17:25.504242 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-config-path\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504340 
kubelet[2753]: I0515 12:17:25.504265 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-host-proc-sys-kernel\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504705 kubelet[2753]: I0515 12:17:25.504288 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/341b6057-8725-4fb8-844d-5187b77fb254-lib-modules\") pod \"kube-proxy-vdmvz\" (UID: \"341b6057-8725-4fb8-844d-5187b77fb254\") " pod="kube-system/kube-proxy-vdmvz" May 15 12:17:25.504705 kubelet[2753]: I0515 12:17:25.504467 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-host-proc-sys-net\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504705 kubelet[2753]: I0515 12:17:25.504603 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddp28\" (UniqueName: \"kubernetes.io/projected/341b6057-8725-4fb8-844d-5187b77fb254-kube-api-access-ddp28\") pod \"kube-proxy-vdmvz\" (UID: \"341b6057-8725-4fb8-844d-5187b77fb254\") " pod="kube-system/kube-proxy-vdmvz" May 15 12:17:25.504705 kubelet[2753]: I0515 12:17:25.504653 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-xtables-lock\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504705 kubelet[2753]: I0515 12:17:25.504678 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-cgroup\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504705 kubelet[2753]: I0515 12:17:25.504701 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cni-path\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504939 kubelet[2753]: I0515 12:17:25.504724 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-run\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504939 kubelet[2753]: I0515 12:17:25.504741 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-lib-modules\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504939 kubelet[2753]: I0515 12:17:25.504761 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-hubble-tls\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504939 kubelet[2753]: I0515 12:17:25.504796 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxg8t\" (UniqueName: \"kubernetes.io/projected/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-kube-api-access-xxg8t\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.504939 kubelet[2753]: I0515 12:17:25.504849 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/341b6057-8725-4fb8-844d-5187b77fb254-xtables-lock\") pod \"kube-proxy-vdmvz\" (UID: \"341b6057-8725-4fb8-844d-5187b77fb254\") " pod="kube-system/kube-proxy-vdmvz" May 15 12:17:25.504939 kubelet[2753]: I0515 12:17:25.504867 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-etc-cni-netd\") pod \"cilium-p9d5m\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") " pod="kube-system/cilium-p9d5m" May 15 12:17:25.757415 kubelet[2753]: E0515 12:17:25.756926 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:25.758179 containerd[1592]: time="2025-05-15T12:17:25.758010863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vdmvz,Uid:341b6057-8725-4fb8-844d-5187b77fb254,Namespace:kube-system,Attempt:0,}" May 15 12:17:25.763363 kubelet[2753]: E0515 12:17:25.763320 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:25.764040 containerd[1592]: time="2025-05-15T12:17:25.763987412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9d5m,Uid:56d520b0-7fa7-48d6-86c2-8fe391e8d14a,Namespace:kube-system,Attempt:0,}" May 15 12:17:25.850947 systemd[1]: Created slice kubepods-besteffort-pod26f75c52_24b3_4e50_84b0_c9f170ef0ed4.slice - libcontainer container kubepods-besteffort-pod26f75c52_24b3_4e50_84b0_c9f170ef0ed4.slice. 
May 15 12:17:25.909997 kubelet[2753]: I0515 12:17:25.909931 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mx5t\" (UniqueName: \"kubernetes.io/projected/26f75c52-24b3-4e50-84b0-c9f170ef0ed4-kube-api-access-7mx5t\") pod \"cilium-operator-5d85765b45-xjkcb\" (UID: \"26f75c52-24b3-4e50-84b0-c9f170ef0ed4\") " pod="kube-system/cilium-operator-5d85765b45-xjkcb" May 15 12:17:25.910399 kubelet[2753]: I0515 12:17:25.910204 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26f75c52-24b3-4e50-84b0-c9f170ef0ed4-cilium-config-path\") pod \"cilium-operator-5d85765b45-xjkcb\" (UID: \"26f75c52-24b3-4e50-84b0-c9f170ef0ed4\") " pod="kube-system/cilium-operator-5d85765b45-xjkcb" May 15 12:17:25.916327 containerd[1592]: time="2025-05-15T12:17:25.916262141Z" level=info msg="connecting to shim 18a980a275fac035aeaec840185b591c2c787f5edac0d63afad51c8a42a11f95" address="unix:///run/containerd/s/1c8b26b737a38cfedb1929423d84f096e2494e8cded0d32f9bcea180367e5c90" namespace=k8s.io protocol=ttrpc version=3 May 15 12:17:25.918717 containerd[1592]: time="2025-05-15T12:17:25.918592187Z" level=info msg="connecting to shim a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27" address="unix:///run/containerd/s/8332db37873f9e3a00bacfdf98f0729170066e800a3df44bc3ba27a1bcac4cb7" namespace=k8s.io protocol=ttrpc version=3 May 15 12:17:25.947822 systemd[1]: Started cri-containerd-18a980a275fac035aeaec840185b591c2c787f5edac0d63afad51c8a42a11f95.scope - libcontainer container 18a980a275fac035aeaec840185b591c2c787f5edac0d63afad51c8a42a11f95. May 15 12:17:25.951664 systemd[1]: Started cri-containerd-a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27.scope - libcontainer container a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27. 
May 15 12:17:25.988728 containerd[1592]: time="2025-05-15T12:17:25.988686139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9d5m,Uid:56d520b0-7fa7-48d6-86c2-8fe391e8d14a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\"" May 15 12:17:25.989893 kubelet[2753]: E0515 12:17:25.989847 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:25.990865 containerd[1592]: time="2025-05-15T12:17:25.990833064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vdmvz,Uid:341b6057-8725-4fb8-844d-5187b77fb254,Namespace:kube-system,Attempt:0,} returns sandbox id \"18a980a275fac035aeaec840185b591c2c787f5edac0d63afad51c8a42a11f95\"" May 15 12:17:25.991292 containerd[1592]: time="2025-05-15T12:17:25.991261129Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 12:17:25.991803 kubelet[2753]: E0515 12:17:25.991730 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:25.993991 containerd[1592]: time="2025-05-15T12:17:25.993954016Z" level=info msg="CreateContainer within sandbox \"18a980a275fac035aeaec840185b591c2c787f5edac0d63afad51c8a42a11f95\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 12:17:26.007918 containerd[1592]: time="2025-05-15T12:17:26.007792830Z" level=info msg="Container 2e9ef9f9a3f765b5b729884284ce3a52e0a0b991abbec2a9f222b2d64872f0e6: CDI devices from CRI Config.CDIDevices: []" May 15 12:17:26.022635 containerd[1592]: time="2025-05-15T12:17:26.022555104Z" level=info msg="CreateContainer within sandbox \"18a980a275fac035aeaec840185b591c2c787f5edac0d63afad51c8a42a11f95\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2e9ef9f9a3f765b5b729884284ce3a52e0a0b991abbec2a9f222b2d64872f0e6\"" May 15 12:17:26.023252 containerd[1592]: time="2025-05-15T12:17:26.023217487Z" level=info msg="StartContainer for \"2e9ef9f9a3f765b5b729884284ce3a52e0a0b991abbec2a9f222b2d64872f0e6\"" May 15 12:17:26.024924 containerd[1592]: time="2025-05-15T12:17:26.024893463Z" level=info msg="connecting to shim 2e9ef9f9a3f765b5b729884284ce3a52e0a0b991abbec2a9f222b2d64872f0e6" address="unix:///run/containerd/s/1c8b26b737a38cfedb1929423d84f096e2494e8cded0d32f9bcea180367e5c90" protocol=ttrpc version=3 May 15 12:17:26.054890 systemd[1]: Started cri-containerd-2e9ef9f9a3f765b5b729884284ce3a52e0a0b991abbec2a9f222b2d64872f0e6.scope - libcontainer container 2e9ef9f9a3f765b5b729884284ce3a52e0a0b991abbec2a9f222b2d64872f0e6. 
May 15 12:17:26.143313 containerd[1592]: time="2025-05-15T12:17:26.143249346Z" level=info msg="StartContainer for \"2e9ef9f9a3f765b5b729884284ce3a52e0a0b991abbec2a9f222b2d64872f0e6\" returns successfully" May 15 12:17:26.154729 kubelet[2753]: E0515 12:17:26.154683 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:26.156014 containerd[1592]: time="2025-05-15T12:17:26.155971222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xjkcb,Uid:26f75c52-24b3-4e50-84b0-c9f170ef0ed4,Namespace:kube-system,Attempt:0,}" May 15 12:17:26.182344 containerd[1592]: time="2025-05-15T12:17:26.182256017Z" level=info msg="connecting to shim 0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49" address="unix:///run/containerd/s/94a54f7dc7aa5c2c16088f45f8f35fb027fe6177c8d04925b79291e3727e565c" namespace=k8s.io protocol=ttrpc version=3 May 15 12:17:26.208842 systemd[1]: Started cri-containerd-0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49.scope - libcontainer container 0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49. May 15 12:17:26.233155 kubelet[2753]: E0515 12:17:26.233103 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:26.233823 kubelet[2753]: E0515 12:17:26.233793 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:26.250604 kubelet[2753]: I0515 12:17:26.250501 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vdmvz" podStartSLOduration=1.250465339 podStartE2EDuration="1.250465339s" podCreationTimestamp="2025-05-15 12:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:17:26.24981575 +0000 UTC m=+5.150452759" watchObservedRunningTime="2025-05-15 12:17:26.250465339 +0000 UTC m=+5.151102328" May 15 12:17:26.276319 containerd[1592]: time="2025-05-15T12:17:26.276143343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xjkcb,Uid:26f75c52-24b3-4e50-84b0-c9f170ef0ed4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49\"" May 15 12:17:26.278742 kubelet[2753]: E0515 12:17:26.278700 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:27.993214 kubelet[2753]: E0515 12:17:27.993177 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:28.238840 kubelet[2753]: E0515 12:17:28.238805 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:31.685557 kubelet[2753]: E0515 12:17:31.685498 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 
12:17:33.439652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846997349.mount: Deactivated successfully. May 15 12:17:36.535214 containerd[1592]: time="2025-05-15T12:17:36.535128200Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:17:36.536054 containerd[1592]: time="2025-05-15T12:17:36.535982074Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 15 12:17:36.537527 containerd[1592]: time="2025-05-15T12:17:36.537454919Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:17:36.538877 containerd[1592]: time="2025-05-15T12:17:36.538816094Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.547519945s" May 15 12:17:36.538877 containerd[1592]: time="2025-05-15T12:17:36.538857936Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 12:17:36.539945 containerd[1592]: time="2025-05-15T12:17:36.539914911Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 12:17:36.542829 containerd[1592]: time="2025-05-15T12:17:36.542762026Z" level=info msg="CreateContainer within sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 12:17:36.553373 containerd[1592]: time="2025-05-15T12:17:36.553310290Z" level=info msg="Container e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1: CDI devices from CRI Config.CDIDevices: []" May 15 12:17:36.565896 containerd[1592]: time="2025-05-15T12:17:36.565840042Z" level=info msg="CreateContainer within sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\"" May 15 12:17:36.566692 containerd[1592]: time="2025-05-15T12:17:36.566502959Z" level=info msg="StartContainer for \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\"" May 15 12:17:36.567754 containerd[1592]: time="2025-05-15T12:17:36.567723276Z" level=info msg="connecting to shim e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1" address="unix:///run/containerd/s/8332db37873f9e3a00bacfdf98f0729170066e800a3df44bc3ba27a1bcac4cb7" protocol=ttrpc version=3 May 15 12:17:36.624758 systemd[1]: Started cri-containerd-e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1.scope - libcontainer container e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1. 
May 15 12:17:36.660972 containerd[1592]: time="2025-05-15T12:17:36.660909184Z" level=info msg="StartContainer for \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\" returns successfully" May 15 12:17:36.672751 systemd[1]: cri-containerd-e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1.scope: Deactivated successfully. May 15 12:17:36.675065 containerd[1592]: time="2025-05-15T12:17:36.675000937Z" level=info msg="received exit event container_id:\"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\" id:\"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\" pid:3170 exited_at:{seconds:1747311456 nanos:674306187}" May 15 12:17:36.675206 containerd[1592]: time="2025-05-15T12:17:36.675155782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\" id:\"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\" pid:3170 exited_at:{seconds:1747311456 nanos:674306187}" May 15 12:17:36.701352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1-rootfs.mount: Deactivated successfully. May 15 12:17:37.257732 kubelet[2753]: E0515 12:17:37.257671 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:38.261884 kubelet[2753]: E0515 12:17:38.261829 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:38.265111 containerd[1592]: time="2025-05-15T12:17:38.265052738Z" level=info msg="CreateContainer within sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 12:17:38.282273 containerd[1592]: time="2025-05-15T12:17:38.282205193Z" level=info msg="Container 7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f: CDI devices from CRI Config.CDIDevices: []" May 15 12:17:38.287124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1808979949.mount: Deactivated successfully. May 15 12:17:38.290933 containerd[1592]: time="2025-05-15T12:17:38.290894082Z" level=info msg="CreateContainer within sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\"" May 15 12:17:38.291872 containerd[1592]: time="2025-05-15T12:17:38.291794703Z" level=info msg="StartContainer for \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\"" May 15 12:17:38.293139 containerd[1592]: time="2025-05-15T12:17:38.293093919Z" level=info msg="connecting to shim 7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f" address="unix:///run/containerd/s/8332db37873f9e3a00bacfdf98f0729170066e800a3df44bc3ba27a1bcac4cb7" protocol=ttrpc version=3 May 15 12:17:38.323832 systemd[1]: Started cri-containerd-7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f.scope - libcontainer container 7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f. 
May 15 12:17:38.385485 containerd[1592]: time="2025-05-15T12:17:38.385439720Z" level=info msg="StartContainer for \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\" returns successfully" May 15 12:17:38.406948 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 12:17:38.407302 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 12:17:38.407861 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 12:17:38.409844 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 12:17:38.411020 containerd[1592]: time="2025-05-15T12:17:38.410974290Z" level=info msg="received exit event container_id:\"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\" id:\"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\" pid:3218 exited_at:{seconds:1747311458 nanos:410769688}" May 15 12:17:38.411112 containerd[1592]: time="2025-05-15T12:17:38.411048426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\" id:\"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\" pid:3218 exited_at:{seconds:1747311458 nanos:410769688}" May 15 12:17:38.411997 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 12:17:38.412545 systemd[1]: cri-containerd-7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f.scope: Deactivated successfully. May 15 12:17:38.430756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f-rootfs.mount: Deactivated successfully. May 15 12:17:38.610106 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 12:17:39.266207 kubelet[2753]: E0515 12:17:39.266169 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:39.268934 containerd[1592]: time="2025-05-15T12:17:39.268852947Z" level=info msg="CreateContainer within sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 12:17:39.289372 containerd[1592]: time="2025-05-15T12:17:39.289151659Z" level=info msg="Container a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31: CDI devices from CRI Config.CDIDevices: []" May 15 12:17:39.302089 containerd[1592]: time="2025-05-15T12:17:39.302021710Z" level=info msg="CreateContainer within sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\"" May 15 12:17:39.302912 containerd[1592]: time="2025-05-15T12:17:39.302843684Z" level=info msg="StartContainer for \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\"" May 15 12:17:39.304804 containerd[1592]: time="2025-05-15T12:17:39.304763976Z" level=info msg="connecting to shim a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31" address="unix:///run/containerd/s/8332db37873f9e3a00bacfdf98f0729170066e800a3df44bc3ba27a1bcac4cb7" protocol=ttrpc version=3 May 15 12:17:39.338871 systemd[1]: Started cri-containerd-a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31.scope - libcontainer container a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31. 
May 15 12:17:39.388505 systemd[1]: cri-containerd-a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31.scope: Deactivated successfully. May 15 12:17:39.390156 containerd[1592]: time="2025-05-15T12:17:39.390118815Z" level=info msg="StartContainer for \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\" returns successfully" May 15 12:17:39.390805 containerd[1592]: time="2025-05-15T12:17:39.390753552Z" level=info msg="received exit event container_id:\"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\" id:\"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\" pid:3265 exited_at:{seconds:1747311459 nanos:390282106}" May 15 12:17:39.390960 containerd[1592]: time="2025-05-15T12:17:39.390870752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\" id:\"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\" pid:3265 exited_at:{seconds:1747311459 nanos:390282106}" May 15 12:17:39.418484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31-rootfs.mount: Deactivated successfully. May 15 12:17:39.981813 containerd[1592]: time="2025-05-15T12:17:39.981769914Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:17:39.982941 containerd[1592]: time="2025-05-15T12:17:39.982913451Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 15 12:17:39.984291 containerd[1592]: time="2025-05-15T12:17:39.984245817Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:17:39.986945 containerd[1592]: time="2025-05-15T12:17:39.986900932Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.446947677s" May 15 12:17:39.987031 containerd[1592]: time="2025-05-15T12:17:39.986963586Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 12:17:39.988934 containerd[1592]: time="2025-05-15T12:17:39.988902665Z" level=info msg="CreateContainer within sandbox \"0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 12:17:39.997729 containerd[1592]: time="2025-05-15T12:17:39.997687901Z" level=info msg="Container 1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802: CDI devices from CRI Config.CDIDevices: []" May 15 12:17:40.005818 containerd[1592]: time="2025-05-15T12:17:40.005782251Z" level=info msg="CreateContainer within sandbox \"0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} 
returns container id \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\"" May 15 12:17:40.006486 containerd[1592]: time="2025-05-15T12:17:40.006271982Z" level=info msg="StartContainer for \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\"" May 15 12:17:40.007268 containerd[1592]: time="2025-05-15T12:17:40.007237786Z" level=info msg="connecting to shim 1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802" address="unix:///run/containerd/s/94a54f7dc7aa5c2c16088f45f8f35fb027fe6177c8d04925b79291e3727e565c" protocol=ttrpc version=3 May 15 12:17:40.030787 systemd[1]: Started cri-containerd-1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802.scope - libcontainer container 1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802. May 15 12:17:40.065016 containerd[1592]: time="2025-05-15T12:17:40.064971806Z" level=info msg="StartContainer for \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" returns successfully" May 15 12:17:40.269698 kubelet[2753]: E0515 12:17:40.269649 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:40.274372 kubelet[2753]: E0515 12:17:40.274333 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:40.276638 containerd[1592]: time="2025-05-15T12:17:40.276575785Z" level=info msg="CreateContainer within sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 12:17:40.465329 kubelet[2753]: I0515 12:17:40.465263 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xjkcb" podStartSLOduration=1.758223032 podStartE2EDuration="15.465244789s" podCreationTimestamp="2025-05-15 12:17:25 +0000 UTC" firstStartedPulling="2025-05-15 12:17:26.28049765 +0000 UTC m=+5.181134639" lastFinishedPulling="2025-05-15 12:17:39.987519397 +0000 UTC m=+18.888156396" observedRunningTime="2025-05-15 12:17:40.464565527 +0000 UTC m=+19.365202516" watchObservedRunningTime="2025-05-15 12:17:40.465244789 +0000 UTC m=+19.365881778" May 15 12:17:40.556933 containerd[1592]: time="2025-05-15T12:17:40.556235706Z" level=info msg="Container 152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a: CDI devices from CRI Config.CDIDevices: []" May 15 12:17:40.727169 containerd[1592]: time="2025-05-15T12:17:40.727106521Z" level=info msg="CreateContainer within sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\"" May 15 12:17:40.727870 containerd[1592]: time="2025-05-15T12:17:40.727817446Z" level=info msg="StartContainer for \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\"" May 15 12:17:40.729088 containerd[1592]: time="2025-05-15T12:17:40.729053301Z" level=info msg="connecting to shim 152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a" address="unix:///run/containerd/s/8332db37873f9e3a00bacfdf98f0729170066e800a3df44bc3ba27a1bcac4cb7" protocol=ttrpc version=3 May 15 12:17:40.758916 systemd[1]: Started cri-containerd-152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a.scope - 
libcontainer container 152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a. May 15 12:17:40.792288 systemd[1]: cri-containerd-152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a.scope: Deactivated successfully. May 15 12:17:40.793122 containerd[1592]: time="2025-05-15T12:17:40.793051109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\" id:\"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\" pid:3356 exited_at:{seconds:1747311460 nanos:792710521}" May 15 12:17:40.891412 containerd[1592]: time="2025-05-15T12:17:40.891275006Z" level=info msg="received exit event container_id:\"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\" id:\"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\" pid:3356 exited_at:{seconds:1747311460 nanos:792710521}" May 15 12:17:40.916102 containerd[1592]: time="2025-05-15T12:17:40.915938323Z" level=info msg="StartContainer for \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\" returns successfully" May 15 12:17:40.944791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a-rootfs.mount: Deactivated successfully. May 15 12:17:41.280944 kubelet[2753]: E0515 12:17:41.280535 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:41.280944 kubelet[2753]: E0515 12:17:41.280585 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:42.285884 kubelet[2753]: E0515 12:17:42.285825 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:42.287775 containerd[1592]: time="2025-05-15T12:17:42.287712469Z" level=info msg="CreateContainer within sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 12:17:42.720472 containerd[1592]: time="2025-05-15T12:17:42.720337091Z" level=info msg="Container 6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca: CDI devices from CRI Config.CDIDevices: []" May 15 12:17:42.725292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20551391.mount: Deactivated successfully. 
May 15 12:17:42.823534 containerd[1592]: time="2025-05-15T12:17:42.823473336Z" level=info msg="CreateContainer within sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\"" May 15 12:17:42.824025 containerd[1592]: time="2025-05-15T12:17:42.823986631Z" level=info msg="StartContainer for \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\"" May 15 12:17:42.824919 containerd[1592]: time="2025-05-15T12:17:42.824886102Z" level=info msg="connecting to shim 6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca" address="unix:///run/containerd/s/8332db37873f9e3a00bacfdf98f0729170066e800a3df44bc3ba27a1bcac4cb7" protocol=ttrpc version=3 May 15 12:17:42.855829 systemd[1]: Started cri-containerd-6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca.scope - libcontainer container 6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca. May 15 12:17:42.963522 containerd[1592]: time="2025-05-15T12:17:42.963483891Z" level=info msg="StartContainer for \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" returns successfully" May 15 12:17:43.039387 containerd[1592]: time="2025-05-15T12:17:43.039332081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" id:\"37a78b5da6b4e5049c7823ea3e16b60f29cfdd348ab0d7b05c033111dc833e5c\" pid:3430 exited_at:{seconds:1747311463 nanos:38566053}" May 15 12:17:43.080136 kubelet[2753]: I0515 12:17:43.080101 2753 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 12:17:43.303730 kubelet[2753]: E0515 12:17:43.303330 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:43.372081 kubelet[2753]: I0515 12:17:43.371828 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p9d5m" podStartSLOduration=7.822829669 podStartE2EDuration="18.371800159s" podCreationTimestamp="2025-05-15 12:17:25 +0000 UTC" firstStartedPulling="2025-05-15 12:17:25.990773213 +0000 UTC m=+4.891410192" lastFinishedPulling="2025-05-15 12:17:36.539743693 +0000 UTC m=+15.440380682" observedRunningTime="2025-05-15 12:17:43.368531867 +0000 UTC m=+22.269168856" watchObservedRunningTime="2025-05-15 12:17:43.371800159 +0000 UTC m=+22.272437148" May 15 12:17:43.400943 systemd[1]: Created slice kubepods-burstable-pod741d12b5_5f6b_4eaf_a41f_2790bceecf75.slice - libcontainer container kubepods-burstable-pod741d12b5_5f6b_4eaf_a41f_2790bceecf75.slice. May 15 12:17:43.414453 systemd[1]: Created slice kubepods-burstable-pod34924a59_3f29_4124_b955_4d3bc8527f2e.slice - libcontainer container kubepods-burstable-pod34924a59_3f29_4124_b955_4d3bc8527f2e.slice. 
May 15 12:17:43.424496 kubelet[2753]: I0515 12:17:43.424180 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34924a59-3f29-4124-b955-4d3bc8527f2e-config-volume\") pod \"coredns-6f6b679f8f-rtgwm\" (UID: \"34924a59-3f29-4124-b955-4d3bc8527f2e\") " pod="kube-system/coredns-6f6b679f8f-rtgwm" May 15 12:17:43.425157 kubelet[2753]: I0515 12:17:43.425128 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwjcw\" (UniqueName: \"kubernetes.io/projected/34924a59-3f29-4124-b955-4d3bc8527f2e-kube-api-access-lwjcw\") pod \"coredns-6f6b679f8f-rtgwm\" (UID: \"34924a59-3f29-4124-b955-4d3bc8527f2e\") " pod="kube-system/coredns-6f6b679f8f-rtgwm" May 15 12:17:43.425501 kubelet[2753]: I0515 12:17:43.425482 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/741d12b5-5f6b-4eaf-a41f-2790bceecf75-config-volume\") pod \"coredns-6f6b679f8f-mxn54\" (UID: \"741d12b5-5f6b-4eaf-a41f-2790bceecf75\") " pod="kube-system/coredns-6f6b679f8f-mxn54" May 15 12:17:43.425726 kubelet[2753]: I0515 12:17:43.425592 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9lf\" (UniqueName: \"kubernetes.io/projected/741d12b5-5f6b-4eaf-a41f-2790bceecf75-kube-api-access-mp9lf\") pod \"coredns-6f6b679f8f-mxn54\" (UID: \"741d12b5-5f6b-4eaf-a41f-2790bceecf75\") " pod="kube-system/coredns-6f6b679f8f-mxn54" May 15 12:17:43.710337 kubelet[2753]: E0515 12:17:43.710141 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:43.718050 kubelet[2753]: E0515 12:17:43.717965 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:43.718779 containerd[1592]: time="2025-05-15T12:17:43.718660865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rtgwm,Uid:34924a59-3f29-4124-b955-4d3bc8527f2e,Namespace:kube-system,Attempt:0,}" May 15 12:17:43.719815 containerd[1592]: time="2025-05-15T12:17:43.719706280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mxn54,Uid:741d12b5-5f6b-4eaf-a41f-2790bceecf75,Namespace:kube-system,Attempt:0,}" May 15 12:17:44.304215 kubelet[2753]: E0515 12:17:44.304177 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:45.195946 systemd-networkd[1495]: cilium_host: Link UP May 15 12:17:45.196145 systemd-networkd[1495]: cilium_net: Link UP May 15 12:17:45.196341 systemd-networkd[1495]: cilium_net: Gained carrier May 15 12:17:45.196518 systemd-networkd[1495]: cilium_host: Gained carrier May 15 12:17:45.308055 kubelet[2753]: E0515 12:17:45.308026 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:45.311545 systemd-networkd[1495]: cilium_vxlan: Link UP May 15 12:17:45.311556 systemd-networkd[1495]: cilium_vxlan: Gained carrier May 15 12:17:45.333913 systemd-networkd[1495]: cilium_host: Gained IPv6LL May 
15 12:17:45.421870 systemd-networkd[1495]: cilium_net: Gained IPv6LL May 15 12:17:45.550658 kernel: NET: Registered PF_ALG protocol family May 15 12:17:46.292321 systemd-networkd[1495]: lxc_health: Link UP May 15 12:17:46.294273 systemd-networkd[1495]: lxc_health: Gained carrier May 15 12:17:46.789875 systemd-networkd[1495]: cilium_vxlan: Gained IPv6LL May 15 12:17:46.887727 kernel: eth0: renamed from tmpcfdea May 15 12:17:46.901645 kernel: eth0: renamed from tmpe72b3 May 15 12:17:46.909744 systemd-networkd[1495]: lxc85f24c8d35a3: Link UP May 15 12:17:46.910996 systemd-networkd[1495]: lxc85f24c8d35a3: Gained carrier May 15 12:17:46.911246 systemd-networkd[1495]: lxc05a6bcca7916: Link UP May 15 12:17:46.914249 systemd-networkd[1495]: lxc05a6bcca7916: Gained carrier May 15 12:17:47.765503 kubelet[2753]: E0515 12:17:47.765451 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:48.197949 systemd-networkd[1495]: lxc_health: Gained IPv6LL May 15 12:17:48.315641 kubelet[2753]: E0515 12:17:48.315490 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:48.581867 systemd-networkd[1495]: lxc05a6bcca7916: Gained IPv6LL May 15 12:17:48.645863 systemd-networkd[1495]: lxc85f24c8d35a3: Gained IPv6LL May 15 12:17:49.316326 kubelet[2753]: E0515 12:17:49.316291 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:17:49.408276 systemd[1]: Started sshd@7-10.0.0.46:22-10.0.0.1:32854.service - OpenSSH per-connection server daemon (10.0.0.1:32854). May 15 12:17:49.472279 sshd[3900]: Accepted publickey for core from 10.0.0.1 port 32854 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:17:49.474442 sshd-session[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:17:49.480581 systemd-logind[1565]: New session 8 of user core. May 15 12:17:49.494945 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 12:17:49.676206 sshd[3902]: Connection closed by 10.0.0.1 port 32854 May 15 12:17:49.676772 sshd-session[3900]: pam_unix(sshd:session): session closed for user core May 15 12:17:49.681958 systemd[1]: sshd@7-10.0.0.46:22-10.0.0.1:32854.service: Deactivated successfully. May 15 12:17:49.684472 systemd[1]: session-8.scope: Deactivated successfully. May 15 12:17:49.685485 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit. May 15 12:17:49.687268 systemd-logind[1565]: Removed session 8. 
May 15 12:17:50.659101 containerd[1592]: time="2025-05-15T12:17:50.659033648Z" level=info msg="connecting to shim cfdeaf78e8ac41e596822b2cfd30475bd6faa9475b13e7fd9fcafa0e90c8d5b2" address="unix:///run/containerd/s/b36ecbba0a9574250318bcaedd3a74dcfd9e23dc3117a5968cbf7efe685f4d62" namespace=k8s.io protocol=ttrpc version=3
May 15 12:17:50.660293 containerd[1592]: time="2025-05-15T12:17:50.660236358Z" level=info msg="connecting to shim e72b3ad426ff994561d8e66b755d594e881b06b1b4739c447c8ed9a35977c963" address="unix:///run/containerd/s/7019cdcdb2d9c16b9855d3a5a9b3ae60fb8c1e12f39f3adda0d92eb409c7acdb" namespace=k8s.io protocol=ttrpc version=3
May 15 12:17:50.695817 systemd[1]: Started cri-containerd-cfdeaf78e8ac41e596822b2cfd30475bd6faa9475b13e7fd9fcafa0e90c8d5b2.scope - libcontainer container cfdeaf78e8ac41e596822b2cfd30475bd6faa9475b13e7fd9fcafa0e90c8d5b2.
May 15 12:17:50.698102 systemd[1]: Started cri-containerd-e72b3ad426ff994561d8e66b755d594e881b06b1b4739c447c8ed9a35977c963.scope - libcontainer container e72b3ad426ff994561d8e66b755d594e881b06b1b4739c447c8ed9a35977c963.
May 15 12:17:50.712838 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 12:17:50.714405 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 12:17:50.753538 containerd[1592]: time="2025-05-15T12:17:50.753481088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rtgwm,Uid:34924a59-3f29-4124-b955-4d3bc8527f2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e72b3ad426ff994561d8e66b755d594e881b06b1b4739c447c8ed9a35977c963\""
May 15 12:17:50.755232 containerd[1592]: time="2025-05-15T12:17:50.755188098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mxn54,Uid:741d12b5-5f6b-4eaf-a41f-2790bceecf75,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfdeaf78e8ac41e596822b2cfd30475bd6faa9475b13e7fd9fcafa0e90c8d5b2\""
May 15 12:17:50.756936 kubelet[2753]: E0515 12:17:50.756906 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:50.758344 kubelet[2753]: E0515 12:17:50.758292 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:50.759773 containerd[1592]: time="2025-05-15T12:17:50.759727518Z" level=info msg="CreateContainer within sandbox \"cfdeaf78e8ac41e596822b2cfd30475bd6faa9475b13e7fd9fcafa0e90c8d5b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 12:17:50.760442 containerd[1592]: time="2025-05-15T12:17:50.760395588Z" level=info msg="CreateContainer within sandbox \"e72b3ad426ff994561d8e66b755d594e881b06b1b4739c447c8ed9a35977c963\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 12:17:50.777273 containerd[1592]: time="2025-05-15T12:17:50.777234483Z" level=info msg="Container ade77e5d9c799f8ad55e8fea122ac30077385e1a3279e6c394f4129a32aa53b8: CDI devices from CRI Config.CDIDevices: []"
May 15 12:17:50.782167 containerd[1592]: time="2025-05-15T12:17:50.782112792Z" level=info msg="Container 72651a3654b231ebfac274474a406a586925dad237a5b550cc967e7b03b969da: CDI devices from CRI Config.CDIDevices: []"
May 15 12:17:50.791354 containerd[1592]: time="2025-05-15T12:17:50.791327517Z" level=info msg="CreateContainer within sandbox \"cfdeaf78e8ac41e596822b2cfd30475bd6faa9475b13e7fd9fcafa0e90c8d5b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ade77e5d9c799f8ad55e8fea122ac30077385e1a3279e6c394f4129a32aa53b8\""
May 15 12:17:50.792017 containerd[1592]: time="2025-05-15T12:17:50.791964676Z" level=info msg="StartContainer for \"ade77e5d9c799f8ad55e8fea122ac30077385e1a3279e6c394f4129a32aa53b8\""
May 15 12:17:50.792841 containerd[1592]: time="2025-05-15T12:17:50.792803227Z" level=info msg="CreateContainer within sandbox \"e72b3ad426ff994561d8e66b755d594e881b06b1b4739c447c8ed9a35977c963\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72651a3654b231ebfac274474a406a586925dad237a5b550cc967e7b03b969da\""
May 15 12:17:50.793425 containerd[1592]: time="2025-05-15T12:17:50.793384578Z" level=info msg="StartContainer for \"72651a3654b231ebfac274474a406a586925dad237a5b550cc967e7b03b969da\""
May 15 12:17:50.793572 containerd[1592]: time="2025-05-15T12:17:50.793541613Z" level=info msg="connecting to shim ade77e5d9c799f8ad55e8fea122ac30077385e1a3279e6c394f4129a32aa53b8" address="unix:///run/containerd/s/b36ecbba0a9574250318bcaedd3a74dcfd9e23dc3117a5968cbf7efe685f4d62" protocol=ttrpc version=3
May 15 12:17:50.794647 containerd[1592]: time="2025-05-15T12:17:50.794462855Z" level=info msg="connecting to shim 72651a3654b231ebfac274474a406a586925dad237a5b550cc967e7b03b969da" address="unix:///run/containerd/s/7019cdcdb2d9c16b9855d3a5a9b3ae60fb8c1e12f39f3adda0d92eb409c7acdb" protocol=ttrpc version=3
May 15 12:17:50.817792 systemd[1]: Started cri-containerd-ade77e5d9c799f8ad55e8fea122ac30077385e1a3279e6c394f4129a32aa53b8.scope - libcontainer container ade77e5d9c799f8ad55e8fea122ac30077385e1a3279e6c394f4129a32aa53b8.
May 15 12:17:50.821829 systemd[1]: Started cri-containerd-72651a3654b231ebfac274474a406a586925dad237a5b550cc967e7b03b969da.scope - libcontainer container 72651a3654b231ebfac274474a406a586925dad237a5b550cc967e7b03b969da.
May 15 12:17:50.868464 containerd[1592]: time="2025-05-15T12:17:50.868403836Z" level=info msg="StartContainer for \"72651a3654b231ebfac274474a406a586925dad237a5b550cc967e7b03b969da\" returns successfully"
May 15 12:17:50.868673 containerd[1592]: time="2025-05-15T12:17:50.868495283Z" level=info msg="StartContainer for \"ade77e5d9c799f8ad55e8fea122ac30077385e1a3279e6c394f4129a32aa53b8\" returns successfully"
May 15 12:17:51.322014 kubelet[2753]: E0515 12:17:51.321765 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:51.324582 kubelet[2753]: E0515 12:17:51.324532 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:51.333512 kubelet[2753]: I0515 12:17:51.333432 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rtgwm" podStartSLOduration=26.333414081 podStartE2EDuration="26.333414081s" podCreationTimestamp="2025-05-15 12:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:17:51.333121983 +0000 UTC m=+30.233758972" watchObservedRunningTime="2025-05-15 12:17:51.333414081 +0000 UTC m=+30.234051070"
May 15 12:17:51.349385 kubelet[2753]: I0515 12:17:51.348512 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mxn54" podStartSLOduration=26.348494298 podStartE2EDuration="26.348494298s" podCreationTimestamp="2025-05-15 12:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:17:51.348398021 +0000 UTC m=+30.249035010" watchObservedRunningTime="2025-05-15 12:17:51.348494298 +0000 UTC m=+30.249131287"
May 15 12:17:51.619196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136018460.mount: Deactivated successfully.
May 15 12:17:52.327018 kubelet[2753]: E0515 12:17:52.326960 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:52.327018 kubelet[2753]: E0515 12:17:52.327024 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:53.328967 kubelet[2753]: E0515 12:17:53.328923 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:53.329478 kubelet[2753]: E0515 12:17:53.329031 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:54.698238 systemd[1]: Started sshd@8-10.0.0.46:22-10.0.0.1:60716.service - OpenSSH per-connection server daemon (10.0.0.1:60716).
May 15 12:17:54.748152 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 60716 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:54.750080 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:54.755750 systemd-logind[1565]: New session 9 of user core.
May 15 12:17:54.765967 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 12:17:54.896838 sshd[4105]: Connection closed by 10.0.0.1 port 60716
May 15 12:17:54.897224 sshd-session[4103]: pam_unix(sshd:session): session closed for user core
May 15 12:17:54.902439 systemd[1]: sshd@8-10.0.0.46:22-10.0.0.1:60716.service: Deactivated successfully.
May 15 12:17:54.905310 systemd[1]: session-9.scope: Deactivated successfully.
May 15 12:17:54.906254 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit.
May 15 12:17:54.908020 systemd-logind[1565]: Removed session 9.
May 15 12:17:59.911247 systemd[1]: Started sshd@9-10.0.0.46:22-10.0.0.1:60724.service - OpenSSH per-connection server daemon (10.0.0.1:60724).
May 15 12:17:59.960369 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 60724 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:59.962022 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:59.966862 systemd-logind[1565]: New session 10 of user core.
May 15 12:17:59.980866 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 12:18:00.117828 sshd[4124]: Connection closed by 10.0.0.1 port 60724
May 15 12:18:00.118521 sshd-session[4122]: pam_unix(sshd:session): session closed for user core
May 15 12:18:00.124285 systemd[1]: sshd@9-10.0.0.46:22-10.0.0.1:60724.service: Deactivated successfully.
May 15 12:18:00.126847 systemd[1]: session-10.scope: Deactivated successfully.
May 15 12:18:00.127711 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit.
May 15 12:18:00.129177 systemd-logind[1565]: Removed session 10.
May 15 12:18:05.141001 systemd[1]: Started sshd@10-10.0.0.46:22-10.0.0.1:50212.service - OpenSSH per-connection server daemon (10.0.0.1:50212).
May 15 12:18:05.201938 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 50212 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:05.203760 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:05.208324 systemd-logind[1565]: New session 11 of user core.
May 15 12:18:05.223818 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 12:18:05.351812 sshd[4140]: Connection closed by 10.0.0.1 port 50212
May 15 12:18:05.352226 sshd-session[4138]: pam_unix(sshd:session): session closed for user core
May 15 12:18:05.365673 systemd[1]: sshd@10-10.0.0.46:22-10.0.0.1:50212.service: Deactivated successfully.
May 15 12:18:05.367767 systemd[1]: session-11.scope: Deactivated successfully.
May 15 12:18:05.368724 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit.
May 15 12:18:05.372176 systemd[1]: Started sshd@11-10.0.0.46:22-10.0.0.1:50222.service - OpenSSH per-connection server daemon (10.0.0.1:50222).
May 15 12:18:05.372926 systemd-logind[1565]: Removed session 11.
May 15 12:18:05.426757 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 50222 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:05.428059 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:05.433069 systemd-logind[1565]: New session 12 of user core.
May 15 12:18:05.442788 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 12:18:05.614053 sshd[4157]: Connection closed by 10.0.0.1 port 50222
May 15 12:18:05.614744 sshd-session[4155]: pam_unix(sshd:session): session closed for user core
May 15 12:18:05.628009 systemd[1]: sshd@11-10.0.0.46:22-10.0.0.1:50222.service: Deactivated successfully.
May 15 12:18:05.632568 systemd[1]: session-12.scope: Deactivated successfully.
May 15 12:18:05.634454 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit.
May 15 12:18:05.640720 systemd[1]: Started sshd@12-10.0.0.46:22-10.0.0.1:50234.service - OpenSSH per-connection server daemon (10.0.0.1:50234).
May 15 12:18:05.641855 systemd-logind[1565]: Removed session 12.
May 15 12:18:05.699797 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 50234 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:05.701325 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:05.706208 systemd-logind[1565]: New session 13 of user core.
May 15 12:18:05.715808 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 12:18:05.836490 sshd[4171]: Connection closed by 10.0.0.1 port 50234
May 15 12:18:05.836900 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
May 15 12:18:05.841039 systemd[1]: sshd@12-10.0.0.46:22-10.0.0.1:50234.service: Deactivated successfully.
May 15 12:18:05.843868 systemd[1]: session-13.scope: Deactivated successfully.
May 15 12:18:05.846053 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit.
May 15 12:18:05.847743 systemd-logind[1565]: Removed session 13.
May 15 12:18:10.861745 systemd[1]: Started sshd@13-10.0.0.46:22-10.0.0.1:50236.service - OpenSSH per-connection server daemon (10.0.0.1:50236).
May 15 12:18:10.920828 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 50236 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:10.922931 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:10.927469 systemd-logind[1565]: New session 14 of user core.
May 15 12:18:10.937772 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 12:18:11.056119 sshd[4186]: Connection closed by 10.0.0.1 port 50236
May 15 12:18:11.056431 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
May 15 12:18:11.061186 systemd[1]: sshd@13-10.0.0.46:22-10.0.0.1:50236.service: Deactivated successfully.
May 15 12:18:11.063482 systemd[1]: session-14.scope: Deactivated successfully.
May 15 12:18:11.064406 systemd-logind[1565]: Session 14 logged out. Waiting for processes to exit.
May 15 12:18:11.065830 systemd-logind[1565]: Removed session 14.
May 15 12:18:16.073263 systemd[1]: Started sshd@14-10.0.0.46:22-10.0.0.1:36516.service - OpenSSH per-connection server daemon (10.0.0.1:36516).
May 15 12:18:16.133897 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 36516 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:16.135601 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:16.140553 systemd-logind[1565]: New session 15 of user core.
May 15 12:18:16.155793 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 12:18:16.274566 sshd[4201]: Connection closed by 10.0.0.1 port 36516
May 15 12:18:16.274908 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
May 15 12:18:16.279464 systemd[1]: sshd@14-10.0.0.46:22-10.0.0.1:36516.service: Deactivated successfully.
May 15 12:18:16.281786 systemd[1]: session-15.scope: Deactivated successfully.
May 15 12:18:16.282744 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit.
May 15 12:18:16.284576 systemd-logind[1565]: Removed session 15.
May 15 12:18:21.297034 systemd[1]: Started sshd@15-10.0.0.46:22-10.0.0.1:36518.service - OpenSSH per-connection server daemon (10.0.0.1:36518).
May 15 12:18:21.359899 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 36518 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:21.361796 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:21.367316 systemd-logind[1565]: New session 16 of user core.
May 15 12:18:21.375830 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 12:18:21.495773 sshd[4219]: Connection closed by 10.0.0.1 port 36518
May 15 12:18:21.496229 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
May 15 12:18:21.508477 systemd[1]: sshd@15-10.0.0.46:22-10.0.0.1:36518.service: Deactivated successfully.
May 15 12:18:21.510420 systemd[1]: session-16.scope: Deactivated successfully.
May 15 12:18:21.511311 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit.
May 15 12:18:21.515083 systemd[1]: Started sshd@16-10.0.0.46:22-10.0.0.1:36522.service - OpenSSH per-connection server daemon (10.0.0.1:36522).
May 15 12:18:21.515764 systemd-logind[1565]: Removed session 16.
May 15 12:18:21.569831 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 36522 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:21.571735 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:21.576829 systemd-logind[1565]: New session 17 of user core.
May 15 12:18:21.586784 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 12:18:21.862840 sshd[4234]: Connection closed by 10.0.0.1 port 36522
May 15 12:18:21.863747 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
May 15 12:18:21.883883 systemd[1]: sshd@16-10.0.0.46:22-10.0.0.1:36522.service: Deactivated successfully.
May 15 12:18:21.886026 systemd[1]: session-17.scope: Deactivated successfully.
May 15 12:18:21.887061 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit.
May 15 12:18:21.890870 systemd[1]: Started sshd@17-10.0.0.46:22-10.0.0.1:36526.service - OpenSSH per-connection server daemon (10.0.0.1:36526).
May 15 12:18:21.891702 systemd-logind[1565]: Removed session 17.
May 15 12:18:21.935990 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 36526 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:21.937308 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:21.941453 systemd-logind[1565]: New session 18 of user core.
May 15 12:18:21.950785 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 12:18:23.311012 sshd[4247]: Connection closed by 10.0.0.1 port 36526
May 15 12:18:23.311414 sshd-session[4245]: pam_unix(sshd:session): session closed for user core
May 15 12:18:23.326412 systemd[1]: sshd@17-10.0.0.46:22-10.0.0.1:36526.service: Deactivated successfully.
May 15 12:18:23.328942 systemd[1]: session-18.scope: Deactivated successfully.
May 15 12:18:23.330509 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit.
May 15 12:18:23.334457 systemd[1]: Started sshd@18-10.0.0.46:22-10.0.0.1:36540.service - OpenSSH per-connection server daemon (10.0.0.1:36540).
May 15 12:18:23.335273 systemd-logind[1565]: Removed session 18.
May 15 12:18:23.388499 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 36540 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:23.389969 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:23.394836 systemd-logind[1565]: New session 19 of user core.
May 15 12:18:23.402760 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 12:18:23.643400 sshd[4268]: Connection closed by 10.0.0.1 port 36540
May 15 12:18:23.645640 sshd-session[4266]: pam_unix(sshd:session): session closed for user core
May 15 12:18:23.656455 systemd[1]: sshd@18-10.0.0.46:22-10.0.0.1:36540.service: Deactivated successfully.
May 15 12:18:23.659136 systemd[1]: session-19.scope: Deactivated successfully.
May 15 12:18:23.660316 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit.
May 15 12:18:23.665052 systemd[1]: Started sshd@19-10.0.0.46:22-10.0.0.1:35950.service - OpenSSH per-connection server daemon (10.0.0.1:35950).
May 15 12:18:23.666103 systemd-logind[1565]: Removed session 19.
May 15 12:18:23.720110 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 35950 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:23.721792 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:23.727110 systemd-logind[1565]: New session 20 of user core.
May 15 12:18:23.736904 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 12:18:23.851514 sshd[4281]: Connection closed by 10.0.0.1 port 35950
May 15 12:18:23.851914 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
May 15 12:18:23.855585 systemd[1]: sshd@19-10.0.0.46:22-10.0.0.1:35950.service: Deactivated successfully.
May 15 12:18:23.858104 systemd[1]: session-20.scope: Deactivated successfully.
May 15 12:18:23.860101 systemd-logind[1565]: Session 20 logged out. Waiting for processes to exit.
May 15 12:18:23.862074 systemd-logind[1565]: Removed session 20.
May 15 12:18:28.870417 systemd[1]: Started sshd@20-10.0.0.46:22-10.0.0.1:35960.service - OpenSSH per-connection server daemon (10.0.0.1:35960).
May 15 12:18:28.924931 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 35960 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:28.926760 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:28.931590 systemd-logind[1565]: New session 21 of user core.
May 15 12:18:28.937804 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 12:18:29.053284 sshd[4299]: Connection closed by 10.0.0.1 port 35960
May 15 12:18:29.053807 sshd-session[4297]: pam_unix(sshd:session): session closed for user core
May 15 12:18:29.058191 systemd[1]: sshd@20-10.0.0.46:22-10.0.0.1:35960.service: Deactivated successfully.
May 15 12:18:29.060500 systemd[1]: session-21.scope: Deactivated successfully.
May 15 12:18:29.061462 systemd-logind[1565]: Session 21 logged out. Waiting for processes to exit.
May 15 12:18:29.063024 systemd-logind[1565]: Removed session 21.
May 15 12:18:34.070512 systemd[1]: Started sshd@21-10.0.0.46:22-10.0.0.1:35974.service - OpenSSH per-connection server daemon (10.0.0.1:35974).
May 15 12:18:34.119230 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 35974 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:34.120713 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:34.125248 systemd-logind[1565]: New session 22 of user core.
May 15 12:18:34.138911 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 12:18:34.260484 sshd[4318]: Connection closed by 10.0.0.1 port 35974
May 15 12:18:34.260908 sshd-session[4316]: pam_unix(sshd:session): session closed for user core
May 15 12:18:34.265602 systemd[1]: sshd@21-10.0.0.46:22-10.0.0.1:35974.service: Deactivated successfully.
May 15 12:18:34.267890 systemd[1]: session-22.scope: Deactivated successfully.
May 15 12:18:34.268932 systemd-logind[1565]: Session 22 logged out. Waiting for processes to exit.
May 15 12:18:34.270262 systemd-logind[1565]: Removed session 22.
May 15 12:18:36.199529 kubelet[2753]: E0515 12:18:36.199469 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:18:39.199263 kubelet[2753]: E0515 12:18:39.199199 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:18:39.274971 systemd[1]: Started sshd@22-10.0.0.46:22-10.0.0.1:35980.service - OpenSSH per-connection server daemon (10.0.0.1:35980).
May 15 12:18:39.329213 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 35980 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:39.330725 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:39.335238 systemd-logind[1565]: New session 23 of user core.
May 15 12:18:39.347764 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 12:18:39.457837 sshd[4333]: Connection closed by 10.0.0.1 port 35980
May 15 12:18:39.458065 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
May 15 12:18:39.462734 systemd[1]: sshd@22-10.0.0.46:22-10.0.0.1:35980.service: Deactivated successfully.
May 15 12:18:39.465091 systemd[1]: session-23.scope: Deactivated successfully.
May 15 12:18:39.465876 systemd-logind[1565]: Session 23 logged out. Waiting for processes to exit.
May 15 12:18:39.467325 systemd-logind[1565]: Removed session 23.
May 15 12:18:44.475742 systemd[1]: Started sshd@23-10.0.0.46:22-10.0.0.1:56292.service - OpenSSH per-connection server daemon (10.0.0.1:56292).
May 15 12:18:44.531043 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 56292 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:44.532734 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:44.537748 systemd-logind[1565]: New session 24 of user core.
May 15 12:18:44.547755 systemd[1]: Started session-24.scope - Session 24 of User core.
May 15 12:18:44.658518 sshd[4348]: Connection closed by 10.0.0.1 port 56292
May 15 12:18:44.658883 sshd-session[4346]: pam_unix(sshd:session): session closed for user core
May 15 12:18:44.670572 systemd[1]: sshd@23-10.0.0.46:22-10.0.0.1:56292.service: Deactivated successfully.
May 15 12:18:44.672666 systemd[1]: session-24.scope: Deactivated successfully.
May 15 12:18:44.673431 systemd-logind[1565]: Session 24 logged out. Waiting for processes to exit.
May 15 12:18:44.677043 systemd[1]: Started sshd@24-10.0.0.46:22-10.0.0.1:56306.service - OpenSSH per-connection server daemon (10.0.0.1:56306).
May 15 12:18:44.677913 systemd-logind[1565]: Removed session 24.
May 15 12:18:44.730502 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 56306 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:44.732226 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:44.737129 systemd-logind[1565]: New session 25 of user core.
May 15 12:18:44.746756 systemd[1]: Started session-25.scope - Session 25 of User core.
May 15 12:18:46.080345 containerd[1592]: time="2025-05-15T12:18:46.080283908Z" level=info msg="StopContainer for \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" with timeout 30 (s)"
May 15 12:18:46.090352 containerd[1592]: time="2025-05-15T12:18:46.090312569Z" level=info msg="Stop container \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" with signal terminated"
May 15 12:18:46.102818 systemd[1]: cri-containerd-1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802.scope: Deactivated successfully.
May 15 12:18:46.105891 containerd[1592]: time="2025-05-15T12:18:46.105842835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" id:\"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" pid:3322 exited_at:{seconds:1747311526 nanos:105368780}"
May 15 12:18:46.132378 containerd[1592]: time="2025-05-15T12:18:46.105843126Z" level=info msg="received exit event container_id:\"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" id:\"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" pid:3322 exited_at:{seconds:1747311526 nanos:105368780}"
May 15 12:18:46.132572 containerd[1592]: time="2025-05-15T12:18:46.117126410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" id:\"6702fc584255e4fde1aa56d8704e44280c882e8da26068fddb541cc300c9b144\" pid:4386 exited_at:{seconds:1747311526 nanos:116120982}"
May 15 12:18:46.132572 containerd[1592]: time="2025-05-15T12:18:46.120410866Z" level=info msg="StopContainer for \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" with timeout 2 (s)"
May 15 12:18:46.132796 containerd[1592]: time="2025-05-15T12:18:46.127412983Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 12:18:46.133311 containerd[1592]: time="2025-05-15T12:18:46.133274394Z" level=info msg="Stop container \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" with signal terminated"
May 15 12:18:46.141826 systemd-networkd[1495]: lxc_health: Link DOWN
May 15 12:18:46.142640 systemd-networkd[1495]: lxc_health: Lost carrier
May 15 12:18:46.156264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802-rootfs.mount: Deactivated successfully.
May 15 12:18:46.167048 systemd[1]: cri-containerd-6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca.scope: Deactivated successfully.
May 15 12:18:46.167425 systemd[1]: cri-containerd-6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca.scope: Consumed 7.288s CPU time, 121.9M memory peak, 212K read from disk, 13.3M written to disk.
May 15 12:18:46.168609 containerd[1592]: time="2025-05-15T12:18:46.168577511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" id:\"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" pid:3393 exited_at:{seconds:1747311526 nanos:168253303}"
May 15 12:18:46.168772 containerd[1592]: time="2025-05-15T12:18:46.168608189Z" level=info msg="received exit event container_id:\"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" id:\"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" pid:3393 exited_at:{seconds:1747311526 nanos:168253303}"
May 15 12:18:46.179860 containerd[1592]: time="2025-05-15T12:18:46.179809688Z" level=info msg="StopContainer for \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" returns successfully"
May 15 12:18:46.189330 containerd[1592]: time="2025-05-15T12:18:46.189280115Z" level=info msg="StopPodSandbox for \"0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49\""
May 15 12:18:46.189434 containerd[1592]: time="2025-05-15T12:18:46.189378332Z" level=info msg="Container to stop \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:18:46.190804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca-rootfs.mount: Deactivated successfully.
May 15 12:18:46.198015 systemd[1]: cri-containerd-0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49.scope: Deactivated successfully.
May 15 12:18:46.200058 containerd[1592]: time="2025-05-15T12:18:46.200027556Z" level=info msg="StopContainer for \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" returns successfully"
May 15 12:18:46.200157 containerd[1592]: time="2025-05-15T12:18:46.200067233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49\" id:\"0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49\" pid:2997 exit_status:137 exited_at:{seconds:1747311526 nanos:198568674}"
May 15 12:18:46.201051 containerd[1592]: time="2025-05-15T12:18:46.200998148Z" level=info msg="StopPodSandbox for \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\""
May 15 12:18:46.201740 containerd[1592]: time="2025-05-15T12:18:46.201710066Z" level=info msg="Container to stop \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:18:46.201740 containerd[1592]: time="2025-05-15T12:18:46.201733872Z" level=info msg="Container to stop \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:18:46.201800 containerd[1592]: time="2025-05-15T12:18:46.201747939Z" level=info msg="Container to stop \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:18:46.201800 containerd[1592]: time="2025-05-15T12:18:46.201759441Z" level=info msg="Container to stop \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:18:46.201800 containerd[1592]: time="2025-05-15T12:18:46.201769099Z" level=info msg="Container to stop \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 12:18:46.210104 systemd[1]: cri-containerd-a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27.scope: Deactivated successfully.
May 15 12:18:46.232781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49-rootfs.mount: Deactivated successfully.
May 15 12:18:46.232912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27-rootfs.mount: Deactivated successfully.
May 15 12:18:46.296706 kubelet[2753]: E0515 12:18:46.296635 2753 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 12:18:46.315858 containerd[1592]: time="2025-05-15T12:18:46.315800992Z" level=info msg="shim disconnected" id=0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49 namespace=k8s.io
May 15 12:18:46.315858 containerd[1592]: time="2025-05-15T12:18:46.315844475Z" level=warning msg="cleaning up after shim disconnected" id=0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49 namespace=k8s.io
May 15 12:18:46.316056 containerd[1592]: time="2025-05-15T12:18:46.315856609Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 12:18:46.317375 containerd[1592]: time="2025-05-15T12:18:46.316259517Z" level=info msg="shim disconnected" id=a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27 namespace=k8s.io
May 15 12:18:46.317375 containerd[1592]: time="2025-05-15T12:18:46.316283263Z" level=warning msg="cleaning up after shim disconnected" id=a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27 namespace=k8s.io
May 15 12:18:46.317375 containerd[1592]: time="2025-05-15T12:18:46.316291388Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 12:18:46.340380 containerd[1592]: time="2025-05-15T12:18:46.340066002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" id:\"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" pid:2898 exit_status:137 exited_at:{seconds:1747311526 nanos:210313970}"
May 15 12:18:46.342807 containerd[1592]: time="2025-05-15T12:18:46.342631696Z" level=info msg="TearDown network for sandbox \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" successfully"
May 15 12:18:46.342807 containerd[1592]: time="2025-05-15T12:18:46.342671522Z" level=info msg="StopPodSandbox for \"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" returns successfully"
May 15 12:18:46.343087 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49-shm.mount: Deactivated successfully.
May 15 12:18:46.343252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27-shm.mount: Deactivated successfully.
May 15 12:18:46.343705 containerd[1592]: time="2025-05-15T12:18:46.343652894Z" level=info msg="TearDown network for sandbox \"0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49\" successfully"
May 15 12:18:46.343705 containerd[1592]: time="2025-05-15T12:18:46.343703390Z" level=info msg="StopPodSandbox for \"0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49\" returns successfully"
May 15 12:18:46.346254 containerd[1592]: time="2025-05-15T12:18:46.346222777Z" level=info msg="received exit event sandbox_id:\"a15294f48ea5bd862cdc848086ffbb65c1e5bdaf55d8cb8fb66d6674cce73f27\" exit_status:137 exited_at:{seconds:1747311526 nanos:210313970}"
May 15 12:18:46.346396 containerd[1592]: time="2025-05-15T12:18:46.346339980Z" level=info msg="received exit event sandbox_id:\"0d2578c119ad9724e0f4a7044fc4e04a77b50d4cd03eac1db8769cc4f94f6d49\" exit_status:137 exited_at:{seconds:1747311526 nanos:198568674}"
May 15 12:18:46.421550 kubelet[2753]: I0515 12:18:46.421503 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-bpf-maps\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.421550 kubelet[2753]: I0515 12:18:46.421557 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-config-path\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.421814 kubelet[2753]: I0515 12:18:46.421585 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cni-path\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.421814 kubelet[2753]: I0515 12:18:46.421604 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-run\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.421814 kubelet[2753]: I0515 12:18:46.421633 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-etc-cni-netd\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.421814 kubelet[2753]: I0515 12:18:46.421647 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-lib-modules\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.421814 kubelet[2753]: I0515 12:18:46.421673 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mx5t\" (UniqueName: \"kubernetes.io/projected/26f75c52-24b3-4e50-84b0-c9f170ef0ed4-kube-api-access-7mx5t\") pod \"26f75c52-24b3-4e50-84b0-c9f170ef0ed4\" (UID: \"26f75c52-24b3-4e50-84b0-c9f170ef0ed4\") "
May 15 12:18:46.421814 kubelet[2753]: I0515 12:18:46.421692 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-host-proc-sys-net\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.422024 kubelet[2753]: I0515 12:18:46.421707 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-xtables-lock\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.422024 kubelet[2753]: I0515 12:18:46.421728 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-hubble-tls\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.422024 kubelet[2753]: I0515 12:18:46.421741 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-hostproc\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.422024 kubelet[2753]: I0515 12:18:46.421761 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-clustermesh-secrets\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.422024 kubelet[2753]: I0515 12:18:46.421774 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-host-proc-sys-kernel\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.422024 kubelet[2753]: I0515 12:18:46.421796 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26f75c52-24b3-4e50-84b0-c9f170ef0ed4-cilium-config-path\") pod \"26f75c52-24b3-4e50-84b0-c9f170ef0ed4\" (UID: \"26f75c52-24b3-4e50-84b0-c9f170ef0ed4\") "
May 15 12:18:46.422212 kubelet[2753]: I0515 12:18:46.421817 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-cgroup\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.422212 kubelet[2753]: I0515 12:18:46.421832 2753 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxg8t\" (UniqueName: \"kubernetes.io/projected/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-kube-api-access-xxg8t\") pod \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\" (UID: \"56d520b0-7fa7-48d6-86c2-8fe391e8d14a\") "
May 15 12:18:46.425197 kubelet[2753]: I0515 12:18:46.421692 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 12:18:46.425328 kubelet[2753]: I0515 12:18:46.421692 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 12:18:46.425471 kubelet[2753]: I0515 12:18:46.421725 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 12:18:46.426145 kubelet[2753]: I0515 12:18:46.421736 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 12:18:46.426290 kubelet[2753]: I0515 12:18:46.421751 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 12:18:46.426290 kubelet[2753]: I0515 12:18:46.421724 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cni-path" (OuterVolumeSpecName: "cni-path") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 12:18:46.426290 kubelet[2753]: I0515 12:18:46.425005 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 12:18:46.426290 kubelet[2753]: I0515 12:18:46.425337 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 12:18:46.426290 kubelet[2753]: I0515 12:18:46.425362 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-hostproc" (OuterVolumeSpecName: "hostproc") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 12:18:46.426422 kubelet[2753]: I0515 12:18:46.425378 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 12:18:46.426422 kubelet[2753]: I0515 12:18:46.425863 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-kube-api-access-xxg8t" (OuterVolumeSpecName: "kube-api-access-xxg8t") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "kube-api-access-xxg8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 12:18:46.426422 kubelet[2753]: I0515 12:18:46.426069 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 12:18:46.426646 kubelet[2753]: I0515 12:18:46.426573 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 12:18:46.426827 kubelet[2753]: I0515 12:18:46.426718 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f75c52-24b3-4e50-84b0-c9f170ef0ed4-kube-api-access-7mx5t" (OuterVolumeSpecName: "kube-api-access-7mx5t") pod "26f75c52-24b3-4e50-84b0-c9f170ef0ed4" (UID: "26f75c52-24b3-4e50-84b0-c9f170ef0ed4"). InnerVolumeSpecName "kube-api-access-7mx5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 12:18:46.428865 kubelet[2753]: I0515 12:18:46.428825 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "56d520b0-7fa7-48d6-86c2-8fe391e8d14a" (UID: "56d520b0-7fa7-48d6-86c2-8fe391e8d14a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 12:18:46.429665 kubelet[2753]: I0515 12:18:46.429623 2753 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26f75c52-24b3-4e50-84b0-c9f170ef0ed4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "26f75c52-24b3-4e50-84b0-c9f170ef0ed4" (UID: "26f75c52-24b3-4e50-84b0-c9f170ef0ed4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 12:18:46.432552 kubelet[2753]: I0515 12:18:46.432498 2753 scope.go:117] "RemoveContainer" containerID="6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca"
May 15 12:18:46.436041 containerd[1592]: time="2025-05-15T12:18:46.435983389Z" level=info msg="RemoveContainer for \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\""
May 15 12:18:46.443277 systemd[1]: Removed slice kubepods-besteffort-pod26f75c52_24b3_4e50_84b0_c9f170ef0ed4.slice - libcontainer container kubepods-besteffort-pod26f75c52_24b3_4e50_84b0_c9f170ef0ed4.slice.
May 15 12:18:46.452540 systemd[1]: Removed slice kubepods-burstable-pod56d520b0_7fa7_48d6_86c2_8fe391e8d14a.slice - libcontainer container kubepods-burstable-pod56d520b0_7fa7_48d6_86c2_8fe391e8d14a.slice.
May 15 12:18:46.452950 systemd[1]: kubepods-burstable-pod56d520b0_7fa7_48d6_86c2_8fe391e8d14a.slice: Consumed 7.409s CPU time, 122.2M memory peak, 220K read from disk, 13.3M written to disk.
May 15 12:18:46.454234 containerd[1592]: time="2025-05-15T12:18:46.454081624Z" level=info msg="RemoveContainer for \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" returns successfully"
May 15 12:18:46.455916 kubelet[2753]: I0515 12:18:46.455883 2753 scope.go:117] "RemoveContainer" containerID="152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a"
May 15 12:18:46.458891 containerd[1592]: time="2025-05-15T12:18:46.458449306Z" level=info msg="RemoveContainer for \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\""
May 15 12:18:46.465283 containerd[1592]: time="2025-05-15T12:18:46.465136293Z" level=info msg="RemoveContainer for \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\" returns successfully"
May 15 12:18:46.465454 kubelet[2753]: I0515 12:18:46.465423 2753 scope.go:117] "RemoveContainer" containerID="a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31"
May 15 12:18:46.468071 containerd[1592]: time="2025-05-15T12:18:46.468030744Z" level=info msg="RemoveContainer for \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\""
May 15 12:18:46.484926 containerd[1592]: time="2025-05-15T12:18:46.484853867Z" level=info msg="RemoveContainer for \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\" returns successfully"
May 15 12:18:46.485192 kubelet[2753]: I0515 12:18:46.485150 2753 scope.go:117] "RemoveContainer" containerID="7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f"
May 15 12:18:46.487066 containerd[1592]: time="2025-05-15T12:18:46.487040438Z" level=info msg="RemoveContainer for \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\""
May 15 12:18:46.491568 containerd[1592]: time="2025-05-15T12:18:46.491533670Z" level=info msg="RemoveContainer for \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\" returns successfully"
May 15 12:18:46.491764 kubelet[2753]: I0515 12:18:46.491734 2753 scope.go:117] "RemoveContainer" containerID="e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1"
May 15 12:18:46.493394 containerd[1592]: time="2025-05-15T12:18:46.493358641Z" level=info msg="RemoveContainer for \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\""
May 15 12:18:46.497445 containerd[1592]: time="2025-05-15T12:18:46.497403246Z" level=info msg="RemoveContainer for \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\" returns successfully"
May 15 12:18:46.497765 kubelet[2753]: I0515 12:18:46.497602 2753 scope.go:117] "RemoveContainer" containerID="6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca"
May 15 12:18:46.497858 containerd[1592]: time="2025-05-15T12:18:46.497812939Z" level=error msg="ContainerStatus for \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\": not found"
May 15 12:18:46.501983 kubelet[2753]: E0515 12:18:46.501938 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\": not found" containerID="6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca"
May 15 12:18:46.502087 kubelet[2753]: I0515 12:18:46.501971 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca"} err="failed to get container status \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e5b5320d4265a3280a95aa6cdc74ee47cba0687097a069db0bc69fbf4a582ca\": not found"
May 15 12:18:46.502087 kubelet[2753]: I0515 12:18:46.502053 2753 scope.go:117] "RemoveContainer" containerID="152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a"
May 15 12:18:46.502426 containerd[1592]: time="2025-05-15T12:18:46.502362267Z" level=error msg="ContainerStatus for \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\": not found"
May 15 12:18:46.502556 kubelet[2753]: E0515 12:18:46.502518 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\": not found" containerID="152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a"
May 15 12:18:46.502556 kubelet[2753]: I0515 12:18:46.502543 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a"} err="failed to get container status \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\": rpc error: code = NotFound desc = an error occurred when try to find container \"152cb6d346b3dd67e0027c35d5037ade6480c0b15ebfc3091bd2f19988aa626a\": not found"
May 15 12:18:46.502556 kubelet[2753]: I0515 12:18:46.502554 2753 scope.go:117] "RemoveContainer" containerID="a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31"
May 15 12:18:46.502774 containerd[1592]: time="2025-05-15T12:18:46.502727534Z" level=error msg="ContainerStatus for \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\": not found"
May 15 12:18:46.502888 kubelet[2753]: E0515 12:18:46.502846 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\": not found" containerID="a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31"
May 15 12:18:46.502888 kubelet[2753]: I0515 12:18:46.502878 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31"} err="failed to get container status \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\": rpc error: code = NotFound desc = an error occurred when try to find container \"a905daffc58770685fc1ead681d6236b1a09fc307860dd7948e8f197d120bf31\": not found"
May 15 12:18:46.502969 kubelet[2753]: I0515 12:18:46.502897 2753 scope.go:117] "RemoveContainer" containerID="7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f"
May 15 12:18:46.503088 containerd[1592]: time="2025-05-15T12:18:46.503060308Z" level=error msg="ContainerStatus for \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\": not found"
May 15 12:18:46.503290 kubelet[2753]: E0515 12:18:46.503240 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\": not found" containerID="7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f"
May 15 12:18:46.503408 kubelet[2753]: I0515 12:18:46.503287 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f"} err="failed to get container status \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b5fdf4bf3ba6ad44470c6cd4278089a9a966a2d05c5816356bedb44e9f6e72f\": not found"
May 15 12:18:46.503408 kubelet[2753]: I0515 12:18:46.503332 2753 scope.go:117] "RemoveContainer" containerID="e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1"
May 15 12:18:46.503592 containerd[1592]: time="2025-05-15T12:18:46.503560853Z" level=error msg="ContainerStatus for \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\": not found"
May 15 12:18:46.503748 kubelet[2753]: E0515 12:18:46.503722 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\": not found" containerID="e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1"
May 15 12:18:46.503786 kubelet[2753]: I0515 12:18:46.503750 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1"} err="failed to get container status \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4dbe4497d9915c474fed32545320cdfda522d88386820bbf88ec3881b77a2d1\": not found"
May 15 12:18:46.503786 kubelet[2753]: I0515 12:18:46.503765 2753 scope.go:117] "RemoveContainer" containerID="1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802"
May 15 12:18:46.505234 containerd[1592]: time="2025-05-15T12:18:46.505204358Z" level=info msg="RemoveContainer for \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\""
May 15 12:18:46.509294 containerd[1592]: time="2025-05-15T12:18:46.509255968Z" level=info msg="RemoveContainer for \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" returns successfully"
May 15 12:18:46.509456 kubelet[2753]: I0515 12:18:46.509436 2753 scope.go:117] "RemoveContainer" containerID="1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802"
May 15 12:18:46.509712 containerd[1592]: time="2025-05-15T12:18:46.509646913Z" level=error msg="ContainerStatus for \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\": not found"
May 15 12:18:46.509881 kubelet[2753]: E0515 12:18:46.509849 2753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\": not found" containerID="1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802"
May 15 12:18:46.509957 kubelet[2753]: I0515 12:18:46.509892 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802"} err="failed to get container status \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d361d25229904af94a59bec3398831d5a9918081e01e6d2b863f562bdf0e802\": not found"
May 15 12:18:46.522324 kubelet[2753]: I0515 12:18:46.522269 2753 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-lib-modules\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522387 kubelet[2753]: I0515 12:18:46.522324 2753 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7mx5t\" (UniqueName: \"kubernetes.io/projected/26f75c52-24b3-4e50-84b0-c9f170ef0ed4-kube-api-access-7mx5t\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522387 kubelet[2753]: I0515 12:18:46.522344 2753 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522387 kubelet[2753]: I0515 12:18:46.522352 2753 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522387 kubelet[2753]: I0515 12:18:46.522361 2753 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522387 kubelet[2753]: I0515 12:18:46.522369 2753 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-hostproc\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522387 kubelet[2753]: I0515 12:18:46.522377 2753 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522387 kubelet[2753]: I0515 12:18:46.522386 2753 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522532 kubelet[2753]: I0515 12:18:46.522397 2753 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26f75c52-24b3-4e50-84b0-c9f170ef0ed4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522532 kubelet[2753]: I0515 12:18:46.522409 2753 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xxg8t\" (UniqueName: \"kubernetes.io/projected/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-kube-api-access-xxg8t\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522532 kubelet[2753]: I0515 12:18:46.522417 2753 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522532 kubelet[2753]: I0515 12:18:46.522424 2753 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522532 kubelet[2753]: I0515 12:18:46.522431 2753 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522532 kubelet[2753]: I0515 12:18:46.522438 2753 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cni-path\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522532 kubelet[2753]: I0515 12:18:46.522445 2753 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-cilium-run\") on node \"localhost\" DevicePath \"\""
May 15 12:18:46.522532 kubelet[2753]: I0515 12:18:46.522453 2753 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56d520b0-7fa7-48d6-86c2-8fe391e8d14a-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 15 12:18:47.156928 systemd[1]: var-lib-kubelet-pods-26f75c52\x2d24b3\x2d4e50\x2d84b0\x2dc9f170ef0ed4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7mx5t.mount: Deactivated successfully.
May 15 12:18:47.157063 systemd[1]: var-lib-kubelet-pods-56d520b0\x2d7fa7\x2d48d6\x2d86c2\x2d8fe391e8d14a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxxg8t.mount: Deactivated successfully.
May 15 12:18:47.157162 systemd[1]: var-lib-kubelet-pods-56d520b0\x2d7fa7\x2d48d6\x2d86c2\x2d8fe391e8d14a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 15 12:18:47.157266 systemd[1]: var-lib-kubelet-pods-56d520b0\x2d7fa7\x2d48d6\x2d86c2\x2d8fe391e8d14a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 15 12:18:47.199693 kubelet[2753]: E0515 12:18:47.199609 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:47.202349 kubelet[2753]: I0515 12:18:47.202305 2753 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26f75c52-24b3-4e50-84b0-c9f170ef0ed4" path="/var/lib/kubelet/pods/26f75c52-24b3-4e50-84b0-c9f170ef0ed4/volumes" May 15 12:18:47.203083 kubelet[2753]: I0515 12:18:47.203060 2753 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d520b0-7fa7-48d6-86c2-8fe391e8d14a" path="/var/lib/kubelet/pods/56d520b0-7fa7-48d6-86c2-8fe391e8d14a/volumes" May 15 12:18:48.047304 sshd[4364]: Connection closed by 10.0.0.1 port 56306 May 15 12:18:48.047854 sshd-session[4362]: pam_unix(sshd:session): session closed for user core May 15 12:18:48.065896 systemd[1]: sshd@24-10.0.0.46:22-10.0.0.1:56306.service: Deactivated successfully. May 15 12:18:48.068692 systemd[1]: session-25.scope: Deactivated successfully. May 15 12:18:48.069803 systemd-logind[1565]: Session 25 logged out. Waiting for processes to exit. May 15 12:18:48.072020 systemd-logind[1565]: Removed session 25. May 15 12:18:48.073656 systemd[1]: Started sshd@25-10.0.0.46:22-10.0.0.1:56312.service - OpenSSH per-connection server daemon (10.0.0.1:56312). May 15 12:18:48.126222 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 56312 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:18:48.128056 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:18:48.133720 systemd-logind[1565]: New session 26 of user core. May 15 12:18:48.148788 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 15 12:18:48.903029 sshd[4518]: Connection closed by 10.0.0.1 port 56312 May 15 12:18:48.904996 sshd-session[4516]: pam_unix(sshd:session): session closed for user core May 15 12:18:48.916916 kubelet[2753]: E0515 12:18:48.916866 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56d520b0-7fa7-48d6-86c2-8fe391e8d14a" containerName="mount-bpf-fs" May 15 12:18:48.916916 kubelet[2753]: E0515 12:18:48.916900 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26f75c52-24b3-4e50-84b0-c9f170ef0ed4" containerName="cilium-operator" May 15 12:18:48.916916 kubelet[2753]: E0515 12:18:48.916906 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56d520b0-7fa7-48d6-86c2-8fe391e8d14a" containerName="cilium-agent" May 15 12:18:48.916916 kubelet[2753]: E0515 12:18:48.916913 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56d520b0-7fa7-48d6-86c2-8fe391e8d14a" containerName="mount-cgroup" May 15 12:18:48.916916 kubelet[2753]: E0515 12:18:48.916919 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56d520b0-7fa7-48d6-86c2-8fe391e8d14a" containerName="apply-sysctl-overwrites" May 15 12:18:48.916916 kubelet[2753]: E0515 12:18:48.916924 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56d520b0-7fa7-48d6-86c2-8fe391e8d14a" containerName="clean-cilium-state" May 15 12:18:48.917575 kubelet[2753]: I0515 12:18:48.916945 2753 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d520b0-7fa7-48d6-86c2-8fe391e8d14a" containerName="cilium-agent" May 15 12:18:48.917575 kubelet[2753]: I0515 12:18:48.916951 2753 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f75c52-24b3-4e50-84b0-c9f170ef0ed4" containerName="cilium-operator" May 15 12:18:48.917306 systemd[1]: sshd@25-10.0.0.46:22-10.0.0.1:56312.service: Deactivated successfully. May 15 12:18:48.921063 systemd[1]: session-26.scope: Deactivated successfully. May 15 12:18:48.922482 systemd-logind[1565]: Session 26 logged out. Waiting for processes to exit. May 15 12:18:48.929855 systemd[1]: Started sshd@26-10.0.0.46:22-10.0.0.1:56324.service - OpenSSH per-connection server daemon (10.0.0.1:56324). May 15 12:18:48.932375 systemd-logind[1565]: Removed session 26. May 15 12:18:48.946911 systemd[1]: Created slice kubepods-burstable-pod1693188f_9f45_4306_a815_e173cabc1aab.slice - libcontainer container kubepods-burstable-pod1693188f_9f45_4306_a815_e173cabc1aab.slice. May 15 12:18:48.985102 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 56324 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:18:48.986895 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:18:48.992189 systemd-logind[1565]: New session 27 of user core. May 15 12:18:48.999930 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 15 12:18:49.035953 kubelet[2753]: I0515 12:18:49.035866 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1693188f-9f45-4306-a815-e173cabc1aab-bpf-maps\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.035953 kubelet[2753]: I0515 12:18:49.035930 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1693188f-9f45-4306-a815-e173cabc1aab-hostproc\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.035953 kubelet[2753]: I0515 12:18:49.035957 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1693188f-9f45-4306-a815-e173cabc1aab-cilium-config-path\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036170 kubelet[2753]: I0515 12:18:49.035985 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1693188f-9f45-4306-a815-e173cabc1aab-host-proc-sys-net\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036170 kubelet[2753]: I0515 12:18:49.036010 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1693188f-9f45-4306-a815-e173cabc1aab-xtables-lock\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036170 kubelet[2753]: I0515 12:18:49.036035 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1693188f-9f45-4306-a815-e173cabc1aab-cilium-ipsec-secrets\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036170 kubelet[2753]: I0515 12:18:49.036127 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1693188f-9f45-4306-a815-e173cabc1aab-cilium-run\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036170 kubelet[2753]: I0515 12:18:49.036160 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1693188f-9f45-4306-a815-e173cabc1aab-cni-path\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036283 kubelet[2753]: I0515 12:18:49.036191 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1693188f-9f45-4306-a815-e173cabc1aab-lib-modules\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036283 kubelet[2753]: I0515 12:18:49.036215 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/1693188f-9f45-4306-a815-e173cabc1aab-cilium-cgroup\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036283 kubelet[2753]: I0515 12:18:49.036238 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1693188f-9f45-4306-a815-e173cabc1aab-etc-cni-netd\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036283 kubelet[2753]: I0515 12:18:49.036265 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1693188f-9f45-4306-a815-e173cabc1aab-hubble-tls\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036381 kubelet[2753]: I0515 12:18:49.036299 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv8bz\" (UniqueName: \"kubernetes.io/projected/1693188f-9f45-4306-a815-e173cabc1aab-kube-api-access-lv8bz\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036381 kubelet[2753]: I0515 12:18:49.036355 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1693188f-9f45-4306-a815-e173cabc1aab-clustermesh-secrets\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.036381 kubelet[2753]: I0515 12:18:49.036378 2753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1693188f-9f45-4306-a815-e173cabc1aab-host-proc-sys-kernel\") pod \"cilium-9lbtx\" (UID: \"1693188f-9f45-4306-a815-e173cabc1aab\") " pod="kube-system/cilium-9lbtx" May 15 12:18:49.053234 sshd[4534]: Connection closed by 10.0.0.1 port 56324 May 15 12:18:49.053666 sshd-session[4531]: pam_unix(sshd:session): session closed for user core May 15 12:18:49.068967 systemd[1]: sshd@26-10.0.0.46:22-10.0.0.1:56324.service: Deactivated successfully. May 15 12:18:49.071525 systemd[1]: session-27.scope: Deactivated successfully. May 15 12:18:49.072406 systemd-logind[1565]: Session 27 logged out. Waiting for processes to exit. May 15 12:18:49.075944 systemd[1]: Started sshd@27-10.0.0.46:22-10.0.0.1:56328.service - OpenSSH per-connection server daemon (10.0.0.1:56328). May 15 12:18:49.076515 systemd-logind[1565]: Removed session 27. May 15 12:18:49.137017 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 56328 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:18:49.139550 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:18:49.162860 systemd-logind[1565]: New session 28 of user core. May 15 12:18:49.171858 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 15 12:18:49.253062 kubelet[2753]: E0515 12:18:49.253009 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:49.254923 containerd[1592]: time="2025-05-15T12:18:49.254851544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9lbtx,Uid:1693188f-9f45-4306-a815-e173cabc1aab,Namespace:kube-system,Attempt:0,}" May 15 12:18:49.284150 containerd[1592]: time="2025-05-15T12:18:49.284084275Z" level=info msg="connecting to shim f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d" address="unix:///run/containerd/s/b3c69da85b2e52a43447a975c0933288269c6d88116d7e2e6059fb594ec53214" namespace=k8s.io protocol=ttrpc version=3 May 15 12:18:49.321809 systemd[1]: Started cri-containerd-f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d.scope - libcontainer container f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d. May 15 12:18:49.351821 containerd[1592]: time="2025-05-15T12:18:49.351576738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9lbtx,Uid:1693188f-9f45-4306-a815-e173cabc1aab,Namespace:kube-system,Attempt:0,} returns sandbox id \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\"" May 15 12:18:49.352816 kubelet[2753]: E0515 12:18:49.352778 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:49.355499 containerd[1592]: time="2025-05-15T12:18:49.355456394Z" level=info msg="CreateContainer within sandbox \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 12:18:49.363572 containerd[1592]: time="2025-05-15T12:18:49.363523533Z" level=info msg="Container a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0: CDI devices from CRI Config.CDIDevices: []" May 15 12:18:49.372379 containerd[1592]: time="2025-05-15T12:18:49.372326717Z" level=info msg="CreateContainer within sandbox \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0\"" May 15 12:18:49.372871 containerd[1592]: time="2025-05-15T12:18:49.372850096Z" level=info msg="StartContainer for \"a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0\"" May 15 12:18:49.373826 containerd[1592]: time="2025-05-15T12:18:49.373741257Z" level=info msg="connecting to shim a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0" address="unix:///run/containerd/s/b3c69da85b2e52a43447a975c0933288269c6d88116d7e2e6059fb594ec53214" protocol=ttrpc version=3 May 15 12:18:49.395761 systemd[1]: Started cri-containerd-a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0.scope - libcontainer container a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0. May 15 12:18:49.431978 containerd[1592]: time="2025-05-15T12:18:49.431854907Z" level=info msg="StartContainer for \"a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0\" returns successfully" May 15 12:18:49.441793 systemd[1]: cri-containerd-a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0.scope: Deactivated successfully. 
May 15 12:18:49.443830 containerd[1592]: time="2025-05-15T12:18:49.443796703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0\" id:\"a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0\" pid:4617 exited_at:{seconds:1747311529 nanos:443149307}" May 15 12:18:49.444495 containerd[1592]: time="2025-05-15T12:18:49.444459198Z" level=info msg="received exit event container_id:\"a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0\" id:\"a35913fb9bf66bdd0838c785dce87e0e002529258549a9efbb8903cd766b85e0\" pid:4617 exited_at:{seconds:1747311529 nanos:443149307}" May 15 12:18:49.445398 kubelet[2753]: E0515 12:18:49.445358 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:50.448589 kubelet[2753]: E0515 12:18:50.448553 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:50.451216 containerd[1592]: time="2025-05-15T12:18:50.450409259Z" level=info msg="CreateContainer within sandbox \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 12:18:50.460035 containerd[1592]: time="2025-05-15T12:18:50.459988579Z" level=info msg="Container 94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419: CDI devices from CRI Config.CDIDevices: []" May 15 12:18:50.465282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474180769.mount: Deactivated successfully. May 15 12:18:50.468571 containerd[1592]: time="2025-05-15T12:18:50.468528604Z" level=info msg="CreateContainer within sandbox \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419\"" May 15 12:18:50.469029 containerd[1592]: time="2025-05-15T12:18:50.469005915Z" level=info msg="StartContainer for \"94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419\"" May 15 12:18:50.470422 containerd[1592]: time="2025-05-15T12:18:50.470390007Z" level=info msg="connecting to shim 94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419" address="unix:///run/containerd/s/b3c69da85b2e52a43447a975c0933288269c6d88116d7e2e6059fb594ec53214" protocol=ttrpc version=3 May 15 12:18:50.499790 systemd[1]: Started cri-containerd-94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419.scope - libcontainer container 94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419. May 15 12:18:50.530866 containerd[1592]: time="2025-05-15T12:18:50.530823389Z" level=info msg="StartContainer for \"94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419\" returns successfully" May 15 12:18:50.538144 systemd[1]: cri-containerd-94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419.scope: Deactivated successfully. 
May 15 12:18:50.538762 containerd[1592]: time="2025-05-15T12:18:50.538642497Z" level=info msg="received exit event container_id:\"94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419\" id:\"94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419\" pid:4661 exited_at:{seconds:1747311530 nanos:538386609}" May 15 12:18:50.538887 containerd[1592]: time="2025-05-15T12:18:50.538843080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419\" id:\"94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419\" pid:4661 exited_at:{seconds:1747311530 nanos:538386609}" May 15 12:18:50.562876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94cd455411f63e29b2757a50aa6825402a61301fbe9c8f299052c71a986f7419-rootfs.mount: Deactivated successfully. May 15 12:18:51.297771 kubelet[2753]: E0515 12:18:51.297703 2753 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 12:18:51.455158 kubelet[2753]: E0515 12:18:51.455125 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:51.457536 containerd[1592]: time="2025-05-15T12:18:51.457479997Z" level=info msg="CreateContainer within sandbox \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 12:18:51.584925 containerd[1592]: time="2025-05-15T12:18:51.584795879Z" level=info msg="Container ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb: CDI devices from CRI Config.CDIDevices: []" May 15 12:18:51.635947 containerd[1592]: time="2025-05-15T12:18:51.635902300Z" level=info msg="CreateContainer within sandbox \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb\"" May 15 12:18:51.636511 containerd[1592]: time="2025-05-15T12:18:51.636490033Z" level=info msg="StartContainer for \"ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb\"" May 15 12:18:51.637969 containerd[1592]: time="2025-05-15T12:18:51.637925494Z" level=info msg="connecting to shim ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb" address="unix:///run/containerd/s/b3c69da85b2e52a43447a975c0933288269c6d88116d7e2e6059fb594ec53214" protocol=ttrpc version=3 May 15 12:18:51.665767 systemd[1]: Started cri-containerd-ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb.scope - libcontainer container ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb. May 15 12:18:51.720640 systemd[1]: cri-containerd-ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb.scope: Deactivated successfully. 
May 15 12:18:51.721554 containerd[1592]: time="2025-05-15T12:18:51.721284877Z" level=info msg="StartContainer for \"ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb\" returns successfully" May 15 12:18:51.721767 containerd[1592]: time="2025-05-15T12:18:51.721712724Z" level=info msg="received exit event container_id:\"ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb\" id:\"ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb\" pid:4706 exited_at:{seconds:1747311531 nanos:721462226}" May 15 12:18:51.722268 containerd[1592]: time="2025-05-15T12:18:51.722205816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb\" id:\"ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb\" pid:4706 exited_at:{seconds:1747311531 nanos:721462226}" May 15 12:18:51.746082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba466b2f7d613aab4cd65e3f3908424f01a16eb91f62664e23d608003e17b1bb-rootfs.mount: Deactivated successfully. May 15 12:18:52.199392 kubelet[2753]: E0515 12:18:52.199329 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:52.459966 kubelet[2753]: E0515 12:18:52.459785 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:52.461664 containerd[1592]: time="2025-05-15T12:18:52.461594000Z" level=info msg="CreateContainer within sandbox \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 12:18:52.621807 containerd[1592]: time="2025-05-15T12:18:52.621754829Z" level=info msg="Container 7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084: CDI devices from CRI Config.CDIDevices: []" May 15 12:18:52.796033 containerd[1592]: time="2025-05-15T12:18:52.795948370Z" level=info msg="CreateContainer within sandbox \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084\"" May 15 12:18:52.796963 containerd[1592]: time="2025-05-15T12:18:52.796920406Z" level=info msg="StartContainer for \"7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084\"" May 15 12:18:52.798213 containerd[1592]: time="2025-05-15T12:18:52.798185122Z" level=info msg="connecting to shim 7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084" address="unix:///run/containerd/s/b3c69da85b2e52a43447a975c0933288269c6d88116d7e2e6059fb594ec53214" protocol=ttrpc version=3 May 15 12:18:52.831811 systemd[1]: Started cri-containerd-7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084.scope - libcontainer container 7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084. May 15 12:18:52.861187 systemd[1]: cri-containerd-7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084.scope: Deactivated successfully. 
May 15 12:18:52.862026 containerd[1592]: time="2025-05-15T12:18:52.861965779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084\" id:\"7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084\" pid:4746 exited_at:{seconds:1747311532 nanos:861403746}" May 15 12:18:52.898171 containerd[1592]: time="2025-05-15T12:18:52.898091357Z" level=info msg="received exit event container_id:\"7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084\" id:\"7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084\" pid:4746 exited_at:{seconds:1747311532 nanos:861403746}" May 15 12:18:52.906906 containerd[1592]: time="2025-05-15T12:18:52.906860757Z" level=info msg="StartContainer for \"7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084\" returns successfully" May 15 12:18:52.920311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b47c3a80d2e61dec6bc4049e2a94c5301ab696efd9c26b36d27ae8622c97084-rootfs.mount: Deactivated successfully. May 15 12:18:53.146366 kubelet[2753]: I0515 12:18:53.146216 2753 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T12:18:53Z","lastTransitionTime":"2025-05-15T12:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 12:18:53.465678 kubelet[2753]: E0515 12:18:53.465526 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:53.467566 containerd[1592]: time="2025-05-15T12:18:53.467513664Z" level=info msg="CreateContainer within sandbox \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 12:18:53.482407 containerd[1592]: time="2025-05-15T12:18:53.482310761Z" level=info msg="Container e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377: CDI devices from CRI Config.CDIDevices: []" May 15 12:18:53.497425 containerd[1592]: time="2025-05-15T12:18:53.497370389Z" level=info msg="CreateContainer within sandbox \"f77a4274fcab5e89bb99e433f37d5e2506b4639994ef99221b87cc819f3b666d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377\"" May 15 12:18:53.497988 containerd[1592]: time="2025-05-15T12:18:53.497941570Z" level=info msg="StartContainer for \"e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377\"" May 15 12:18:53.499121 containerd[1592]: time="2025-05-15T12:18:53.499091377Z" level=info msg="connecting to shim e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377" address="unix:///run/containerd/s/b3c69da85b2e52a43447a975c0933288269c6d88116d7e2e6059fb594ec53214" protocol=ttrpc version=3 May 15 12:18:53.527856 systemd[1]: Started cri-containerd-e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377.scope - libcontainer container e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377. 
May 15 12:18:53.568592 containerd[1592]: time="2025-05-15T12:18:53.568529994Z" level=info msg="StartContainer for \"e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377\" returns successfully" May 15 12:18:53.624972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296068307.mount: Deactivated successfully. May 15 12:18:53.648993 containerd[1592]: time="2025-05-15T12:18:53.648933087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377\" id:\"7aeb0a807f082fbb848ee5aa3d36e3d59ee807f755764e1a774d117ac0a5a3c1\" pid:4816 exited_at:{seconds:1747311533 nanos:648341728}" May 15 12:18:54.051658 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) May 15 12:18:54.199208 kubelet[2753]: E0515 12:18:54.199125 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-mxn54" podUID="741d12b5-5f6b-4eaf-a41f-2790bceecf75" May 15 12:18:54.472075 kubelet[2753]: E0515 12:18:54.471949 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:55.480649 kubelet[2753]: E0515 12:18:55.478974 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:55.484896 containerd[1592]: time="2025-05-15T12:18:55.484846299Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377\" id:\"0d7da9dac9400bf30cbc36a541a662dc8488260530802a372791943bdd893a7d\" pid:4890 exit_status:1 exited_at:{seconds:1747311535 nanos:483821821}" May 15 12:18:56.199155 kubelet[2753]: E0515 12:18:56.199070 2753 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-mxn54" podUID="741d12b5-5f6b-4eaf-a41f-2790bceecf75" May 15 12:18:57.239695 systemd-networkd[1495]: lxc_health: Link UP May 15 12:18:57.241899 systemd-networkd[1495]: lxc_health: Gained carrier May 15 12:18:57.266329 kubelet[2753]: E0515 12:18:57.265975 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:57.349821 kubelet[2753]: I0515 12:18:57.349761 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9lbtx" podStartSLOduration=9.349742409 podStartE2EDuration="9.349742409s" podCreationTimestamp="2025-05-15 12:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:18:54.491131673 +0000 UTC m=+93.391768662" watchObservedRunningTime="2025-05-15 12:18:57.349742409 +0000 UTC m=+96.250379388" May 15 12:18:57.481317 kubelet[2753]: E0515 12:18:57.481281 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 
12:18:57.666673 containerd[1592]: time="2025-05-15T12:18:57.666534643Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377\" id:\"8bce61405a7e0fafa2c3695e4295449f7158de8b20b6b847054e8d8c18aac091\" pid:5345 exited_at:{seconds:1747311537 nanos:665691091}" May 15 12:18:58.199644 kubelet[2753]: E0515 12:18:58.199570 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:18:58.597907 systemd-networkd[1495]: lxc_health: Gained IPv6LL May 15 12:18:59.775973 containerd[1592]: time="2025-05-15T12:18:59.775923576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377\" id:\"06031d0fae5385bca8723f1f594f000c13e8973514aa655933cc1bc19fe2350c\" pid:5382 exited_at:{seconds:1747311539 nanos:774659950}" May 15 12:19:01.956949 containerd[1592]: time="2025-05-15T12:19:01.956853707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377\" id:\"632660b3fa511215938ab96ccc37c4815cdffee85a6960bce819fbc64eafe48f\" pid:5411 exited_at:{seconds:1747311541 nanos:956372276}" May 15 12:19:04.116319 containerd[1592]: time="2025-05-15T12:19:04.116243292Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92d64186f3961434d9bd78f7dac8d2bf4f2177617766a8d64fa69a89fcb2377\" id:\"dc3d201001d6fd10dfe8b2ee197ffe591b5bb7ef1a52c354bfcc4b2647107420\" pid:5435 exited_at:{seconds:1747311544 nanos:115680646}" May 15 12:19:04.125966 sshd[4549]: Connection closed by 10.0.0.1 port 56328 May 15 12:19:04.126551 sshd-session[4541]: pam_unix(sshd:session): session closed for user core May 15 12:19:04.132712 systemd[1]: sshd@27-10.0.0.46:22-10.0.0.1:56328.service: Deactivated successfully. May 15 12:19:04.135432 systemd[1]: session-28.scope: Deactivated successfully. May 15 12:19:04.136649 systemd-logind[1565]: Session 28 logged out. Waiting for processes to exit. May 15 12:19:04.138675 systemd-logind[1565]: Removed session 28.