May 10 09:53:57.919427 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat May 10 08:33:52 -00 2025
May 10 09:53:57.919461 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080
May 10 09:53:57.919473 kernel: BIOS-provided physical RAM map:
May 10 09:53:57.919482 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 10 09:53:57.919490 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 10 09:53:57.919498 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 10 09:53:57.919508 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 10 09:53:57.919520 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 10 09:53:57.919528 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 10 09:53:57.919537 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 10 09:53:57.919546 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 10 09:53:57.919554 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 10 09:53:57.919563 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 10 09:53:57.919571 kernel: NX (Execute Disable) protection: active
May 10 09:53:57.919585 kernel: APIC: Static calls initialized
May 10 09:53:57.919597 kernel: SMBIOS 2.8 present.
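Not part of the captured log: the BIOS-e820 entries above can be summed to recover the guest's usable RAM. A minimal sketch, assuming dmesg-style input lines (the helper name `usable_bytes` is hypothetical):

```python
import re

# Matches lines like: BIOS-e820: [mem 0x...-0x...] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_lines):
    """Sum the sizes of all e820 regions marked 'usable'.

    Each range is inclusive, so a region's size is end - start + 1.
    """
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

sample = [
    "kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable",
]
print(usable_bytes(sample))  # 2633481216 (~2.45 GiB)
```

On the two usable regions in this log that comes to about 2.45 GiB, consistent with the "Memory: 2436632K/2571752K available" line later in the boot.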
May 10 09:53:57.919607 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 10 09:53:57.919618 kernel: Hypervisor detected: KVM
May 10 09:53:57.919627 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 10 09:53:57.919636 kernel: kvm-clock: using sched offset of 3256733818 cycles
May 10 09:53:57.919646 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 10 09:53:57.919656 kernel: tsc: Detected 2794.748 MHz processor
May 10 09:53:57.919666 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 10 09:53:57.919679 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 10 09:53:57.919689 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 10 09:53:57.919698 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 10 09:53:57.919708 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 10 09:53:57.919718 kernel: Using GB pages for direct mapping
May 10 09:53:57.919728 kernel: ACPI: Early table checksum verification disabled
May 10 09:53:57.919738 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 10 09:53:57.919747 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:53:57.919760 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:53:57.919771 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:53:57.919780 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 10 09:53:57.919801 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:53:57.919810 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:53:57.919820 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:53:57.919830 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:53:57.919839 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 10 09:53:57.919849 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 10 09:53:57.919866 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 10 09:53:57.919876 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 10 09:53:57.919886 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 10 09:53:57.919897 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 10 09:53:57.919907 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 10 09:53:57.919917 kernel: No NUMA configuration found
May 10 09:53:57.919930 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 10 09:53:57.919940 kernel: NODE_DATA(0) allocated [mem 0x9cfd4000-0x9cfdbfff]
May 10 09:53:57.919951 kernel: Zone ranges:
May 10 09:53:57.919961 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 10 09:53:57.919971 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 10 09:53:57.919981 kernel: Normal empty
May 10 09:53:57.919991 kernel: Device empty
May 10 09:53:57.920001 kernel: Movable zone start for each node
May 10 09:53:57.920011 kernel: Early memory node ranges
May 10 09:53:57.920025 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 10 09:53:57.920035 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 10 09:53:57.920045 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 10 09:53:57.920055 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 10 09:53:57.920065 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 10 09:53:57.920075 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 10 09:53:57.920084 kernel: ACPI: PM-Timer IO Port: 0x608
May 10 09:53:57.920195 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 10 09:53:57.920205 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 10 09:53:57.920218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 10 09:53:57.920228 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 10 09:53:57.920248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 10 09:53:57.920273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 10 09:53:57.920297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 10 09:53:57.920309 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 10 09:53:57.920319 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 10 09:53:57.920329 kernel: TSC deadline timer available
May 10 09:53:57.920339 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 10 09:53:57.920349 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 10 09:53:57.920363 kernel: kvm-guest: KVM setup pv remote TLB flush
May 10 09:53:57.920372 kernel: kvm-guest: setup PV sched yield
May 10 09:53:57.920386 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 10 09:53:57.920396 kernel: Booting paravirtualized kernel on KVM
May 10 09:53:57.920407 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 10 09:53:57.920417 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 10 09:53:57.920427 kernel: percpu: Embedded 58 pages/cpu s197416 r8192 d31960 u524288
May 10 09:53:57.920437 kernel: pcpu-alloc: s197416 r8192 d31960 u524288 alloc=1*2097152
May 10 09:53:57.920447 kernel: pcpu-alloc: [0] 0 1 2 3
May 10 09:53:57.920460 kernel: kvm-guest: PV spinlocks enabled
May 10 09:53:57.920470 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 10 09:53:57.920482 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080
May 10 09:53:57.920492 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 10 09:53:57.920502 kernel: random: crng init done
May 10 09:53:57.920513 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 10 09:53:57.920523 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 10 09:53:57.920533 kernel: Fallback order for Node 0: 0
May 10 09:53:57.920547 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 10 09:53:57.920556 kernel: Policy zone: DMA32
May 10 09:53:57.920567 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 10 09:53:57.920577 kernel: Memory: 2436632K/2571752K available (14336K kernel code, 2309K rwdata, 9044K rodata, 53680K init, 1596K bss, 134860K reserved, 0K cma-reserved)
May 10 09:53:57.920588 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 10 09:53:57.920598 kernel: ftrace: allocating 38190 entries in 150 pages
May 10 09:53:57.920608 kernel: ftrace: allocated 150 pages with 4 groups
May 10 09:53:57.920618 kernel: Dynamic Preempt: voluntary
May 10 09:53:57.920629 kernel: rcu: Preemptible hierarchical RCU implementation.
May 10 09:53:57.920643 kernel: rcu: RCU event tracing is enabled.
May 10 09:53:57.920654 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 10 09:53:57.920664 kernel: Trampoline variant of Tasks RCU enabled.
May 10 09:53:57.920674 kernel: Rude variant of Tasks RCU enabled.
May 10 09:53:57.920684 kernel: Tracing variant of Tasks RCU enabled.
May 10 09:53:57.920694 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
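An aside, not from the log: hash-table lines like "Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)" report the allocation size two ways. The "order" is the log2 of the number of 4 KiB pages, so the byte count should equal 2^order × 4096. A quick consistency check (the helper name is hypothetical):

```python
PAGE_SIZE = 4096  # x86-64 base page size

def order_to_bytes(order: int) -> int:
    """Bytes in an allocation of 2**order contiguous 4 KiB pages."""
    return (1 << order) * PAGE_SIZE

# Dentry cache: order 10 -> 4194304 bytes; inode cache: order 9 -> 2097152
print(order_to_bytes(10), order_to_bytes(9))  # 4194304 2097152
```

Both values match the byte counts printed in the log above.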
May 10 09:53:57.920704 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 10 09:53:57.920714 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 10 09:53:57.920724 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 10 09:53:57.920738 kernel: Console: colour VGA+ 80x25
May 10 09:53:57.920748 kernel: printk: console [ttyS0] enabled
May 10 09:53:57.920758 kernel: ACPI: Core revision 20230628
May 10 09:53:57.920768 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 10 09:53:57.920778 kernel: APIC: Switch to symmetric I/O mode setup
May 10 09:53:57.920788 kernel: x2apic enabled
May 10 09:53:57.920808 kernel: APIC: Switched APIC routing to: physical x2apic
May 10 09:53:57.920818 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 10 09:53:57.920829 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 10 09:53:57.920852 kernel: kvm-guest: setup PV IPIs
May 10 09:53:57.920863 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 10 09:53:57.920873 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 10 09:53:57.920886 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 10 09:53:57.920897 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 10 09:53:57.920907 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 10 09:53:57.920917 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 10 09:53:57.920927 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 10 09:53:57.920938 kernel: Spectre V2 : Mitigation: Retpolines
May 10 09:53:57.920951 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 10 09:53:57.920962 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 10 09:53:57.920973 kernel: RETBleed: Mitigation: untrained return thunk
May 10 09:53:57.920983 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 10 09:53:57.920994 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 10 09:53:57.921005 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 10 09:53:57.921016 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 10 09:53:57.921030 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 10 09:53:57.921040 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 10 09:53:57.921051 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 10 09:53:57.921062 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 10 09:53:57.921072 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 10 09:53:57.921083 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 10 09:53:57.921106 kernel: Freeing SMP alternatives memory: 32K
May 10 09:53:57.921117 kernel: pid_max: default: 32768 minimum: 301
May 10 09:53:57.921127 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 10 09:53:57.921142 kernel: landlock: Up and running.
May 10 09:53:57.921152 kernel: SELinux: Initializing.
May 10 09:53:57.921162 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 10 09:53:57.921173 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 10 09:53:57.921184 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 10 09:53:57.921195 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 10 09:53:57.921205 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 10 09:53:57.921216 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 10 09:53:57.921227 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 10 09:53:57.921240 kernel: ... version: 0
May 10 09:53:57.921251 kernel: ... bit width: 48
May 10 09:53:57.921261 kernel: ... generic registers: 6
May 10 09:53:57.921271 kernel: ... value mask: 0000ffffffffffff
May 10 09:53:57.921282 kernel: ... max period: 00007fffffffffff
May 10 09:53:57.921292 kernel: ... fixed-purpose events: 0
May 10 09:53:57.921302 kernel: ... event mask: 000000000000003f
May 10 09:53:57.921313 kernel: signal: max sigframe size: 1776
May 10 09:53:57.921332 kernel: rcu: Hierarchical SRCU implementation.
May 10 09:53:57.921355 kernel: rcu: Max phase no-delay instances is 400.
May 10 09:53:57.921374 kernel: smp: Bringing up secondary CPUs ...
May 10 09:53:57.921385 kernel: smpboot: x86: Booting SMP configuration:
May 10 09:53:57.921395 kernel: .... node #0, CPUs: #1 #2 #3
May 10 09:53:57.921405 kernel: smp: Brought up 1 node, 4 CPUs
May 10 09:53:57.921415 kernel: smpboot: Max logical packages: 1
May 10 09:53:57.921426 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 10 09:53:57.921440 kernel: devtmpfs: initialized
May 10 09:53:57.921450 kernel: x86/mm: Memory block size: 128MB
May 10 09:53:57.921464 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 10 09:53:57.921474 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 10 09:53:57.921485 kernel: pinctrl core: initialized pinctrl subsystem
May 10 09:53:57.921495 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 10 09:53:57.921505 kernel: audit: initializing netlink subsys (disabled)
May 10 09:53:57.921516 kernel: audit: type=2000 audit(1746870834.787:1): state=initialized audit_enabled=0 res=1
May 10 09:53:57.921526 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 10 09:53:57.921537 kernel: thermal_sys: Registered thermal governor 'user_space'
May 10 09:53:57.921547 kernel: cpuidle: using governor menu
May 10 09:53:57.921560 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 10 09:53:57.921570 kernel: dca service started, version 1.12.1
May 10 09:53:57.921581 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 10 09:53:57.921591 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 10 09:53:57.921604 kernel: PCI: Using configuration type 1 for base access
May 10 09:53:57.921616 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
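The per-CPU calibration and the SMP total above are consistent: 5589.49 BogoMIPS per CPU times 4 CPUs is the 22357.98 total reported by smpboot, and lpj=2794748 ties the two together via the classic delay-loop relation BogoMIPS = lpj / (500000 / HZ). A sketch of the arithmetic (assumption: CONFIG_HZ=1000 for this kernel, which the numbers imply but the log does not state):

```python
HZ = 1000           # assumed CONFIG_HZ; implied by the figures below
lpj = 2794748       # loops-per-jiffy from the calibration line above
ncpus = 4

bogomips = lpj / (500000 / HZ)   # classic delay-loop formula
print(round(bogomips, 2))        # ~5589.5 per CPU
print(round(bogomips * ncpus, 2))  # ~22357.98 across 4 CPUs
```

The tiny discrepancy against the printed 5589.49 is just truncation in the kernel's two-decimal output.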
May 10 09:53:57.921628 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 10 09:53:57.921638 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 10 09:53:57.921648 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 10 09:53:57.921661 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 10 09:53:57.921672 kernel: ACPI: Added _OSI(Module Device)
May 10 09:53:57.921682 kernel: ACPI: Added _OSI(Processor Device)
May 10 09:53:57.921692 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 10 09:53:57.921702 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 10 09:53:57.921712 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 10 09:53:57.921723 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 10 09:53:57.921733 kernel: ACPI: Interpreter enabled
May 10 09:53:57.921743 kernel: ACPI: PM: (supports S0 S3 S5)
May 10 09:53:57.921756 kernel: ACPI: Using IOAPIC for interrupt routing
May 10 09:53:57.921767 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 10 09:53:57.921777 kernel: PCI: Using E820 reservations for host bridge windows
May 10 09:53:57.921787 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 10 09:53:57.921807 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 10 09:53:57.922019 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 10 09:53:57.922187 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 10 09:53:57.922333 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 10 09:53:57.922351 kernel: PCI host bridge to bus 0000:00
May 10 09:53:57.922503 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 10 09:53:57.922646 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 10 09:53:57.922919 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 10 09:53:57.923057 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 10 09:53:57.923226 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 10 09:53:57.923370 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 10 09:53:57.923518 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 10 09:53:57.923695 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 10 09:53:57.924019 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 10 09:53:57.924208 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 10 09:53:57.924365 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 10 09:53:57.924520 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 10 09:53:57.924682 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 10 09:53:57.924864 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 10 09:53:57.925023 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 10 09:53:57.925202 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 10 09:53:57.925363 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 10 09:53:57.925532 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 10 09:53:57.925691 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 10 09:53:57.925866 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 10 09:53:57.926025 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 10 09:53:57.926228 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 10 09:53:57.926390 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 10 09:53:57.926546 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 10 09:53:57.926703 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 10 09:53:57.926870 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 10 09:53:57.927044 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 10 09:53:57.927291 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 10 09:53:57.927459 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 10 09:53:57.927613 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 10 09:53:57.927765 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 10 09:53:57.927939 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 10 09:53:57.928116 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 10 09:53:57.928132 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 10 09:53:57.928143 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 10 09:53:57.928158 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 10 09:53:57.928168 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 10 09:53:57.928179 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 10 09:53:57.928189 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 10 09:53:57.928200 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 10 09:53:57.928211 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 10 09:53:57.928226 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 10 09:53:57.928236 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 10 09:53:57.928247 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 10 09:53:57.928257 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 10 09:53:57.928267 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 10 09:53:57.928278 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 10 09:53:57.928288 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 10 09:53:57.928299 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 10 09:53:57.928309 kernel: iommu: Default domain type: Translated
May 10 09:53:57.928323 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 10 09:53:57.928334 kernel: PCI: Using ACPI for IRQ routing
May 10 09:53:57.928344 kernel: PCI: pci_cache_line_size set to 64 bytes
May 10 09:53:57.928354 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 10 09:53:57.928365 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 10 09:53:57.928523 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 10 09:53:57.928677 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 10 09:53:57.928845 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 10 09:53:57.928865 kernel: vgaarb: loaded
May 10 09:53:57.928875 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 10 09:53:57.928886 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 10 09:53:57.928897 kernel: clocksource: Switched to clocksource kvm-clock
May 10 09:53:57.928907 kernel: VFS: Disk quotas dquot_6.6.0
May 10 09:53:57.928918 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 10 09:53:57.928928 kernel: pnp: PnP ACPI init
May 10 09:53:57.929151 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 10 09:53:57.929173 kernel: pnp: PnP ACPI: found 6 devices
May 10 09:53:57.929184 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 10 09:53:57.929195 kernel: NET: Registered PF_INET protocol family
May 10 09:53:57.929205 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 10 09:53:57.929216 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 10 09:53:57.929227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 10 09:53:57.929237 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 10 09:53:57.929248 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 10 09:53:57.929258 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 10 09:53:57.929272 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 10 09:53:57.929283 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 10 09:53:57.929293 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 10 09:53:57.929304 kernel: NET: Registered PF_XDP protocol family
May 10 09:53:57.929449 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 10 09:53:57.929591 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 10 09:53:57.929733 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 10 09:53:57.929885 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 10 09:53:57.930028 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 10 09:53:57.930247 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 10 09:53:57.930263 kernel: PCI: CLS 0 bytes, default 64
May 10 09:53:57.930274 kernel: Initialise system trusted keyrings
May 10 09:53:57.930284 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 10 09:53:57.930295 kernel: Key type asymmetric registered
May 10 09:53:57.930306 kernel: Asymmetric key parser 'x509' registered
May 10 09:53:57.930317 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 10 09:53:57.930327 kernel: io scheduler mq-deadline registered
May 10 09:53:57.930338 kernel: io scheduler kyber registered
May 10 09:53:57.930353 kernel: io scheduler bfq registered
May 10 09:53:57.930364 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 10 09:53:57.930375 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 10 09:53:57.930386 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 10 09:53:57.930397 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 10 09:53:57.930407 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 10 09:53:57.930418 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 10 09:53:57.930429 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 10 09:53:57.930439 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 10 09:53:57.930453 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 10 09:53:57.930464 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 10 09:53:57.930628 kernel: rtc_cmos 00:04: RTC can wake from S4
May 10 09:53:57.930776 kernel: rtc_cmos 00:04: registered as rtc0
May 10 09:53:57.930932 kernel: rtc_cmos 00:04: setting system clock to 2025-05-10T09:53:57 UTC (1746870837)
May 10 09:53:57.931078 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 10 09:53:57.931130 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 10 09:53:57.931145 kernel: NET: Registered PF_INET6 protocol family
May 10 09:53:57.931156 kernel: Segment Routing with IPv6
May 10 09:53:57.931167 kernel: In-situ OAM (IOAM) with IPv6
May 10 09:53:57.931177 kernel: NET: Registered PF_PACKET protocol family
May 10 09:53:57.931188 kernel: Key type dns_resolver registered
May 10 09:53:57.931199 kernel: IPI shorthand broadcast: enabled
May 10 09:53:57.931210 kernel: sched_clock: Marking stable (3084002250, 113493613)->(3248049372, -50553509)
May 10 09:53:57.931220 kernel: registered taskstats version 1
May 10 09:53:57.931231 kernel: Loading compiled-in X.509 certificates
May 10 09:53:57.931242 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: f8080549509982706805ea0b811f8f4bcb4a274e'
May 10 09:53:57.931256 kernel: Key type .fscrypt registered
May 10 09:53:57.931266 kernel: Key type fscrypt-provisioning registered
May 10 09:53:57.931277 kernel: ima: No TPM chip found, activating TPM-bypass!
May 10 09:53:57.931287 kernel: ima: Allocated hash algorithm: sha1
May 10 09:53:57.931298 kernel: ima: No architecture policies found
May 10 09:53:57.931308 kernel: clk: Disabling unused clocks
May 10 09:53:57.931319 kernel: Warning: unable to open an initial console.
May 10 09:53:57.931330 kernel: Freeing unused kernel image (initmem) memory: 53680K
May 10 09:53:57.931344 kernel: Write protecting the kernel read-only data: 24576k
May 10 09:53:57.931354 kernel: Freeing unused kernel image (rodata/data gap) memory: 1196K
May 10 09:53:57.931365 kernel: Run /init as init process
May 10 09:53:57.931375 kernel: with arguments:
May 10 09:53:57.931385 kernel: /init
May 10 09:53:57.931396 kernel: with environment:
May 10 09:53:57.931406 kernel: HOME=/
May 10 09:53:57.931416 kernel: TERM=linux
May 10 09:53:57.931427 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 10 09:53:57.931446 systemd[1]: Successfully made /usr/ read-only.
May 10 09:53:57.931461 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 10 09:53:57.931473 systemd[1]: Detected virtualization kvm.
May 10 09:53:57.931484 systemd[1]: Detected architecture x86-64.
May 10 09:53:57.931495 systemd[1]: Running in initrd.
May 10 09:53:57.931506 systemd[1]: No hostname configured, using default hostname.
May 10 09:53:57.931517 systemd[1]: Hostname set to .
May 10 09:53:57.931532 systemd[1]: Initializing machine ID from VM UUID.
May 10 09:53:57.931543 systemd[1]: Queued start job for default target initrd.target.
May 10 09:53:57.931555 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 09:53:57.931581 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 09:53:57.931597 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 10 09:53:57.931609 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 10 09:53:57.931623 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 10 09:53:57.931636 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 10 09:53:57.931650 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 10 09:53:57.931661 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 10 09:53:57.931673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 09:53:57.931685 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 10 09:53:57.931696 systemd[1]: Reached target paths.target - Path Units.
May 10 09:53:57.931710 systemd[1]: Reached target slices.target - Slice Units.
May 10 09:53:57.931722 systemd[1]: Reached target swap.target - Swaps.
May 10 09:53:57.931733 systemd[1]: Reached target timers.target - Timer Units.
May 10 09:53:57.931745 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 10 09:53:57.931756 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 10 09:53:57.931768 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 10 09:53:57.931779 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 10 09:53:57.931799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
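A note on the \x2d sequences in the device unit names above: this is systemd's path escaping. The leading '/' is dropped, remaining '/' become '-', and literal '-' (among other unsafe bytes) is hex-escaped as \xNN, which is how /dev/disk/by-label/EFI-SYSTEM becomes dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. A simplified sketch that handles only the characters seen in this log (the full `systemd-escape --path` rules also cover edge cases like leading dots):

```python
def escape_path(path: str) -> str:
    """Roughly mimic `systemd-escape --path` for simple device paths."""
    parts = path.strip("/").split("/")
    escaped = []
    for part in parts:
        # Keep alphanumerics, '_' and '.'; hex-escape everything else.
        escaped.append("".join(
            c if c.isalnum() or c in "_." else f"\\x{ord(c):02x}"
            for c in part
        ))
    return "-".join(escaped)  # '/' separators become '-'

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```

Unescaping is the inverse: \x2d back to '-', then '-' separators back to '/'.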
May 10 09:53:57.931812 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 10 09:53:57.931827 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 09:53:57.931838 systemd[1]: Reached target sockets.target - Socket Units.
May 10 09:53:57.931850 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 10 09:53:57.931862 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 10 09:53:57.931873 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 10 09:53:57.931885 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 10 09:53:57.931897 systemd[1]: Starting systemd-fsck-usr.service...
May 10 09:53:57.931909 systemd[1]: Starting systemd-journald.service - Journal Service...
May 10 09:53:57.931924 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 10 09:53:57.931935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 09:53:57.931947 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 10 09:53:57.931960 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 09:53:57.931972 systemd[1]: Finished systemd-fsck-usr.service.
May 10 09:53:57.931987 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 10 09:53:57.932028 systemd-journald[194]: Collecting audit messages is disabled.
May 10 09:53:57.932062 systemd-journald[194]: Journal started
May 10 09:53:57.932102 systemd-journald[194]: Runtime Journal (/run/log/journal/b90f671ac5064e24865f860c860fac75) is 6M, max 48.6M, 42.5M free.
May 10 09:53:57.918910 systemd-modules-load[196]: Inserted module 'overlay'
May 10 09:53:57.959611 systemd[1]: Started systemd-journald.service - Journal Service.
May 10 09:53:57.959634 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 10 09:53:57.959646 kernel: Bridge firewalling registered
May 10 09:53:57.947152 systemd-modules-load[196]: Inserted module 'br_netfilter'
May 10 09:53:57.959997 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 10 09:53:57.965782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 09:53:57.968777 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 10 09:53:57.976448 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 10 09:53:57.980395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 10 09:53:57.995972 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 10 09:53:57.999216 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 10 09:53:58.005125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 10 09:53:58.008809 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 09:53:58.010961 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 10 09:53:58.017210 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 09:53:58.020422 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 09:53:58.024070 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 10 09:53:58.027438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 10 09:53:58.066008 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080
May 10 09:53:58.087033 systemd-resolved[237]: Positive Trust Anchors:
May 10 09:53:58.087052 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 09:53:58.087104 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 10 09:53:58.090043 systemd-resolved[237]: Defaulting to hostname 'linux'.
May 10 09:53:58.091319 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 10 09:53:58.099989 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 10 09:53:58.213130 kernel: SCSI subsystem initialized
May 10 09:53:58.222135 kernel: Loading iSCSI transport class v2.0-870.
May 10 09:53:58.233130 kernel: iscsi: registered transport (tcp)
May 10 09:53:58.259508 kernel: iscsi: registered transport (qla4xxx)
May 10 09:53:58.259554 kernel: QLogic iSCSI HBA Driver
May 10 09:53:58.283335 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 10 09:53:58.307497 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 10 09:53:58.312154 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 10 09:53:58.379082 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 10 09:53:58.383369 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 10 09:53:58.444130 kernel: raid6: avx2x4 gen() 26458 MB/s
May 10 09:53:58.461123 kernel: raid6: avx2x2 gen() 28927 MB/s
May 10 09:53:58.478269 kernel: raid6: avx2x1 gen() 25192 MB/s
May 10 09:53:58.478299 kernel: raid6: using algorithm avx2x2 gen() 28927 MB/s
May 10 09:53:58.496282 kernel: raid6: .... xor() 19560 MB/s, rmw enabled
May 10 09:53:58.496314 kernel: raid6: using avx2x2 recovery algorithm
May 10 09:53:58.518115 kernel: xor: automatically using best checksumming function avx
May 10 09:53:58.672121 kernel: Btrfs loaded, zoned=no, fsverity=no
May 10 09:53:58.682067 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 10 09:53:58.684242 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 09:53:58.714971 systemd-udevd[446]: Using default interface naming scheme 'v255'.
May 10 09:53:58.721046 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 10 09:53:58.723867 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 10 09:53:58.751442 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
May 10 09:53:58.785267 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 10 09:53:58.786729 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 10 09:53:58.871008 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 09:53:58.875991 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 10 09:53:58.923372 kernel: cryptd: max_cpu_qlen set to 1000
May 10 09:53:58.928676 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 10 09:53:58.929022 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 10 09:53:58.944139 kernel: libata version 3.00 loaded.
May 10 09:53:58.944246 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 10 09:53:58.944557 kernel: AVX2 version of gcm_enc/dec engaged.
May 10 09:53:58.949158 kernel: AES CTR mode by8 optimization enabled
May 10 09:53:58.953478 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 10 09:53:58.953525 kernel: GPT:9289727 != 19775487
May 10 09:53:58.953539 kernel: GPT:Alternate GPT header not at the end of the disk.
May 10 09:53:58.954157 kernel: GPT:9289727 != 19775487
May 10 09:53:58.955695 kernel: GPT: Use GNU Parted to correct GPT errors.
May 10 09:53:58.955751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 10 09:53:58.957124 kernel: ahci 0000:00:1f.2: version 3.0
May 10 09:53:58.962122 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 10 09:53:58.962154 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 10 09:53:58.960610 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 09:53:58.960753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 10 09:53:58.971685 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 10 09:53:58.971839 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 10 09:53:58.975796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 09:53:58.978936 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 10 09:53:58.981323 kernel: scsi host0: ahci
May 10 09:53:58.986118 kernel: scsi host1: ahci
May 10 09:53:58.988787 kernel: scsi host2: ahci
May 10 09:53:58.991108 kernel: scsi host3: ahci
May 10 09:53:58.998107 kernel: scsi host4: ahci
May 10 09:53:59.001172 kernel: BTRFS: device fsid 447a9416-2d70-470c-8858-df3b82fa5271 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (492)
May 10 09:53:59.003113 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (494)
May 10 09:53:59.003156 kernel: scsi host5: ahci
May 10 09:53:59.005192 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 10 09:53:59.005235 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 10 09:53:59.006466 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 10 09:53:59.006493 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 10 09:53:59.008448 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 10 09:53:59.008478 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 10 09:53:59.026083 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 10 09:53:59.035420 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 10 09:53:59.052365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 10 09:53:59.052485 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 10 09:53:59.063521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 10 09:53:59.064683 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 10 09:53:59.175248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 09:53:59.298149 disk-uuid[593]: Primary Header is updated.
May 10 09:53:59.298149 disk-uuid[593]: Secondary Entries is updated.
May 10 09:53:59.298149 disk-uuid[593]: Secondary Header is updated.
May 10 09:53:59.302005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 10 09:53:59.319130 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 10 09:53:59.319233 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 10 09:53:59.321118 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 10 09:53:59.322114 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 10 09:53:59.322135 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 10 09:53:59.324056 kernel: ata3.00: applying bridge limits
May 10 09:53:59.324079 kernel: ata3.00: configured for UDMA/100
May 10 09:53:59.325553 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 10 09:53:59.328107 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 10 09:53:59.328132 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 10 09:53:59.368117 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 10 09:53:59.368392 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 10 09:53:59.397125 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 10 09:53:59.686512 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 10 09:53:59.688475 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 10 09:53:59.689967 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 09:53:59.692447 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 10 09:53:59.696416 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 10 09:53:59.721421 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 10 09:54:00.312134 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 10 09:54:00.312324 disk-uuid[595]: The operation has completed successfully.
May 10 09:54:00.343426 systemd[1]: disk-uuid.service: Deactivated successfully.
May 10 09:54:00.343581 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 10 09:54:00.383690 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 10 09:54:00.404826 sh[623]: Success
May 10 09:54:00.423265 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 10 09:54:00.423312 kernel: device-mapper: uevent: version 1.0.3
May 10 09:54:00.424396 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 10 09:54:00.434110 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 10 09:54:00.463976 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 10 09:54:00.466288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 10 09:54:00.483831 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 10 09:54:00.495894 kernel: BTRFS info (device dm-0): first mount of filesystem 447a9416-2d70-470c-8858-df3b82fa5271
May 10 09:54:00.495927 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 10 09:54:00.495938 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 10 09:54:00.497283 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 10 09:54:00.498285 kernel: BTRFS info (device dm-0): using free space tree
May 10 09:54:00.503826 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 10 09:54:00.506027 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 10 09:54:00.508331 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 10 09:54:00.511016 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 10 09:54:00.513562 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 10 09:54:00.541175 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12
May 10 09:54:00.541226 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 10 09:54:00.541237 kernel: BTRFS info (device vda6): using free space tree
May 10 09:54:00.545117 kernel: BTRFS info (device vda6): auto enabling async discard
May 10 09:54:00.549115 kernel: BTRFS info (device vda6): last unmount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12
May 10 09:54:00.555372 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 10 09:54:00.556480 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 10 09:54:00.709428 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 10 09:54:00.725408 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 10 09:54:00.799744 ignition[720]: Ignition 2.21.0
May 10 09:54:00.799758 ignition[720]: Stage: fetch-offline
May 10 09:54:00.799795 ignition[720]: no configs at "/usr/lib/ignition/base.d"
May 10 09:54:00.799806 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 09:54:00.799902 ignition[720]: parsed url from cmdline: ""
May 10 09:54:00.799906 ignition[720]: no config URL provided
May 10 09:54:00.799912 ignition[720]: reading system config file "/usr/lib/ignition/user.ign"
May 10 09:54:00.799929 ignition[720]: no config at "/usr/lib/ignition/user.ign"
May 10 09:54:00.799952 ignition[720]: op(1): [started] loading QEMU firmware config module
May 10 09:54:00.799957 ignition[720]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 10 09:54:00.810895 ignition[720]: op(1): [finished] loading QEMU firmware config module
May 10 09:54:00.827623 systemd-networkd[808]: lo: Link UP
May 10 09:54:00.827632 systemd-networkd[808]: lo: Gained carrier
May 10 09:54:00.829747 systemd-networkd[808]: Enumeration completed
May 10 09:54:00.829856 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 10 09:54:00.830360 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 09:54:00.830365 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 09:54:00.831328 systemd-networkd[808]: eth0: Link UP
May 10 09:54:00.831332 systemd-networkd[808]: eth0: Gained carrier
May 10 09:54:00.831341 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 09:54:00.832224 systemd[1]: Reached target network.target - Network.
May 10 09:54:00.859136 systemd-networkd[808]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 10 09:54:00.866979 ignition[720]: parsing config with SHA512: eb175b94d31ed5fb0622a34748be28680cba0b209c8aab8ad022a5285a3d7b96eef2ff77af125c38dc81cf8f171eeae616b92656f30b7d150f96b309a4d1b0eb
May 10 09:54:00.870553 unknown[720]: fetched base config from "system"
May 10 09:54:00.870568 unknown[720]: fetched user config from "qemu"
May 10 09:54:00.870907 ignition[720]: fetch-offline: fetch-offline passed
May 10 09:54:00.870976 ignition[720]: Ignition finished successfully
May 10 09:54:00.873811 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 10 09:54:00.875800 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 10 09:54:00.876600 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 10 09:54:00.907550 ignition[817]: Ignition 2.21.0
May 10 09:54:00.907564 ignition[817]: Stage: kargs
May 10 09:54:00.907879 ignition[817]: no configs at "/usr/lib/ignition/base.d"
May 10 09:54:00.907895 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 09:54:00.910013 ignition[817]: kargs: kargs passed
May 10 09:54:00.910111 ignition[817]: Ignition finished successfully
May 10 09:54:00.914797 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 10 09:54:00.916513 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 10 09:54:00.958163 ignition[825]: Ignition 2.21.0
May 10 09:54:00.958187 ignition[825]: Stage: disks
May 10 09:54:00.958777 ignition[825]: no configs at "/usr/lib/ignition/base.d"
May 10 09:54:00.958804 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 09:54:00.960037 ignition[825]: disks: disks passed
May 10 09:54:00.960109 ignition[825]: Ignition finished successfully
May 10 09:54:00.964237 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 10 09:54:00.966767 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 10 09:54:00.969184 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 10 09:54:00.970818 systemd[1]: Reached target local-fs.target - Local File Systems.
May 10 09:54:00.972347 systemd[1]: Reached target sysinit.target - System Initialization.
May 10 09:54:00.972812 systemd[1]: Reached target basic.target - Basic System.
May 10 09:54:00.974846 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 10 09:54:01.005983 systemd-fsck[835]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 10 09:54:01.014007 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 10 09:54:01.017684 systemd[1]: Mounting sysroot.mount - /sysroot...
May 10 09:54:01.180122 kernel: EXT4-fs (vda9): mounted filesystem f8cce592-76ea-4219-9560-1ef21b28761f r/w with ordered data mode. Quota mode: none.
May 10 09:54:01.181439 systemd[1]: Mounted sysroot.mount - /sysroot.
May 10 09:54:01.182329 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 10 09:54:01.185536 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 10 09:54:01.188280 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 10 09:54:01.189506 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 10 09:54:01.189554 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 10 09:54:01.189580 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 10 09:54:01.205283 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 10 09:54:01.207048 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 10 09:54:01.213376 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (843)
May 10 09:54:01.213416 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12
May 10 09:54:01.213431 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 10 09:54:01.213443 kernel: BTRFS info (device vda6): using free space tree
May 10 09:54:01.216114 kernel: BTRFS info (device vda6): auto enabling async discard
May 10 09:54:01.222656 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 10 09:54:01.258082 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
May 10 09:54:01.263216 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
May 10 09:54:01.293037 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
May 10 09:54:01.298459 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
May 10 09:54:01.394792 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 10 09:54:01.399378 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 10 09:54:01.402149 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 10 09:54:01.422125 kernel: BTRFS info (device vda6): last unmount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12
May 10 09:54:01.434710 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 10 09:54:01.493660 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 10 09:54:01.511470 ignition[957]: INFO : Ignition 2.21.0
May 10 09:54:01.511470 ignition[957]: INFO : Stage: mount
May 10 09:54:01.514046 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
May 10 09:54:01.514046 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 09:54:01.516225 ignition[957]: INFO : mount: mount passed
May 10 09:54:01.516225 ignition[957]: INFO : Ignition finished successfully
May 10 09:54:01.520678 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 10 09:54:01.523787 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 10 09:54:01.546485 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 10 09:54:01.569218 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (969)
May 10 09:54:01.569290 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12
May 10 09:54:01.571233 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 10 09:54:01.571267 kernel: BTRFS info (device vda6): using free space tree
May 10 09:54:01.577120 kernel: BTRFS info (device vda6): auto enabling async discard
May 10 09:54:01.578383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 10 09:54:01.632993 ignition[986]: INFO : Ignition 2.21.0
May 10 09:54:01.632993 ignition[986]: INFO : Stage: files
May 10 09:54:01.635272 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
May 10 09:54:01.635272 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 09:54:01.637757 ignition[986]: DEBUG : files: compiled without relabeling support, skipping
May 10 09:54:01.639344 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 10 09:54:01.639344 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 10 09:54:01.642790 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 10 09:54:01.644214 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 10 09:54:01.644214 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 10 09:54:01.643684 unknown[986]: wrote ssh authorized keys file for user: core
May 10 09:54:01.648301 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 10 09:54:01.648301 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 10 09:54:01.699544 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 10 09:54:02.072307 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 10 09:54:02.074639 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 10 09:54:02.074639 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 10 09:54:02.441342 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 10 09:54:02.559305 systemd-networkd[808]: eth0: Gained IPv6LL
May 10 09:54:02.621372 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 10 09:54:02.621372 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 09:54:02.625901 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 10 09:54:03.019667 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 10 09:54:03.677304 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 09:54:03.677304 ignition[986]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 10 09:54:03.681812 ignition[986]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 09:54:03.681812 ignition[986]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 09:54:03.681812 ignition[986]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 10 09:54:03.681812 ignition[986]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 10 09:54:03.681812 ignition[986]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 10 09:54:03.681812 ignition[986]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 10 09:54:03.681812 ignition[986]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 10 09:54:03.681812 ignition[986]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 10 09:54:03.719999 ignition[986]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 10 09:54:03.728737 ignition[986]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 10 09:54:03.730819 ignition[986]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 10 09:54:03.730819 ignition[986]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 10 09:54:03.730819 ignition[986]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 10 09:54:03.730819 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 10 09:54:03.730819 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 10 09:54:03.730819 ignition[986]: INFO : files: files passed
May 10 09:54:03.730819 ignition[986]: INFO : Ignition finished successfully
May 10 09:54:03.742195 systemd[1]: Finished ignition-files.service - Ignition (files).
May 10 09:54:03.746653 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 10 09:54:03.750292 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 10 09:54:03.764332 systemd[1]: ignition-quench.service: Deactivated successfully.
May 10 09:54:03.764474 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 10 09:54:03.767795 initrd-setup-root-after-ignition[1016]: grep: /sysroot/oem/oem-release: No such file or directory
May 10 09:54:03.771717 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 10 09:54:03.773385 initrd-setup-root-after-ignition[1018]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 10 09:54:03.775900 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 10 09:54:03.779182 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 10 09:54:03.779448 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 10 09:54:03.785672 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 10 09:54:03.852337 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 10 09:54:03.853619 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 10 09:54:03.856843 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 10 09:54:03.859383 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 10 09:54:03.862004 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 10 09:54:03.865425 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 10 09:54:03.895827 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 10 09:54:03.899977 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 10 09:54:03.925635 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 10 09:54:03.928639 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 09:54:03.931398 systemd[1]: Stopped target timers.target - Timer Units.
May 10 09:54:03.933625 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 10 09:54:03.934845 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 10 09:54:03.937843 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 10 09:54:03.940227 systemd[1]: Stopped target basic.target - Basic System.
May 10 09:54:03.942349 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 10 09:54:03.944884 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 10 09:54:03.947578 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 10 09:54:03.950188 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 10 09:54:03.952790 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 10 09:54:03.955191 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 10 09:54:03.958074 systemd[1]: Stopped target sysinit.target - System Initialization.
May 10 09:54:03.960511 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 10 09:54:03.962865 systemd[1]: Stopped target swap.target - Swaps.
May 10 09:54:03.964781 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 10 09:54:03.966056 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 10 09:54:03.968921 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 10 09:54:03.971495 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 09:54:03.974375 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 10 09:54:03.975486 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 09:54:03.978502 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 10 09:54:03.979745 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 10 09:54:03.982439 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 10 09:54:03.983727 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 10 09:54:03.986521 systemd[1]: Stopped target paths.target - Path Units.
May 10 09:54:03.988590 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 10 09:54:03.991165 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 09:54:03.991350 systemd[1]: Stopped target slices.target - Slice Units.
May 10 09:54:03.995338 systemd[1]: Stopped target sockets.target - Socket Units.
May 10 09:54:03.997426 systemd[1]: iscsid.socket: Deactivated successfully.
May 10 09:54:03.997559 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 10 09:54:03.999538 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 10 09:54:03.999667 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 10 09:54:04.001642 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 10 09:54:04.001816 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 10 09:54:04.004081 systemd[1]: ignition-files.service: Deactivated successfully.
May 10 09:54:04.004235 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 10 09:54:04.008533 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 10 09:54:04.009894 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 10 09:54:04.010051 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 09:54:04.013112 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 10 09:54:04.014232 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 10 09:54:04.014398 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 09:54:04.016406 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 10 09:54:04.016563 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 10 09:54:04.026458 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 10 09:54:04.026604 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 10 09:54:04.052686 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 10 09:54:04.130388 ignition[1042]: INFO : Ignition 2.21.0
May 10 09:54:04.130388 ignition[1042]: INFO : Stage: umount
May 10 09:54:04.132779 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
May 10 09:54:04.132779 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 09:54:04.132779 ignition[1042]: INFO : umount: umount passed
May 10 09:54:04.132779 ignition[1042]: INFO : Ignition finished successfully
May 10 09:54:04.136202 systemd[1]: ignition-mount.service: Deactivated successfully.
May 10 09:54:04.136380 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 10 09:54:04.138763 systemd[1]: Stopped target network.target - Network.
May 10 09:54:04.140836 systemd[1]: ignition-disks.service: Deactivated successfully.
May 10 09:54:04.140958 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 10 09:54:04.143206 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 10 09:54:04.143273 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 10 09:54:04.145371 systemd[1]: ignition-setup.service: Deactivated successfully.
May 10 09:54:04.145434 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 10 09:54:04.147596 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 10 09:54:04.147669 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 10 09:54:04.150148 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 10 09:54:04.152552 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 10 09:54:04.156653 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 10 09:54:04.156868 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 10 09:54:04.162378 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 10 09:54:04.162695 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 10 09:54:04.162830 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 10 09:54:04.168019 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 10 09:54:04.169274 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 10 09:54:04.170925 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 10 09:54:04.170981 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 10 09:54:04.174404 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 10 09:54:04.175710 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 10 09:54:04.175773 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 10 09:54:04.177285 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 09:54:04.177345 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 10 09:54:04.179744 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 10 09:54:04.179805 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 10 09:54:04.182153 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 10 09:54:04.182205 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 09:54:04.184981 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 09:54:04.191308 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 10 09:54:04.191382 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 10 09:54:04.231362 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 10 09:54:04.231556 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 10 09:54:04.234676 systemd[1]: network-cleanup.service: Deactivated successfully.
May 10 09:54:04.234798 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 10 09:54:04.237779 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 10 09:54:04.237844 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 10 09:54:04.239599 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 10 09:54:04.239648 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 09:54:04.242332 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 10 09:54:04.242388 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 10 09:54:04.244771 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 10 09:54:04.244830 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 10 09:54:04.247036 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 10 09:54:04.247084 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 09:54:04.250417 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 10 09:54:04.263794 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 10 09:54:04.263856 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 10 09:54:04.265395 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 10 09:54:04.265443 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 09:54:04.266903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 09:54:04.266952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 10 09:54:04.270643 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 10 09:54:04.270716 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 10 09:54:04.270780 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 10 09:54:04.281437 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 10 09:54:04.281549 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 10 09:54:04.345240 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 10 09:54:04.345373 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 10 09:54:04.347632 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 10 09:54:04.349449 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 10 09:54:04.349521 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 10 09:54:04.352858 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 10 09:54:04.374045 systemd[1]: Switching root.
May 10 09:54:04.398812 systemd-journald[194]: Journal stopped
May 10 09:54:05.839839 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
May 10 09:54:05.839899 kernel: SELinux: policy capability network_peer_controls=1
May 10 09:54:05.839918 kernel: SELinux: policy capability open_perms=1
May 10 09:54:05.839930 kernel: SELinux: policy capability extended_socket_class=1
May 10 09:54:05.839942 kernel: SELinux: policy capability always_check_network=0
May 10 09:54:05.839956 kernel: SELinux: policy capability cgroup_seclabel=1
May 10 09:54:05.839974 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 10 09:54:05.839985 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 10 09:54:05.839998 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 10 09:54:05.840015 kernel: audit: type=1403 audit(1746870844.854:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 10 09:54:05.840027 systemd[1]: Successfully loaded SELinux policy in 46.021ms.
May 10 09:54:05.840048 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.616ms.
May 10 09:54:05.840066 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 10 09:54:05.840078 systemd[1]: Detected virtualization kvm.
May 10 09:54:05.840106 systemd[1]: Detected architecture x86-64.
May 10 09:54:05.840118 systemd[1]: Detected first boot.
May 10 09:54:05.840130 systemd[1]: Initializing machine ID from VM UUID.
May 10 09:54:05.840142 zram_generator::config[1089]: No configuration found.
May 10 09:54:05.840156 kernel: Guest personality initialized and is inactive
May 10 09:54:05.840174 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 10 09:54:05.840186 kernel: Initialized host personality
May 10 09:54:05.840197 kernel: NET: Registered PF_VSOCK protocol family
May 10 09:54:05.840211 systemd[1]: Populated /etc with preset unit settings.
May 10 09:54:05.840224 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 10 09:54:05.840237 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 10 09:54:05.840249 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 10 09:54:05.840261 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 10 09:54:05.840273 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 10 09:54:05.840286 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 10 09:54:05.840298 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 10 09:54:05.840310 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 10 09:54:05.840325 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 10 09:54:05.840339 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 10 09:54:05.840352 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 10 09:54:05.840364 systemd[1]: Created slice user.slice - User and Session Slice.
May 10 09:54:05.840378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 09:54:05.840391 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 09:54:05.840403 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 10 09:54:05.840415 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 10 09:54:05.840430 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 10 09:54:05.840442 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 10 09:54:05.840455 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 10 09:54:05.840467 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 09:54:05.840481 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 10 09:54:05.840494 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 10 09:54:05.840506 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 10 09:54:05.840519 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 10 09:54:05.840533 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 10 09:54:05.840546 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 09:54:05.840558 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 10 09:54:05.840571 systemd[1]: Reached target slices.target - Slice Units.
May 10 09:54:05.840591 systemd[1]: Reached target swap.target - Swaps.
May 10 09:54:05.840603 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 10 09:54:05.840615 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 10 09:54:05.840628 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 10 09:54:05.840640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 10 09:54:05.840655 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 10 09:54:05.840668 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 09:54:05.840680 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 10 09:54:05.840693 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 10 09:54:05.840705 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 10 09:54:05.840717 systemd[1]: Mounting media.mount - External Media Directory...
May 10 09:54:05.840730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:05.840742 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 10 09:54:05.840754 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 10 09:54:05.840769 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 10 09:54:05.840782 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 10 09:54:05.840794 systemd[1]: Reached target machines.target - Containers.
May 10 09:54:05.840806 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 10 09:54:05.840818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 09:54:05.840830 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 10 09:54:05.840843 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 10 09:54:05.840855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 09:54:05.840870 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 10 09:54:05.840882 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 09:54:05.840894 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 10 09:54:05.840906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 09:54:05.840918 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 10 09:54:05.840936 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 10 09:54:05.840948 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 10 09:54:05.840960 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 10 09:54:05.840972 systemd[1]: Stopped systemd-fsck-usr.service.
May 10 09:54:05.840987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 10 09:54:05.841000 systemd[1]: Starting systemd-journald.service - Journal Service...
May 10 09:54:05.841012 kernel: loop: module loaded
May 10 09:54:05.841023 kernel: fuse: init (API version 7.39)
May 10 09:54:05.841035 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 10 09:54:05.841048 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 10 09:54:05.841066 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 10 09:54:05.841078 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 10 09:54:05.841102 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 10 09:54:05.841117 systemd[1]: verity-setup.service: Deactivated successfully.
May 10 09:54:05.841130 systemd[1]: Stopped verity-setup.service.
May 10 09:54:05.841142 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:05.841154 kernel: ACPI: bus type drm_connector registered
May 10 09:54:05.841169 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 10 09:54:05.841182 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 10 09:54:05.841194 systemd[1]: Mounted media.mount - External Media Directory.
May 10 09:54:05.841209 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 10 09:54:05.841221 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 10 09:54:05.841233 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 10 09:54:05.841248 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 09:54:05.841261 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 10 09:54:05.841275 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 10 09:54:05.841291 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 09:54:05.841303 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 09:54:05.841315 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 09:54:05.841328 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 10 09:54:05.841340 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 09:54:05.841353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 09:54:05.841368 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 10 09:54:05.841380 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 10 09:54:05.841393 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 09:54:05.841405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 09:54:05.841417 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 10 09:54:05.841430 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 10 09:54:05.841442 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 10 09:54:05.841454 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 10 09:54:05.841467 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 10 09:54:05.841482 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 10 09:54:05.841495 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 10 09:54:05.841507 systemd[1]: Reached target local-fs.target - Local File Systems.
May 10 09:54:05.841543 systemd-journald[1153]: Collecting audit messages is disabled.
May 10 09:54:05.841571 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 10 09:54:05.841593 systemd-journald[1153]: Journal started
May 10 09:54:05.841616 systemd-journald[1153]: Runtime Journal (/run/log/journal/b90f671ac5064e24865f860c860fac75) is 6M, max 48.6M, 42.5M free.
May 10 09:54:05.449832 systemd[1]: Queued start job for default target multi-user.target.
May 10 09:54:05.464131 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 10 09:54:05.464670 systemd[1]: systemd-journald.service: Deactivated successfully.
May 10 09:54:05.861119 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 10 09:54:05.861209 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 09:54:05.864927 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 10 09:54:05.867701 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 09:54:05.871842 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 10 09:54:05.871875 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 10 09:54:05.877623 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 10 09:54:05.879399 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 10 09:54:05.885646 systemd[1]: Started systemd-journald.service - Journal Service.
May 10 09:54:05.888862 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 10 09:54:05.890440 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 10 09:54:05.893053 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 10 09:54:05.936182 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 10 09:54:05.944727 kernel: loop0: detected capacity change from 0 to 113872
May 10 09:54:05.963814 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 10 09:54:05.971282 systemd-journald[1153]: Time spent on flushing to /var/log/journal/b90f671ac5064e24865f860c860fac75 is 24.451ms for 977 entries.
May 10 09:54:05.971282 systemd-journald[1153]: System Journal (/var/log/journal/b90f671ac5064e24865f860c860fac75) is 8M, max 195.6M, 187.6M free.
May 10 09:54:06.228884 systemd-journald[1153]: Received client request to flush runtime journal.
May 10 09:54:06.228978 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 10 09:54:06.229014 kernel: loop1: detected capacity change from 0 to 146240
May 10 09:54:06.229049 kernel: loop2: detected capacity change from 0 to 210664
May 10 09:54:06.229073 kernel: loop3: detected capacity change from 0 to 113872
May 10 09:54:06.229125 kernel: loop4: detected capacity change from 0 to 146240
May 10 09:54:05.996327 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 09:54:06.053397 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 10 09:54:06.056587 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 10 09:54:06.064295 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 10 09:54:06.067256 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 10 09:54:06.103866 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 10 09:54:06.163278 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 10 09:54:06.165896 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 10 09:54:06.231119 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 10 09:54:06.246485 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
May 10 09:54:06.247149 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
May 10 09:54:06.257650 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 09:54:06.266784 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 10 09:54:06.272114 kernel: loop5: detected capacity change from 0 to 210664
May 10 09:54:06.279113 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 10 09:54:06.279921 (sd-merge)[1226]: Merged extensions into '/usr'.
May 10 09:54:06.380604 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
May 10 09:54:06.380629 systemd[1]: Reloading...
May 10 09:54:06.488819 zram_generator::config[1256]: No configuration found.
May 10 09:54:06.636641 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 10 09:54:06.696788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 09:54:06.782470 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 10 09:54:06.782701 systemd[1]: Reloading finished in 401 ms.
May 10 09:54:06.807399 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 10 09:54:06.809399 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 10 09:54:06.829180 systemd[1]: Starting ensure-sysext.service...
May 10 09:54:06.831694 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 10 09:54:06.843161 systemd[1]: Reload requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)...
May 10 09:54:06.843180 systemd[1]: Reloading...
May 10 09:54:06.953442 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 10 09:54:06.953482 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 10 09:54:06.953875 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 10 09:54:06.954175 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 10 09:54:06.955200 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 10 09:54:06.955491 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
May 10 09:54:06.955583 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
May 10 09:54:06.962788 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
May 10 09:54:06.962919 systemd-tmpfiles[1295]: Skipping /boot
May 10 09:54:06.984718 zram_generator::config[1325]: No configuration found.
May 10 09:54:06.986720 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
May 10 09:54:06.986820 systemd-tmpfiles[1295]: Skipping /boot
May 10 09:54:07.098695 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 09:54:07.180920 systemd[1]: Reloading finished in 337 ms.
May 10 09:54:07.199636 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 10 09:54:07.219121 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 09:54:07.230961 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 10 09:54:07.234689 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 10 09:54:07.237448 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 10 09:54:07.258077 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 10 09:54:07.262326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 09:54:07.267697 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 10 09:54:07.272406 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:07.272607 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 09:54:07.278878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 09:54:07.285782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 09:54:07.290027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 09:54:07.292877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 09:54:07.293003 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 10 09:54:07.295613 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 10 09:54:07.296860 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:07.305740 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 10 09:54:07.308030 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 09:54:07.308766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 09:54:07.311713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 09:54:07.311931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 09:54:07.313847 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 09:54:07.314178 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 09:54:07.321085 systemd-udevd[1365]: Using default interface naming scheme 'v255'.
May 10 09:54:07.323969 augenrules[1393]: No rules
May 10 09:54:07.325748 systemd[1]: audit-rules.service: Deactivated successfully.
May 10 09:54:07.326014 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 10 09:54:07.329655 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 10 09:54:07.340654 systemd[1]: Finished ensure-sysext.service.
May 10 09:54:07.343615 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 10 09:54:07.346723 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:07.348692 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 10 09:54:07.349899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 09:54:07.354280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 09:54:07.359512 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 10 09:54:07.373313 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 09:54:07.384316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 09:54:07.386285 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 09:54:07.386332 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 10 09:54:07.389455 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 10 09:54:07.394410 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 10 09:54:07.395520 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 10 09:54:07.395560 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:07.395892 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 10 09:54:07.398459 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 10 09:54:07.401517 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 09:54:07.407333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 09:54:07.413417 augenrules[1403]: /sbin/augenrules: No change
May 10 09:54:07.427057 augenrules[1459]: No rules
May 10 09:54:07.427756 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 10 09:54:07.430910 systemd[1]: audit-rules.service: Deactivated successfully.
May 10 09:54:07.431202 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 10 09:54:07.444441 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 09:54:07.444683 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 10 09:54:07.448628 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 10 09:54:07.458455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 09:54:07.458710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 09:54:07.460493 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 09:54:07.460708 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 09:54:07.463733 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 10 09:54:07.463896 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 09:54:07.463975 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 10 09:54:07.486124 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1432)
May 10 09:54:07.548469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 10 09:54:07.621138 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 10 09:54:07.623523 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 10 09:54:07.629427 kernel: mousedev: PS/2 mouse device common for all mice
May 10 09:54:07.640406 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 10 09:54:07.640739 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 10 09:54:07.640934 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 10 09:54:07.644106 kernel: ACPI: button: Power Button [PWRF]
May 10 09:54:07.657927 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 10 09:54:07.674972 systemd-resolved[1364]: Positive Trust Anchors:
May 10 09:54:07.674987 systemd-resolved[1364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 09:54:07.675020 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 10 09:54:07.680932 systemd-resolved[1364]: Defaulting to hostname 'linux'.
May 10 09:54:07.685591 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 10 09:54:07.687218 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 10 09:54:07.712703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 09:54:07.821461 kernel: kvm_amd: TSC scaling supported
May 10 09:54:07.821550 kernel: kvm_amd: Nested Virtualization enabled
May 10 09:54:07.821568 kernel: kvm_amd: Nested Paging enabled
May 10 09:54:07.821583 kernel: kvm_amd: LBR virtualization supported
May 10 09:54:07.826436 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 10 09:54:07.826470 kernel: kvm_amd: Virtual GIF supported
May 10 09:54:07.849663 systemd-networkd[1464]: lo: Link UP
May 10 09:54:07.849687 systemd-networkd[1464]: lo: Gained carrier
May 10 09:54:07.853398 systemd-networkd[1464]: Enumeration completed
May 10 09:54:07.853604 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 10 09:54:07.853829 systemd[1]: Reached target network.target - Network.
May 10 09:54:07.854017 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 09:54:07.854024 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 09:54:07.855631 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 10 09:54:07.857605 systemd-networkd[1464]: eth0: Link UP
May 10 09:54:07.857612 systemd-networkd[1464]: eth0: Gained carrier
May 10 09:54:07.857638 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 09:54:07.859514 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 10 09:54:07.866413 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 10 09:54:07.866616 systemd[1]: Reached target time-set.target - System Time Set.
May 10 09:54:07.875232 systemd-networkd[1464]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 10 09:54:07.880573 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection.
May 10 09:54:08.302316 systemd-resolved[1364]: Clock change detected. Flushing caches.
May 10 09:54:08.302618 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 10 09:54:08.303936 systemd-timesyncd[1446]: Initial clock synchronization to Sat 2025-05-10 09:54:08.302103 UTC.
May 10 09:54:08.319897 kernel: EDAC MC: Ver: 3.0.0
May 10 09:54:08.329632 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 10 09:54:08.366162 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 09:54:08.367913 systemd[1]: Reached target sysinit.target - System Initialization.
May 10 09:54:08.369445 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 10 09:54:08.370901 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 10 09:54:08.372270 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 10 09:54:08.373748 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 10 09:54:08.375016 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 10 09:54:08.376482 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 10 09:54:08.377928 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 10 09:54:08.377980 systemd[1]: Reached target paths.target - Path Units.
May 10 09:54:08.379036 systemd[1]: Reached target timers.target - Timer Units.
May 10 09:54:08.381498 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 10 09:54:08.384481 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 10 09:54:08.388193 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 10 09:54:08.389738 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 10 09:54:08.391120 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 10 09:54:08.396129 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 10 09:54:08.397684 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 10 09:54:08.399711 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 10 09:54:08.401726 systemd[1]: Reached target sockets.target - Socket Units.
May 10 09:54:08.402848 systemd[1]: Reached target basic.target - Basic System.
May 10 09:54:08.403890 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 10 09:54:08.403926 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 10 09:54:08.405335 systemd[1]: Starting containerd.service - containerd container runtime...
May 10 09:54:08.407545 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 10 09:54:08.409992 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 10 09:54:08.412556 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 10 09:54:08.415717 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 10 09:54:08.416787 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 10 09:54:08.419060 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 10 09:54:08.420687 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 10 09:54:08.422025 jq[1516]: false
May 10 09:54:08.424090 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 10 09:54:08.427413 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 10 09:54:08.431680 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 10 09:54:08.439012 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Refreshing passwd entry cache
May 10 09:54:08.439028 oslogin_cache_refresh[1518]: Refreshing passwd entry cache
May 10 09:54:08.444614 extend-filesystems[1517]: Found loop3
May 10 09:54:08.444614 extend-filesystems[1517]: Found loop4
May 10 09:54:08.444614 extend-filesystems[1517]: Found loop5
May 10 09:54:08.444614 extend-filesystems[1517]: Found sr0
May 10 09:54:08.444614 extend-filesystems[1517]: Found vda
May 10 09:54:08.444614 extend-filesystems[1517]: Found vda1
May 10 09:54:08.444614 extend-filesystems[1517]: Found vda2
May 10 09:54:08.444614 extend-filesystems[1517]: Found vda3
May 10 09:54:08.444614 extend-filesystems[1517]: Found usr
May 10 09:54:08.444614 extend-filesystems[1517]: Found vda4
May 10 09:54:08.444614 extend-filesystems[1517]: Found vda6
May 10 09:54:08.444614 extend-filesystems[1517]: Found vda7
May 10 09:54:08.444614 extend-filesystems[1517]: Found vda9
May 10 09:54:08.444614 extend-filesystems[1517]: Checking size of /dev/vda9
May 10 09:54:08.458913 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Failure getting users, quitting
May 10 09:54:08.458913 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 10 09:54:08.458913 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Refreshing group entry cache
May 10 09:54:08.453769 systemd[1]: Starting systemd-logind.service - User Login Management...
May 10 09:54:08.448538 oslogin_cache_refresh[1518]: Failure getting users, quitting
May 10 09:54:08.448566 oslogin_cache_refresh[1518]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 10 09:54:08.448639 oslogin_cache_refresh[1518]: Refreshing group entry cache
May 10 09:54:08.459676 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 10 09:54:08.460509 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 10 09:54:08.462968 extend-filesystems[1517]: Resized partition /dev/vda9
May 10 09:54:08.461076 oslogin_cache_refresh[1518]: Failure getting groups, quitting
May 10 09:54:08.464778 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Failure getting groups, quitting
May 10 09:54:08.464778 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 10 09:54:08.463066 systemd[1]: Starting update-engine.service - Update Engine...
May 10 09:54:08.461095 oslogin_cache_refresh[1518]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 10 09:54:08.469615 extend-filesystems[1538]: resize2fs 1.47.2 (1-Jan-2025)
May 10 09:54:08.472133 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 10 09:54:08.478789 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 10 09:54:08.481675 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 10 09:54:08.481420 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 10 09:54:08.481665 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 10 09:54:08.482064 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 10 09:54:08.482310 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 10 09:54:08.483989 systemd[1]: motdgen.service: Deactivated successfully.
May 10 09:54:08.484225 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 10 09:54:08.485874 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1419)
May 10 09:54:08.489417 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 10 09:54:08.489670 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 10 09:54:08.508640 jq[1539]: true
May 10 09:54:08.510020 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 10 09:54:08.521300 update_engine[1536]: I20250510 09:54:08.521191 1536 main.cc:92] Flatcar Update Engine starting
May 10 09:54:08.530050 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 10 09:54:08.533028 jq[1551]: true
May 10 09:54:08.553160 tar[1542]: linux-amd64/helm
May 10 09:54:08.882747 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 10 09:54:08.882490 dbus-daemon[1514]: [system] SELinux support is enabled
May 10 09:54:08.969179 update_engine[1536]: I20250510 09:54:08.885542 1536 update_check_scheduler.cc:74] Next update check in 9m32s
May 10 09:54:08.886173 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 10 09:54:08.886196 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 10 09:54:08.887502 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 10 09:54:08.887517 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 10 09:54:08.888837 systemd[1]: Started update-engine.service - Update Engine.
May 10 09:54:08.891311 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 10 09:54:08.924262 locksmithd[1572]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 10 09:54:08.970359 systemd-logind[1532]: Watching system buttons on /dev/input/event2 (Power Button)
May 10 09:54:08.970385 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 10 09:54:08.970660 systemd-logind[1532]: New seat seat0.
May 10 09:54:08.971567 systemd[1]: Started systemd-logind.service - User Login Management.
May 10 09:54:08.993260 sshd_keygen[1537]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 10 09:54:09.020787 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 10 09:54:09.072981 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 10 09:54:09.093364 systemd[1]: issuegen.service: Deactivated successfully.
May 10 09:54:09.093678 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 10 09:54:09.126900 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 10 09:54:09.126900 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1
May 10 09:54:09.126900 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 10 09:54:09.132009 extend-filesystems[1517]: Resized filesystem in /dev/vda9
May 10 09:54:09.129482 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 10 09:54:09.133350 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 10 09:54:09.133677 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 10 09:54:09.151277 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 10 09:54:09.181879 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 10 09:54:09.185034 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 10 09:54:09.186562 systemd[1]: Reached target getty.target - Login Prompts.
May 10 09:54:09.310499 containerd[1546]: time="2025-05-10T09:54:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 10 09:54:09.311076 systemd-networkd[1464]: eth0: Gained IPv6LL
May 10 09:54:09.312280 containerd[1546]: time="2025-05-10T09:54:09.311285948Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 10 09:54:09.318397 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 10 09:54:09.320467 systemd[1]: Reached target network-online.target - Network is Online.
May 10 09:54:09.322571 containerd[1546]: time="2025-05-10T09:54:09.322494348Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.32µs"
May 10 09:54:09.322571 containerd[1546]: time="2025-05-10T09:54:09.322544922Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 10 09:54:09.322571 containerd[1546]: time="2025-05-10T09:54:09.322564950Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 10 09:54:09.322818 containerd[1546]: time="2025-05-10T09:54:09.322796985Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 10 09:54:09.322818 containerd[1546]: time="2025-05-10T09:54:09.322816452Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 10 09:54:09.322942 containerd[1546]: time="2025-05-10T09:54:09.322843432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 10 09:54:09.322974 containerd[1546]: time="2025-05-10T09:54:09.322956985Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 10 09:54:09.323006 containerd[1546]: time="2025-05-10T09:54:09.322971963Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 10 09:54:09.324504 containerd[1546]: time="2025-05-10T09:54:09.323289399Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 10 09:54:09.324551 containerd[1546]: time="2025-05-10T09:54:09.324510729Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 10 09:54:09.324580 containerd[1546]: time="2025-05-10T09:54:09.324556124Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 10 09:54:09.324580 containerd[1546]: time="2025-05-10T09:54:09.324567576Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 10 09:54:09.324713 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 10 09:54:09.324965 containerd[1546]: time="2025-05-10T09:54:09.324708490Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 10 09:54:09.328233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 09:54:09.329141 containerd[1546]: time="2025-05-10T09:54:09.328385726Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 10 09:54:09.329141 containerd[1546]: time="2025-05-10T09:54:09.328449866Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 10 09:54:09.329141 containerd[1546]: time="2025-05-10T09:54:09.328464784Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 10 09:54:09.329141 containerd[1546]: time="2025-05-10T09:54:09.328503527Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 10 09:54:09.329141 containerd[1546]: time="2025-05-10T09:54:09.328771189Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 10 09:54:09.329141 containerd[1546]: time="2025-05-10T09:54:09.328872018Z" level=info msg="metadata content store policy set" policy=shared
May 10 09:54:09.341185 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 10 09:54:09.351321 tar[1542]: linux-amd64/LICENSE
May 10 09:54:09.351394 tar[1542]: linux-amd64/README.md
May 10 09:54:09.364438 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 10 09:54:09.393045 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 10 09:54:09.394593 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 10 09:54:09.394837 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 10 09:54:09.397884 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 10 09:54:09.839021 bash[1571]: Updated "/home/core/.ssh/authorized_keys"
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.838966447Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839056837Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839077415Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839094197Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839110888Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839130365Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839148428Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839164088Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839177303Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839191940Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839204113Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839264316Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 10 09:54:09.839513 containerd[1546]: time="2025-05-10T09:54:09.839487504Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 10 09:54:09.839748 containerd[1546]: time="2025-05-10T09:54:09.839518302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 10 09:54:09.839748 containerd[1546]: time="2025-05-10T09:54:09.839548248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 10 09:54:09.839748 containerd[1546]: time="2025-05-10T09:54:09.839564298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 10 09:54:09.839748 containerd[1546]: time="2025-05-10T09:54:09.839579777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 10 09:54:09.839748 containerd[1546]: time="2025-05-10T09:54:09.839594194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 10 09:54:09.839748 containerd[1546]: time="2025-05-10T09:54:09.839610214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 10 09:54:09.839748 containerd[1546]: time="2025-05-10T09:54:09.839624291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 10 09:54:09.839748 containerd[1546]: time="2025-05-10T09:54:09.839639800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 10 09:54:09.839748 containerd[1546]: time="2025-05-10T09:54:09.839655579Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 10 09:54:09.839748 containerd[1546]: time="2025-05-10T09:54:09.839670608Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 10 09:54:09.839949 containerd[1546]: time="2025-05-10T09:54:09.839758543Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 10 09:54:09.839949 containerd[1546]: time="2025-05-10T09:54:09.839781836Z" level=info msg="Start snapshots syncer"
May 10 09:54:09.839949 containerd[1546]: time="2025-05-10T09:54:09.839822923Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 10 09:54:09.840813 containerd[1546]: time="2025-05-10T09:54:09.840728882Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 10 09:54:09.840939 containerd[1546]: time="2025-05-10T09:54:09.840821045Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 10 09:54:09.841731 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.841826090Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842062503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842109822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842126854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842139828Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842156279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842169955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842183791Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842215420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842230849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842245717Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842299969Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842318854Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 10 09:54:09.843115 containerd[1546]: time="2025-05-10T09:54:09.842331979Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 10 09:54:09.843454 containerd[1546]: time="2025-05-10T09:54:09.842344362Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 10 09:54:09.843454 containerd[1546]: time="2025-05-10T09:54:09.842355092Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 10 09:54:09.843454 containerd[1546]: time="2025-05-10T09:54:09.842368497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 10 09:54:09.843454 containerd[1546]: time="2025-05-10T09:54:09.842382834Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 10 09:54:09.843454 containerd[1546]: time="2025-05-10T09:54:09.842405446Z" level=info msg="runtime interface created"
May 10 09:54:09.843454 containerd[1546]: time="2025-05-10T09:54:09.842413241Z" level=info msg="created NRI interface"
May 10 09:54:09.843454 containerd[1546]: time="2025-05-10T09:54:09.842424933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 10 09:54:09.843454 containerd[1546]: time="2025-05-10T09:54:09.842438929Z" level=info msg="Connect containerd service"
May 10 09:54:09.843454 containerd[1546]: time="2025-05-10T09:54:09.842471941Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 10 09:54:09.843454 containerd[1546]: time="2025-05-10T09:54:09.843313349Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 09:54:09.844529 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 10 09:54:09.957026 containerd[1546]: time="2025-05-10T09:54:09.956960298Z" level=info msg="Start subscribing containerd event"
May 10 09:54:09.957026 containerd[1546]: time="2025-05-10T09:54:09.957038585Z" level=info msg="Start recovering state"
May 10 09:54:09.957310 containerd[1546]: time="2025-05-10T09:54:09.957151176Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 10 09:54:09.957310 containerd[1546]: time="2025-05-10T09:54:09.957175471Z" level=info msg="Start event monitor"
May 10 09:54:09.957310 containerd[1546]: time="2025-05-10T09:54:09.957196060Z" level=info msg="Start cni network conf syncer for default"
May 10 09:54:09.957310 containerd[1546]: time="2025-05-10T09:54:09.957204075Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 10 09:54:09.957310 containerd[1546]: time="2025-05-10T09:54:09.957217751Z" level=info msg="Start streaming server"
May 10 09:54:09.957310 containerd[1546]: time="2025-05-10T09:54:09.957235414Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 10 09:54:09.957310 containerd[1546]: time="2025-05-10T09:54:09.957244110Z" level=info msg="runtime interface starting up..."
May 10 09:54:09.957310 containerd[1546]: time="2025-05-10T09:54:09.957263516Z" level=info msg="starting plugins..."
May 10 09:54:09.957310 containerd[1546]: time="2025-05-10T09:54:09.957285678Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 10 09:54:09.957604 systemd[1]: Started containerd.service - containerd container runtime.
May 10 09:54:09.958999 containerd[1546]: time="2025-05-10T09:54:09.957779444Z" level=info msg="containerd successfully booted in 0.647862s"
May 10 09:54:10.280075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 09:54:10.281978 systemd[1]: Reached target multi-user.target - Multi-User System.
May 10 09:54:10.284692 systemd[1]: Startup finished in 3.223s (kernel) + 7.146s (initrd) + 5.059s (userspace) = 15.428s.
May 10 09:54:10.315320 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 09:54:10.764124 kubelet[1643]: E0510 09:54:10.764047 1643 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 09:54:10.769283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 09:54:10.769500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 09:54:10.769947 systemd[1]: kubelet.service: Consumed 990ms CPU time, 245.4M memory peak.
May 10 09:54:11.663887 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 10 09:54:11.665547 systemd[1]: Started sshd@0-10.0.0.32:22-10.0.0.1:52658.service - OpenSSH per-connection server daemon (10.0.0.1:52658).
May 10 09:54:11.733213 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 52658 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:54:11.735390 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:54:11.742464 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 10 09:54:11.743758 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 10 09:54:11.751045 systemd-logind[1532]: New session 1 of user core.
May 10 09:54:11.769357 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 10 09:54:11.772974 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 10 09:54:11.790584 (systemd)[1661]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 10 09:54:11.793137 systemd-logind[1532]: New session c1 of user core.
May 10 09:54:11.945757 systemd[1661]: Queued start job for default target default.target.
May 10 09:54:11.955213 systemd[1661]: Created slice app.slice - User Application Slice.
May 10 09:54:11.955237 systemd[1661]: Reached target paths.target - Paths.
May 10 09:54:11.955279 systemd[1661]: Reached target timers.target - Timers.
May 10 09:54:11.956935 systemd[1661]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 10 09:54:11.968549 systemd[1661]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 10 09:54:11.968683 systemd[1661]: Reached target sockets.target - Sockets.
May 10 09:54:11.968727 systemd[1661]: Reached target basic.target - Basic System.
May 10 09:54:11.968769 systemd[1661]: Reached target default.target - Main User Target.
May 10 09:54:11.968802 systemd[1661]: Startup finished in 168ms.
May 10 09:54:11.969378 systemd[1]: Started user@500.service - User Manager for UID 500.
May 10 09:54:11.971132 systemd[1]: Started session-1.scope - Session 1 of User core.
May 10 09:54:12.039175 systemd[1]: Started sshd@1-10.0.0.32:22-10.0.0.1:52672.service - OpenSSH per-connection server daemon (10.0.0.1:52672).
May 10 09:54:12.090979 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 52672 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:54:12.092570 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:54:12.097672 systemd-logind[1532]: New session 2 of user core.
May 10 09:54:12.111078 systemd[1]: Started session-2.scope - Session 2 of User core.
May 10 09:54:12.165733 sshd[1674]: Connection closed by 10.0.0.1 port 52672
May 10 09:54:12.166023 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
May 10 09:54:12.174914 systemd[1]: sshd@1-10.0.0.32:22-10.0.0.1:52672.service: Deactivated successfully.
May 10 09:54:12.177312 systemd[1]: session-2.scope: Deactivated successfully.
May 10 09:54:12.179136 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit.
May 10 09:54:12.180672 systemd[1]: Started sshd@2-10.0.0.32:22-10.0.0.1:52678.service - OpenSSH per-connection server daemon (10.0.0.1:52678).
May 10 09:54:12.181500 systemd-logind[1532]: Removed session 2.
May 10 09:54:12.227746 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 52678 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:54:12.229232 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:54:12.233824 systemd-logind[1532]: New session 3 of user core.
May 10 09:54:12.249100 systemd[1]: Started session-3.scope - Session 3 of User core.
May 10 09:54:12.298269 sshd[1682]: Connection closed by 10.0.0.1 port 52678
May 10 09:54:12.298638 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
May 10 09:54:12.312975 systemd[1]: sshd@2-10.0.0.32:22-10.0.0.1:52678.service: Deactivated successfully.
May 10 09:54:12.315057 systemd[1]: session-3.scope: Deactivated successfully.
May 10 09:54:12.316930 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit.
May 10 09:54:12.318330 systemd[1]: Started sshd@3-10.0.0.32:22-10.0.0.1:52690.service - OpenSSH per-connection server daemon (10.0.0.1:52690).
May 10 09:54:12.319347 systemd-logind[1532]: Removed session 3.
May 10 09:54:12.372356 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 52690 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:54:12.373699 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:54:12.378194 systemd-logind[1532]: New session 4 of user core.
May 10 09:54:12.389057 systemd[1]: Started session-4.scope - Session 4 of User core.
May 10 09:54:12.444574 sshd[1690]: Connection closed by 10.0.0.1 port 52690
May 10 09:54:12.444998 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
May 10 09:54:12.453999 systemd[1]: sshd@3-10.0.0.32:22-10.0.0.1:52690.service: Deactivated successfully.
May 10 09:54:12.456048 systemd[1]: session-4.scope: Deactivated successfully.
May 10 09:54:12.457881 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit.
May 10 09:54:12.459454 systemd[1]: Started sshd@4-10.0.0.32:22-10.0.0.1:52692.service - OpenSSH per-connection server daemon (10.0.0.1:52692).
May 10 09:54:12.460308 systemd-logind[1532]: Removed session 4.
May 10 09:54:12.506921 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 52692 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:54:12.508476 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:54:12.513186 systemd-logind[1532]: New session 5 of user core.
May 10 09:54:12.524195 systemd[1]: Started session-5.scope - Session 5 of User core.
May 10 09:54:12.587049 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 10 09:54:12.587403 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 09:54:12.608058 sudo[1699]: pam_unix(sudo:session): session closed for user root
May 10 09:54:12.610267 sshd[1698]: Connection closed by 10.0.0.1 port 52692
May 10 09:54:12.610696 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
May 10 09:54:12.630387 systemd[1]: sshd@4-10.0.0.32:22-10.0.0.1:52692.service: Deactivated successfully.
May 10 09:54:12.632787 systemd[1]: session-5.scope: Deactivated successfully.
May 10 09:54:12.633805 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit.
May 10 09:54:12.636828 systemd[1]: Started sshd@5-10.0.0.32:22-10.0.0.1:52706.service - OpenSSH per-connection server daemon (10.0.0.1:52706).
May 10 09:54:12.637458 systemd-logind[1532]: Removed session 5.
May 10 09:54:12.689642 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 52706 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:54:12.691716 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:54:12.697132 systemd-logind[1532]: New session 6 of user core.
May 10 09:54:12.707003 systemd[1]: Started session-6.scope - Session 6 of User core.
May 10 09:54:12.761389 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 10 09:54:12.761713 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 09:54:12.765487 sudo[1709]: pam_unix(sudo:session): session closed for user root
May 10 09:54:12.771992 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 10 09:54:12.772324 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 09:54:12.782078 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 10 09:54:12.830762 augenrules[1731]: No rules
May 10 09:54:12.832793 systemd[1]: audit-rules.service: Deactivated successfully.
May 10 09:54:12.833242 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 10 09:54:12.834722 sudo[1708]: pam_unix(sudo:session): session closed for user root
May 10 09:54:12.836398 sshd[1707]: Connection closed by 10.0.0.1 port 52706
May 10 09:54:12.836728 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
May 10 09:54:12.846430 systemd[1]: sshd@5-10.0.0.32:22-10.0.0.1:52706.service: Deactivated successfully.
May 10 09:54:12.848325 systemd[1]: session-6.scope: Deactivated successfully.
May 10 09:54:12.850132 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit.
May 10 09:54:12.851563 systemd[1]: Started sshd@6-10.0.0.32:22-10.0.0.1:52720.service - OpenSSH per-connection server daemon (10.0.0.1:52720).
May 10 09:54:12.852851 systemd-logind[1532]: Removed session 6.
May 10 09:54:12.904798 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 52720 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:54:12.906379 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:54:12.911070 systemd-logind[1532]: New session 7 of user core.
May 10 09:54:12.920971 systemd[1]: Started session-7.scope - Session 7 of User core.
May 10 09:54:12.974590 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 10 09:54:12.974937 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 09:54:13.616334 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 10 09:54:13.631248 (dockerd)[1763]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 10 09:54:13.914579 dockerd[1763]: time="2025-05-10T09:54:13.914422432Z" level=info msg="Starting up"
May 10 09:54:13.915306 dockerd[1763]: time="2025-05-10T09:54:13.915268278Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 10 09:54:14.859919 dockerd[1763]: time="2025-05-10T09:54:14.859839396Z" level=info msg="Loading containers: start."
May 10 09:54:14.876894 kernel: Initializing XFRM netlink socket
May 10 09:54:15.137870 systemd-networkd[1464]: docker0: Link UP
May 10 09:54:15.144607 dockerd[1763]: time="2025-05-10T09:54:15.144562474Z" level=info msg="Loading containers: done."
May 10 09:54:15.160501 dockerd[1763]: time="2025-05-10T09:54:15.160456801Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 10 09:54:15.160669 dockerd[1763]: time="2025-05-10T09:54:15.160537623Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 10 09:54:15.160669 dockerd[1763]: time="2025-05-10T09:54:15.160638331Z" level=info msg="Initializing buildkit"
May 10 09:54:15.190670 dockerd[1763]: time="2025-05-10T09:54:15.190611542Z" level=info msg="Completed buildkit initialization"
May 10 09:54:15.194523 dockerd[1763]: time="2025-05-10T09:54:15.194491709Z" level=info msg="Daemon has completed initialization"
May 10 09:54:15.194596 dockerd[1763]: time="2025-05-10T09:54:15.194554647Z" level=info msg="API listen on /run/docker.sock"
May 10 09:54:15.194726 systemd[1]: Started docker.service - Docker Application Container Engine.
May 10 09:54:16.163518 containerd[1546]: time="2025-05-10T09:54:16.163454199Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 10 09:54:16.988600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736094997.mount: Deactivated successfully.
May 10 09:54:19.343913 containerd[1546]: time="2025-05-10T09:54:19.343785435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:19.367908 containerd[1546]: time="2025-05-10T09:54:19.367835518Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
May 10 09:54:19.369783 containerd[1546]: time="2025-05-10T09:54:19.369747684Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:19.372808 containerd[1546]: time="2025-05-10T09:54:19.372775332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:19.374009 containerd[1546]: time="2025-05-10T09:54:19.373875585Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 3.210351505s"
May 10 09:54:19.374081 containerd[1546]: time="2025-05-10T09:54:19.374007282Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 10 09:54:19.397627 containerd[1546]: time="2025-05-10T09:54:19.397585329Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 10 09:54:20.788562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 10 09:54:20.791313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 09:54:21.042255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 09:54:21.081420 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 09:54:21.156071 kubelet[2057]: E0510 09:54:21.155874 2057 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 09:54:21.165161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 09:54:21.165449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 09:54:21.166495 systemd[1]: kubelet.service: Consumed 319ms CPU time, 97.2M memory peak.
May 10 09:54:22.174423 containerd[1546]: time="2025-05-10T09:54:22.174339264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:22.175358 containerd[1546]: time="2025-05-10T09:54:22.175319092Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
May 10 09:54:22.176782 containerd[1546]: time="2025-05-10T09:54:22.176732883Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:22.179637 containerd[1546]: time="2025-05-10T09:54:22.179596984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:22.180721 containerd[1546]: time="2025-05-10T09:54:22.180667813Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.783036677s"
May 10 09:54:22.180721 containerd[1546]: time="2025-05-10T09:54:22.180710523Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 10 09:54:22.209466 containerd[1546]: time="2025-05-10T09:54:22.209414934Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 10 09:54:23.559103 containerd[1546]: time="2025-05-10T09:54:23.559031086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:23.559900 containerd[1546]: time="2025-05-10T09:54:23.559832879Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
May 10 09:54:23.561114 containerd[1546]: time="2025-05-10T09:54:23.561068787Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:23.563767 containerd[1546]: time="2025-05-10T09:54:23.563695884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:23.564747 containerd[1546]: time="2025-05-10T09:54:23.564717830Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.35525667s"
May 10 09:54:23.564789 containerd[1546]: time="2025-05-10T09:54:23.564748909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 10 09:54:23.607648 containerd[1546]: time="2025-05-10T09:54:23.607600911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 10 09:54:25.307494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045527420.mount: Deactivated successfully.
May 10 09:54:26.291238 containerd[1546]: time="2025-05-10T09:54:26.291164661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:26.292384 containerd[1546]: time="2025-05-10T09:54:26.292357478Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
May 10 09:54:26.294169 containerd[1546]: time="2025-05-10T09:54:26.294142616Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:26.296492 containerd[1546]: time="2025-05-10T09:54:26.296428212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:26.297145 containerd[1546]: time="2025-05-10T09:54:26.297092057Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.689446192s"
May 10 09:54:26.297179 containerd[1546]: time="2025-05-10T09:54:26.297148233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 10 09:54:26.320909 containerd[1546]: time="2025-05-10T09:54:26.320847397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 10 09:54:26.856185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2868242164.mount: Deactivated successfully.
May 10 09:54:27.732895 containerd[1546]: time="2025-05-10T09:54:27.732789032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:27.733734 containerd[1546]: time="2025-05-10T09:54:27.733665465Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 10 09:54:27.735149 containerd[1546]: time="2025-05-10T09:54:27.735091520Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:27.738614 containerd[1546]: time="2025-05-10T09:54:27.738550717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:27.740208 containerd[1546]: time="2025-05-10T09:54:27.740165495Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.419256101s"
May 10 09:54:27.740208 containerd[1546]: time="2025-05-10T09:54:27.740208265Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 10 09:54:27.766963 containerd[1546]: time="2025-05-10T09:54:27.766911483Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 10 09:54:28.306666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064494103.mount: Deactivated successfully.
May 10 09:54:28.313061 containerd[1546]: time="2025-05-10T09:54:28.313018329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:28.313902 containerd[1546]: time="2025-05-10T09:54:28.313873923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 10 09:54:28.315115 containerd[1546]: time="2025-05-10T09:54:28.315068463Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:28.317914 containerd[1546]: time="2025-05-10T09:54:28.317881148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:28.318535 containerd[1546]: time="2025-05-10T09:54:28.318511530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 551.558399ms" May 10 09:54:28.318568 containerd[1546]: time="2025-05-10T09:54:28.318542779Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 10 09:54:28.342408 containerd[1546]: time="2025-05-10T09:54:28.342365325Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 10 09:54:28.987586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482533451.mount: Deactivated successfully. May 10 09:54:31.288070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 10 09:54:31.290228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 09:54:31.546842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 09:54:31.569373 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 09:54:31.670652 kubelet[2226]: E0510 09:54:31.670580 2226 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 09:54:31.676368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 09:54:31.676655 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 09:54:31.677216 systemd[1]: kubelet.service: Consumed 304ms CPU time, 95.3M memory peak. 
May 10 09:54:31.950061 containerd[1546]: time="2025-05-10T09:54:31.949886164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:31.966031 containerd[1546]: time="2025-05-10T09:54:31.965931475Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 10 09:54:31.967764 containerd[1546]: time="2025-05-10T09:54:31.967686496Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:31.974238 containerd[1546]: time="2025-05-10T09:54:31.974182819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:31.975659 containerd[1546]: time="2025-05-10T09:54:31.975594266Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.633186041s" May 10 09:54:31.975659 containerd[1546]: time="2025-05-10T09:54:31.975657414Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 10 09:54:34.966838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 09:54:34.967115 systemd[1]: kubelet.service: Consumed 304ms CPU time, 95.3M memory peak. May 10 09:54:34.970401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 09:54:35.007968 systemd[1]: Reload requested from client PID 2333 ('systemctl') (unit session-7.scope)... 
May 10 09:54:35.008000 systemd[1]: Reloading... May 10 09:54:35.094975 zram_generator::config[2375]: No configuration found. May 10 09:54:35.284483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 09:54:35.413120 systemd[1]: Reloading finished in 404 ms. May 10 09:54:35.476439 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 09:54:35.478704 systemd[1]: kubelet.service: Deactivated successfully. May 10 09:54:35.479029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 09:54:35.479082 systemd[1]: kubelet.service: Consumed 150ms CPU time, 83.7M memory peak. May 10 09:54:35.480924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 09:54:35.673665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 09:54:35.686232 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 09:54:35.725922 kubelet[2425]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 09:54:35.725922 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 09:54:35.725922 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 10 09:54:35.727395 kubelet[2425]: I0510 09:54:35.727349 2425 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 09:54:36.264933 kubelet[2425]: I0510 09:54:36.264846 2425 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 09:54:36.264933 kubelet[2425]: I0510 09:54:36.264912 2425 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 09:54:36.265195 kubelet[2425]: I0510 09:54:36.265165 2425 server.go:927] "Client rotation is on, will bootstrap in background" May 10 09:54:36.286161 kubelet[2425]: I0510 09:54:36.286096 2425 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 09:54:36.289061 kubelet[2425]: E0510 09:54:36.288976 2425 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:36.305799 kubelet[2425]: I0510 09:54:36.305752 2425 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 09:54:36.307417 kubelet[2425]: I0510 09:54:36.307356 2425 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 09:54:36.307612 kubelet[2425]: I0510 09:54:36.307397 2425 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 09:54:36.308189 kubelet[2425]: I0510 09:54:36.308155 2425 topology_manager.go:138] "Creating topology manager with none policy" May 10 
09:54:36.308189 kubelet[2425]: I0510 09:54:36.308171 2425 container_manager_linux.go:301] "Creating device plugin manager" May 10 09:54:36.308340 kubelet[2425]: I0510 09:54:36.308311 2425 state_mem.go:36] "Initialized new in-memory state store" May 10 09:54:36.309273 kubelet[2425]: I0510 09:54:36.309238 2425 kubelet.go:400] "Attempting to sync node with API server" May 10 09:54:36.309273 kubelet[2425]: I0510 09:54:36.309255 2425 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 09:54:36.309273 kubelet[2425]: I0510 09:54:36.309275 2425 kubelet.go:312] "Adding apiserver pod source" May 10 09:54:36.309380 kubelet[2425]: I0510 09:54:36.309295 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 09:54:36.310303 kubelet[2425]: W0510 09:54:36.310104 2425 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:36.310303 kubelet[2425]: E0510 09:54:36.310188 2425 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:36.310914 kubelet[2425]: W0510 09:54:36.310848 2425 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:36.310961 kubelet[2425]: E0510 09:54:36.310916 2425 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection 
refused May 10 09:54:36.314609 kubelet[2425]: I0510 09:54:36.314578 2425 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 10 09:54:36.316044 kubelet[2425]: I0510 09:54:36.316004 2425 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 09:54:36.316101 kubelet[2425]: W0510 09:54:36.316060 2425 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 09:54:36.316891 kubelet[2425]: I0510 09:54:36.316718 2425 server.go:1264] "Started kubelet" May 10 09:54:36.318115 kubelet[2425]: I0510 09:54:36.318093 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 09:54:36.318160 kubelet[2425]: I0510 09:54:36.318105 2425 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 09:54:36.319135 kubelet[2425]: I0510 09:54:36.318492 2425 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 09:54:36.319135 kubelet[2425]: I0510 09:54:36.318489 2425 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 09:54:36.319526 kubelet[2425]: I0510 09:54:36.319502 2425 server.go:455] "Adding debug handlers to kubelet server" May 10 09:54:36.328251 kubelet[2425]: E0510 09:54:36.327994 2425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.32:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.32:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183e21ce83550e09 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-10 
09:54:36.316700169 +0000 UTC m=+0.626491266,LastTimestamp:2025-05-10 09:54:36.316700169 +0000 UTC m=+0.626491266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 10 09:54:36.328493 kubelet[2425]: E0510 09:54:36.328467 2425 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 09:54:36.328698 kubelet[2425]: E0510 09:54:36.328677 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:36.328757 kubelet[2425]: I0510 09:54:36.328716 2425 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 09:54:36.329322 kubelet[2425]: I0510 09:54:36.328948 2425 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 09:54:36.329322 kubelet[2425]: I0510 09:54:36.329130 2425 reconciler.go:26] "Reconciler: start to sync state" May 10 09:54:36.329965 kubelet[2425]: W0510 09:54:36.329644 2425 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:36.329965 kubelet[2425]: E0510 09:54:36.329697 2425 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:36.330542 kubelet[2425]: E0510 09:54:36.330385 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="200ms" May 10 
09:54:36.331122 kubelet[2425]: I0510 09:54:36.331054 2425 factory.go:221] Registration of the systemd container factory successfully May 10 09:54:36.331274 kubelet[2425]: I0510 09:54:36.331239 2425 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 09:54:36.333910 kubelet[2425]: I0510 09:54:36.333779 2425 factory.go:221] Registration of the containerd container factory successfully May 10 09:54:36.344753 kubelet[2425]: I0510 09:54:36.344702 2425 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 09:54:36.346496 kubelet[2425]: I0510 09:54:36.346052 2425 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 09:54:36.346496 kubelet[2425]: I0510 09:54:36.346078 2425 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 09:54:36.346496 kubelet[2425]: I0510 09:54:36.346092 2425 kubelet.go:2337] "Starting kubelet main sync loop" May 10 09:54:36.346496 kubelet[2425]: E0510 09:54:36.346129 2425 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 09:54:36.346825 kubelet[2425]: W0510 09:54:36.346694 2425 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:36.346825 kubelet[2425]: E0510 09:54:36.346747 2425 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:36.349270 kubelet[2425]: I0510 09:54:36.349245 2425 
cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 09:54:36.349270 kubelet[2425]: I0510 09:54:36.349263 2425 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 09:54:36.349378 kubelet[2425]: I0510 09:54:36.349280 2425 state_mem.go:36] "Initialized new in-memory state store" May 10 09:54:36.430926 kubelet[2425]: I0510 09:54:36.430881 2425 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 09:54:36.431318 kubelet[2425]: E0510 09:54:36.431273 2425 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" May 10 09:54:36.446534 kubelet[2425]: E0510 09:54:36.446472 2425 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 10 09:54:36.531553 kubelet[2425]: E0510 09:54:36.531371 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="400ms" May 10 09:54:36.633119 kubelet[2425]: I0510 09:54:36.633082 2425 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 09:54:36.633463 kubelet[2425]: E0510 09:54:36.633429 2425 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" May 10 09:54:36.647682 kubelet[2425]: E0510 09:54:36.647620 2425 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 10 09:54:36.722603 kubelet[2425]: I0510 09:54:36.722563 2425 policy_none.go:49] "None policy: Start" May 10 09:54:36.723412 kubelet[2425]: I0510 09:54:36.723390 2425 memory_manager.go:170] "Starting memorymanager" policy="None" May 
10 09:54:36.723468 kubelet[2425]: I0510 09:54:36.723449 2425 state_mem.go:35] "Initializing new in-memory state store" May 10 09:54:36.731566 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 10 09:54:36.745734 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 10 09:54:36.752133 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 10 09:54:36.765894 kubelet[2425]: I0510 09:54:36.765834 2425 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 09:54:36.766276 kubelet[2425]: I0510 09:54:36.766071 2425 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 09:54:36.766276 kubelet[2425]: I0510 09:54:36.766189 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 09:54:36.767182 kubelet[2425]: E0510 09:54:36.767156 2425 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 10 09:54:36.932704 kubelet[2425]: E0510 09:54:36.932637 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="800ms" May 10 09:54:37.035239 kubelet[2425]: I0510 09:54:37.035188 2425 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 09:54:37.035566 kubelet[2425]: E0510 09:54:37.035531 2425 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" May 10 09:54:37.048682 kubelet[2425]: I0510 09:54:37.048644 2425 topology_manager.go:215] "Topology Admit Handler" 
podUID="a187cce7f790c0a4c227cd2e2efd95f4" podNamespace="kube-system" podName="kube-apiserver-localhost" May 10 09:54:37.049385 kubelet[2425]: I0510 09:54:37.049348 2425 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 10 09:54:37.050117 kubelet[2425]: I0510 09:54:37.050056 2425 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 10 09:54:37.055670 systemd[1]: Created slice kubepods-burstable-poda187cce7f790c0a4c227cd2e2efd95f4.slice - libcontainer container kubepods-burstable-poda187cce7f790c0a4c227cd2e2efd95f4.slice. May 10 09:54:37.075211 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 10 09:54:37.089661 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
May 10 09:54:37.133724 kubelet[2425]: I0510 09:54:37.133679 2425 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a187cce7f790c0a4c227cd2e2efd95f4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a187cce7f790c0a4c227cd2e2efd95f4\") " pod="kube-system/kube-apiserver-localhost" May 10 09:54:37.133724 kubelet[2425]: I0510 09:54:37.133708 2425 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 09:54:37.133724 kubelet[2425]: I0510 09:54:37.133723 2425 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a187cce7f790c0a4c227cd2e2efd95f4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a187cce7f790c0a4c227cd2e2efd95f4\") " pod="kube-system/kube-apiserver-localhost" May 10 09:54:37.133911 kubelet[2425]: I0510 09:54:37.133738 2425 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a187cce7f790c0a4c227cd2e2efd95f4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a187cce7f790c0a4c227cd2e2efd95f4\") " pod="kube-system/kube-apiserver-localhost" May 10 09:54:37.133911 kubelet[2425]: I0510 09:54:37.133753 2425 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 
09:54:37.133911 kubelet[2425]: I0510 09:54:37.133769 2425 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 09:54:37.133911 kubelet[2425]: I0510 09:54:37.133783 2425 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 09:54:37.133911 kubelet[2425]: I0510 09:54:37.133801 2425 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 09:54:37.134057 kubelet[2425]: I0510 09:54:37.133821 2425 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 10 09:54:37.310527 kubelet[2425]: W0510 09:54:37.310436 2425 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:37.310527 kubelet[2425]: E0510 09:54:37.310539 2425 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:37.311844 kubelet[2425]: W0510 09:54:37.311810 2425 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:37.311844 kubelet[2425]: E0510 09:54:37.311836 2425 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:37.373743 kubelet[2425]: E0510 09:54:37.373687 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:37.374521 containerd[1546]: time="2025-05-10T09:54:37.374457752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a187cce7f790c0a4c227cd2e2efd95f4,Namespace:kube-system,Attempt:0,}" May 10 09:54:37.387756 kubelet[2425]: E0510 09:54:37.387702 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:37.388325 containerd[1546]: time="2025-05-10T09:54:37.388274193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 10 09:54:37.392529 kubelet[2425]: E0510 09:54:37.392466 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:37.392976 containerd[1546]: time="2025-05-10T09:54:37.392915818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 10 09:54:37.597895 kubelet[2425]: W0510 09:54:37.597646 2425 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:37.597895 kubelet[2425]: E0510 09:54:37.597713 2425 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:37.618167 kubelet[2425]: W0510 09:54:37.618079 2425 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:37.618167 kubelet[2425]: E0510 09:54:37.618148 2425 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:37.733211 kubelet[2425]: E0510 09:54:37.733144 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="1.6s" May 10 09:54:37.837684 kubelet[2425]: I0510 09:54:37.837634 2425 kubelet_node_status.go:73] 
"Attempting to register node" node="localhost" May 10 09:54:37.838238 kubelet[2425]: E0510 09:54:37.838019 2425 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" May 10 09:54:38.415534 kubelet[2425]: E0510 09:54:38.415453 2425 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:38.509082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619337724.mount: Deactivated successfully. May 10 09:54:38.517601 containerd[1546]: time="2025-05-10T09:54:38.517537845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 09:54:38.520522 containerd[1546]: time="2025-05-10T09:54:38.520482838Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 10 09:54:38.523521 containerd[1546]: time="2025-05-10T09:54:38.523450223Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 09:54:38.525664 containerd[1546]: time="2025-05-10T09:54:38.525623569Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 09:54:38.526660 containerd[1546]: time="2025-05-10T09:54:38.526600962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active 
requests=0, bytes read=0" May 10 09:54:38.527982 containerd[1546]: time="2025-05-10T09:54:38.527923622Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 09:54:38.528758 containerd[1546]: time="2025-05-10T09:54:38.528714596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 10 09:54:38.529889 containerd[1546]: time="2025-05-10T09:54:38.529830308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 09:54:38.531019 containerd[1546]: time="2025-05-10T09:54:38.530984984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 574.683998ms" May 10 09:54:38.531600 containerd[1546]: time="2025-05-10T09:54:38.531571644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 574.080095ms" May 10 09:54:38.538170 containerd[1546]: time="2025-05-10T09:54:38.538074379Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 582.855752ms" May 10 09:54:38.592545 containerd[1546]: time="2025-05-10T09:54:38.590448238Z" level=info msg="connecting to shim 001ab9c887bb9a54f8356a5ec3a82157f228f65aafe853a710fc6b4326d578dc" address="unix:///run/containerd/s/fcdad51e234c419de387d74c41c80e3e6db7ce0078e02b42626552bebb9bf187" namespace=k8s.io protocol=ttrpc version=3 May 10 09:54:38.592545 containerd[1546]: time="2025-05-10T09:54:38.590467204Z" level=info msg="connecting to shim 960313aa5c67489aa68fe67d26a5c02130411379784cd70c4ca79c72d79e5aad" address="unix:///run/containerd/s/d8f49dfc230bbafffa8ab86246684a93ac8f0010638f19cac7a0c72d45f8a1c0" namespace=k8s.io protocol=ttrpc version=3 May 10 09:54:38.668244 containerd[1546]: time="2025-05-10T09:54:38.667958187Z" level=info msg="connecting to shim aafa9ed83c64187ebb00a36eb7dbc96b6457f21ad369028397dc23354fdb20e1" address="unix:///run/containerd/s/2e2e832b812e02989c93bcc60eb7360fddc65a068f7f486a0547be9948a7acf7" namespace=k8s.io protocol=ttrpc version=3 May 10 09:54:38.673109 systemd[1]: Started cri-containerd-001ab9c887bb9a54f8356a5ec3a82157f228f65aafe853a710fc6b4326d578dc.scope - libcontainer container 001ab9c887bb9a54f8356a5ec3a82157f228f65aafe853a710fc6b4326d578dc. May 10 09:54:38.678958 systemd[1]: Started cri-containerd-960313aa5c67489aa68fe67d26a5c02130411379784cd70c4ca79c72d79e5aad.scope - libcontainer container 960313aa5c67489aa68fe67d26a5c02130411379784cd70c4ca79c72d79e5aad. May 10 09:54:38.720031 systemd[1]: Started cri-containerd-aafa9ed83c64187ebb00a36eb7dbc96b6457f21ad369028397dc23354fdb20e1.scope - libcontainer container aafa9ed83c64187ebb00a36eb7dbc96b6457f21ad369028397dc23354fdb20e1. 
May 10 09:54:38.886875 containerd[1546]: time="2025-05-10T09:54:38.886778123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a187cce7f790c0a4c227cd2e2efd95f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"001ab9c887bb9a54f8356a5ec3a82157f228f65aafe853a710fc6b4326d578dc\"" May 10 09:54:38.888210 kubelet[2425]: E0510 09:54:38.888170 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:38.890935 containerd[1546]: time="2025-05-10T09:54:38.890873022Z" level=info msg="CreateContainer within sandbox \"001ab9c887bb9a54f8356a5ec3a82157f228f65aafe853a710fc6b4326d578dc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 09:54:38.894073 containerd[1546]: time="2025-05-10T09:54:38.894031375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"960313aa5c67489aa68fe67d26a5c02130411379784cd70c4ca79c72d79e5aad\"" May 10 09:54:38.895042 kubelet[2425]: E0510 09:54:38.894714 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:38.896434 containerd[1546]: time="2025-05-10T09:54:38.896408313Z" level=info msg="CreateContainer within sandbox \"960313aa5c67489aa68fe67d26a5c02130411379784cd70c4ca79c72d79e5aad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 09:54:38.898636 containerd[1546]: time="2025-05-10T09:54:38.898595725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"aafa9ed83c64187ebb00a36eb7dbc96b6457f21ad369028397dc23354fdb20e1\"" May 10 09:54:38.899191 
kubelet[2425]: E0510 09:54:38.899169 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:38.901263 containerd[1546]: time="2025-05-10T09:54:38.901234123Z" level=info msg="CreateContainer within sandbox \"aafa9ed83c64187ebb00a36eb7dbc96b6457f21ad369028397dc23354fdb20e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 09:54:39.126619 containerd[1546]: time="2025-05-10T09:54:39.126564237Z" level=info msg="Container a190cfe2e252f05dbe3de1618b2704f3846439d9aa24566a54976968eba4f49a: CDI devices from CRI Config.CDIDevices: []" May 10 09:54:39.236158 containerd[1546]: time="2025-05-10T09:54:39.236092702Z" level=info msg="Container 5e5379b63846597ad82da3e7691cdba751c833df5c5066725d2c40ba56128c9b: CDI devices from CRI Config.CDIDevices: []" May 10 09:54:39.269817 kubelet[2425]: W0510 09:54:39.269721 2425 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:39.269817 kubelet[2425]: E0510 09:54:39.269803 2425 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:39.333883 kubelet[2425]: E0510 09:54:39.333778 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="3.2s" May 10 09:54:39.440224 kubelet[2425]: I0510 09:54:39.440086 2425 kubelet_node_status.go:73] "Attempting to 
register node" node="localhost" May 10 09:54:39.440474 kubelet[2425]: E0510 09:54:39.440406 2425 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" May 10 09:54:39.526538 containerd[1546]: time="2025-05-10T09:54:39.526478068Z" level=info msg="Container 72b95e3e6707538ba804da1a5eea4f5ec2bc206af2fbc9c26513173306cc6b57: CDI devices from CRI Config.CDIDevices: []" May 10 09:54:39.732317 containerd[1546]: time="2025-05-10T09:54:39.732187497Z" level=info msg="CreateContainer within sandbox \"001ab9c887bb9a54f8356a5ec3a82157f228f65aafe853a710fc6b4326d578dc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a190cfe2e252f05dbe3de1618b2704f3846439d9aa24566a54976968eba4f49a\"" May 10 09:54:39.732890 containerd[1546]: time="2025-05-10T09:54:39.732845411Z" level=info msg="StartContainer for \"a190cfe2e252f05dbe3de1618b2704f3846439d9aa24566a54976968eba4f49a\"" May 10 09:54:39.734205 containerd[1546]: time="2025-05-10T09:54:39.734169975Z" level=info msg="connecting to shim a190cfe2e252f05dbe3de1618b2704f3846439d9aa24566a54976968eba4f49a" address="unix:///run/containerd/s/fcdad51e234c419de387d74c41c80e3e6db7ce0078e02b42626552bebb9bf187" protocol=ttrpc version=3 May 10 09:54:39.740383 containerd[1546]: time="2025-05-10T09:54:39.740347109Z" level=info msg="CreateContainer within sandbox \"960313aa5c67489aa68fe67d26a5c02130411379784cd70c4ca79c72d79e5aad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5e5379b63846597ad82da3e7691cdba751c833df5c5066725d2c40ba56128c9b\"" May 10 09:54:39.741974 containerd[1546]: time="2025-05-10T09:54:39.740818934Z" level=info msg="StartContainer for \"5e5379b63846597ad82da3e7691cdba751c833df5c5066725d2c40ba56128c9b\"" May 10 09:54:39.742269 containerd[1546]: time="2025-05-10T09:54:39.742233136Z" level=info msg="connecting to shim 
5e5379b63846597ad82da3e7691cdba751c833df5c5066725d2c40ba56128c9b" address="unix:///run/containerd/s/d8f49dfc230bbafffa8ab86246684a93ac8f0010638f19cac7a0c72d45f8a1c0" protocol=ttrpc version=3 May 10 09:54:39.751993 containerd[1546]: time="2025-05-10T09:54:39.751951752Z" level=info msg="CreateContainer within sandbox \"aafa9ed83c64187ebb00a36eb7dbc96b6457f21ad369028397dc23354fdb20e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"72b95e3e6707538ba804da1a5eea4f5ec2bc206af2fbc9c26513173306cc6b57\"" May 10 09:54:39.753092 containerd[1546]: time="2025-05-10T09:54:39.752986713Z" level=info msg="StartContainer for \"72b95e3e6707538ba804da1a5eea4f5ec2bc206af2fbc9c26513173306cc6b57\"" May 10 09:54:39.755701 containerd[1546]: time="2025-05-10T09:54:39.755323225Z" level=info msg="connecting to shim 72b95e3e6707538ba804da1a5eea4f5ec2bc206af2fbc9c26513173306cc6b57" address="unix:///run/containerd/s/2e2e832b812e02989c93bcc60eb7360fddc65a068f7f486a0547be9948a7acf7" protocol=ttrpc version=3 May 10 09:54:39.757059 systemd[1]: Started cri-containerd-a190cfe2e252f05dbe3de1618b2704f3846439d9aa24566a54976968eba4f49a.scope - libcontainer container a190cfe2e252f05dbe3de1618b2704f3846439d9aa24566a54976968eba4f49a. May 10 09:54:39.762013 systemd[1]: Started cri-containerd-5e5379b63846597ad82da3e7691cdba751c833df5c5066725d2c40ba56128c9b.scope - libcontainer container 5e5379b63846597ad82da3e7691cdba751c833df5c5066725d2c40ba56128c9b. 
May 10 09:54:39.763379 kubelet[2425]: W0510 09:54:39.763313 2425 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:39.763534 kubelet[2425]: E0510 09:54:39.763519 2425 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.32:6443: connect: connection refused May 10 09:54:39.783266 systemd[1]: Started cri-containerd-72b95e3e6707538ba804da1a5eea4f5ec2bc206af2fbc9c26513173306cc6b57.scope - libcontainer container 72b95e3e6707538ba804da1a5eea4f5ec2bc206af2fbc9c26513173306cc6b57. May 10 09:54:39.815830 containerd[1546]: time="2025-05-10T09:54:39.815643407Z" level=info msg="StartContainer for \"a190cfe2e252f05dbe3de1618b2704f3846439d9aa24566a54976968eba4f49a\" returns successfully" May 10 09:54:39.822552 containerd[1546]: time="2025-05-10T09:54:39.822503211Z" level=info msg="StartContainer for \"5e5379b63846597ad82da3e7691cdba751c833df5c5066725d2c40ba56128c9b\" returns successfully" May 10 09:54:39.876892 containerd[1546]: time="2025-05-10T09:54:39.876810727Z" level=info msg="StartContainer for \"72b95e3e6707538ba804da1a5eea4f5ec2bc206af2fbc9c26513173306cc6b57\" returns successfully" May 10 09:54:40.361466 kubelet[2425]: E0510 09:54:40.361404 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:40.363754 kubelet[2425]: E0510 09:54:40.363728 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:40.366330 kubelet[2425]: E0510 
09:54:40.366305 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:41.369880 kubelet[2425]: E0510 09:54:41.369303 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:41.369880 kubelet[2425]: E0510 09:54:41.369696 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:41.370467 kubelet[2425]: E0510 09:54:41.370393 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:41.782926 kubelet[2425]: E0510 09:54:41.782786 2425 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 10 09:54:42.195708 kubelet[2425]: E0510 09:54:42.195666 2425 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 10 09:54:42.369466 kubelet[2425]: E0510 09:54:42.369419 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:42.561315 kubelet[2425]: E0510 09:54:42.561268 2425 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 10 09:54:42.635411 kubelet[2425]: E0510 09:54:42.635348 2425 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes 
"localhost" not found May 10 09:54:42.642763 kubelet[2425]: I0510 09:54:42.642732 2425 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 09:54:42.650379 kubelet[2425]: I0510 09:54:42.650347 2425 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 10 09:54:42.657458 kubelet[2425]: E0510 09:54:42.657419 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:42.758001 kubelet[2425]: E0510 09:54:42.757956 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:42.859143 kubelet[2425]: E0510 09:54:42.859009 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:42.959581 kubelet[2425]: E0510 09:54:42.959528 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.060048 kubelet[2425]: E0510 09:54:43.060004 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.160637 kubelet[2425]: E0510 09:54:43.160515 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.260972 kubelet[2425]: E0510 09:54:43.260910 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.361319 kubelet[2425]: E0510 09:54:43.361273 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.462210 kubelet[2425]: E0510 09:54:43.462091 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.466986 kubelet[2425]: E0510 09:54:43.466962 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:43.563224 kubelet[2425]: E0510 09:54:43.563162 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.616652 systemd[1]: Reload requested from client PID 2708 ('systemctl') (unit session-7.scope)... May 10 09:54:43.616669 systemd[1]: Reloading... May 10 09:54:43.663703 kubelet[2425]: E0510 09:54:43.663674 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.710983 zram_generator::config[2754]: No configuration found. May 10 09:54:43.764625 kubelet[2425]: E0510 09:54:43.764516 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.805917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 09:54:43.865552 kubelet[2425]: E0510 09:54:43.865516 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.945412 systemd[1]: Reloading finished in 328 ms. May 10 09:54:43.966274 kubelet[2425]: E0510 09:54:43.966225 2425 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 09:54:43.974049 kubelet[2425]: I0510 09:54:43.973962 2425 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 09:54:43.974058 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 09:54:43.992433 systemd[1]: kubelet.service: Deactivated successfully. May 10 09:54:43.992788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 10 09:54:43.992847 systemd[1]: kubelet.service: Consumed 1.142s CPU time, 118.4M memory peak. May 10 09:54:43.995097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 09:54:44.178102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 09:54:44.182355 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 09:54:44.234564 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 09:54:44.234564 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 09:54:44.234564 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 09:54:44.234970 kubelet[2796]: I0510 09:54:44.234594 2796 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 09:54:44.239147 kubelet[2796]: I0510 09:54:44.239118 2796 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 09:54:44.239147 kubelet[2796]: I0510 09:54:44.239138 2796 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 09:54:44.239326 kubelet[2796]: I0510 09:54:44.239304 2796 server.go:927] "Client rotation is on, will bootstrap in background" May 10 09:54:44.240674 kubelet[2796]: I0510 09:54:44.240644 2796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 10 09:54:44.244415 kubelet[2796]: I0510 09:54:44.244372 2796 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 09:54:44.251622 kubelet[2796]: I0510 09:54:44.251596 2796 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 09:54:44.251869 kubelet[2796]: I0510 09:54:44.251829 2796 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 09:54:44.252062 kubelet[2796]: I0510 09:54:44.251883 2796 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManager
ReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 09:54:44.252188 kubelet[2796]: I0510 09:54:44.252079 2796 topology_manager.go:138] "Creating topology manager with none policy" May 10 09:54:44.252188 kubelet[2796]: I0510 09:54:44.252090 2796 container_manager_linux.go:301] "Creating device plugin manager" May 10 09:54:44.252188 kubelet[2796]: I0510 09:54:44.252147 2796 state_mem.go:36] "Initialized new in-memory state store" May 10 09:54:44.252279 kubelet[2796]: I0510 09:54:44.252266 2796 kubelet.go:400] "Attempting to sync node with API server" May 10 09:54:44.252317 kubelet[2796]: I0510 09:54:44.252280 2796 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 09:54:44.252317 kubelet[2796]: I0510 09:54:44.252302 2796 kubelet.go:312] "Adding apiserver pod source" May 10 09:54:44.252387 kubelet[2796]: I0510 09:54:44.252325 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 09:54:44.254890 kubelet[2796]: I0510 09:54:44.253111 2796 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 10 09:54:44.254890 kubelet[2796]: I0510 09:54:44.253264 2796 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 09:54:44.254890 kubelet[2796]: I0510 09:54:44.253603 2796 server.go:1264] "Started kubelet" May 10 09:54:44.254890 kubelet[2796]: I0510 09:54:44.253651 2796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 09:54:44.254890 kubelet[2796]: I0510 09:54:44.254453 2796 server.go:455] "Adding debug handlers to kubelet server" May 10 09:54:44.255067 kubelet[2796]: I0510 09:54:44.254891 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 09:54:44.255067 kubelet[2796]: I0510 09:54:44.254984 2796 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 09:54:44.255216 kubelet[2796]: I0510 09:54:44.255186 2796 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 09:54:44.255374 kubelet[2796]: I0510 09:54:44.255350 2796 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 09:54:44.255522 kubelet[2796]: I0510 09:54:44.255493 2796 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 09:54:44.255722 kubelet[2796]: I0510 09:54:44.255697 2796 reconciler.go:26] "Reconciler: start to sync state" May 10 09:54:44.260546 kubelet[2796]: I0510 09:54:44.260519 2796 factory.go:221] Registration of the systemd container factory successfully May 10 09:54:44.260770 kubelet[2796]: I0510 09:54:44.260742 2796 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 09:54:44.262280 kubelet[2796]: I0510 09:54:44.262261 2796 factory.go:221] Registration of the containerd container factory successfully May 10 09:54:44.277566 kubelet[2796]: I0510 09:54:44.277509 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 09:54:44.278954 kubelet[2796]: I0510 09:54:44.278932 2796 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 09:54:44.279015 kubelet[2796]: I0510 09:54:44.278963 2796 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 09:54:44.279015 kubelet[2796]: I0510 09:54:44.278984 2796 kubelet.go:2337] "Starting kubelet main sync loop" May 10 09:54:44.279071 kubelet[2796]: E0510 09:54:44.279024 2796 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 09:54:44.302100 kubelet[2796]: I0510 09:54:44.302074 2796 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 09:54:44.302100 kubelet[2796]: I0510 09:54:44.302090 2796 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 09:54:44.302100 kubelet[2796]: I0510 09:54:44.302108 2796 state_mem.go:36] "Initialized new in-memory state store" May 10 09:54:44.302293 kubelet[2796]: I0510 09:54:44.302247 2796 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 09:54:44.302293 kubelet[2796]: I0510 09:54:44.302256 2796 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 09:54:44.302293 kubelet[2796]: I0510 09:54:44.302274 2796 policy_none.go:49] "None policy: Start" May 10 09:54:44.302824 kubelet[2796]: I0510 09:54:44.302800 2796 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 09:54:44.302824 kubelet[2796]: I0510 09:54:44.302820 2796 state_mem.go:35] "Initializing new in-memory state store" May 10 09:54:44.302959 kubelet[2796]: I0510 09:54:44.302947 2796 state_mem.go:75] "Updated machine memory state" May 10 09:54:44.307148 kubelet[2796]: I0510 09:54:44.307120 2796 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 09:54:44.307551 kubelet[2796]: I0510 09:54:44.307364 2796 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 09:54:44.307551 kubelet[2796]: I0510 09:54:44.307444 2796 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 09:54:44.360104 kubelet[2796]: I0510 09:54:44.360075 2796 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 09:54:44.367425 kubelet[2796]: I0510 09:54:44.367400 2796 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 10 09:54:44.367512 kubelet[2796]: I0510 09:54:44.367481 2796 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 10 09:54:44.380014 kubelet[2796]: I0510 09:54:44.379962 2796 topology_manager.go:215] "Topology Admit Handler" podUID="a187cce7f790c0a4c227cd2e2efd95f4" podNamespace="kube-system" podName="kube-apiserver-localhost" May 10 09:54:44.380167 kubelet[2796]: I0510 09:54:44.380070 2796 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 10 09:54:44.380167 kubelet[2796]: I0510 09:54:44.380139 2796 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 10 09:54:44.456734 kubelet[2796]: I0510 09:54:44.456017 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 09:54:44.456734 kubelet[2796]: I0510 09:54:44.456050 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 09:54:44.456734 kubelet[2796]: I0510 
09:54:44.456068 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a187cce7f790c0a4c227cd2e2efd95f4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a187cce7f790c0a4c227cd2e2efd95f4\") " pod="kube-system/kube-apiserver-localhost" May 10 09:54:44.456734 kubelet[2796]: I0510 09:54:44.456084 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a187cce7f790c0a4c227cd2e2efd95f4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a187cce7f790c0a4c227cd2e2efd95f4\") " pod="kube-system/kube-apiserver-localhost" May 10 09:54:44.456734 kubelet[2796]: I0510 09:54:44.456104 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 09:54:44.457035 kubelet[2796]: I0510 09:54:44.456124 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 09:54:44.457035 kubelet[2796]: I0510 09:54:44.456149 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 10 09:54:44.457035 kubelet[2796]: I0510 09:54:44.456169 2796 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a187cce7f790c0a4c227cd2e2efd95f4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a187cce7f790c0a4c227cd2e2efd95f4\") " pod="kube-system/kube-apiserver-localhost" May 10 09:54:44.457035 kubelet[2796]: I0510 09:54:44.456190 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 09:54:44.615358 sudo[2830]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 09:54:44.615764 sudo[2830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 10 09:54:44.691203 kubelet[2796]: E0510 09:54:44.691173 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:44.691711 kubelet[2796]: E0510 09:54:44.691550 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:44.691711 kubelet[2796]: E0510 09:54:44.691666 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:45.077703 sudo[2830]: pam_unix(sudo:session): session closed for user root May 10 09:54:45.253101 kubelet[2796]: I0510 09:54:45.253059 2796 apiserver.go:52] "Watching apiserver" May 10 09:54:45.255721 kubelet[2796]: I0510 09:54:45.255659 2796 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 09:54:45.288960 kubelet[2796]: E0510 09:54:45.287709 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:45.288960 kubelet[2796]: E0510 09:54:45.288065 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:45.345452 kubelet[2796]: E0510 09:54:45.345284 2796 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 10 09:54:45.345902 kubelet[2796]: E0510 09:54:45.345773 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:45.353662 kubelet[2796]: I0510 09:54:45.353373 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3533535190000001 podStartE2EDuration="1.353353519s" podCreationTimestamp="2025-05-10 09:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:54:45.345992244 +0000 UTC m=+1.158848127" watchObservedRunningTime="2025-05-10 09:54:45.353353519 +0000 UTC m=+1.166209392" May 10 09:54:45.361051 kubelet[2796]: I0510 09:54:45.360937 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.36092626 podStartE2EDuration="1.36092626s" podCreationTimestamp="2025-05-10 09:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-10 09:54:45.353514738 +0000 UTC m=+1.166370611" watchObservedRunningTime="2025-05-10 09:54:45.36092626 +0000 UTC m=+1.173782133" May 10 09:54:46.289303 kubelet[2796]: E0510 09:54:46.289249 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:46.622179 sudo[1743]: pam_unix(sudo:session): session closed for user root May 10 09:54:46.623722 sshd[1742]: Connection closed by 10.0.0.1 port 52720 May 10 09:54:46.624340 sshd-session[1739]: pam_unix(sshd:session): session closed for user core May 10 09:54:46.629156 systemd[1]: sshd@6-10.0.0.32:22-10.0.0.1:52720.service: Deactivated successfully. May 10 09:54:46.631704 systemd[1]: session-7.scope: Deactivated successfully. May 10 09:54:46.631979 systemd[1]: session-7.scope: Consumed 5.264s CPU time, 277.1M memory peak. May 10 09:54:46.633288 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. May 10 09:54:46.634449 systemd-logind[1532]: Removed session 7. 
May 10 09:54:47.669184 kubelet[2796]: E0510 09:54:47.669136 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:52.106348 kubelet[2796]: E0510 09:54:52.106303 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:52.118036 kubelet[2796]: I0510 09:54:52.117983 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.117964979 podStartE2EDuration="8.117964979s" podCreationTimestamp="2025-05-10 09:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:54:45.361272713 +0000 UTC m=+1.174128586" watchObservedRunningTime="2025-05-10 09:54:52.117964979 +0000 UTC m=+7.930820852" May 10 09:54:52.297642 kubelet[2796]: E0510 09:54:52.297591 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:53.469131 kubelet[2796]: E0510 09:54:53.469070 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:53.856205 update_engine[1536]: I20250510 09:54:53.856086 1536 update_attempter.cc:509] Updating boot flags... 
May 10 09:54:53.919927 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2881) May 10 09:54:53.968888 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2881) May 10 09:54:53.990889 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2881) May 10 09:54:54.300781 kubelet[2796]: E0510 09:54:54.300746 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:57.676084 kubelet[2796]: E0510 09:54:57.676040 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:57.971642 kubelet[2796]: I0510 09:54:57.971497 2796 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 09:54:57.972025 containerd[1546]: time="2025-05-10T09:54:57.971978906Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 10 09:54:57.972437 kubelet[2796]: I0510 09:54:57.972201 2796 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 09:54:58.927047 kubelet[2796]: I0510 09:54:58.926014 2796 topology_manager.go:215] "Topology Admit Handler" podUID="1606fce3-10fa-49a6-80ce-b4453c896ddc" podNamespace="kube-system" podName="kube-proxy-q2n2z" May 10 09:54:58.933883 kubelet[2796]: I0510 09:54:58.931739 2796 topology_manager.go:215] "Topology Admit Handler" podUID="9ed9a172-0a80-45e5-aba8-5c8afc5944f1" podNamespace="kube-system" podName="cilium-j6vf4" May 10 09:54:58.933883 kubelet[2796]: W0510 09:54:58.933724 2796 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 10 09:54:58.933883 kubelet[2796]: E0510 09:54:58.933756 2796 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 10 09:54:58.933883 kubelet[2796]: W0510 09:54:58.933800 2796 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 10 09:54:58.933883 kubelet[2796]: E0510 09:54:58.933814 2796 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" 
in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 10 09:54:58.934112 kubelet[2796]: W0510 09:54:58.933916 2796 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 10 09:54:58.934112 kubelet[2796]: E0510 09:54:58.933931 2796 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 10 09:54:58.938668 systemd[1]: Created slice kubepods-besteffort-pod1606fce3_10fa_49a6_80ce_b4453c896ddc.slice - libcontainer container kubepods-besteffort-pod1606fce3_10fa_49a6_80ce_b4453c896ddc.slice. May 10 09:54:58.959192 systemd[1]: Created slice kubepods-burstable-pod9ed9a172_0a80_45e5_aba8_5c8afc5944f1.slice - libcontainer container kubepods-burstable-pod9ed9a172_0a80_45e5_aba8_5c8afc5944f1.slice. 
May 10 09:54:59.042512 kubelet[2796]: I0510 09:54:59.042372 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cni-path\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.042512 kubelet[2796]: I0510 09:54:59.042426 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-xtables-lock\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.042512 kubelet[2796]: I0510 09:54:59.042471 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-host-proc-sys-net\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.042512 kubelet[2796]: I0510 09:54:59.042493 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-host-proc-sys-kernel\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.042512 kubelet[2796]: I0510 09:54:59.042512 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flsdk\" (UniqueName: \"kubernetes.io/projected/1606fce3-10fa-49a6-80ce-b4453c896ddc-kube-api-access-flsdk\") pod \"kube-proxy-q2n2z\" (UID: \"1606fce3-10fa-49a6-80ce-b4453c896ddc\") " pod="kube-system/kube-proxy-q2n2z" May 10 09:54:59.042821 kubelet[2796]: I0510 09:54:59.042533 2796 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1606fce3-10fa-49a6-80ce-b4453c896ddc-kube-proxy\") pod \"kube-proxy-q2n2z\" (UID: \"1606fce3-10fa-49a6-80ce-b4453c896ddc\") " pod="kube-system/kube-proxy-q2n2z" May 10 09:54:59.042821 kubelet[2796]: I0510 09:54:59.042553 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1606fce3-10fa-49a6-80ce-b4453c896ddc-xtables-lock\") pod \"kube-proxy-q2n2z\" (UID: \"1606fce3-10fa-49a6-80ce-b4453c896ddc\") " pod="kube-system/kube-proxy-q2n2z" May 10 09:54:59.042821 kubelet[2796]: I0510 09:54:59.042569 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-etc-cni-netd\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.042821 kubelet[2796]: I0510 09:54:59.042647 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-bpf-maps\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.042821 kubelet[2796]: I0510 09:54:59.042715 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-cgroup\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.042821 kubelet[2796]: I0510 09:54:59.042737 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-config-path\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.043044 kubelet[2796]: I0510 09:54:59.042756 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-hubble-tls\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.043044 kubelet[2796]: I0510 09:54:59.042785 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj5zk\" (UniqueName: \"kubernetes.io/projected/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-kube-api-access-fj5zk\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.043044 kubelet[2796]: I0510 09:54:59.042804 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1606fce3-10fa-49a6-80ce-b4453c896ddc-lib-modules\") pod \"kube-proxy-q2n2z\" (UID: \"1606fce3-10fa-49a6-80ce-b4453c896ddc\") " pod="kube-system/kube-proxy-q2n2z" May 10 09:54:59.043044 kubelet[2796]: I0510 09:54:59.042821 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-run\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.043044 kubelet[2796]: I0510 09:54:59.042839 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-hostproc\") pod \"cilium-j6vf4\" (UID: 
\"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.043044 kubelet[2796]: I0510 09:54:59.042871 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-lib-modules\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.043229 kubelet[2796]: I0510 09:54:59.042900 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-clustermesh-secrets\") pod \"cilium-j6vf4\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") " pod="kube-system/cilium-j6vf4" May 10 09:54:59.182968 kubelet[2796]: I0510 09:54:59.182738 2796 topology_manager.go:215] "Topology Admit Handler" podUID="e20f4605-e788-4634-afb1-46803baef04f" podNamespace="kube-system" podName="cilium-operator-599987898-brlnm" May 10 09:54:59.192835 systemd[1]: Created slice kubepods-besteffort-pode20f4605_e788_4634_afb1_46803baef04f.slice - libcontainer container kubepods-besteffort-pode20f4605_e788_4634_afb1_46803baef04f.slice. 
May 10 09:54:59.258287 kubelet[2796]: E0510 09:54:59.258248 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:59.258952 containerd[1546]: time="2025-05-10T09:54:59.258914248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q2n2z,Uid:1606fce3-10fa-49a6-80ce-b4453c896ddc,Namespace:kube-system,Attempt:0,}" May 10 09:54:59.298651 containerd[1546]: time="2025-05-10T09:54:59.298601248Z" level=info msg="connecting to shim dbc47e4d27c5a83e6a76a775a6597984b8795680568579f5bcfad254384384b3" address="unix:///run/containerd/s/cb2079ce92ecd5b7879a3f92fe72ad104d2d8c2938cac69f16f14012cbdf3bee" namespace=k8s.io protocol=ttrpc version=3 May 10 09:54:59.343910 kubelet[2796]: I0510 09:54:59.343806 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gssl\" (UniqueName: \"kubernetes.io/projected/e20f4605-e788-4634-afb1-46803baef04f-kube-api-access-8gssl\") pod \"cilium-operator-599987898-brlnm\" (UID: \"e20f4605-e788-4634-afb1-46803baef04f\") " pod="kube-system/cilium-operator-599987898-brlnm" May 10 09:54:59.343910 kubelet[2796]: I0510 09:54:59.343885 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e20f4605-e788-4634-afb1-46803baef04f-cilium-config-path\") pod \"cilium-operator-599987898-brlnm\" (UID: \"e20f4605-e788-4634-afb1-46803baef04f\") " pod="kube-system/cilium-operator-599987898-brlnm" May 10 09:54:59.353007 systemd[1]: Started cri-containerd-dbc47e4d27c5a83e6a76a775a6597984b8795680568579f5bcfad254384384b3.scope - libcontainer container dbc47e4d27c5a83e6a76a775a6597984b8795680568579f5bcfad254384384b3. 
May 10 09:54:59.378776 containerd[1546]: time="2025-05-10T09:54:59.378724011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q2n2z,Uid:1606fce3-10fa-49a6-80ce-b4453c896ddc,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbc47e4d27c5a83e6a76a775a6597984b8795680568579f5bcfad254384384b3\"" May 10 09:54:59.379417 kubelet[2796]: E0510 09:54:59.379395 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:54:59.381368 containerd[1546]: time="2025-05-10T09:54:59.381318626Z" level=info msg="CreateContainer within sandbox \"dbc47e4d27c5a83e6a76a775a6597984b8795680568579f5bcfad254384384b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 09:54:59.392480 containerd[1546]: time="2025-05-10T09:54:59.392418668Z" level=info msg="Container 7328c99460192592420d078356cedcc2aaf2423ea00d18ec9f65dde7f5899afb: CDI devices from CRI Config.CDIDevices: []" May 10 09:54:59.400994 containerd[1546]: time="2025-05-10T09:54:59.400959123Z" level=info msg="CreateContainer within sandbox \"dbc47e4d27c5a83e6a76a775a6597984b8795680568579f5bcfad254384384b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7328c99460192592420d078356cedcc2aaf2423ea00d18ec9f65dde7f5899afb\"" May 10 09:54:59.401510 containerd[1546]: time="2025-05-10T09:54:59.401486139Z" level=info msg="StartContainer for \"7328c99460192592420d078356cedcc2aaf2423ea00d18ec9f65dde7f5899afb\"" May 10 09:54:59.403174 containerd[1546]: time="2025-05-10T09:54:59.403114607Z" level=info msg="connecting to shim 7328c99460192592420d078356cedcc2aaf2423ea00d18ec9f65dde7f5899afb" address="unix:///run/containerd/s/cb2079ce92ecd5b7879a3f92fe72ad104d2d8c2938cac69f16f14012cbdf3bee" protocol=ttrpc version=3 May 10 09:54:59.423065 systemd[1]: Started cri-containerd-7328c99460192592420d078356cedcc2aaf2423ea00d18ec9f65dde7f5899afb.scope - libcontainer 
container 7328c99460192592420d078356cedcc2aaf2423ea00d18ec9f65dde7f5899afb. May 10 09:54:59.467696 containerd[1546]: time="2025-05-10T09:54:59.467586938Z" level=info msg="StartContainer for \"7328c99460192592420d078356cedcc2aaf2423ea00d18ec9f65dde7f5899afb\" returns successfully" May 10 09:55:00.122038 kubelet[2796]: E0510 09:55:00.121976 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:55:00.122728 containerd[1546]: time="2025-05-10T09:55:00.122675765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-brlnm,Uid:e20f4605-e788-4634-afb1-46803baef04f,Namespace:kube-system,Attempt:0,}" May 10 09:55:00.144148 kubelet[2796]: E0510 09:55:00.144108 2796 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 10 09:55:00.144295 kubelet[2796]: E0510 09:55:00.144215 2796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-clustermesh-secrets podName:9ed9a172-0a80-45e5-aba8-5c8afc5944f1 nodeName:}" failed. No retries permitted until 2025-05-10 09:55:00.644189843 +0000 UTC m=+16.457045716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-clustermesh-secrets") pod "cilium-j6vf4" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1") : failed to sync secret cache: timed out waiting for the condition May 10 09:55:00.147108 containerd[1546]: time="2025-05-10T09:55:00.147017855Z" level=info msg="connecting to shim 18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4" address="unix:///run/containerd/s/761fd4a712b0e210c707c4156f7dbc930d41e872305ea0fe0a8d6b66d24a4577" namespace=k8s.io protocol=ttrpc version=3 May 10 09:55:00.175084 systemd[1]: Started cri-containerd-18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4.scope - libcontainer container 18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4. May 10 09:55:00.203887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094327011.mount: Deactivated successfully. May 10 09:55:00.219419 containerd[1546]: time="2025-05-10T09:55:00.219384321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-brlnm,Uid:e20f4605-e788-4634-afb1-46803baef04f,Namespace:kube-system,Attempt:0,} returns sandbox id \"18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4\"" May 10 09:55:00.220072 kubelet[2796]: E0510 09:55:00.220051 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:55:00.222375 containerd[1546]: time="2025-05-10T09:55:00.222334705Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 09:55:00.312534 kubelet[2796]: E0510 09:55:00.312311 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 
09:55:00.321362 kubelet[2796]: I0510 09:55:00.321035 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q2n2z" podStartSLOduration=2.321013494 podStartE2EDuration="2.321013494s" podCreationTimestamp="2025-05-10 09:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:55:00.321000219 +0000 UTC m=+16.133856092" watchObservedRunningTime="2025-05-10 09:55:00.321013494 +0000 UTC m=+16.133869367" May 10 09:55:00.762960 kubelet[2796]: E0510 09:55:00.762918 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:55:00.763501 containerd[1546]: time="2025-05-10T09:55:00.763446172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j6vf4,Uid:9ed9a172-0a80-45e5-aba8-5c8afc5944f1,Namespace:kube-system,Attempt:0,}" May 10 09:55:00.787304 containerd[1546]: time="2025-05-10T09:55:00.787255216Z" level=info msg="connecting to shim c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50" address="unix:///run/containerd/s/03a3a8913aed403cde75735ce252c087919e116df7b2ef8064125788e02cd62f" namespace=k8s.io protocol=ttrpc version=3 May 10 09:55:00.816020 systemd[1]: Started cri-containerd-c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50.scope - libcontainer container c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50. 
May 10 09:55:00.846278 containerd[1546]: time="2025-05-10T09:55:00.846235561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j6vf4,Uid:9ed9a172-0a80-45e5-aba8-5c8afc5944f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\"" May 10 09:55:00.846751 kubelet[2796]: E0510 09:55:00.846724 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:55:01.909348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313113082.mount: Deactivated successfully. May 10 09:55:03.361566 containerd[1546]: time="2025-05-10T09:55:03.361495572Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:03.362377 containerd[1546]: time="2025-05-10T09:55:03.362351508Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 10 09:55:03.363973 containerd[1546]: time="2025-05-10T09:55:03.363923103Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:03.364996 containerd[1546]: time="2025-05-10T09:55:03.364940373Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.142572786s" May 10 09:55:03.364996 
containerd[1546]: time="2025-05-10T09:55:03.364984636Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 10 09:55:03.366135 containerd[1546]: time="2025-05-10T09:55:03.366096594Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 09:55:03.368220 containerd[1546]: time="2025-05-10T09:55:03.368180065Z" level=info msg="CreateContainer within sandbox \"18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 09:55:03.378762 containerd[1546]: time="2025-05-10T09:55:03.378705626Z" level=info msg="Container 65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4: CDI devices from CRI Config.CDIDevices: []" May 10 09:55:03.384780 containerd[1546]: time="2025-05-10T09:55:03.384736645Z" level=info msg="CreateContainer within sandbox \"18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\"" May 10 09:55:03.385396 containerd[1546]: time="2025-05-10T09:55:03.385326759Z" level=info msg="StartContainer for \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\"" May 10 09:55:03.386418 containerd[1546]: time="2025-05-10T09:55:03.386372181Z" level=info msg="connecting to shim 65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4" address="unix:///run/containerd/s/761fd4a712b0e210c707c4156f7dbc930d41e872305ea0fe0a8d6b66d24a4577" protocol=ttrpc version=3 May 10 09:55:03.410005 systemd[1]: Started cri-containerd-65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4.scope - libcontainer container 
65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4. May 10 09:55:03.445223 containerd[1546]: time="2025-05-10T09:55:03.445169405Z" level=info msg="StartContainer for \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" returns successfully" May 10 09:55:04.330351 kubelet[2796]: E0510 09:55:04.330303 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:55:04.332983 kubelet[2796]: I0510 09:55:04.332937 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-brlnm" podStartSLOduration=2.187739248 podStartE2EDuration="5.33292092s" podCreationTimestamp="2025-05-10 09:54:59 +0000 UTC" firstStartedPulling="2025-05-10 09:55:00.220687944 +0000 UTC m=+16.033543817" lastFinishedPulling="2025-05-10 09:55:03.365869616 +0000 UTC m=+19.178725489" observedRunningTime="2025-05-10 09:55:04.332167149 +0000 UTC m=+20.145023022" watchObservedRunningTime="2025-05-10 09:55:04.33292092 +0000 UTC m=+20.145776813" May 10 09:55:05.323084 kubelet[2796]: E0510 09:55:05.323046 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:55:11.152361 systemd[1]: Started sshd@7-10.0.0.32:22-10.0.0.1:37406.service - OpenSSH per-connection server daemon (10.0.0.1:37406). May 10 09:55:11.208661 sshd[3242]: Accepted publickey for core from 10.0.0.1 port 37406 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:55:11.210943 sshd-session[3242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:55:11.217720 systemd-logind[1532]: New session 8 of user core. May 10 09:55:11.224056 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 10 09:55:11.369179 sshd[3244]: Connection closed by 10.0.0.1 port 37406
May 10 09:55:11.369840 sshd-session[3242]: pam_unix(sshd:session): session closed for user core
May 10 09:55:11.374935 systemd[1]: sshd@7-10.0.0.32:22-10.0.0.1:37406.service: Deactivated successfully.
May 10 09:55:11.377589 systemd[1]: session-8.scope: Deactivated successfully.
May 10 09:55:11.378836 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit.
May 10 09:55:11.379849 systemd-logind[1532]: Removed session 8.
May 10 09:55:12.122041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657631781.mount: Deactivated successfully.
May 10 09:55:14.712833 containerd[1546]: time="2025-05-10T09:55:14.712751731Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:14.713698 containerd[1546]: time="2025-05-10T09:55:14.713633409Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 10 09:55:14.715144 containerd[1546]: time="2025-05-10T09:55:14.715100710Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:14.717421 containerd[1546]: time="2025-05-10T09:55:14.717373555Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.351235523s"
May 10 09:55:14.717421 containerd[1546]: time="2025-05-10T09:55:14.717417177Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 10 09:55:14.726438 containerd[1546]: time="2025-05-10T09:55:14.726387551Z" level=info msg="CreateContainer within sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 09:55:14.734894 containerd[1546]: time="2025-05-10T09:55:14.734669128Z" level=info msg="Container a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:14.740000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740708697.mount: Deactivated successfully.
May 10 09:55:14.743551 containerd[1546]: time="2025-05-10T09:55:14.743510088Z" level=info msg="CreateContainer within sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\""
May 10 09:55:14.744263 containerd[1546]: time="2025-05-10T09:55:14.744111008Z" level=info msg="StartContainer for \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\""
May 10 09:55:14.745144 containerd[1546]: time="2025-05-10T09:55:14.745121740Z" level=info msg="connecting to shim a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d" address="unix:///run/containerd/s/03a3a8913aed403cde75735ce252c087919e116df7b2ef8064125788e02cd62f" protocol=ttrpc version=3
May 10 09:55:14.770052 systemd[1]: Started cri-containerd-a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d.scope - libcontainer container a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d.
May 10 09:55:14.809092 containerd[1546]: time="2025-05-10T09:55:14.809036665Z" level=info msg="StartContainer for \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\" returns successfully"
May 10 09:55:14.820918 systemd[1]: cri-containerd-a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d.scope: Deactivated successfully.
May 10 09:55:14.822592 containerd[1546]: time="2025-05-10T09:55:14.822547500Z" level=info msg="received exit event container_id:\"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\" id:\"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\" pid:3296 exited_at:{seconds:1746870914 nanos:822088877}"
May 10 09:55:14.822743 containerd[1546]: time="2025-05-10T09:55:14.822623172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\" id:\"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\" pid:3296 exited_at:{seconds:1746870914 nanos:822088877}"
May 10 09:55:14.845257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d-rootfs.mount: Deactivated successfully.
May 10 09:55:15.348161 kubelet[2796]: E0510 09:55:15.348118 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:15.350317 containerd[1546]: time="2025-05-10T09:55:15.350173463Z" level=info msg="CreateContainer within sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 10 09:55:15.359980 containerd[1546]: time="2025-05-10T09:55:15.359942686Z" level=info msg="Container f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:15.367088 containerd[1546]: time="2025-05-10T09:55:15.367034493Z" level=info msg="CreateContainer within sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\""
May 10 09:55:15.367668 containerd[1546]: time="2025-05-10T09:55:15.367636616Z" level=info msg="StartContainer for \"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\""
May 10 09:55:15.368423 containerd[1546]: time="2025-05-10T09:55:15.368377057Z" level=info msg="connecting to shim f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0" address="unix:///run/containerd/s/03a3a8913aed403cde75735ce252c087919e116df7b2ef8064125788e02cd62f" protocol=ttrpc version=3
May 10 09:55:15.387999 systemd[1]: Started cri-containerd-f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0.scope - libcontainer container f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0.
May 10 09:55:15.422448 containerd[1546]: time="2025-05-10T09:55:15.422397763Z" level=info msg="StartContainer for \"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\" returns successfully"
May 10 09:55:15.436958 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 09:55:15.437230 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 10 09:55:15.437460 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 10 09:55:15.439001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 10 09:55:15.441053 systemd[1]: cri-containerd-f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0.scope: Deactivated successfully.
May 10 09:55:15.441563 containerd[1546]: time="2025-05-10T09:55:15.441380774Z" level=info msg="received exit event container_id:\"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\" id:\"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\" pid:3343 exited_at:{seconds:1746870915 nanos:440954452}"
May 10 09:55:15.441563 containerd[1546]: time="2025-05-10T09:55:15.441487164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\" id:\"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\" pid:3343 exited_at:{seconds:1746870915 nanos:440954452}"
May 10 09:55:15.468826 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 10 09:55:16.351829 kubelet[2796]: E0510 09:55:16.351792 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:16.353756 containerd[1546]: time="2025-05-10T09:55:16.353684168Z" level=info msg="CreateContainer within sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 10 09:55:16.366594 containerd[1546]: time="2025-05-10T09:55:16.366549520Z" level=info msg="Container 9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:16.378656 containerd[1546]: time="2025-05-10T09:55:16.378606792Z" level=info msg="CreateContainer within sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\""
May 10 09:55:16.379920 containerd[1546]: time="2025-05-10T09:55:16.379105629Z" level=info msg="StartContainer for \"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\""
May 10 09:55:16.380749 containerd[1546]: time="2025-05-10T09:55:16.380723962Z" level=info msg="connecting to shim 9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb" address="unix:///run/containerd/s/03a3a8913aed403cde75735ce252c087919e116df7b2ef8064125788e02cd62f" protocol=ttrpc version=3
May 10 09:55:16.381699 systemd[1]: Started sshd@8-10.0.0.32:22-10.0.0.1:37420.service - OpenSSH per-connection server daemon (10.0.0.1:37420).
May 10 09:55:16.406026 systemd[1]: Started cri-containerd-9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb.scope - libcontainer container 9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb.
May 10 09:55:16.433584 sshd[3378]: Accepted publickey for core from 10.0.0.1 port 37420 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:16.435875 sshd-session[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:16.443904 systemd-logind[1532]: New session 9 of user core.
May 10 09:55:16.448073 systemd[1]: Started session-9.scope - Session 9 of User core.
May 10 09:55:16.455659 systemd[1]: cri-containerd-9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb.scope: Deactivated successfully.
May 10 09:55:16.455909 containerd[1546]: time="2025-05-10T09:55:16.455742293Z" level=info msg="StartContainer for \"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\" returns successfully"
May 10 09:55:16.456721 containerd[1546]: time="2025-05-10T09:55:16.456699383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\" id:\"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\" pid:3391 exited_at:{seconds:1746870916 nanos:456491791}"
May 10 09:55:16.456816 containerd[1546]: time="2025-05-10T09:55:16.456763673Z" level=info msg="received exit event container_id:\"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\" id:\"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\" pid:3391 exited_at:{seconds:1746870916 nanos:456491791}"
May 10 09:55:16.479210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb-rootfs.mount: Deactivated successfully.
May 10 09:55:16.571832 sshd[3404]: Connection closed by 10.0.0.1 port 37420
May 10 09:55:16.572203 sshd-session[3378]: pam_unix(sshd:session): session closed for user core
May 10 09:55:16.576444 systemd[1]: sshd@8-10.0.0.32:22-10.0.0.1:37420.service: Deactivated successfully.
May 10 09:55:16.578363 systemd[1]: session-9.scope: Deactivated successfully.
May 10 09:55:16.579304 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit.
May 10 09:55:16.580359 systemd-logind[1532]: Removed session 9.
May 10 09:55:17.355677 kubelet[2796]: E0510 09:55:17.355643 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:17.357187 containerd[1546]: time="2025-05-10T09:55:17.357154532Z" level=info msg="CreateContainer within sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 09:55:17.635630 containerd[1546]: time="2025-05-10T09:55:17.635413908Z" level=info msg="Container ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:17.778368 containerd[1546]: time="2025-05-10T09:55:17.777082719Z" level=info msg="CreateContainer within sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\""
May 10 09:55:17.778744 containerd[1546]: time="2025-05-10T09:55:17.778718825Z" level=info msg="StartContainer for \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\""
May 10 09:55:17.779896 containerd[1546]: time="2025-05-10T09:55:17.779844141Z" level=info msg="connecting to shim ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2" address="unix:///run/containerd/s/03a3a8913aed403cde75735ce252c087919e116df7b2ef8064125788e02cd62f" protocol=ttrpc version=3
May 10 09:55:17.802063 systemd[1]: Started cri-containerd-ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2.scope - libcontainer container ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2.
May 10 09:55:17.831337 systemd[1]: cri-containerd-ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2.scope: Deactivated successfully.
May 10 09:55:17.832202 containerd[1546]: time="2025-05-10T09:55:17.831815809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\" id:\"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\" pid:3442 exited_at:{seconds:1746870917 nanos:831429834}"
May 10 09:55:17.932311 containerd[1546]: time="2025-05-10T09:55:17.932168385Z" level=info msg="received exit event container_id:\"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\" id:\"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\" pid:3442 exited_at:{seconds:1746870917 nanos:831429834}"
May 10 09:55:17.940620 containerd[1546]: time="2025-05-10T09:55:17.940590047Z" level=info msg="StartContainer for \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\" returns successfully"
May 10 09:55:17.954690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2-rootfs.mount: Deactivated successfully.
May 10 09:55:18.360822 kubelet[2796]: E0510 09:55:18.360765 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:18.363737 containerd[1546]: time="2025-05-10T09:55:18.363062372Z" level=info msg="CreateContainer within sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 09:55:18.588158 containerd[1546]: time="2025-05-10T09:55:18.588100945Z" level=info msg="Container bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:18.614822 containerd[1546]: time="2025-05-10T09:55:18.614695635Z" level=info msg="CreateContainer within sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\""
May 10 09:55:18.615392 containerd[1546]: time="2025-05-10T09:55:18.615353722Z" level=info msg="StartContainer for \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\""
May 10 09:55:18.616639 containerd[1546]: time="2025-05-10T09:55:18.616603402Z" level=info msg="connecting to shim bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e" address="unix:///run/containerd/s/03a3a8913aed403cde75735ce252c087919e116df7b2ef8064125788e02cd62f" protocol=ttrpc version=3
May 10 09:55:18.643025 systemd[1]: Started cri-containerd-bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e.scope - libcontainer container bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e.
May 10 09:55:18.680148 containerd[1546]: time="2025-05-10T09:55:18.680105458Z" level=info msg="StartContainer for \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" returns successfully"
May 10 09:55:18.754785 containerd[1546]: time="2025-05-10T09:55:18.754739872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" id:\"8529c907b0bb1f463cc0e5d15cb84ae9dd84e493b9a24f045a6771ff3e227ea5\" pid:3511 exited_at:{seconds:1746870918 nanos:754431472}"
May 10 09:55:18.774853 kubelet[2796]: I0510 09:55:18.774822 2796 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 10 09:55:18.795056 kubelet[2796]: I0510 09:55:18.794984 2796 topology_manager.go:215] "Topology Admit Handler" podUID="778f1e0d-1de4-4771-89e6-44ba65487df8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ltjxr"
May 10 09:55:18.795262 kubelet[2796]: I0510 09:55:18.795217 2796 topology_manager.go:215] "Topology Admit Handler" podUID="c08d4ec8-c87b-4fdb-948e-9f3982fa1961" podNamespace="kube-system" podName="coredns-7db6d8ff4d-t5kf7"
May 10 09:55:18.804063 systemd[1]: Created slice kubepods-burstable-podc08d4ec8_c87b_4fdb_948e_9f3982fa1961.slice - libcontainer container kubepods-burstable-podc08d4ec8_c87b_4fdb_948e_9f3982fa1961.slice.
May 10 09:55:18.813089 systemd[1]: Created slice kubepods-burstable-pod778f1e0d_1de4_4771_89e6_44ba65487df8.slice - libcontainer container kubepods-burstable-pod778f1e0d_1de4_4771_89e6_44ba65487df8.slice.
May 10 09:55:18.985846 kubelet[2796]: I0510 09:55:18.985687 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/778f1e0d-1de4-4771-89e6-44ba65487df8-config-volume\") pod \"coredns-7db6d8ff4d-ltjxr\" (UID: \"778f1e0d-1de4-4771-89e6-44ba65487df8\") " pod="kube-system/coredns-7db6d8ff4d-ltjxr"
May 10 09:55:18.985846 kubelet[2796]: I0510 09:55:18.985736 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhfp\" (UniqueName: \"kubernetes.io/projected/778f1e0d-1de4-4771-89e6-44ba65487df8-kube-api-access-wdhfp\") pod \"coredns-7db6d8ff4d-ltjxr\" (UID: \"778f1e0d-1de4-4771-89e6-44ba65487df8\") " pod="kube-system/coredns-7db6d8ff4d-ltjxr"
May 10 09:55:18.985846 kubelet[2796]: I0510 09:55:18.985761 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c08d4ec8-c87b-4fdb-948e-9f3982fa1961-config-volume\") pod \"coredns-7db6d8ff4d-t5kf7\" (UID: \"c08d4ec8-c87b-4fdb-948e-9f3982fa1961\") " pod="kube-system/coredns-7db6d8ff4d-t5kf7"
May 10 09:55:18.985846 kubelet[2796]: I0510 09:55:18.985775 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xr7n\" (UniqueName: \"kubernetes.io/projected/c08d4ec8-c87b-4fdb-948e-9f3982fa1961-kube-api-access-7xr7n\") pod \"coredns-7db6d8ff4d-t5kf7\" (UID: \"c08d4ec8-c87b-4fdb-948e-9f3982fa1961\") " pod="kube-system/coredns-7db6d8ff4d-t5kf7"
May 10 09:55:19.111320 kubelet[2796]: E0510 09:55:19.110976 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:19.111801 containerd[1546]: time="2025-05-10T09:55:19.111760164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5kf7,Uid:c08d4ec8-c87b-4fdb-948e-9f3982fa1961,Namespace:kube-system,Attempt:0,}"
May 10 09:55:19.116532 kubelet[2796]: E0510 09:55:19.116484 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:19.117044 containerd[1546]: time="2025-05-10T09:55:19.116979555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ltjxr,Uid:778f1e0d-1de4-4771-89e6-44ba65487df8,Namespace:kube-system,Attempt:0,}"
May 10 09:55:19.411671 kubelet[2796]: E0510 09:55:19.411631 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:19.558665 kubelet[2796]: I0510 09:55:19.557652 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j6vf4" podStartSLOduration=7.686762905 podStartE2EDuration="21.557629649s" podCreationTimestamp="2025-05-10 09:54:58 +0000 UTC" firstStartedPulling="2025-05-10 09:55:00.847332313 +0000 UTC m=+16.660188186" lastFinishedPulling="2025-05-10 09:55:14.718199057 +0000 UTC m=+30.531054930" observedRunningTime="2025-05-10 09:55:19.556809679 +0000 UTC m=+35.369665562" watchObservedRunningTime="2025-05-10 09:55:19.557629649 +0000 UTC m=+35.370485523"
May 10 09:55:20.413600 kubelet[2796]: E0510 09:55:20.413564 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:20.951137 systemd-networkd[1464]: cilium_host: Link UP
May 10 09:55:20.951301 systemd-networkd[1464]: cilium_net: Link UP
May 10 09:55:20.951503 systemd-networkd[1464]: cilium_net: Gained carrier
May 10 09:55:20.951683 systemd-networkd[1464]: cilium_host: Gained carrier
May 10 09:55:21.066000 systemd-networkd[1464]: cilium_vxlan: Link UP
May 10 09:55:21.066012 systemd-networkd[1464]: cilium_vxlan: Gained carrier
May 10 09:55:21.143044 systemd-networkd[1464]: cilium_net: Gained IPv6LL
May 10 09:55:21.295917 kernel: NET: Registered PF_ALG protocol family
May 10 09:55:21.415200 kubelet[2796]: E0510 09:55:21.415156 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:21.586964 systemd[1]: Started sshd@9-10.0.0.32:22-10.0.0.1:51944.service - OpenSSH per-connection server daemon (10.0.0.1:51944).
May 10 09:55:21.641105 sshd[3768]: Accepted publickey for core from 10.0.0.1 port 51944 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:21.643058 sshd-session[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:21.647796 systemd-logind[1532]: New session 10 of user core.
May 10 09:55:21.656166 systemd[1]: Started session-10.scope - Session 10 of User core.
May 10 09:55:21.776951 sshd[3823]: Connection closed by 10.0.0.1 port 51944
May 10 09:55:21.777572 sshd-session[3768]: pam_unix(sshd:session): session closed for user core
May 10 09:55:21.781965 systemd[1]: sshd@9-10.0.0.32:22-10.0.0.1:51944.service: Deactivated successfully.
May 10 09:55:21.784221 systemd[1]: session-10.scope: Deactivated successfully.
May 10 09:55:21.785123 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit.
May 10 09:55:21.786272 systemd-logind[1532]: Removed session 10.
May 10 09:55:21.887015 systemd-networkd[1464]: cilium_host: Gained IPv6LL
May 10 09:55:22.017171 systemd-networkd[1464]: lxc_health: Link UP
May 10 09:55:22.017586 systemd-networkd[1464]: lxc_health: Gained carrier
May 10 09:55:22.435842 systemd-networkd[1464]: lxc063da5fa251d: Link UP
May 10 09:55:22.436895 kernel: eth0: renamed from tmpa67dd
May 10 09:55:22.450487 systemd-networkd[1464]: lxc063da5fa251d: Gained carrier
May 10 09:55:22.470909 systemd-networkd[1464]: lxcb45aa92ab79e: Link UP
May 10 09:55:22.481268 kernel: eth0: renamed from tmpfd074
May 10 09:55:22.486944 systemd-networkd[1464]: lxcb45aa92ab79e: Gained carrier
May 10 09:55:22.766818 kubelet[2796]: E0510 09:55:22.766658 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:22.847158 systemd-networkd[1464]: cilium_vxlan: Gained IPv6LL
May 10 09:55:23.418718 kubelet[2796]: E0510 09:55:23.418679 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:23.487034 systemd-networkd[1464]: lxc_health: Gained IPv6LL
May 10 09:55:23.935090 systemd-networkd[1464]: lxcb45aa92ab79e: Gained IPv6LL
May 10 09:55:23.999047 systemd-networkd[1464]: lxc063da5fa251d: Gained IPv6LL
May 10 09:55:24.420225 kubelet[2796]: E0510 09:55:24.420175 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:26.046533 containerd[1546]: time="2025-05-10T09:55:26.046434461Z" level=info msg="connecting to shim a67dd0ddfcc1e6363ad034fe0f3efbe8a77baf579b7cc22d91120dacc390cb0b" address="unix:///run/containerd/s/22d9c6fe015028165c11f63685f742e5075fd0a477837f02bbdcd6f9a161ad22" namespace=k8s.io protocol=ttrpc version=3
May 10 09:55:26.088138 systemd[1]: Started cri-containerd-a67dd0ddfcc1e6363ad034fe0f3efbe8a77baf579b7cc22d91120dacc390cb0b.scope - libcontainer container a67dd0ddfcc1e6363ad034fe0f3efbe8a77baf579b7cc22d91120dacc390cb0b.
May 10 09:55:26.098742 containerd[1546]: time="2025-05-10T09:55:26.098679616Z" level=info msg="connecting to shim fd07495bf58d902e763cfac3fc28b13c8d4a01ca0ec123a1be9154afdb632923" address="unix:///run/containerd/s/34803a3f3149f87ab22f480ce0c57739c5b6134290e71959027942b0250f2d94" namespace=k8s.io protocol=ttrpc version=3
May 10 09:55:26.109665 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 10 09:55:26.130093 systemd[1]: Started cri-containerd-fd07495bf58d902e763cfac3fc28b13c8d4a01ca0ec123a1be9154afdb632923.scope - libcontainer container fd07495bf58d902e763cfac3fc28b13c8d4a01ca0ec123a1be9154afdb632923.
May 10 09:55:26.143964 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 10 09:55:26.163013 containerd[1546]: time="2025-05-10T09:55:26.162961635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t5kf7,Uid:c08d4ec8-c87b-4fdb-948e-9f3982fa1961,Namespace:kube-system,Attempt:0,} returns sandbox id \"a67dd0ddfcc1e6363ad034fe0f3efbe8a77baf579b7cc22d91120dacc390cb0b\""
May 10 09:55:26.163707 kubelet[2796]: E0510 09:55:26.163670 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:26.166341 containerd[1546]: time="2025-05-10T09:55:26.166293723Z" level=info msg="CreateContainer within sandbox \"a67dd0ddfcc1e6363ad034fe0f3efbe8a77baf579b7cc22d91120dacc390cb0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 10 09:55:26.180674 containerd[1546]: time="2025-05-10T09:55:26.180627641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ltjxr,Uid:778f1e0d-1de4-4771-89e6-44ba65487df8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd07495bf58d902e763cfac3fc28b13c8d4a01ca0ec123a1be9154afdb632923\""
May 10 09:55:26.181518 kubelet[2796]: E0510 09:55:26.181496 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:26.184681 containerd[1546]: time="2025-05-10T09:55:26.184146830Z" level=info msg="Container f7aca3b208eb7afe6b540ead740cc3ad3d07e6597995c02ead19ff7aec150354: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:26.186677 containerd[1546]: time="2025-05-10T09:55:26.186642577Z" level=info msg="CreateContainer within sandbox \"fd07495bf58d902e763cfac3fc28b13c8d4a01ca0ec123a1be9154afdb632923\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 10 09:55:26.194505 containerd[1546]: time="2025-05-10T09:55:26.194472812Z" level=info msg="CreateContainer within sandbox \"a67dd0ddfcc1e6363ad034fe0f3efbe8a77baf579b7cc22d91120dacc390cb0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7aca3b208eb7afe6b540ead740cc3ad3d07e6597995c02ead19ff7aec150354\""
May 10 09:55:26.197419 containerd[1546]: time="2025-05-10T09:55:26.197353172Z" level=info msg="StartContainer for \"f7aca3b208eb7afe6b540ead740cc3ad3d07e6597995c02ead19ff7aec150354\""
May 10 09:55:26.198543 containerd[1546]: time="2025-05-10T09:55:26.198506838Z" level=info msg="connecting to shim f7aca3b208eb7afe6b540ead740cc3ad3d07e6597995c02ead19ff7aec150354" address="unix:///run/containerd/s/22d9c6fe015028165c11f63685f742e5075fd0a477837f02bbdcd6f9a161ad22" protocol=ttrpc version=3
May 10 09:55:26.204477 containerd[1546]: time="2025-05-10T09:55:26.204440010Z" level=info msg="Container 682414b6e1ad0c62b85e3acc0314d08333fc9bded1f151831ee6ba2335fb54bb: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:26.212035 containerd[1546]: time="2025-05-10T09:55:26.211782730Z" level=info msg="CreateContainer within sandbox \"fd07495bf58d902e763cfac3fc28b13c8d4a01ca0ec123a1be9154afdb632923\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"682414b6e1ad0c62b85e3acc0314d08333fc9bded1f151831ee6ba2335fb54bb\""
May 10 09:55:26.213021 containerd[1546]: time="2025-05-10T09:55:26.212918904Z" level=info msg="StartContainer for \"682414b6e1ad0c62b85e3acc0314d08333fc9bded1f151831ee6ba2335fb54bb\""
May 10 09:55:26.213649 containerd[1546]: time="2025-05-10T09:55:26.213616123Z" level=info msg="connecting to shim 682414b6e1ad0c62b85e3acc0314d08333fc9bded1f151831ee6ba2335fb54bb" address="unix:///run/containerd/s/34803a3f3149f87ab22f480ce0c57739c5b6134290e71959027942b0250f2d94" protocol=ttrpc version=3
May 10 09:55:26.219022 systemd[1]: Started cri-containerd-f7aca3b208eb7afe6b540ead740cc3ad3d07e6597995c02ead19ff7aec150354.scope - libcontainer container f7aca3b208eb7afe6b540ead740cc3ad3d07e6597995c02ead19ff7aec150354.
May 10 09:55:26.239143 systemd[1]: Started cri-containerd-682414b6e1ad0c62b85e3acc0314d08333fc9bded1f151831ee6ba2335fb54bb.scope - libcontainer container 682414b6e1ad0c62b85e3acc0314d08333fc9bded1f151831ee6ba2335fb54bb.
May 10 09:55:26.266437 containerd[1546]: time="2025-05-10T09:55:26.266398838Z" level=info msg="StartContainer for \"f7aca3b208eb7afe6b540ead740cc3ad3d07e6597995c02ead19ff7aec150354\" returns successfully"
May 10 09:55:26.274527 containerd[1546]: time="2025-05-10T09:55:26.274482128Z" level=info msg="StartContainer for \"682414b6e1ad0c62b85e3acc0314d08333fc9bded1f151831ee6ba2335fb54bb\" returns successfully"
May 10 09:55:26.429845 kubelet[2796]: E0510 09:55:26.429144 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:26.433949 kubelet[2796]: E0510 09:55:26.433833 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:55:26.446091 kubelet[2796]: I0510 09:55:26.446003 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-t5kf7" podStartSLOduration=27.445984324 podStartE2EDuration="27.445984324s" podCreationTimestamp="2025-05-10 09:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:55:26.445125199 +0000 UTC m=+42.257981072" watchObservedRunningTime="2025-05-10 09:55:26.445984324 +0000 UTC m=+42.258840197"
May 10 09:55:26.793969 systemd[1]: Started sshd@10-10.0.0.32:22-10.0.0.1:44292.service - OpenSSH per-connection server daemon (10.0.0.1:44292).
May 10 09:55:26.856698 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 44292 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:26.859308 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:26.864763 systemd-logind[1532]: New session 11 of user core.
May 10 09:55:26.879050 systemd[1]: Started session-11.scope - Session 11 of User core. May 10 09:55:27.010850 sshd[4181]: Connection closed by 10.0.0.1 port 44292 May 10 09:55:27.011148 sshd-session[4179]: pam_unix(sshd:session): session closed for user core May 10 09:55:27.016313 systemd[1]: sshd@10-10.0.0.32:22-10.0.0.1:44292.service: Deactivated successfully. May 10 09:55:27.019356 systemd[1]: session-11.scope: Deactivated successfully. May 10 09:55:27.020582 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. May 10 09:55:27.022289 systemd-logind[1532]: Removed session 11. May 10 09:55:27.044642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761856653.mount: Deactivated successfully. May 10 09:55:27.434656 kubelet[2796]: E0510 09:55:27.434511 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:55:27.434656 kubelet[2796]: E0510 09:55:27.434511 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:55:28.436770 kubelet[2796]: E0510 09:55:28.436718 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:55:28.437338 kubelet[2796]: E0510 09:55:28.436888 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:55:32.025129 systemd[1]: Started sshd@11-10.0.0.32:22-10.0.0.1:44304.service - OpenSSH per-connection server daemon (10.0.0.1:44304). 
May 10 09:55:32.076135 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 44304 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:32.078087 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:32.085119 systemd-logind[1532]: New session 12 of user core.
May 10 09:55:32.090035 systemd[1]: Started session-12.scope - Session 12 of User core.
May 10 09:55:32.207746 sshd[4200]: Connection closed by 10.0.0.1 port 44304
May 10 09:55:32.208222 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
May 10 09:55:32.224011 systemd[1]: sshd@11-10.0.0.32:22-10.0.0.1:44304.service: Deactivated successfully.
May 10 09:55:32.226080 systemd[1]: session-12.scope: Deactivated successfully.
May 10 09:55:32.227646 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit.
May 10 09:55:32.229410 systemd[1]: Started sshd@12-10.0.0.32:22-10.0.0.1:44308.service - OpenSSH per-connection server daemon (10.0.0.1:44308).
May 10 09:55:32.230550 systemd-logind[1532]: Removed session 12.
May 10 09:55:32.280388 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 44308 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:32.281871 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:32.286419 systemd-logind[1532]: New session 13 of user core.
May 10 09:55:32.296999 systemd[1]: Started session-13.scope - Session 13 of User core.
May 10 09:55:32.455363 sshd[4217]: Connection closed by 10.0.0.1 port 44308
May 10 09:55:32.455919 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
May 10 09:55:32.466048 systemd[1]: sshd@12-10.0.0.32:22-10.0.0.1:44308.service: Deactivated successfully.
May 10 09:55:32.469960 systemd[1]: session-13.scope: Deactivated successfully.
May 10 09:55:32.474000 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit.
May 10 09:55:32.478209 systemd[1]: Started sshd@13-10.0.0.32:22-10.0.0.1:44310.service - OpenSSH per-connection server daemon (10.0.0.1:44310).
May 10 09:55:32.481194 systemd-logind[1532]: Removed session 13.
May 10 09:55:32.522427 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 44310 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:32.523757 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:32.528502 systemd-logind[1532]: New session 14 of user core.
May 10 09:55:32.536973 systemd[1]: Started session-14.scope - Session 14 of User core.
May 10 09:55:32.640578 sshd[4231]: Connection closed by 10.0.0.1 port 44310
May 10 09:55:32.640903 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
May 10 09:55:32.644795 systemd[1]: sshd@13-10.0.0.32:22-10.0.0.1:44310.service: Deactivated successfully.
May 10 09:55:32.646980 systemd[1]: session-14.scope: Deactivated successfully.
May 10 09:55:32.647747 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit.
May 10 09:55:32.648702 systemd-logind[1532]: Removed session 14.
May 10 09:55:37.653017 systemd[1]: Started sshd@14-10.0.0.32:22-10.0.0.1:35490.service - OpenSSH per-connection server daemon (10.0.0.1:35490).
May 10 09:55:37.706837 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 35490 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:37.708326 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:37.712648 systemd-logind[1532]: New session 15 of user core.
May 10 09:55:37.720009 systemd[1]: Started session-15.scope - Session 15 of User core.
May 10 09:55:37.863435 sshd[4247]: Connection closed by 10.0.0.1 port 35490
May 10 09:55:37.863847 sshd-session[4245]: pam_unix(sshd:session): session closed for user core
May 10 09:55:37.869055 systemd[1]: sshd@14-10.0.0.32:22-10.0.0.1:35490.service: Deactivated successfully.
May 10 09:55:37.871504 systemd[1]: session-15.scope: Deactivated successfully.
May 10 09:55:37.872504 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit.
May 10 09:55:37.873550 systemd-logind[1532]: Removed session 15.
May 10 09:55:42.877505 systemd[1]: Started sshd@15-10.0.0.32:22-10.0.0.1:35500.service - OpenSSH per-connection server daemon (10.0.0.1:35500).
May 10 09:55:42.925701 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 35500 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:42.927548 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:42.932785 systemd-logind[1532]: New session 16 of user core.
May 10 09:55:42.947017 systemd[1]: Started session-16.scope - Session 16 of User core.
May 10 09:55:43.068332 sshd[4262]: Connection closed by 10.0.0.1 port 35500
May 10 09:55:43.068736 sshd-session[4260]: pam_unix(sshd:session): session closed for user core
May 10 09:55:43.083061 systemd[1]: sshd@15-10.0.0.32:22-10.0.0.1:35500.service: Deactivated successfully.
May 10 09:55:43.085793 systemd[1]: session-16.scope: Deactivated successfully.
May 10 09:55:43.088257 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit.
May 10 09:55:43.089968 systemd[1]: Started sshd@16-10.0.0.32:22-10.0.0.1:35502.service - OpenSSH per-connection server daemon (10.0.0.1:35502).
May 10 09:55:43.091160 systemd-logind[1532]: Removed session 16.
May 10 09:55:43.137620 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 35502 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:43.139126 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:43.143712 systemd-logind[1532]: New session 17 of user core.
May 10 09:55:43.156994 systemd[1]: Started session-17.scope - Session 17 of User core.
May 10 09:55:43.431241 sshd[4277]: Connection closed by 10.0.0.1 port 35502
May 10 09:55:43.431875 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
May 10 09:55:43.447721 systemd[1]: sshd@16-10.0.0.32:22-10.0.0.1:35502.service: Deactivated successfully.
May 10 09:55:43.450154 systemd[1]: session-17.scope: Deactivated successfully.
May 10 09:55:43.452567 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit.
May 10 09:55:43.454198 systemd[1]: Started sshd@17-10.0.0.32:22-10.0.0.1:35510.service - OpenSSH per-connection server daemon (10.0.0.1:35510).
May 10 09:55:43.455378 systemd-logind[1532]: Removed session 17.
May 10 09:55:43.507483 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 35510 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:43.509229 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:43.514162 systemd-logind[1532]: New session 18 of user core.
May 10 09:55:43.523988 systemd[1]: Started session-18.scope - Session 18 of User core.
May 10 09:55:44.962893 sshd[4290]: Connection closed by 10.0.0.1 port 35510
May 10 09:55:44.963655 sshd-session[4287]: pam_unix(sshd:session): session closed for user core
May 10 09:55:44.974178 systemd[1]: sshd@17-10.0.0.32:22-10.0.0.1:35510.service: Deactivated successfully.
May 10 09:55:44.976113 systemd[1]: session-18.scope: Deactivated successfully.
May 10 09:55:44.978212 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit.
May 10 09:55:44.980571 systemd[1]: Started sshd@18-10.0.0.32:22-10.0.0.1:35522.service - OpenSSH per-connection server daemon (10.0.0.1:35522).
May 10 09:55:44.982079 systemd-logind[1532]: Removed session 18.
May 10 09:55:45.028982 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 35522 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:45.030448 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:45.035235 systemd-logind[1532]: New session 19 of user core.
May 10 09:55:45.045993 systemd[1]: Started session-19.scope - Session 19 of User core.
May 10 09:55:45.270163 sshd[4312]: Connection closed by 10.0.0.1 port 35522
May 10 09:55:45.272042 sshd-session[4309]: pam_unix(sshd:session): session closed for user core
May 10 09:55:45.284642 systemd[1]: sshd@18-10.0.0.32:22-10.0.0.1:35522.service: Deactivated successfully.
May 10 09:55:45.288455 systemd[1]: session-19.scope: Deactivated successfully.
May 10 09:55:45.289624 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit.
May 10 09:55:45.297154 systemd[1]: Started sshd@19-10.0.0.32:22-10.0.0.1:35536.service - OpenSSH per-connection server daemon (10.0.0.1:35536).
May 10 09:55:45.300823 systemd-logind[1532]: Removed session 19.
May 10 09:55:45.346753 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 35536 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:45.348268 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:45.352895 systemd-logind[1532]: New session 20 of user core.
May 10 09:55:45.363091 systemd[1]: Started session-20.scope - Session 20 of User core.
May 10 09:55:45.475169 sshd[4326]: Connection closed by 10.0.0.1 port 35536
May 10 09:55:45.475529 sshd-session[4323]: pam_unix(sshd:session): session closed for user core
May 10 09:55:45.480018 systemd[1]: sshd@19-10.0.0.32:22-10.0.0.1:35536.service: Deactivated successfully.
May 10 09:55:45.482319 systemd[1]: session-20.scope: Deactivated successfully.
May 10 09:55:45.483292 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit.
May 10 09:55:45.484755 systemd-logind[1532]: Removed session 20.
May 10 09:55:50.490972 systemd[1]: Started sshd@20-10.0.0.32:22-10.0.0.1:58198.service - OpenSSH per-connection server daemon (10.0.0.1:58198).
May 10 09:55:50.538124 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 58198 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:50.539718 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:50.544300 systemd-logind[1532]: New session 21 of user core.
May 10 09:55:50.550982 systemd[1]: Started session-21.scope - Session 21 of User core.
May 10 09:55:50.667230 sshd[4342]: Connection closed by 10.0.0.1 port 58198
May 10 09:55:50.667541 sshd-session[4340]: pam_unix(sshd:session): session closed for user core
May 10 09:55:50.671750 systemd[1]: sshd@20-10.0.0.32:22-10.0.0.1:58198.service: Deactivated successfully.
May 10 09:55:50.673903 systemd[1]: session-21.scope: Deactivated successfully.
May 10 09:55:50.674761 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit.
May 10 09:55:50.675955 systemd-logind[1532]: Removed session 21.
May 10 09:55:55.681019 systemd[1]: Started sshd@21-10.0.0.32:22-10.0.0.1:58220.service - OpenSSH per-connection server daemon (10.0.0.1:58220).
May 10 09:55:55.722805 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 58220 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:55.724343 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:55.728416 systemd-logind[1532]: New session 22 of user core.
May 10 09:55:55.737981 systemd[1]: Started session-22.scope - Session 22 of User core.
May 10 09:55:55.846194 sshd[4361]: Connection closed by 10.0.0.1 port 58220
May 10 09:55:55.846545 sshd-session[4359]: pam_unix(sshd:session): session closed for user core
May 10 09:55:55.850892 systemd[1]: sshd@21-10.0.0.32:22-10.0.0.1:58220.service: Deactivated successfully.
May 10 09:55:55.852843 systemd[1]: session-22.scope: Deactivated successfully.
May 10 09:55:55.853624 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit.
May 10 09:55:55.854575 systemd-logind[1532]: Removed session 22.
May 10 09:56:00.860452 systemd[1]: Started sshd@22-10.0.0.32:22-10.0.0.1:34966.service - OpenSSH per-connection server daemon (10.0.0.1:34966).
May 10 09:56:00.915410 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 34966 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:56:00.917287 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:56:00.922437 systemd-logind[1532]: New session 23 of user core.
May 10 09:56:00.927987 systemd[1]: Started session-23.scope - Session 23 of User core.
May 10 09:56:01.034471 sshd[4379]: Connection closed by 10.0.0.1 port 34966
May 10 09:56:01.034876 sshd-session[4377]: pam_unix(sshd:session): session closed for user core
May 10 09:56:01.039654 systemd[1]: sshd@22-10.0.0.32:22-10.0.0.1:34966.service: Deactivated successfully.
May 10 09:56:01.042074 systemd[1]: session-23.scope: Deactivated successfully.
May 10 09:56:01.042888 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit.
May 10 09:56:01.044445 systemd-logind[1532]: Removed session 23.
May 10 09:56:02.280695 kubelet[2796]: E0510 09:56:02.280648 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:06.055001 systemd[1]: Started sshd@23-10.0.0.32:22-10.0.0.1:35006.service - OpenSSH per-connection server daemon (10.0.0.1:35006).
May 10 09:56:06.107203 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 35006 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:56:06.108743 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:56:06.113415 systemd-logind[1532]: New session 24 of user core.
May 10 09:56:06.129974 systemd[1]: Started session-24.scope - Session 24 of User core.
May 10 09:56:06.243379 sshd[4394]: Connection closed by 10.0.0.1 port 35006
May 10 09:56:06.243726 sshd-session[4392]: pam_unix(sshd:session): session closed for user core
May 10 09:56:06.252944 systemd[1]: sshd@23-10.0.0.32:22-10.0.0.1:35006.service: Deactivated successfully.
May 10 09:56:06.255019 systemd[1]: session-24.scope: Deactivated successfully.
May 10 09:56:06.257075 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit.
May 10 09:56:06.258699 systemd[1]: Started sshd@24-10.0.0.32:22-10.0.0.1:35020.service - OpenSSH per-connection server daemon (10.0.0.1:35020).
May 10 09:56:06.259849 systemd-logind[1532]: Removed session 24.
May 10 09:56:06.304282 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 35020 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:56:06.305980 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:56:06.310904 systemd-logind[1532]: New session 25 of user core.
May 10 09:56:06.329081 systemd[1]: Started session-25.scope - Session 25 of User core.
May 10 09:56:07.662888 kubelet[2796]: I0510 09:56:07.662778 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ltjxr" podStartSLOduration=68.662758607 podStartE2EDuration="1m8.662758607s" podCreationTimestamp="2025-05-10 09:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:55:26.468528912 +0000 UTC m=+42.281384785" watchObservedRunningTime="2025-05-10 09:56:07.662758607 +0000 UTC m=+83.475614480"
May 10 09:56:07.668847 containerd[1546]: time="2025-05-10T09:56:07.668705514Z" level=info msg="StopContainer for \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" with timeout 30 (s)"
May 10 09:56:07.678536 containerd[1546]: time="2025-05-10T09:56:07.678500019Z" level=info msg="Stop container \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" with signal terminated"
May 10 09:56:07.694431 systemd[1]: cri-containerd-65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4.scope: Deactivated successfully.
May 10 09:56:07.695896 containerd[1546]: time="2025-05-10T09:56:07.694920780Z" level=info msg="received exit event container_id:\"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" id:\"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" pid:3216 exited_at:{seconds:1746870967 nanos:694618331}"
May 10 09:56:07.696038 containerd[1546]: time="2025-05-10T09:56:07.695964272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" id:\"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" pid:3216 exited_at:{seconds:1746870967 nanos:694618331}"
May 10 09:56:07.698297 containerd[1546]: time="2025-05-10T09:56:07.698241341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" id:\"60a0b250ca6b6d1aab98bea145046d7ff00aad011ffd3f27380b7540f08a2e8b\" pid:4429 exited_at:{seconds:1746870967 nanos:698078008}"
May 10 09:56:07.700589 containerd[1546]: time="2025-05-10T09:56:07.700528819Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 09:56:07.701145 containerd[1546]: time="2025-05-10T09:56:07.701118435Z" level=info msg="StopContainer for \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" with timeout 2 (s)"
May 10 09:56:07.701457 containerd[1546]: time="2025-05-10T09:56:07.701425332Z" level=info msg="Stop container \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" with signal terminated"
May 10 09:56:07.710250 systemd-networkd[1464]: lxc_health: Link DOWN
May 10 09:56:07.710259 systemd-networkd[1464]: lxc_health: Lost carrier
May 10 09:56:07.718778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4-rootfs.mount: Deactivated successfully.
May 10 09:56:07.731525 systemd[1]: cri-containerd-bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e.scope: Deactivated successfully.
May 10 09:56:07.731990 systemd[1]: cri-containerd-bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e.scope: Consumed 7.287s CPU time, 124.2M memory peak, 196K read from disk, 13.3M written to disk.
May 10 09:56:07.733237 containerd[1546]: time="2025-05-10T09:56:07.733198410Z" level=info msg="received exit event container_id:\"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" id:\"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" pid:3480 exited_at:{seconds:1746870967 nanos:732950678}"
May 10 09:56:07.733751 containerd[1546]: time="2025-05-10T09:56:07.733726329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" id:\"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" pid:3480 exited_at:{seconds:1746870967 nanos:732950678}"
May 10 09:56:07.748562 containerd[1546]: time="2025-05-10T09:56:07.748515753Z" level=info msg="StopContainer for \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" returns successfully"
May 10 09:56:07.749191 containerd[1546]: time="2025-05-10T09:56:07.749161777Z" level=info msg="StopPodSandbox for \"18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4\""
May 10 09:56:07.749262 containerd[1546]: time="2025-05-10T09:56:07.749233384Z" level=info msg="Container to stop \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 09:56:07.754920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e-rootfs.mount: Deactivated successfully.
May 10 09:56:07.757984 systemd[1]: cri-containerd-18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4.scope: Deactivated successfully.
May 10 09:56:07.759240 containerd[1546]: time="2025-05-10T09:56:07.758769165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4\" id:\"18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4\" pid:3127 exit_status:137 exited_at:{seconds:1746870967 nanos:758466157}"
May 10 09:56:07.766291 containerd[1546]: time="2025-05-10T09:56:07.766258097Z" level=info msg="StopContainer for \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" returns successfully"
May 10 09:56:07.767000 containerd[1546]: time="2025-05-10T09:56:07.766659063Z" level=info msg="StopPodSandbox for \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\""
May 10 09:56:07.767000 containerd[1546]: time="2025-05-10T09:56:07.766719508Z" level=info msg="Container to stop \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 09:56:07.767000 containerd[1546]: time="2025-05-10T09:56:07.766734347Z" level=info msg="Container to stop \"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 09:56:07.767000 containerd[1546]: time="2025-05-10T09:56:07.766745488Z" level=info msg="Container to stop \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 09:56:07.767000 containerd[1546]: time="2025-05-10T09:56:07.766756779Z" level=info msg="Container to stop \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 09:56:07.767000 containerd[1546]: time="2025-05-10T09:56:07.766767330Z" level=info msg="Container to stop \"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 09:56:07.773191 systemd[1]: cri-containerd-c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50.scope: Deactivated successfully.
May 10 09:56:07.789309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4-rootfs.mount: Deactivated successfully.
May 10 09:56:07.795649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50-rootfs.mount: Deactivated successfully.
May 10 09:56:07.796652 containerd[1546]: time="2025-05-10T09:56:07.796182927Z" level=info msg="shim disconnected" id=18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4 namespace=k8s.io
May 10 09:56:07.796652 containerd[1546]: time="2025-05-10T09:56:07.796212213Z" level=warning msg="cleaning up after shim disconnected" id=18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4 namespace=k8s.io
May 10 09:56:07.811120 containerd[1546]: time="2025-05-10T09:56:07.796220909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 09:56:07.811270 containerd[1546]: time="2025-05-10T09:56:07.796353242Z" level=info msg="shim disconnected" id=c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50 namespace=k8s.io
May 10 09:56:07.811270 containerd[1546]: time="2025-05-10T09:56:07.811252136Z" level=warning msg="cleaning up after shim disconnected" id=c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50 namespace=k8s.io
May 10 09:56:07.811398 containerd[1546]: time="2025-05-10T09:56:07.811263347Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 09:56:07.837459 containerd[1546]: time="2025-05-10T09:56:07.837292216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" id:\"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" pid:3173 exit_status:137 exited_at:{seconds:1746870967 nanos:773696042}"
May 10 09:56:07.839779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50-shm.mount: Deactivated successfully.
May 10 09:56:07.840047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4-shm.mount: Deactivated successfully.
May 10 09:56:07.845266 containerd[1546]: time="2025-05-10T09:56:07.844748004Z" level=info msg="received exit event sandbox_id:\"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" exit_status:137 exited_at:{seconds:1746870967 nanos:773696042}"
May 10 09:56:07.845266 containerd[1546]: time="2025-05-10T09:56:07.844879556Z" level=info msg="received exit event sandbox_id:\"18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4\" exit_status:137 exited_at:{seconds:1746870967 nanos:758466157}"
May 10 09:56:07.853534 containerd[1546]: time="2025-05-10T09:56:07.853487105Z" level=info msg="TearDown network for sandbox \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" successfully"
May 10 09:56:07.853534 containerd[1546]: time="2025-05-10T09:56:07.853523905Z" level=info msg="StopPodSandbox for \"c0efdce2a748b9bd850926acfed12673593425af0e47c9edfc2b029de6aefe50\" returns successfully"
May 10 09:56:07.854618 containerd[1546]: time="2025-05-10T09:56:07.854576495Z" level=info msg="TearDown network for sandbox \"18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4\" successfully"
May 10 09:56:07.854618 containerd[1546]: time="2025-05-10T09:56:07.854606031Z" level=info msg="StopPodSandbox for \"18c8ca20deb2b4fc275527b2dc66982481091486c9f6f3eea45ecabecac691c4\" returns successfully"
May 10 09:56:08.056021 kubelet[2796]: I0510 09:56:08.055970 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-clustermesh-secrets\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056021 kubelet[2796]: I0510 09:56:08.056012 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-bpf-maps\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056021 kubelet[2796]: I0510 09:56:08.056030 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-run\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056345 kubelet[2796]: I0510 09:56:08.056047 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cni-path\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056345 kubelet[2796]: I0510 09:56:08.056062 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-host-proc-sys-kernel\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056345 kubelet[2796]: I0510 09:56:08.056084 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gssl\" (UniqueName: \"kubernetes.io/projected/e20f4605-e788-4634-afb1-46803baef04f-kube-api-access-8gssl\") pod \"e20f4605-e788-4634-afb1-46803baef04f\" (UID: \"e20f4605-e788-4634-afb1-46803baef04f\") "
May 10 09:56:08.056345 kubelet[2796]: I0510 09:56:08.056080 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 09:56:08.056345 kubelet[2796]: I0510 09:56:08.056116 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 09:56:08.056525 kubelet[2796]: I0510 09:56:08.056126 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 09:56:08.056525 kubelet[2796]: I0510 09:56:08.056097 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-xtables-lock\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056525 kubelet[2796]: I0510 09:56:08.056183 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-config-path\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056525 kubelet[2796]: I0510 09:56:08.056214 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj5zk\" (UniqueName: \"kubernetes.io/projected/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-kube-api-access-fj5zk\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056525 kubelet[2796]: I0510 09:56:08.056275 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e20f4605-e788-4634-afb1-46803baef04f-cilium-config-path\") pod \"e20f4605-e788-4634-afb1-46803baef04f\" (UID: \"e20f4605-e788-4634-afb1-46803baef04f\") "
May 10 09:56:08.056525 kubelet[2796]: I0510 09:56:08.056297 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-host-proc-sys-net\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056719 kubelet[2796]: I0510 09:56:08.056317 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-etc-cni-netd\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056719 kubelet[2796]: I0510 09:56:08.056337 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-cgroup\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056719 kubelet[2796]: I0510 09:56:08.056356 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-lib-modules\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056719 kubelet[2796]: I0510 09:56:08.056376 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-hostproc\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056719 kubelet[2796]: I0510 09:56:08.056400 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-hubble-tls\") pod \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\" (UID: \"9ed9a172-0a80-45e5-aba8-5c8afc5944f1\") "
May 10 09:56:08.056719 kubelet[2796]: I0510 09:56:08.056435 2796 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-run\") on node \"localhost\" DevicePath \"\""
May 10 09:56:08.056719 kubelet[2796]: I0510 09:56:08.056450 2796 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName:
\"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.056978 kubelet[2796]: I0510 09:56:08.056462 2796 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.056978 kubelet[2796]: I0510 09:56:08.056137 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cni-path" (OuterVolumeSpecName: "cni-path") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 09:56:08.056978 kubelet[2796]: I0510 09:56:08.056080 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 09:56:08.056978 kubelet[2796]: I0510 09:56:08.056540 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 09:56:08.059536 kubelet[2796]: I0510 09:56:08.059500 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 09:56:08.059773 kubelet[2796]: I0510 09:56:08.059703 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 09:56:08.059967 kubelet[2796]: I0510 09:56:08.059737 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 09:56:08.060028 kubelet[2796]: I0510 09:56:08.059755 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 09:56:08.060073 kubelet[2796]: I0510 09:56:08.059781 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-hostproc" (OuterVolumeSpecName: "hostproc") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 09:56:08.060192 kubelet[2796]: I0510 09:56:08.060176 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 09:56:08.060326 kubelet[2796]: I0510 09:56:08.060307 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e20f4605-e788-4634-afb1-46803baef04f-kube-api-access-8gssl" (OuterVolumeSpecName: "kube-api-access-8gssl") pod "e20f4605-e788-4634-afb1-46803baef04f" (UID: "e20f4605-e788-4634-afb1-46803baef04f"). InnerVolumeSpecName "kube-api-access-8gssl". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 09:56:08.060488 kubelet[2796]: I0510 09:56:08.060471 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 09:56:08.062997 kubelet[2796]: I0510 09:56:08.062959 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-kube-api-access-fj5zk" (OuterVolumeSpecName: "kube-api-access-fj5zk") pod "9ed9a172-0a80-45e5-aba8-5c8afc5944f1" (UID: "9ed9a172-0a80-45e5-aba8-5c8afc5944f1"). InnerVolumeSpecName "kube-api-access-fj5zk". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 09:56:08.063553 kubelet[2796]: I0510 09:56:08.063518 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e20f4605-e788-4634-afb1-46803baef04f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e20f4605-e788-4634-afb1-46803baef04f" (UID: "e20f4605-e788-4634-afb1-46803baef04f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 09:56:08.157340 kubelet[2796]: I0510 09:56:08.157279 2796 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-hostproc\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157340 kubelet[2796]: I0510 09:56:08.157318 2796 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157340 kubelet[2796]: I0510 09:56:08.157329 2796 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157340 kubelet[2796]: I0510 09:56:08.157337 2796 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-bpf-maps\") on node 
\"localhost\" DevicePath \"\"" May 10 09:56:08.157340 kubelet[2796]: I0510 09:56:08.157347 2796 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cni-path\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157340 kubelet[2796]: I0510 09:56:08.157356 2796 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8gssl\" (UniqueName: \"kubernetes.io/projected/e20f4605-e788-4634-afb1-46803baef04f-kube-api-access-8gssl\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157340 kubelet[2796]: I0510 09:56:08.157364 2796 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157663 kubelet[2796]: I0510 09:56:08.157373 2796 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fj5zk\" (UniqueName: \"kubernetes.io/projected/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-kube-api-access-fj5zk\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157663 kubelet[2796]: I0510 09:56:08.157382 2796 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e20f4605-e788-4634-afb1-46803baef04f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157663 kubelet[2796]: I0510 09:56:08.157389 2796 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157663 kubelet[2796]: I0510 09:56:08.157397 2796 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157663 kubelet[2796]: 
I0510 09:56:08.157405 2796 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.157663 kubelet[2796]: I0510 09:56:08.157413 2796 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ed9a172-0a80-45e5-aba8-5c8afc5944f1-lib-modules\") on node \"localhost\" DevicePath \"\"" May 10 09:56:08.287530 systemd[1]: Removed slice kubepods-burstable-pod9ed9a172_0a80_45e5_aba8_5c8afc5944f1.slice - libcontainer container kubepods-burstable-pod9ed9a172_0a80_45e5_aba8_5c8afc5944f1.slice. May 10 09:56:08.287628 systemd[1]: kubepods-burstable-pod9ed9a172_0a80_45e5_aba8_5c8afc5944f1.slice: Consumed 7.422s CPU time, 124.5M memory peak, 216K read from disk, 13.3M written to disk. May 10 09:56:08.288995 systemd[1]: Removed slice kubepods-besteffort-pode20f4605_e788_4634_afb1_46803baef04f.slice - libcontainer container kubepods-besteffort-pode20f4605_e788_4634_afb1_46803baef04f.slice. 
May 10 09:56:08.539103 kubelet[2796]: I0510 09:56:08.538852 2796 scope.go:117] "RemoveContainer" containerID="65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4" May 10 09:56:08.540412 containerd[1546]: time="2025-05-10T09:56:08.540284968Z" level=info msg="RemoveContainer for \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\"" May 10 09:56:08.565465 containerd[1546]: time="2025-05-10T09:56:08.565405461Z" level=info msg="RemoveContainer for \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" returns successfully" May 10 09:56:08.565761 kubelet[2796]: I0510 09:56:08.565655 2796 scope.go:117] "RemoveContainer" containerID="65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4" May 10 09:56:08.566739 containerd[1546]: time="2025-05-10T09:56:08.566113492Z" level=error msg="ContainerStatus for \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\": not found" May 10 09:56:08.570296 kubelet[2796]: E0510 09:56:08.570251 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\": not found" containerID="65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4" May 10 09:56:08.570383 kubelet[2796]: I0510 09:56:08.570289 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4"} err="failed to get container status \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\": rpc error: code = NotFound desc = an error occurred when try to find container \"65d4264560d794a7230c8846fd1a26a3719845ca2f0ed4e8f748801a837d9bd4\": not found" May 10 
09:56:08.570383 kubelet[2796]: I0510 09:56:08.570375 2796 scope.go:117] "RemoveContainer" containerID="bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e" May 10 09:56:08.572111 containerd[1546]: time="2025-05-10T09:56:08.572085954Z" level=info msg="RemoveContainer for \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\"" May 10 09:56:08.577907 containerd[1546]: time="2025-05-10T09:56:08.577642260Z" level=info msg="RemoveContainer for \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" returns successfully" May 10 09:56:08.578237 kubelet[2796]: I0510 09:56:08.578199 2796 scope.go:117] "RemoveContainer" containerID="ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2" May 10 09:56:08.581421 containerd[1546]: time="2025-05-10T09:56:08.581386017Z" level=info msg="RemoveContainer for \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\"" May 10 09:56:08.586009 containerd[1546]: time="2025-05-10T09:56:08.585964436Z" level=info msg="RemoveContainer for \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\" returns successfully" May 10 09:56:08.586172 kubelet[2796]: I0510 09:56:08.586144 2796 scope.go:117] "RemoveContainer" containerID="9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb" May 10 09:56:08.588036 containerd[1546]: time="2025-05-10T09:56:08.588007396Z" level=info msg="RemoveContainer for \"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\"" May 10 09:56:08.604128 containerd[1546]: time="2025-05-10T09:56:08.604087071Z" level=info msg="RemoveContainer for \"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\" returns successfully" May 10 09:56:08.604286 kubelet[2796]: I0510 09:56:08.604261 2796 scope.go:117] "RemoveContainer" containerID="f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0" May 10 09:56:08.605643 containerd[1546]: time="2025-05-10T09:56:08.605614777Z" level=info msg="RemoveContainer for 
\"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\"" May 10 09:56:08.609317 containerd[1546]: time="2025-05-10T09:56:08.609290574Z" level=info msg="RemoveContainer for \"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\" returns successfully" May 10 09:56:08.609497 kubelet[2796]: I0510 09:56:08.609475 2796 scope.go:117] "RemoveContainer" containerID="a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d" May 10 09:56:08.610496 containerd[1546]: time="2025-05-10T09:56:08.610473822Z" level=info msg="RemoveContainer for \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\"" May 10 09:56:08.613706 containerd[1546]: time="2025-05-10T09:56:08.613669222Z" level=info msg="RemoveContainer for \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\" returns successfully" May 10 09:56:08.613880 kubelet[2796]: I0510 09:56:08.613793 2796 scope.go:117] "RemoveContainer" containerID="bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e" May 10 09:56:08.614036 containerd[1546]: time="2025-05-10T09:56:08.614000364Z" level=error msg="ContainerStatus for \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\": not found" May 10 09:56:08.614167 kubelet[2796]: E0510 09:56:08.614137 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\": not found" containerID="bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e" May 10 09:56:08.614200 kubelet[2796]: I0510 09:56:08.614169 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e"} 
err="failed to get container status \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd96218588ca5b3985a601d201747fa9bef218f2fc7b7a9ff8caf2fa41ced59e\": not found" May 10 09:56:08.614200 kubelet[2796]: I0510 09:56:08.614191 2796 scope.go:117] "RemoveContainer" containerID="ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2" May 10 09:56:08.614435 containerd[1546]: time="2025-05-10T09:56:08.614394106Z" level=error msg="ContainerStatus for \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\": not found" May 10 09:56:08.614572 kubelet[2796]: E0510 09:56:08.614546 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\": not found" containerID="ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2" May 10 09:56:08.614632 kubelet[2796]: I0510 09:56:08.614578 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2"} err="failed to get container status \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ece6c948fb0194101dc8eb24a3baa66dd465b571812b40cb6d055bfc989137b2\": not found" May 10 09:56:08.614632 kubelet[2796]: I0510 09:56:08.614603 2796 scope.go:117] "RemoveContainer" containerID="9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb" May 10 09:56:08.614794 containerd[1546]: time="2025-05-10T09:56:08.614764834Z" level=error msg="ContainerStatus for 
\"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\": not found" May 10 09:56:08.614939 kubelet[2796]: E0510 09:56:08.614919 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\": not found" containerID="9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb" May 10 09:56:08.615013 kubelet[2796]: I0510 09:56:08.614944 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb"} err="failed to get container status \"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e06be2fd08f7e77e13ec215621198d88c378f9534aa2708a21ea7dd87e132cb\": not found" May 10 09:56:08.615013 kubelet[2796]: I0510 09:56:08.614965 2796 scope.go:117] "RemoveContainer" containerID="f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0" May 10 09:56:08.615129 containerd[1546]: time="2025-05-10T09:56:08.615100084Z" level=error msg="ContainerStatus for \"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\": not found" May 10 09:56:08.615231 kubelet[2796]: E0510 09:56:08.615207 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\": not found" 
containerID="f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0" May 10 09:56:08.615280 kubelet[2796]: I0510 09:56:08.615227 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0"} err="failed to get container status \"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f32a255f85cb5bb84996134022d5919cbd500b89ac9de1cc2c78b0e73f55b6a0\": not found" May 10 09:56:08.615280 kubelet[2796]: I0510 09:56:08.615241 2796 scope.go:117] "RemoveContainer" containerID="a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d" May 10 09:56:08.615416 containerd[1546]: time="2025-05-10T09:56:08.615387573Z" level=error msg="ContainerStatus for \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\": not found" May 10 09:56:08.615499 kubelet[2796]: E0510 09:56:08.615476 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\": not found" containerID="a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d" May 10 09:56:08.615499 kubelet[2796]: I0510 09:56:08.615494 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d"} err="failed to get container status \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0daedec6dedd6536af4a42779c0c5e4cbe7abd93ed6562f4d5f71a52bc1519d\": not found" May 10 
09:56:08.718483 systemd[1]: var-lib-kubelet-pods-9ed9a172\x2d0a80\x2d45e5\x2daba8\x2d5c8afc5944f1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 09:56:08.718613 systemd[1]: var-lib-kubelet-pods-9ed9a172\x2d0a80\x2d45e5\x2daba8\x2d5c8afc5944f1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 09:56:08.718698 systemd[1]: var-lib-kubelet-pods-e20f4605\x2de788\x2d4634\x2dafb1\x2d46803baef04f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8gssl.mount: Deactivated successfully. May 10 09:56:08.718778 systemd[1]: var-lib-kubelet-pods-9ed9a172\x2d0a80\x2d45e5\x2daba8\x2d5c8afc5944f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfj5zk.mount: Deactivated successfully. May 10 09:56:09.280662 kubelet[2796]: E0510 09:56:09.280621 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:56:09.328576 kubelet[2796]: E0510 09:56:09.328523 2796 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 09:56:09.634459 sshd[4409]: Connection closed by 10.0.0.1 port 35020 May 10 09:56:09.635083 sshd-session[4406]: pam_unix(sshd:session): session closed for user core May 10 09:56:09.647335 systemd[1]: sshd@24-10.0.0.32:22-10.0.0.1:35020.service: Deactivated successfully. May 10 09:56:09.649617 systemd[1]: session-25.scope: Deactivated successfully. May 10 09:56:09.650606 systemd-logind[1532]: Session 25 logged out. Waiting for processes to exit. May 10 09:56:09.652916 systemd[1]: Started sshd@25-10.0.0.32:22-10.0.0.1:36590.service - OpenSSH per-connection server daemon (10.0.0.1:36590). May 10 09:56:09.653853 systemd-logind[1532]: Removed session 25. 
May 10 09:56:09.704231 sshd[4555]: Accepted publickey for core from 10.0.0.1 port 36590 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:56:09.705688 sshd-session[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:56:09.710320 systemd-logind[1532]: New session 26 of user core. May 10 09:56:09.725990 systemd[1]: Started session-26.scope - Session 26 of User core. May 10 09:56:10.270603 sshd[4558]: Connection closed by 10.0.0.1 port 36590 May 10 09:56:10.272758 sshd-session[4555]: pam_unix(sshd:session): session closed for user core May 10 09:56:10.281949 kubelet[2796]: E0510 09:56:10.281810 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 09:56:10.285686 kubelet[2796]: I0510 09:56:10.284114 2796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed9a172-0a80-45e5-aba8-5c8afc5944f1" path="/var/lib/kubelet/pods/9ed9a172-0a80-45e5-aba8-5c8afc5944f1/volumes" May 10 09:56:10.285686 kubelet[2796]: I0510 09:56:10.285147 2796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e20f4605-e788-4634-afb1-46803baef04f" path="/var/lib/kubelet/pods/e20f4605-e788-4634-afb1-46803baef04f/volumes" May 10 09:56:10.286657 systemd[1]: sshd@25-10.0.0.32:22-10.0.0.1:36590.service: Deactivated successfully. May 10 09:56:10.292256 systemd[1]: session-26.scope: Deactivated successfully. May 10 09:56:10.293814 systemd-logind[1532]: Session 26 logged out. Waiting for processes to exit. 
May 10 09:56:10.297727 kubelet[2796]: I0510 09:56:10.296706 2796 topology_manager.go:215] "Topology Admit Handler" podUID="1c17f4f9-314a-4b68-b20a-32bb30c9c413" podNamespace="kube-system" podName="cilium-8rcbh" May 10 09:56:10.297899 kubelet[2796]: E0510 09:56:10.297825 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e20f4605-e788-4634-afb1-46803baef04f" containerName="cilium-operator" May 10 09:56:10.297899 kubelet[2796]: E0510 09:56:10.297850 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ed9a172-0a80-45e5-aba8-5c8afc5944f1" containerName="cilium-agent" May 10 09:56:10.298041 kubelet[2796]: E0510 09:56:10.298016 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ed9a172-0a80-45e5-aba8-5c8afc5944f1" containerName="mount-bpf-fs" May 10 09:56:10.298041 kubelet[2796]: E0510 09:56:10.298037 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ed9a172-0a80-45e5-aba8-5c8afc5944f1" containerName="clean-cilium-state" May 10 09:56:10.298144 kubelet[2796]: E0510 09:56:10.298046 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ed9a172-0a80-45e5-aba8-5c8afc5944f1" containerName="mount-cgroup" May 10 09:56:10.298144 kubelet[2796]: E0510 09:56:10.298053 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ed9a172-0a80-45e5-aba8-5c8afc5944f1" containerName="apply-sysctl-overwrites" May 10 09:56:10.301921 kubelet[2796]: I0510 09:56:10.298536 2796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e20f4605-e788-4634-afb1-46803baef04f" containerName="cilium-operator" May 10 09:56:10.301921 kubelet[2796]: I0510 09:56:10.298560 2796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed9a172-0a80-45e5-aba8-5c8afc5944f1" containerName="cilium-agent" May 10 09:56:10.300293 systemd[1]: Started sshd@26-10.0.0.32:22-10.0.0.1:36592.service - OpenSSH per-connection server daemon (10.0.0.1:36592). 
May 10 09:56:10.308288 systemd-logind[1532]: Removed session 26. May 10 09:56:10.324185 systemd[1]: Created slice kubepods-burstable-pod1c17f4f9_314a_4b68_b20a_32bb30c9c413.slice - libcontainer container kubepods-burstable-pod1c17f4f9_314a_4b68_b20a_32bb30c9c413.slice. May 10 09:56:10.351556 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 36592 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:56:10.353639 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:56:10.358771 systemd-logind[1532]: New session 27 of user core. May 10 09:56:10.370036 systemd[1]: Started session-27.scope - Session 27 of User core. May 10 09:56:10.421437 sshd[4572]: Connection closed by 10.0.0.1 port 36592 May 10 09:56:10.421899 sshd-session[4569]: pam_unix(sshd:session): session closed for user core May 10 09:56:10.440936 systemd[1]: sshd@26-10.0.0.32:22-10.0.0.1:36592.service: Deactivated successfully. May 10 09:56:10.442810 systemd[1]: session-27.scope: Deactivated successfully. May 10 09:56:10.444617 systemd-logind[1532]: Session 27 logged out. Waiting for processes to exit. May 10 09:56:10.446050 systemd[1]: Started sshd@27-10.0.0.32:22-10.0.0.1:36598.service - OpenSSH per-connection server daemon (10.0.0.1:36598). May 10 09:56:10.447325 systemd-logind[1532]: Removed session 27. 
May 10 09:56:10.470064 kubelet[2796]: I0510 09:56:10.470013 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c17f4f9-314a-4b68-b20a-32bb30c9c413-lib-modules\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470064 kubelet[2796]: I0510 09:56:10.470058 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c17f4f9-314a-4b68-b20a-32bb30c9c413-bpf-maps\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470231 kubelet[2796]: I0510 09:56:10.470076 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c17f4f9-314a-4b68-b20a-32bb30c9c413-cni-path\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470231 kubelet[2796]: I0510 09:56:10.470091 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c17f4f9-314a-4b68-b20a-32bb30c9c413-etc-cni-netd\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470231 kubelet[2796]: I0510 09:56:10.470107 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c17f4f9-314a-4b68-b20a-32bb30c9c413-host-proc-sys-kernel\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470231 kubelet[2796]: I0510 09:56:10.470128 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c17f4f9-314a-4b68-b20a-32bb30c9c413-hubble-tls\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470231 kubelet[2796]: I0510 09:56:10.470143 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdqbt\" (UniqueName: \"kubernetes.io/projected/1c17f4f9-314a-4b68-b20a-32bb30c9c413-kube-api-access-fdqbt\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470231 kubelet[2796]: I0510 09:56:10.470195 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c17f4f9-314a-4b68-b20a-32bb30c9c413-cilium-run\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470403 kubelet[2796]: I0510 09:56:10.470215 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c17f4f9-314a-4b68-b20a-32bb30c9c413-hostproc\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470403 kubelet[2796]: I0510 09:56:10.470232 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c17f4f9-314a-4b68-b20a-32bb30c9c413-xtables-lock\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470403 kubelet[2796]: I0510 09:56:10.470256 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c17f4f9-314a-4b68-b20a-32bb30c9c413-clustermesh-secrets\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470403 kubelet[2796]: I0510 09:56:10.470279 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c17f4f9-314a-4b68-b20a-32bb30c9c413-cilium-ipsec-secrets\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470403 kubelet[2796]: I0510 09:56:10.470294 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c17f4f9-314a-4b68-b20a-32bb30c9c413-host-proc-sys-net\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470403 kubelet[2796]: I0510 09:56:10.470311 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c17f4f9-314a-4b68-b20a-32bb30c9c413-cilium-cgroup\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.470572 kubelet[2796]: I0510 09:56:10.470366 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c17f4f9-314a-4b68-b20a-32bb30c9c413-cilium-config-path\") pod \"cilium-8rcbh\" (UID: \"1c17f4f9-314a-4b68-b20a-32bb30c9c413\") " pod="kube-system/cilium-8rcbh"
May 10 09:56:10.501918 sshd[4578]: Accepted publickey for core from 10.0.0.1 port 36598 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:56:10.503634 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:56:10.508918 systemd-logind[1532]: New session 28 of user core.
May 10 09:56:10.514057 systemd[1]: Started session-28.scope - Session 28 of User core.
May 10 09:56:10.629957 kubelet[2796]: E0510 09:56:10.629619 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:10.630379 containerd[1546]: time="2025-05-10T09:56:10.630345093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rcbh,Uid:1c17f4f9-314a-4b68-b20a-32bb30c9c413,Namespace:kube-system,Attempt:0,}"
May 10 09:56:10.654385 containerd[1546]: time="2025-05-10T09:56:10.654311103Z" level=info msg="connecting to shim 2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01" address="unix:///run/containerd/s/236a2475e0dbf5de222c769e8556bef6425a2ce4c2a5f738aae3f3f6c5c44384" namespace=k8s.io protocol=ttrpc version=3
May 10 09:56:10.680187 systemd[1]: Started cri-containerd-2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01.scope - libcontainer container 2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01.
May 10 09:56:10.709353 containerd[1546]: time="2025-05-10T09:56:10.709274703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rcbh,Uid:1c17f4f9-314a-4b68-b20a-32bb30c9c413,Namespace:kube-system,Attempt:0,} returns sandbox id \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\""
May 10 09:56:10.710296 kubelet[2796]: E0510 09:56:10.710267 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:10.713095 containerd[1546]: time="2025-05-10T09:56:10.713053360Z" level=info msg="CreateContainer within sandbox \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 09:56:10.721619 containerd[1546]: time="2025-05-10T09:56:10.721538123Z" level=info msg="Container 5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791: CDI devices from CRI Config.CDIDevices: []"
May 10 09:56:10.731211 containerd[1546]: time="2025-05-10T09:56:10.731153192Z" level=info msg="CreateContainer within sandbox \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791\""
May 10 09:56:10.731795 containerd[1546]: time="2025-05-10T09:56:10.731741423Z" level=info msg="StartContainer for \"5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791\""
May 10 09:56:10.732734 containerd[1546]: time="2025-05-10T09:56:10.732707686Z" level=info msg="connecting to shim 5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791" address="unix:///run/containerd/s/236a2475e0dbf5de222c769e8556bef6425a2ce4c2a5f738aae3f3f6c5c44384" protocol=ttrpc version=3
May 10 09:56:10.755049 systemd[1]: Started cri-containerd-5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791.scope - libcontainer container 5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791.
May 10 09:56:10.785755 containerd[1546]: time="2025-05-10T09:56:10.785702763Z" level=info msg="StartContainer for \"5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791\" returns successfully"
May 10 09:56:10.794471 systemd[1]: cri-containerd-5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791.scope: Deactivated successfully.
May 10 09:56:10.795994 containerd[1546]: time="2025-05-10T09:56:10.795940779Z" level=info msg="received exit event container_id:\"5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791\" id:\"5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791\" pid:4650 exited_at:{seconds:1746870970 nanos:795566996}"
May 10 09:56:10.801687 containerd[1546]: time="2025-05-10T09:56:10.801647434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791\" id:\"5d2d4d376e65fa4dd00dd28051f312ede14e0916a05f1e5a8c045fc820f9b791\" pid:4650 exited_at:{seconds:1746870970 nanos:795566996}"
May 10 09:56:11.552710 kubelet[2796]: E0510 09:56:11.552675 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:11.555052 containerd[1546]: time="2025-05-10T09:56:11.555020379Z" level=info msg="CreateContainer within sandbox \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 10 09:56:11.574207 containerd[1546]: time="2025-05-10T09:56:11.574155068Z" level=info msg="Container 8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221: CDI devices from CRI Config.CDIDevices: []"
May 10 09:56:11.580852 containerd[1546]: time="2025-05-10T09:56:11.580813903Z" level=info msg="CreateContainer within sandbox \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221\""
May 10 09:56:11.581323 containerd[1546]: time="2025-05-10T09:56:11.581274862Z" level=info msg="StartContainer for \"8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221\""
May 10 09:56:11.582093 containerd[1546]: time="2025-05-10T09:56:11.582050832Z" level=info msg="connecting to shim 8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221" address="unix:///run/containerd/s/236a2475e0dbf5de222c769e8556bef6425a2ce4c2a5f738aae3f3f6c5c44384" protocol=ttrpc version=3
May 10 09:56:11.606012 systemd[1]: Started cri-containerd-8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221.scope - libcontainer container 8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221.
May 10 09:56:11.634008 containerd[1546]: time="2025-05-10T09:56:11.633970936Z" level=info msg="StartContainer for \"8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221\" returns successfully"
May 10 09:56:11.640559 systemd[1]: cri-containerd-8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221.scope: Deactivated successfully.
May 10 09:56:11.641879 containerd[1546]: time="2025-05-10T09:56:11.641809581Z" level=info msg="received exit event container_id:\"8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221\" id:\"8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221\" pid:4696 exited_at:{seconds:1746870971 nanos:641536109}"
May 10 09:56:11.641973 containerd[1546]: time="2025-05-10T09:56:11.641837324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221\" id:\"8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221\" pid:4696 exited_at:{seconds:1746870971 nanos:641536109}"
May 10 09:56:11.669366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ab76667a5a2c30e961d557ce5d98d9421024a5b891b0e8acee058dc62a10221-rootfs.mount: Deactivated successfully.
May 10 09:56:12.557356 kubelet[2796]: E0510 09:56:12.557310 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:12.559081 containerd[1546]: time="2025-05-10T09:56:12.559030882Z" level=info msg="CreateContainer within sandbox \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 10 09:56:12.571407 containerd[1546]: time="2025-05-10T09:56:12.571024939Z" level=info msg="Container 83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8: CDI devices from CRI Config.CDIDevices: []"
May 10 09:56:12.580181 containerd[1546]: time="2025-05-10T09:56:12.580131535Z" level=info msg="CreateContainer within sandbox \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8\""
May 10 09:56:12.580681 containerd[1546]: time="2025-05-10T09:56:12.580651024Z" level=info msg="StartContainer for \"83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8\""
May 10 09:56:12.582032 containerd[1546]: time="2025-05-10T09:56:12.581932726Z" level=info msg="connecting to shim 83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8" address="unix:///run/containerd/s/236a2475e0dbf5de222c769e8556bef6425a2ce4c2a5f738aae3f3f6c5c44384" protocol=ttrpc version=3
May 10 09:56:12.609054 systemd[1]: Started cri-containerd-83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8.scope - libcontainer container 83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8.
May 10 09:56:12.652622 containerd[1546]: time="2025-05-10T09:56:12.652574521Z" level=info msg="StartContainer for \"83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8\" returns successfully"
May 10 09:56:12.653197 systemd[1]: cri-containerd-83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8.scope: Deactivated successfully.
May 10 09:56:12.654131 containerd[1546]: time="2025-05-10T09:56:12.654050704Z" level=info msg="received exit event container_id:\"83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8\" id:\"83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8\" pid:4740 exited_at:{seconds:1746870972 nanos:653641405}"
May 10 09:56:12.654558 containerd[1546]: time="2025-05-10T09:56:12.654526171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8\" id:\"83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8\" pid:4740 exited_at:{seconds:1746870972 nanos:653641405}"
May 10 09:56:12.679210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83a9bc828b44a5b50ce434a100e5f6ec08b33e06e1f2d339eee7bf790acd2bb8-rootfs.mount: Deactivated successfully.
May 10 09:56:13.562977 kubelet[2796]: E0510 09:56:13.562886 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:13.565287 containerd[1546]: time="2025-05-10T09:56:13.565241046Z" level=info msg="CreateContainer within sandbox \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 09:56:13.576406 containerd[1546]: time="2025-05-10T09:56:13.575593458Z" level=info msg="Container 829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836: CDI devices from CRI Config.CDIDevices: []"
May 10 09:56:13.584396 containerd[1546]: time="2025-05-10T09:56:13.584329309Z" level=info msg="CreateContainer within sandbox \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836\""
May 10 09:56:13.584908 containerd[1546]: time="2025-05-10T09:56:13.584851084Z" level=info msg="StartContainer for \"829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836\""
May 10 09:56:13.585971 containerd[1546]: time="2025-05-10T09:56:13.585921073Z" level=info msg="connecting to shim 829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836" address="unix:///run/containerd/s/236a2475e0dbf5de222c769e8556bef6425a2ce4c2a5f738aae3f3f6c5c44384" protocol=ttrpc version=3
May 10 09:56:13.614085 systemd[1]: Started cri-containerd-829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836.scope - libcontainer container 829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836.
May 10 09:56:13.645457 systemd[1]: cri-containerd-829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836.scope: Deactivated successfully.
May 10 09:56:13.645880 containerd[1546]: time="2025-05-10T09:56:13.645776446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836\" id:\"829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836\" pid:4778 exited_at:{seconds:1746870973 nanos:645501893}"
May 10 09:56:13.647319 containerd[1546]: time="2025-05-10T09:56:13.647293947Z" level=info msg="received exit event container_id:\"829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836\" id:\"829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836\" pid:4778 exited_at:{seconds:1746870973 nanos:645501893}"
May 10 09:56:13.656766 containerd[1546]: time="2025-05-10T09:56:13.656669849Z" level=info msg="StartContainer for \"829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836\" returns successfully"
May 10 09:56:13.670328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-829d33cd9d7109ecbd3dd101fca99972baa9be33be228e616b1075008151a836-rootfs.mount: Deactivated successfully.
May 10 09:56:14.329944 kubelet[2796]: E0510 09:56:14.329816 2796 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 09:56:14.571099 kubelet[2796]: E0510 09:56:14.571056 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:14.576399 containerd[1546]: time="2025-05-10T09:56:14.576343399Z" level=info msg="CreateContainer within sandbox \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 09:56:14.587356 containerd[1546]: time="2025-05-10T09:56:14.587215241Z" level=info msg="Container 76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc: CDI devices from CRI Config.CDIDevices: []"
May 10 09:56:14.591197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144724975.mount: Deactivated successfully.
May 10 09:56:14.597913 containerd[1546]: time="2025-05-10T09:56:14.597834414Z" level=info msg="CreateContainer within sandbox \"2db27f714c409802e932777b793da71b3cc5f262f07ab317a368fe7df5cc3e01\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc\""
May 10 09:56:14.598467 containerd[1546]: time="2025-05-10T09:56:14.598413396Z" level=info msg="StartContainer for \"76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc\""
May 10 09:56:14.599582 containerd[1546]: time="2025-05-10T09:56:14.599556193Z" level=info msg="connecting to shim 76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc" address="unix:///run/containerd/s/236a2475e0dbf5de222c769e8556bef6425a2ce4c2a5f738aae3f3f6c5c44384" protocol=ttrpc version=3
May 10 09:56:14.620998 systemd[1]: Started cri-containerd-76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc.scope - libcontainer container 76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc.
May 10 09:56:14.664706 containerd[1546]: time="2025-05-10T09:56:14.664650571Z" level=info msg="StartContainer for \"76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc\" returns successfully"
May 10 09:56:14.745490 containerd[1546]: time="2025-05-10T09:56:14.745425349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc\" id:\"8f8428040b7f532271a27c611b326052a312c81f292e40db5692d63ecbc4196d\" pid:4848 exited_at:{seconds:1746870974 nanos:745029586}"
May 10 09:56:15.090934 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 10 09:56:15.578113 kubelet[2796]: E0510 09:56:15.578076 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:15.592411 kubelet[2796]: I0510 09:56:15.592345 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8rcbh" podStartSLOduration=5.5923266080000005 podStartE2EDuration="5.592326608s" podCreationTimestamp="2025-05-10 09:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:56:15.592198564 +0000 UTC m=+91.405054437" watchObservedRunningTime="2025-05-10 09:56:15.592326608 +0000 UTC m=+91.405182481"
May 10 09:56:15.600254 kubelet[2796]: I0510 09:56:15.600208 2796 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T09:56:15Z","lastTransitionTime":"2025-05-10T09:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 10 09:56:16.631150 kubelet[2796]: E0510 09:56:16.631017 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:16.884188 containerd[1546]: time="2025-05-10T09:56:16.884043946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc\" id:\"96dc0782b1b0d67bae6ab3d7ea0a13ed53876981c0e4bb0b6ae9394a1e3152c8\" pid:5021 exit_status:1 exited_at:{seconds:1746870976 nanos:883649535}"
May 10 09:56:17.279893 kubelet[2796]: E0510 09:56:17.279702 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:18.387240 systemd-networkd[1464]: lxc_health: Link UP
May 10 09:56:18.387810 systemd-networkd[1464]: lxc_health: Gained carrier
May 10 09:56:18.634507 kubelet[2796]: E0510 09:56:18.634455 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:19.004559 containerd[1546]: time="2025-05-10T09:56:19.004491747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc\" id:\"0da3f6a932dab2f473ea43dd64e546401b4e7747fbf7e168d51917d7a73edd43\" pid:5407 exited_at:{seconds:1746870979 nanos:4201785}"
May 10 09:56:19.585322 kubelet[2796]: E0510 09:56:19.585266 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:20.383191 systemd-networkd[1464]: lxc_health: Gained IPv6LL
May 10 09:56:20.607082 kubelet[2796]: E0510 09:56:20.607038 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 09:56:21.113432 containerd[1546]: time="2025-05-10T09:56:21.113372293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc\" id:\"062015270544135538449163026ee944903f86db50ac6ab4fce0e3bfd3788f28\" pid:5445 exited_at:{seconds:1746870981 nanos:113056414}"
May 10 09:56:23.211232 containerd[1546]: time="2025-05-10T09:56:23.211161489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc\" id:\"6ba37f0f912c39fe550e0d1785ead829162d7ffaad2009f06f4bbf1d2cab4246\" pid:5477 exited_at:{seconds:1746870983 nanos:210659916}"
May 10 09:56:25.299956 containerd[1546]: time="2025-05-10T09:56:25.299892075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76c2453445587887b6259c5ba9dafd8874110717f6f382a2ee61854a1b182dcc\" id:\"0857da73b8567db98e8b2988e36db5d718c7abac70476aae3c74442f978341c0\" pid:5502 exited_at:{seconds:1746870985 nanos:299490553}"
May 10 09:56:25.319283 sshd[4581]: Connection closed by 10.0.0.1 port 36598
May 10 09:56:25.319841 sshd-session[4578]: pam_unix(sshd:session): session closed for user core
May 10 09:56:25.325564 systemd[1]: sshd@27-10.0.0.32:22-10.0.0.1:36598.service: Deactivated successfully.
May 10 09:56:25.328639 systemd[1]: session-28.scope: Deactivated successfully.
May 10 09:56:25.329529 systemd-logind[1532]: Session 28 logged out. Waiting for processes to exit.
May 10 09:56:25.330745 systemd-logind[1532]: Removed session 28.