May 13 10:01:49.894031 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 08:42:12 -00 2025
May 13 10:01:49.894055 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=149a30fd2ffdbc3f620e76792215da346cc1a8b964894e8a61f45888248ff7ba
May 13 10:01:49.894064 kernel: BIOS-provided physical RAM map:
May 13 10:01:49.894071 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 10:01:49.894077 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 10:01:49.894083 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 10:01:49.894091 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 13 10:01:49.894100 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 13 10:01:49.894106 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 13 10:01:49.894113 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 13 10:01:49.894119 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 10:01:49.894126 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 10:01:49.894132 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 10:01:49.894148 kernel: NX (Execute Disable) protection: active
May 13 10:01:49.894158 kernel: APIC: Static calls initialized
May 13 10:01:49.894165 kernel: SMBIOS 2.8 present.
May 13 10:01:49.894172 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 13 10:01:49.894179 kernel: DMI: Memory slots populated: 1/1
May 13 10:01:49.894186 kernel: Hypervisor detected: KVM
May 13 10:01:49.894193 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 10:01:49.894200 kernel: kvm-clock: using sched offset of 3252042181 cycles
May 13 10:01:49.894207 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 10:01:49.894214 kernel: tsc: Detected 2794.748 MHz processor
May 13 10:01:49.894222 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 10:01:49.894231 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 10:01:49.894238 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 13 10:01:49.894245 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 13 10:01:49.894253 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 10:01:49.894260 kernel: Using GB pages for direct mapping
May 13 10:01:49.894267 kernel: ACPI: Early table checksum verification disabled
May 13 10:01:49.894274 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 13 10:01:49.894281 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:49.894290 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:49.894298 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:49.894305 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 13 10:01:49.894312 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:49.894319 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:49.894326 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:49.894333 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:01:49.894340 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 13 10:01:49.894353 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 13 10:01:49.894360 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 13 10:01:49.894367 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 13 10:01:49.894374 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 13 10:01:49.894382 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 13 10:01:49.894389 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 13 10:01:49.894398 kernel: No NUMA configuration found
May 13 10:01:49.894487 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 13 10:01:49.894495 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
May 13 10:01:49.894502 kernel: Zone ranges:
May 13 10:01:49.894509 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 10:01:49.894517 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 13 10:01:49.894524 kernel: Normal empty
May 13 10:01:49.894531 kernel: Device empty
May 13 10:01:49.894538 kernel: Movable zone start for each node
May 13 10:01:49.894546 kernel: Early memory node ranges
May 13 10:01:49.894556 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 10:01:49.894564 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 13 10:01:49.894574 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 13 10:01:49.894584 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 10:01:49.894593 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 10:01:49.894602 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 13 10:01:49.894609 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 10:01:49.894616 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 10:01:49.894624 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 10:01:49.894633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 10:01:49.894641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 10:01:49.894649 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 10:01:49.894656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 10:01:49.894663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 10:01:49.894670 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 10:01:49.894678 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 10:01:49.894685 kernel: TSC deadline timer available
May 13 10:01:49.894692 kernel: CPU topo: Max. logical packages: 1
May 13 10:01:49.894702 kernel: CPU topo: Max. logical dies: 1
May 13 10:01:49.894709 kernel: CPU topo: Max. dies per package: 1
May 13 10:01:49.894716 kernel: CPU topo: Max. threads per core: 1
May 13 10:01:49.894723 kernel: CPU topo: Num. cores per package: 4
May 13 10:01:49.894730 kernel: CPU topo: Num. threads per package: 4
May 13 10:01:49.894738 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 13 10:01:49.894745 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 10:01:49.894752 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 10:01:49.894759 kernel: kvm-guest: setup PV sched yield
May 13 10:01:49.894767 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 13 10:01:49.894776 kernel: Booting paravirtualized kernel on KVM
May 13 10:01:49.894783 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 10:01:49.894791 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 10:01:49.894798 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 13 10:01:49.894806 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 13 10:01:49.894813 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 10:01:49.894820 kernel: kvm-guest: PV spinlocks enabled
May 13 10:01:49.894827 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 10:01:49.894836 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=149a30fd2ffdbc3f620e76792215da346cc1a8b964894e8a61f45888248ff7ba
May 13 10:01:49.894846 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 10:01:49.894854 kernel: random: crng init done
May 13 10:01:49.894861 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 10:01:49.894869 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 10:01:49.894876 kernel: Fallback order for Node 0: 0
May 13 10:01:49.894883 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
May 13 10:01:49.894890 kernel: Policy zone: DMA32
May 13 10:01:49.894898 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 10:01:49.894907 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 10:01:49.894915 kernel: ftrace: allocating 40071 entries in 157 pages
May 13 10:01:49.894922 kernel: ftrace: allocated 157 pages with 5 groups
May 13 10:01:49.894929 kernel: Dynamic Preempt: voluntary
May 13 10:01:49.894937 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 10:01:49.894949 kernel: rcu: RCU event tracing is enabled.
May 13 10:01:49.894957 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 10:01:49.894964 kernel: Trampoline variant of Tasks RCU enabled.
May 13 10:01:49.894972 kernel: Rude variant of Tasks RCU enabled.
May 13 10:01:49.894979 kernel: Tracing variant of Tasks RCU enabled.
May 13 10:01:49.894989 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 10:01:49.894996 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 10:01:49.895003 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 10:01:49.895011 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 10:01:49.895018 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 10:01:49.895026 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 10:01:49.895033 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 10:01:49.895050 kernel: Console: colour VGA+ 80x25
May 13 10:01:49.895057 kernel: printk: legacy console [ttyS0] enabled
May 13 10:01:49.895065 kernel: ACPI: Core revision 20240827
May 13 10:01:49.895073 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 10:01:49.895083 kernel: APIC: Switch to symmetric I/O mode setup
May 13 10:01:49.895090 kernel: x2apic enabled
May 13 10:01:49.895098 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 10:01:49.895106 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 10:01:49.895114 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 10:01:49.895123 kernel: kvm-guest: setup PV IPIs
May 13 10:01:49.895131 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 10:01:49.895149 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 13 10:01:49.895157 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 13 10:01:49.895165 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 10:01:49.895173 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 10:01:49.895180 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 10:01:49.895188 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 10:01:49.895196 kernel: Spectre V2 : Mitigation: Retpolines
May 13 10:01:49.895206 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 10:01:49.895213 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 10:01:49.895221 kernel: RETBleed: Mitigation: untrained return thunk
May 13 10:01:49.895229 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 10:01:49.895237 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 10:01:49.895245 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 10:01:49.895253 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 10:01:49.895261 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 10:01:49.895271 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 10:01:49.895278 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 10:01:49.895286 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 10:01:49.895294 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 10:01:49.895302 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 10:01:49.895309 kernel: Freeing SMP alternatives memory: 32K
May 13 10:01:49.895317 kernel: pid_max: default: 32768 minimum: 301
May 13 10:01:49.895325 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 10:01:49.895332 kernel: landlock: Up and running.
May 13 10:01:49.895342 kernel: SELinux: Initializing.
May 13 10:01:49.895349 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 10:01:49.895357 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 10:01:49.895365 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 10:01:49.895373 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 10:01:49.895381 kernel: ... version: 0
May 13 10:01:49.895388 kernel: ... bit width: 48
May 13 10:01:49.895396 kernel: ... generic registers: 6
May 13 10:01:49.895415 kernel: ... value mask: 0000ffffffffffff
May 13 10:01:49.895436 kernel: ... max period: 00007fffffffffff
May 13 10:01:49.895443 kernel: ... fixed-purpose events: 0
May 13 10:01:49.895451 kernel: ... event mask: 000000000000003f
May 13 10:01:49.895459 kernel: signal: max sigframe size: 1776
May 13 10:01:49.895466 kernel: rcu: Hierarchical SRCU implementation.
May 13 10:01:49.895474 kernel: rcu: Max phase no-delay instances is 400.
May 13 10:01:49.895482 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 10:01:49.895490 kernel: smp: Bringing up secondary CPUs ...
May 13 10:01:49.895498 kernel: smpboot: x86: Booting SMP configuration:
May 13 10:01:49.895508 kernel: .... node #0, CPUs: #1 #2 #3
May 13 10:01:49.895515 kernel: smp: Brought up 1 node, 4 CPUs
May 13 10:01:49.895523 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 13 10:01:49.895531 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9948K rodata, 54420K init, 2548K bss, 136904K reserved, 0K cma-reserved)
May 13 10:01:49.895539 kernel: devtmpfs: initialized
May 13 10:01:49.895546 kernel: x86/mm: Memory block size: 128MB
May 13 10:01:49.895554 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 10:01:49.895563 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 10:01:49.895573 kernel: pinctrl core: initialized pinctrl subsystem
May 13 10:01:49.895586 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 10:01:49.895595 kernel: audit: initializing netlink subsys (disabled)
May 13 10:01:49.895602 kernel: audit: type=2000 audit(1747130507.030:1): state=initialized audit_enabled=0 res=1
May 13 10:01:49.895610 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 10:01:49.895618 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 10:01:49.895626 kernel: cpuidle: using governor menu
May 13 10:01:49.895633 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 10:01:49.895641 kernel: dca service started, version 1.12.1
May 13 10:01:49.895649 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 13 10:01:49.895659 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 13 10:01:49.895667 kernel: PCI: Using configuration type 1 for base access
May 13 10:01:49.895674 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 10:01:49.895682 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 10:01:49.895690 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 10:01:49.895698 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 10:01:49.895706 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 10:01:49.895713 kernel: ACPI: Added _OSI(Module Device)
May 13 10:01:49.895721 kernel: ACPI: Added _OSI(Processor Device)
May 13 10:01:49.895731 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 10:01:49.895739 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 10:01:49.895746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 10:01:49.895754 kernel: ACPI: Interpreter enabled
May 13 10:01:49.895762 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 10:01:49.895769 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 10:01:49.895777 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 10:01:49.895785 kernel: PCI: Using E820 reservations for host bridge windows
May 13 10:01:49.895792 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 10:01:49.895802 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 10:01:49.896051 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 10:01:49.896180 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 10:01:49.896291 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 10:01:49.896302 kernel: PCI host bridge to bus 0000:00
May 13 10:01:49.896469 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 10:01:49.896578 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 10:01:49.896697 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 10:01:49.896799 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 10:01:49.896900 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 10:01:49.897001 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 13 10:01:49.897147 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 10:01:49.897317 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 13 10:01:49.897560 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 13 10:01:49.897692 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 13 10:01:49.897806 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 13 10:01:49.897919 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 13 10:01:49.898031 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 10:01:49.898160 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 10:01:49.898275 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
May 13 10:01:49.898393 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 13 10:01:49.898524 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 13 10:01:49.898687 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 13 10:01:49.898812 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
May 13 10:01:49.898924 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 13 10:01:49.899035 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 13 10:01:49.899166 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 13 10:01:49.899285 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
May 13 10:01:49.899398 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
May 13 10:01:49.899541 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
May 13 10:01:49.899746 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 13 10:01:49.899872 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 13 10:01:49.899988 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 10:01:49.900142 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 13 10:01:49.900263 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
May 13 10:01:49.900374 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
May 13 10:01:49.900517 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 13 10:01:49.900641 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 13 10:01:49.900653 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 10:01:49.900661 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 10:01:49.900669 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 10:01:49.900681 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 10:01:49.900688 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 10:01:49.900697 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 10:01:49.900704 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 10:01:49.900712 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 10:01:49.900720 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 10:01:49.900728 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 10:01:49.900736 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 10:01:49.900744 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 10:01:49.900754 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 10:01:49.900762 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 10:01:49.900769 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 10:01:49.900777 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 10:01:49.900785 kernel: iommu: Default domain type: Translated
May 13 10:01:49.900793 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 10:01:49.900800 kernel: PCI: Using ACPI for IRQ routing
May 13 10:01:49.900808 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 10:01:49.900816 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 10:01:49.900826 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 13 10:01:49.900950 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 10:01:49.901062 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 10:01:49.901181 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 10:01:49.901192 kernel: vgaarb: loaded
May 13 10:01:49.901200 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 10:01:49.901208 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 10:01:49.901216 kernel: clocksource: Switched to clocksource kvm-clock
May 13 10:01:49.901228 kernel: VFS: Disk quotas dquot_6.6.0
May 13 10:01:49.901236 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 10:01:49.901244 kernel: pnp: PnP ACPI init
May 13 10:01:49.901366 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 13 10:01:49.901378 kernel: pnp: PnP ACPI: found 6 devices
May 13 10:01:49.901386 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 10:01:49.901394 kernel: NET: Registered PF_INET protocol family
May 13 10:01:49.901414 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 10:01:49.901426 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 10:01:49.901433 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 10:01:49.901441 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 10:01:49.901449 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 10:01:49.901457 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 10:01:49.901465 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 10:01:49.901473 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 10:01:49.901481 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 10:01:49.901489 kernel: NET: Registered PF_XDP protocol family
May 13 10:01:49.901609 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 10:01:49.901717 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 10:01:49.901820 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 10:01:49.901921 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 10:01:49.902021 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 10:01:49.902123 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 13 10:01:49.902140 kernel: PCI: CLS 0 bytes, default 64
May 13 10:01:49.902150 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 13 10:01:49.902161 kernel: Initialise system trusted keyrings
May 13 10:01:49.902169 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 10:01:49.902176 kernel: Key type asymmetric registered
May 13 10:01:49.902184 kernel: Asymmetric key parser 'x509' registered
May 13 10:01:49.902192 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 10:01:49.902200 kernel: io scheduler mq-deadline registered
May 13 10:01:49.902208 kernel: io scheduler kyber registered
May 13 10:01:49.902216 kernel: io scheduler bfq registered
May 13 10:01:49.902223 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 10:01:49.902234 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 10:01:49.902242 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 10:01:49.902249 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 10:01:49.902257 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 10:01:49.902265 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 10:01:49.902273 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 10:01:49.902281 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 10:01:49.902289 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 10:01:49.902423 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 10:01:49.902535 kernel: rtc_cmos 00:04: registered as rtc0
May 13 10:01:49.902546 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 10:01:49.902659 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T10:01:49 UTC (1747130509)
May 13 10:01:49.902765 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 13 10:01:49.902776 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 10:01:49.902784 kernel: NET: Registered PF_INET6 protocol family
May 13 10:01:49.902792 kernel: Segment Routing with IPv6
May 13 10:01:49.902800 kernel: In-situ OAM (IOAM) with IPv6
May 13 10:01:49.902811 kernel: NET: Registered PF_PACKET protocol family
May 13 10:01:49.902819 kernel: Key type dns_resolver registered
May 13 10:01:49.902826 kernel: IPI shorthand broadcast: enabled
May 13 10:01:49.902834 kernel: sched_clock: Marking stable (2809003535, 111725249)->(2937786368, -17057584)
May 13 10:01:49.902842 kernel: registered taskstats version 1
May 13 10:01:49.902850 kernel: Loading compiled-in X.509 certificates
May 13 10:01:49.902858 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: 5c3cbe19210297b32e5cab2ad262e7b96f0f791c'
May 13 10:01:49.902866 kernel: Demotion targets for Node 0: null
May 13 10:01:49.902874 kernel: Key type .fscrypt registered
May 13 10:01:49.902883 kernel: Key type fscrypt-provisioning registered
May 13 10:01:49.902891 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 10:01:49.902899 kernel: ima: Allocated hash algorithm: sha1
May 13 10:01:49.902907 kernel: ima: No architecture policies found
May 13 10:01:49.902915 kernel: clk: Disabling unused clocks
May 13 10:01:49.902922 kernel: Warning: unable to open an initial console.
May 13 10:01:49.902931 kernel: Freeing unused kernel image (initmem) memory: 54420K
May 13 10:01:49.902939 kernel: Write protecting the kernel read-only data: 24576k
May 13 10:01:49.902946 kernel: Freeing unused kernel image (rodata/data gap) memory: 292K
May 13 10:01:49.902956 kernel: Run /init as init process
May 13 10:01:49.902964 kernel: with arguments:
May 13 10:01:49.902971 kernel: /init
May 13 10:01:49.902979 kernel: with environment:
May 13 10:01:49.902986 kernel: HOME=/
May 13 10:01:49.902994 kernel: TERM=linux
May 13 10:01:49.903002 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 10:01:49.903011 systemd[1]: Successfully made /usr/ read-only.
May 13 10:01:49.903024 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 10:01:49.903045 systemd[1]: Detected virtualization kvm.
May 13 10:01:49.903053 systemd[1]: Detected architecture x86-64.
May 13 10:01:49.903062 systemd[1]: Running in initrd.
May 13 10:01:49.903070 systemd[1]: No hostname configured, using default hostname.
May 13 10:01:49.903080 systemd[1]: Hostname set to .
May 13 10:01:49.903090 systemd[1]: Initializing machine ID from VM UUID.
May 13 10:01:49.903099 systemd[1]: Queued start job for default target initrd.target.
May 13 10:01:49.903108 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 10:01:49.903116 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 10:01:49.903125 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 10:01:49.903143 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 10:01:49.903152 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 10:01:49.903163 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 10:01:49.903173 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 10:01:49.903182 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 10:01:49.903190 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 10:01:49.903199 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 10:01:49.903208 systemd[1]: Reached target paths.target - Path Units.
May 13 10:01:49.903216 systemd[1]: Reached target slices.target - Slice Units.
May 13 10:01:49.903225 systemd[1]: Reached target swap.target - Swaps.
May 13 10:01:49.903235 systemd[1]: Reached target timers.target - Timer Units.
May 13 10:01:49.903244 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 10:01:49.903252 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 10:01:49.903261 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 10:01:49.903270 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 10:01:49.903280 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 10:01:49.903289 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 10:01:49.903298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 10:01:49.903308 systemd[1]: Reached target sockets.target - Socket Units.
May 13 10:01:49.903316 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 10:01:49.903325 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 10:01:49.903334 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 10:01:49.903345 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 13 10:01:49.903356 systemd[1]: Starting systemd-fsck-usr.service...
May 13 10:01:49.903365 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 10:01:49.903373 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 10:01:49.903382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:01:49.903390 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 10:01:49.903399 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 10:01:49.903423 systemd[1]: Finished systemd-fsck-usr.service.
May 13 10:01:49.903454 systemd-journald[220]: Collecting audit messages is disabled.
May 13 10:01:49.903476 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 10:01:49.903488 systemd-journald[220]: Journal started
May 13 10:01:49.903507 systemd-journald[220]: Runtime Journal (/run/log/journal/091570ad045546aaad30fa09c49109a1) is 6M, max 48.6M, 42.5M free.
May 13 10:01:49.897505 systemd-modules-load[223]: Inserted module 'overlay'
May 13 10:01:49.908789 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 10:01:49.910881 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 10:01:49.942248 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 10:01:49.942276 kernel: Bridge firewalling registered
May 13 10:01:49.927323 systemd-modules-load[223]: Inserted module 'br_netfilter'
May 13 10:01:49.944726 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 10:01:49.948554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:01:49.950044 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 10:01:49.954402 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 10:01:49.955347 systemd-tmpfiles[237]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 13 10:01:49.958669 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 10:01:49.971033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 10:01:49.971924 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 10:01:49.981728 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 10:01:49.982392 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 10:01:49.984670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 10:01:49.989619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 10:01:50.013260 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 10:01:50.042310 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=149a30fd2ffdbc3f620e76792215da346cc1a8b964894e8a61f45888248ff7ba
May 13 10:01:50.052323 systemd-resolved[257]: Positive Trust Anchors:
May 13 10:01:50.052345 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 10:01:50.052384 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 10:01:50.055045 systemd-resolved[257]: Defaulting to hostname 'linux'.
May 13 10:01:50.056456 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 10:01:50.062803 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 10:01:50.152458 kernel: SCSI subsystem initialized
May 13 10:01:50.161434 kernel: Loading iSCSI transport class v2.0-870.
May 13 10:01:50.171430 kernel: iscsi: registered transport (tcp)
May 13 10:01:50.195448 kernel: iscsi: registered transport (qla4xxx)
May 13 10:01:50.195503 kernel: QLogic iSCSI HBA Driver
May 13 10:01:50.217190 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 10:01:50.248237 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 10:01:50.250140 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 10:01:50.312914 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 10:01:50.315522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 10:01:50.383459 kernel: raid6: avx2x4 gen() 30481 MB/s
May 13 10:01:50.400462 kernel: raid6: avx2x2 gen() 31195 MB/s
May 13 10:01:50.417688 kernel: raid6: avx2x1 gen() 24826 MB/s
May 13 10:01:50.417761 kernel: raid6: using algorithm avx2x2 gen() 31195 MB/s
May 13 10:01:50.435546 kernel: raid6: .... xor() 17778 MB/s, rmw enabled
May 13 10:01:50.435638 kernel: raid6: using avx2x2 recovery algorithm
May 13 10:01:50.457448 kernel: xor: automatically using best checksumming function avx
May 13 10:01:50.625471 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 10:01:50.634196 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 10:01:50.636229 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 10:01:50.675816 systemd-udevd[472]: Using default interface naming scheme 'v255'.
May 13 10:01:50.680963 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 10:01:50.683924 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 10:01:50.722745 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
May 13 10:01:50.750999 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 10:01:50.752896 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 10:01:50.824252 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 10:01:50.828520 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 10:01:50.863439 kernel: cryptd: max_cpu_qlen set to 1000
May 13 10:01:50.867495 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 13 10:01:50.873547 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 10:01:50.885443 kernel: libata version 3.00 loaded.
May 13 10:01:50.888444 kernel: AES CTR mode by8 optimization enabled
May 13 10:01:50.897164 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 10:01:50.897213 kernel: GPT:9289727 != 19775487
May 13 10:01:50.897227 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 10:01:50.897241 kernel: GPT:9289727 != 19775487
May 13 10:01:50.897254 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 10:01:50.897267 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:01:50.902444 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 13 10:01:50.904440 kernel: ahci 0000:00:1f.2: version 3.0
May 13 10:01:50.906437 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 10:01:50.913498 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 13 10:01:50.913729 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 13 10:01:50.913899 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 10:01:50.925504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 10:01:50.930451 kernel: scsi host0: ahci
May 13 10:01:50.930675 kernel: scsi host1: ahci
May 13 10:01:50.925900 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:01:50.931356 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:01:50.933727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:01:50.938789 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 10:01:50.941358 kernel: scsi host2: ahci
May 13 10:01:50.943424 kernel: scsi host3: ahci
May 13 10:01:50.943586 kernel: scsi host4: ahci
May 13 10:01:50.946226 kernel: scsi host5: ahci
May 13 10:01:50.946436 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
May 13 10:01:50.946449 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
May 13 10:01:50.948095 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
May 13 10:01:50.948130 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
May 13 10:01:50.949426 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
May 13 10:01:50.949457 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
May 13 10:01:50.951225 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 10:01:50.981031 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 10:01:51.008856 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 10:01:51.010434 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 10:01:51.013259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:01:51.037710 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 10:01:51.040801 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 10:01:51.061475 disk-uuid[632]: Primary Header is updated.
May 13 10:01:51.061475 disk-uuid[632]: Secondary Entries is updated.
May 13 10:01:51.061475 disk-uuid[632]: Secondary Header is updated.
May 13 10:01:51.065008 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:01:51.069445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:01:51.257444 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 10:01:51.257523 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 10:01:51.257537 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 10:01:51.258432 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 10:01:51.259722 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 10:01:51.259736 kernel: ata3.00: applying bridge limits
May 13 10:01:51.260985 kernel: ata3.00: configured for UDMA/100
May 13 10:01:51.261440 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 10:01:51.265462 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 10:01:51.265491 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 10:01:51.322456 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 10:01:51.322739 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 10:01:51.348440 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 10:01:51.699278 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 10:01:51.700331 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 10:01:51.701939 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 10:01:51.702289 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 10:01:51.707607 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 10:01:51.741628 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 10:01:52.072400 disk-uuid[633]: The operation has completed successfully.
May 13 10:01:52.074072 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:01:52.111506 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 10:01:52.111651 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 10:01:52.143209 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 10:01:52.170754 sh[663]: Success
May 13 10:01:52.189034 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 10:01:52.189132 kernel: device-mapper: uevent: version 1.0.3
May 13 10:01:52.190308 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 13 10:01:52.200446 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 13 10:01:52.236397 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 10:01:52.240604 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 10:01:52.254388 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 10:01:52.262626 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 13 10:01:52.262656 kernel: BTRFS: device fsid ffca113e-5abc-43bf-8c02-7bfa2cadf852 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (675)
May 13 10:01:52.263925 kernel: BTRFS info (device dm-0): first mount of filesystem ffca113e-5abc-43bf-8c02-7bfa2cadf852
May 13 10:01:52.263948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 10:01:52.264809 kernel: BTRFS info (device dm-0): using free-space-tree
May 13 10:01:52.270363 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 10:01:52.271389 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 13 10:01:52.272472 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 10:01:52.276640 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 10:01:52.279315 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 10:01:52.320437 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (708)
May 13 10:01:52.322824 kernel: BTRFS info (device vda6): first mount of filesystem a7c22072-ef43-49a5-be01-ac31542d1f05
May 13 10:01:52.322884 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 10:01:52.322895 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:01:52.329424 kernel: BTRFS info (device vda6): last unmount of filesystem a7c22072-ef43-49a5-be01-ac31542d1f05
May 13 10:01:52.330735 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 10:01:52.332756 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 10:01:52.452546 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 10:01:52.455153 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 10:01:52.457335 ignition[753]: Ignition 2.21.0
May 13 10:01:52.457348 ignition[753]: Stage: fetch-offline
May 13 10:01:52.457394 ignition[753]: no configs at "/usr/lib/ignition/base.d"
May 13 10:01:52.457437 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:01:52.457568 ignition[753]: parsed url from cmdline: ""
May 13 10:01:52.457576 ignition[753]: no config URL provided
May 13 10:01:52.457582 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
May 13 10:01:52.457594 ignition[753]: no config at "/usr/lib/ignition/user.ign"
May 13 10:01:52.457633 ignition[753]: op(1): [started] loading QEMU firmware config module
May 13 10:01:52.457642 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 10:01:52.468302 ignition[753]: op(1): [finished] loading QEMU firmware config module
May 13 10:01:52.468327 ignition[753]: QEMU firmware config was not found. Ignoring...
May 13 10:01:52.505646 systemd-networkd[850]: lo: Link UP
May 13 10:01:52.505656 systemd-networkd[850]: lo: Gained carrier
May 13 10:01:52.507123 systemd-networkd[850]: Enumeration completed
May 13 10:01:52.507214 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 10:01:52.507469 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:01:52.507473 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 10:01:52.509272 systemd-networkd[850]: eth0: Link UP
May 13 10:01:52.509276 systemd-networkd[850]: eth0: Gained carrier
May 13 10:01:52.509289 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:01:52.512144 systemd[1]: Reached target network.target - Network.
May 13 10:01:52.526447 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 10:01:52.526519 ignition[753]: parsing config with SHA512: f531b3ce232524860e2f6c1df7ff26c9a8845026e024b8f6365e4b67540d5be2fa05040ea536fcf2ebdfac464ce7f9ef36805dbbe5b8108f9d7f515fdc258851
May 13 10:01:52.529713 unknown[753]: fetched base config from "system"
May 13 10:01:52.529725 unknown[753]: fetched user config from "qemu"
May 13 10:01:52.530035 ignition[753]: fetch-offline: fetch-offline passed
May 13 10:01:52.530093 ignition[753]: Ignition finished successfully
May 13 10:01:52.532358 systemd-resolved[257]: Detected conflict on linux IN A 10.0.0.18
May 13 10:01:52.532365 systemd-resolved[257]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
May 13 10:01:52.532775 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 10:01:52.533826 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 10:01:52.535711 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 10:01:52.597113 ignition[857]: Ignition 2.21.0
May 13 10:01:52.597129 ignition[857]: Stage: kargs
May 13 10:01:52.597297 ignition[857]: no configs at "/usr/lib/ignition/base.d"
May 13 10:01:52.597311 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:01:52.598235 ignition[857]: kargs: kargs passed
May 13 10:01:52.598288 ignition[857]: Ignition finished successfully
May 13 10:01:52.603037 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 10:01:52.606113 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 10:01:52.642998 ignition[865]: Ignition 2.21.0
May 13 10:01:52.643012 ignition[865]: Stage: disks
May 13 10:01:52.643146 ignition[865]: no configs at "/usr/lib/ignition/base.d"
May 13 10:01:52.643155 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:01:52.647643 ignition[865]: disks: disks passed
May 13 10:01:52.647707 ignition[865]: Ignition finished successfully
May 13 10:01:52.651944 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 10:01:52.654070 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 10:01:52.654675 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 10:01:52.654998 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 10:01:52.655338 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 10:01:52.661152 systemd[1]: Reached target basic.target - Basic System.
May 13 10:01:52.663886 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 10:01:52.697489 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 13 10:01:52.916215 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 10:01:52.920361 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 10:01:53.037442 kernel: EXT4-fs (vda9): mounted filesystem b5db2f60-6937-4957-9fc1-2577b44e4198 r/w with ordered data mode. Quota mode: none.
May 13 10:01:53.038233 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 10:01:53.039518 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 10:01:53.041921 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 10:01:53.043670 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 10:01:53.044930 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 10:01:53.044973 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 10:01:53.044995 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 10:01:53.062791 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 10:01:53.065161 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 10:01:53.068905 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (883)
May 13 10:01:53.071225 kernel: BTRFS info (device vda6): first mount of filesystem a7c22072-ef43-49a5-be01-ac31542d1f05
May 13 10:01:53.071247 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 10:01:53.071264 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:01:53.075889 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 10:01:53.112742 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
May 13 10:01:53.116188 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
May 13 10:01:53.120348 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
May 13 10:01:53.124396 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 10:01:53.230337 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 10:01:53.232713 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 10:01:53.234753 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 10:01:53.257444 kernel: BTRFS info (device vda6): last unmount of filesystem a7c22072-ef43-49a5-be01-ac31542d1f05
May 13 10:01:53.261807 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 10:01:53.272643 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 10:01:53.297339 ignition[996]: INFO : Ignition 2.21.0
May 13 10:01:53.297339 ignition[996]: INFO : Stage: mount
May 13 10:01:53.300323 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 10:01:53.300323 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:01:53.303811 ignition[996]: INFO : mount: mount passed
May 13 10:01:53.304602 ignition[996]: INFO : Ignition finished successfully
May 13 10:01:53.307718 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 10:01:53.309463 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 10:01:53.331242 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 10:01:53.366446 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1010)
May 13 10:01:53.368914 kernel: BTRFS info (device vda6): first mount of filesystem a7c22072-ef43-49a5-be01-ac31542d1f05
May 13 10:01:53.368941 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 10:01:53.368955 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:01:53.374246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 10:01:53.435628 ignition[1027]: INFO : Ignition 2.21.0
May 13 10:01:53.435628 ignition[1027]: INFO : Stage: files
May 13 10:01:53.438391 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 10:01:53.438391 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:01:53.440751 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping
May 13 10:01:53.440751 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 10:01:53.440751 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 10:01:53.444821 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 10:01:53.444821 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 10:01:53.444821 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 10:01:53.443722 unknown[1027]: wrote ssh authorized keys file for user: core
May 13 10:01:53.450269 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 10:01:53.450269 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 10:01:53.491482 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 10:01:53.635639 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 10:01:53.635639 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 10:01:53.640465 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 10:01:53.993654 systemd-networkd[850]: eth0: Gained IPv6LL
May 13 10:01:54.003008 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 10:01:54.174806 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 10:01:54.174806 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 10:01:54.179517 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 10:01:54.179517 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 10:01:54.179517 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 10:01:54.179517 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 10:01:54.179517 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 10:01:54.179517 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 10:01:54.179517 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 10:01:54.194159 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 10:01:54.194159 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 10:01:54.194159 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 10:01:54.194159 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 10:01:54.194159 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 10:01:54.194159 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 13 10:01:54.705671 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 10:01:55.626962 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 10:01:55.626962 ignition[1027]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 10:01:55.631470 ignition[1027]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 10:01:55.638275 ignition[1027]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 10:01:55.638275 ignition[1027]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 10:01:55.638275 ignition[1027]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 13 10:01:55.643575 ignition[1027]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 10:01:55.643575 ignition[1027]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 10:01:55.643575 ignition[1027]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 13 10:01:55.643575 ignition[1027]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 10:01:55.665859 ignition[1027]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 10:01:55.672202 ignition[1027]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 10:01:55.674228 ignition[1027]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 10:01:55.674228 ignition[1027]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 10:01:55.674228 ignition[1027]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 10:01:55.674228 ignition[1027]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 10:01:55.674228 ignition[1027]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 10:01:55.674228 ignition[1027]: INFO : files: files passed
May 13 10:01:55.674228 ignition[1027]: INFO : Ignition finished successfully
May 13 10:01:55.678961 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 10:01:55.682061 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 10:01:55.686325 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 10:01:55.704310 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 10:01:55.704494 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 10:01:55.710145 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 10:01:55.714728 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 10:01:55.714728 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 10:01:55.718352 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 10:01:55.718713 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 10:01:55.722432 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 10:01:55.725683 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 10:01:55.794389 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 10:01:55.795774 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 10:01:55.798668 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 10:01:55.799007 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 10:01:55.801759 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 10:01:55.802951 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 10:01:55.838267 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 10:01:55.840598 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 10:01:55.865399 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 10:01:55.866132 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 10:01:55.866534 systemd[1]: Stopped target timers.target - Timer Units.
May 13 10:01:55.867045 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 10:01:55.867219 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 10:01:55.873709 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 10:01:55.874113 systemd[1]: Stopped target basic.target - Basic System.
May 13 10:01:55.874490 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 10:01:55.875017 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 10:01:55.875376 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 10:01:55.875915 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 13 10:01:55.876300 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 10:01:55.876827 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 10:01:55.877217 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 10:01:55.877743 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 10:01:55.878107 systemd[1]: Stopped target swap.target - Swaps.
May 13 10:01:55.878444 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 10:01:55.878595 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 10:01:55.879363 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 10:01:55.879935 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 10:01:55.880243 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 10:01:55.880443 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 10:01:55.906932 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 10:01:55.907106 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 10:01:55.912616 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 10:01:55.912803 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 10:01:55.915343 systemd[1]: Stopped target paths.target - Path Units.
May 13 10:01:55.918849 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 10:01:55.923552 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 10:01:55.927035 systemd[1]: Stopped target slices.target - Slice Units.
May 13 10:01:55.927848 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 10:01:55.929921 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 10:01:55.930070 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 10:01:55.931814 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 10:01:55.931934 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 10:01:55.933945 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 10:01:55.934117 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 10:01:55.936155 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 10:01:55.936285 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 10:01:55.939947 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 10:01:55.946830 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 10:01:55.948061 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 10:01:55.948364 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 10:01:55.950860 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 10:01:55.951083 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 10:01:55.958739 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 10:01:55.958893 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 10:01:55.974715 ignition[1083]: INFO : Ignition 2.21.0
May 13 10:01:55.974715 ignition[1083]: INFO : Stage: umount
May 13 10:01:55.977534 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 10:01:55.977534 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:01:55.977534 ignition[1083]: INFO : umount: umount passed
May 13 10:01:55.977534 ignition[1083]: INFO : Ignition finished successfully
May 13 10:01:55.975865 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 10:01:55.982844 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 10:01:55.983047 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 10:01:55.986016 systemd[1]: Stopped target network.target - Network.
May 13 10:01:55.986866 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 10:01:55.986958 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 10:01:55.987278 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 10:01:55.987337 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 10:01:55.987771 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 10:01:55.987842 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 10:01:55.988143 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 10:01:55.988193 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 10:01:55.988903 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 10:01:55.989384 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 10:01:56.003094 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 10:01:56.003291 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 10:01:56.009463 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 10:01:56.009956 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 10:01:56.010169 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 10:01:56.014860 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 10:01:56.016240 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 13 10:01:56.016813 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 10:01:56.016867 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 10:01:56.020483 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 10:01:56.021245 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 10:01:56.021311 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 10:01:56.022029 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 10:01:56.022096 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 10:01:56.027059 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 10:01:56.027134 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 10:01:56.027873 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 10:01:56.027956 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 10:01:56.033070 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 10:01:56.035048 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 10:01:56.035143 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 10:01:56.047357 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 10:01:56.047587 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 10:01:56.111917 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 10:01:56.112120 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 10:01:56.113065 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 10:01:56.113108 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 10:01:56.144090 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 10:01:56.144153 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 10:01:56.144662 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 10:01:56.144717 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 10:01:56.145364 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 10:01:56.145427 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 10:01:56.146184 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 10:01:56.146230 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 10:01:56.155727 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 10:01:56.156169 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 13 10:01:56.156222 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 13 10:01:56.160472 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 10:01:56.160519 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 10:01:56.163899 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 10:01:56.164009 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 10:01:56.185450 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 10:01:56.185545 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 10:01:56.186046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 10:01:56.186095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:01:56.192543 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 13 10:01:56.192605 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 13 10:01:56.192647 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 10:01:56.192699 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 10:01:56.195544 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 10:01:56.195685 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 10:01:56.319421 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 10:01:56.319582 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 10:01:56.320518 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 10:01:56.322662 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 10:01:56.322727 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 10:01:56.323838 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 10:01:56.355034 systemd[1]: Switching root.
May 13 10:01:56.405348 systemd-journald[220]: Journal stopped
May 13 10:01:57.881911 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 13 10:01:57.881998 kernel: SELinux: policy capability network_peer_controls=1
May 13 10:01:57.882017 kernel: SELinux: policy capability open_perms=1
May 13 10:01:57.882031 kernel: SELinux: policy capability extended_socket_class=1
May 13 10:01:57.882049 kernel: SELinux: policy capability always_check_network=0
May 13 10:01:57.882063 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 10:01:57.882077 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 10:01:57.882091 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 10:01:57.882104 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 10:01:57.882119 kernel: SELinux: policy capability userspace_initial_context=0
May 13 10:01:57.882133 kernel: audit: type=1403 audit(1747130516.893:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 10:01:57.882171 systemd[1]: Successfully loaded SELinux policy in 49.640ms.
May 13 10:01:57.882204 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.175ms.
May 13 10:01:57.882223 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 10:01:57.882239 systemd[1]: Detected virtualization kvm.
May 13 10:01:57.882253 systemd[1]: Detected architecture x86-64.
May 13 10:01:57.882266 systemd[1]: Detected first boot.
May 13 10:01:57.882285 systemd[1]: Initializing machine ID from VM UUID.
May 13 10:01:57.882299 kernel: Guest personality initialized and is inactive
May 13 10:01:57.882318 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 10:01:57.882332 kernel: Initialized host personality
May 13 10:01:57.882346 zram_generator::config[1130]: No configuration found.
May 13 10:01:57.882368 kernel: NET: Registered PF_VSOCK protocol family
May 13 10:01:57.882383 systemd[1]: Populated /etc with preset unit settings.
May 13 10:01:57.882400 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 10:01:57.882497 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 10:01:57.882513 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 10:01:57.882529 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 10:01:57.882545 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 10:01:57.882562 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 10:01:57.882582 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 10:01:57.882598 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 10:01:57.882617 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 10:01:57.882634 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 10:01:57.882651 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 10:01:57.882666 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 10:01:57.882683 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 10:01:57.882699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 10:01:57.882736 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 10:01:57.882759 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 10:01:57.882776 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 10:01:57.882793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 10:01:57.882809 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 10:01:57.882826 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 10:01:57.882843 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 10:01:57.882859 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 10:01:57.882877 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 10:01:57.882898 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 10:01:57.882915 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 10:01:57.882942 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 10:01:57.882960 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 10:01:57.882976 systemd[1]: Reached target slices.target - Slice Units.
May 13 10:01:57.882992 systemd[1]: Reached target swap.target - Swaps.
May 13 10:01:57.883009 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 10:01:57.883029 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 10:01:57.883048 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 10:01:57.883068 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 10:01:57.883091 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 10:01:57.883108 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 10:01:57.883124 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 10:01:57.883141 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 10:01:57.883157 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 10:01:57.883173 systemd[1]: Mounting media.mount - External Media Directory...
May 13 10:01:57.883190 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 10:01:57.883205 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 10:01:57.883223 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 10:01:57.883239 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 10:01:57.883256 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 10:01:57.883271 systemd[1]: Reached target machines.target - Containers.
May 13 10:01:57.883287 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 10:01:57.883302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 10:01:57.883319 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 10:01:57.883335 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 10:01:57.883353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 10:01:57.883368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 10:01:57.883383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 10:01:57.883398 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 10:01:57.883430 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 10:01:57.883446 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 10:01:57.883462 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 10:01:57.883480 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 10:01:57.883500 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 10:01:57.883516 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 10:01:57.883533 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 10:01:57.883550 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 10:01:57.883566 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 10:01:57.883582 kernel: loop: module loaded
May 13 10:01:57.883599 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 10:01:57.883615 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 10:01:57.883632 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 10:01:57.883651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 10:01:57.883668 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 10:01:57.883684 systemd[1]: Stopped verity-setup.service.
May 13 10:01:57.883701 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 10:01:57.883720 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 10:01:57.883737 kernel: ACPI: bus type drm_connector registered
May 13 10:01:57.883752 kernel: fuse: init (API version 7.41)
May 13 10:01:57.883766 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 10:01:57.883782 systemd[1]: Mounted media.mount - External Media Directory.
May 13 10:01:57.883818 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 10:01:57.883837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 10:01:57.883853 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 10:01:57.883870 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 10:01:57.883886 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 10:01:57.883902 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 10:01:57.883918 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 10:01:57.883971 systemd-journald[1205]: Collecting audit messages is disabled.
May 13 10:01:57.884002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 10:01:57.884022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 10:01:57.884038 systemd-journald[1205]: Journal started
May 13 10:01:57.884074 systemd-journald[1205]: Runtime Journal (/run/log/journal/091570ad045546aaad30fa09c49109a1) is 6M, max 48.6M, 42.5M free.
May 13 10:01:57.485152 systemd[1]: Queued start job for default target multi-user.target.
May 13 10:01:57.510265 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 10:01:57.510919 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 10:01:57.886486 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 10:01:57.888817 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 10:01:57.889075 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 10:01:57.890643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 10:01:57.890861 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 10:01:57.892668 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 10:01:57.892883 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 10:01:57.894624 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 10:01:57.894843 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 10:01:57.896744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 10:01:57.898388 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 10:01:57.900367 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 10:01:57.902201 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 10:01:57.916710 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 10:01:57.919918 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 10:01:57.922623 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 10:01:57.924239 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 10:01:57.925443 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 10:01:57.928207 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 10:01:57.932465 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 10:01:57.934638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 10:01:57.938435 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 10:01:57.941510 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 10:01:57.943226 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 10:01:57.945974 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 10:01:57.948753 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 10:01:57.950307 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 10:01:57.957597 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 10:01:57.961725 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 10:01:57.962584 systemd-journald[1205]: Time spent on flushing to /var/log/journal/091570ad045546aaad30fa09c49109a1 is 18.945ms for 987 entries.
May 13 10:01:57.962584 systemd-journald[1205]: System Journal (/var/log/journal/091570ad045546aaad30fa09c49109a1) is 8M, max 195.6M, 187.6M free.
May 13 10:01:58.001566 systemd-journald[1205]: Received client request to flush runtime journal.
May 13 10:01:58.002310 kernel: loop0: detected capacity change from 0 to 146240
May 13 10:01:57.965934 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 10:01:57.969443 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 10:01:57.971489 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 10:01:57.973627 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 10:01:57.980882 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 10:01:57.984209 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 10:01:58.012010 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 10:01:58.018463 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 10:01:58.026930 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 13 10:01:58.027457 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 13 10:01:58.027765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 10:01:58.036869 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 10:01:58.040865 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 10:01:58.048445 kernel: loop1: detected capacity change from 0 to 113872
May 13 10:01:58.053717 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 10:01:58.079457 kernel: loop2: detected capacity change from 0 to 205544
May 13 10:01:58.094936 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 10:01:58.097965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 10:01:58.110249 kernel: loop3: detected capacity change from 0 to 146240
May 13 10:01:58.146461 kernel: loop4: detected capacity change from 0 to 113872
May 13 10:01:58.163445 kernel: loop5: detected capacity change from 0 to 205544
May 13 10:01:58.162866 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
May 13 10:01:58.162891 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
May 13 10:01:58.170454 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 10:01:58.190756 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 10:01:58.191443 (sd-merge)[1272]: Merged extensions into '/usr'.
May 13 10:01:58.197492 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 10:01:58.197512 systemd[1]: Reloading...
May 13 10:01:58.309445 zram_generator::config[1300]: No configuration found.
May 13 10:01:58.474998 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 10:01:58.498438 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 10:01:58.563189 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 10:01:58.563277 systemd[1]: Reloading finished in 365 ms.
May 13 10:01:58.598291 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 10:01:58.600163 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 10:01:58.618267 systemd[1]: Starting ensure-sysext.service...
May 13 10:01:58.620696 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 10:01:58.633357 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)...
May 13 10:01:58.633377 systemd[1]: Reloading...
May 13 10:01:58.706438 zram_generator::config[1364]: No configuration found.
May 13 10:01:58.737176 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 13 10:01:58.737643 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 13 10:01:58.738074 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 10:01:58.738367 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 10:01:58.739364 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 10:01:58.739733 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
May 13 10:01:58.739986 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
May 13 10:01:58.745309 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
May 13 10:01:58.745435 systemd-tmpfiles[1338]: Skipping /boot
May 13 10:01:58.763019 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
May 13 10:01:58.763161 systemd-tmpfiles[1338]: Skipping /boot
May 13 10:01:58.960148 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 10:01:59.050199 systemd[1]: Reloading finished in 416 ms.
May 13 10:01:59.075081 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 10:01:59.105430 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 10:01:59.114562 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 10:01:59.117113 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 10:01:59.136429 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 10:01:59.140504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 10:01:59.144334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 10:01:59.148592 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 10:01:59.152534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 10:01:59.152702 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 10:01:59.163694 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 10:01:59.194770 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 10:01:59.197638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 10:01:59.198858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 10:01:59.198997 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 10:01:59.202707 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 10:01:59.203870 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 10:01:59.218291 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 10:01:59.218990 systemd-udevd[1409]: Using default interface naming scheme 'v255'.
May 13 10:01:59.220952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 10:01:59.221227 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 10:01:59.223111 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 10:01:59.223368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 10:01:59.225242 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 10:01:59.225522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 10:01:59.248029 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 10:01:59.253915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 10:01:59.254283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 10:01:59.259669 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 10:01:59.262875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 10:01:59.266786 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 10:01:59.269031 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 10:01:59.269136 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 10:01:59.296513 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 10:01:59.300677 augenrules[1441]: No rules
May 13 10:01:59.297829 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 10:01:59.298851 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 10:01:59.300542 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 10:01:59.303342 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 10:01:59.304214 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 10:01:59.306096 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 10:01:59.308102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 10:01:59.309058 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 10:01:59.310759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 10:01:59.311472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 10:01:59.313140 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 10:01:59.313945 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 10:01:59.332569 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 10:01:59.336276 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 10:01:59.337456 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 10:01:59.340751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 10:01:59.343378 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 10:01:59.351061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 10:01:59.354688 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 10:01:59.355910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 10:01:59.356018 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 10:01:59.361505 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 10:01:59.362724 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 10:01:59.362830 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 10:01:59.364340 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 10:01:59.365989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 10:01:59.373238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 10:01:59.375202 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 10:01:59.375466 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 10:01:59.376955 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 10:01:59.377176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 10:01:59.385562 augenrules[1483]: /sbin/augenrules: No change
May 13 10:01:59.387959 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 10:01:59.394077 systemd[1]: Finished ensure-sysext.service.
May 13 10:01:59.395981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 10:01:59.396377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 10:01:59.396835 augenrules[1511]: No rules
May 13 10:01:59.398153 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 10:01:59.398609 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 10:01:59.402458 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 10:01:59.414233 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 10:01:59.460673 systemd-resolved[1407]: Positive Trust Anchors:
May 13 10:01:59.461013 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 10:01:59.461088 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 10:01:59.474142 systemd-resolved[1407]: Defaulting to hostname 'linux'.
May 13 10:01:59.476121 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 10:01:59.477654 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 10:01:59.507905 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 10:01:59.544509 kernel: mousedev: PS/2 mouse device common for all mice
May 13 10:01:59.549445 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 13 10:01:59.552924 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 10:01:59.555609 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 10:01:59.560436 kernel: ACPI: button: Power Button [PWRF]
May 13 10:01:59.562454 systemd-networkd[1490]: lo: Link UP
May 13 10:01:59.562718 systemd-networkd[1490]: lo: Gained carrier
May 13 10:01:59.565014 systemd-networkd[1490]: Enumeration completed
May 13 10:01:59.565200 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 10:01:59.566661 systemd[1]: Reached target network.target - Network.
May 13 10:01:59.568958 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:01:59.569035 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 10:01:59.569789 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 10:01:59.571821 systemd-networkd[1490]: eth0: Link UP
May 13 10:01:59.572472 systemd-networkd[1490]: eth0: Gained carrier
May 13 10:01:59.572488 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:01:59.572703 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 10:01:59.584310 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 10:01:59.585641 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 10:01:59.586794 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 10:01:59.588063 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 10:01:59.589321 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 13 10:01:59.591300 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 10:01:59.591427 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 13 10:01:59.593903 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 13 10:01:59.593671 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 10:01:59.593703 systemd[1]: Reached target paths.target - Path Units.
May 13 10:01:59.594498 systemd-networkd[1490]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 10:01:59.594688 systemd[1]: Reached target time-set.target - System Time Set.
May 13 10:01:59.595973 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 10:01:59.596451 systemd-timesyncd[1518]: Network configuration changed, trying to establish connection.
May 13 10:01:59.597259 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 10:01:59.598182 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 10:01:59.598465 systemd-timesyncd[1518]: Initial clock synchronization to Tue 2025-05-13 10:01:59.948567 UTC.
May 13 10:01:59.598746 systemd[1]: Reached target timers.target - Timer Units.
May 13 10:01:59.600814 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 10:01:59.603578 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 10:01:59.608317 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 10:01:59.611181 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 10:01:59.612502 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 10:01:59.617547 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 10:01:59.619229 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 10:01:59.621676 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 10:01:59.623913 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 10:01:59.625348 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 10:01:59.631840 systemd[1]: Reached target sockets.target - Socket Units.
May 13 10:01:59.633487 systemd[1]: Reached target basic.target - Basic System.
May 13 10:01:59.634573 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 10:01:59.634616 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 10:01:59.637599 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 10:01:59.640646 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 10:01:59.643659 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 10:01:59.648538 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 10:01:59.654683 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 10:01:59.655801 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 10:01:59.659670 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 13 10:01:59.662693 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 10:01:59.666568 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 10:01:59.668416 jq[1554]: false
May 13 10:01:59.673439 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 10:01:59.677496 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 10:01:59.683111 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 10:01:59.685303 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 10:01:59.685892 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 10:01:59.689073 systemd[1]: Starting update-engine.service - Update Engine...
May 13 10:01:59.694483 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing passwd entry cache
May 13 10:01:59.694515 oslogin_cache_refresh[1556]: Refreshing passwd entry cache
May 13 10:01:59.698324 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 10:01:59.703219 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 10:01:59.711556 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 10:01:59.712155 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 10:01:59.719395 jq[1570]: true
May 13 10:01:59.727428 update_engine[1565]: I20250513 10:01:59.725765 1565 main.cc:92] Flatcar Update Engine starting
May 13 10:01:59.727662 extend-filesystems[1555]: Found loop3
May 13 10:01:59.727662 extend-filesystems[1555]: Found loop4
May 13 10:01:59.727662 extend-filesystems[1555]: Found loop5
May 13 10:01:59.727662 extend-filesystems[1555]: Found sr0
May 13 10:01:59.727662 extend-filesystems[1555]: Found vda
May 13 10:01:59.727662 extend-filesystems[1555]: Found vda1
May 13 10:01:59.727662 extend-filesystems[1555]: Found vda2
May 13 10:01:59.727662 extend-filesystems[1555]: Found vda3
May 13 10:01:59.727662 extend-filesystems[1555]: Found usr
May 13 10:01:59.727662 extend-filesystems[1555]: Found vda4
May 13 10:01:59.727662 extend-filesystems[1555]: Found vda6
May 13 10:01:59.727662 extend-filesystems[1555]: Found vda7
May 13 10:01:59.727662 extend-filesystems[1555]: Found vda9
May 13 10:01:59.727662 extend-filesystems[1555]: Checking size of /dev/vda9
May 13 10:01:59.737136 oslogin_cache_refresh[1556]: Failure getting users, quitting
May 13 10:01:59.749890 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting users, quitting
May 13 10:01:59.749890 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 13 10:01:59.749890 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing group entry cache
May 13 10:01:59.749890 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting groups, quitting
May 13 10:01:59.749890 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 13 10:01:59.732798 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 10:01:59.750036 jq[1579]: true
May 13 10:01:59.737165 oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 13 10:01:59.733161 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 10:01:59.737222 oslogin_cache_refresh[1556]: Refreshing group entry cache
May 13 10:01:59.749243 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 13 10:01:59.745177 oslogin_cache_refresh[1556]: Failure getting groups, quitting
May 13 10:01:59.749546 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 13 10:01:59.745188 oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 13 10:01:59.764993 systemd[1]: motdgen.service: Deactivated successfully.
May 13 10:01:59.767704 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 10:01:59.779655 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 10:01:59.815263 tar[1573]: linux-amd64/helm
May 13 10:01:59.829159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:01:59.880159 dbus-daemon[1548]: [system] SELinux support is enabled
May 13 10:01:59.880386 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 10:01:59.896866 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 10:01:59.896902 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 10:01:59.916057 kernel: kvm_amd: TSC scaling supported
May 13 10:01:59.916109 kernel: kvm_amd: Nested Virtualization enabled
May 13 10:01:59.916137 kernel: kvm_amd: Nested Paging enabled
May 13 10:01:59.916152 kernel: kvm_amd: LBR virtualization supported
May 13 10:01:59.916166 update_engine[1565]: I20250513 10:01:59.916055 1565 update_check_scheduler.cc:74] Next update check in 7m19s
May 13 10:01:59.917262 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 13 10:01:59.917282 kernel: kvm_amd: Virtual GIF supported
May 13 10:01:59.918582 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 10:01:59.918603 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 10:01:59.920012 systemd[1]: Started update-engine.service - Update Engine.
May 13 10:01:59.926661 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 10:01:59.929135 extend-filesystems[1555]: Resized partition /dev/vda9
May 13 10:01:59.968587 extend-filesystems[1614]: resize2fs 1.47.2 (1-Jan-2025)
May 13 10:02:00.050453 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 10:02:00.069044 systemd-logind[1563]: Watching system buttons on /dev/input/event2 (Power Button)
May 13 10:02:00.069077 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 10:02:00.095848 systemd-logind[1563]: New seat seat0.
May 13 10:02:00.102080 bash[1609]: Updated "/home/core/.ssh/authorized_keys"
May 13 10:02:00.102515 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 10:02:00.103658 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 10:02:00.141991 kernel: EDAC MC: Ver: 3.0.0
May 13 10:02:00.109789 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 10:02:00.114142 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 10:02:00.143926 extend-filesystems[1614]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 10:02:00.143926 extend-filesystems[1614]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 10:02:00.143926 extend-filesystems[1614]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 10:02:00.188280 extend-filesystems[1555]: Resized filesystem in /dev/vda9
May 13 10:02:00.145182 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 10:02:00.146283 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 10:02:00.149563 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 10:02:00.267677 sshd_keygen[1580]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 10:02:00.260074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:02:00.380933 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 10:02:00.383430 containerd[1592]: time="2025-05-13T10:02:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 10:02:00.384598 containerd[1592]: time="2025-05-13T10:02:00.384559251Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 13 10:02:00.385147 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 10:02:00.400279 containerd[1592]: time="2025-05-13T10:02:00.400238372Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.886µs" May 13 10:02:00.400279 containerd[1592]: time="2025-05-13T10:02:00.400275705Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 10:02:00.400339 containerd[1592]: time="2025-05-13T10:02:00.400301316Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 10:02:00.400562 containerd[1592]: time="2025-05-13T10:02:00.400537565Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 10:02:00.400594 containerd[1592]: time="2025-05-13T10:02:00.400563259Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 10:02:00.400697 containerd[1592]: time="2025-05-13T10:02:00.400677729Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 10:02:00.400794 containerd[1592]: time="2025-05-13T10:02:00.400761851Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 10:02:00.400794 containerd[1592]: time="2025-05-13T10:02:00.400778772Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 10:02:00.401129 containerd[1592]: time="2025-05-13T10:02:00.401098410Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 10:02:00.401129 containerd[1592]: time="2025-05-13T10:02:00.401124262Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 10:02:00.401177 containerd[1592]: time="2025-05-13T10:02:00.401136498Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 10:02:00.401177 containerd[1592]: time="2025-05-13T10:02:00.401154589Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 10:02:00.401285 containerd[1592]: time="2025-05-13T10:02:00.401267563Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 10:02:00.401585 containerd[1592]: time="2025-05-13T10:02:00.401564174Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 10:02:00.401617 containerd[1592]: time="2025-05-13T10:02:00.401602094Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 10:02:00.401617 containerd[1592]: time="2025-05-13T10:02:00.401612866Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 10:02:00.401710 containerd[1592]: time="2025-05-13T10:02:00.401685212Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 10:02:00.401990 containerd[1592]: time="2025-05-13T10:02:00.401970204Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 10:02:00.402066 containerd[1592]: time="2025-05-13T10:02:00.402049452Z" level=info msg="metadata content store policy set" policy=shared May 13 10:02:00.407084 systemd[1]: issuegen.service: Deactivated successfully. 
May 13 10:02:00.407364 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 10:02:00.410696 containerd[1592]: time="2025-05-13T10:02:00.410659277Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 10:02:00.410746 containerd[1592]: time="2025-05-13T10:02:00.410724021Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 10:02:00.410746 containerd[1592]: time="2025-05-13T10:02:00.410739844Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 10:02:00.410797 containerd[1592]: time="2025-05-13T10:02:00.410781935Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 10:02:00.410797 containerd[1592]: time="2025-05-13T10:02:00.410794589Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 10:02:00.410847 containerd[1592]: time="2025-05-13T10:02:00.410804199Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 10:02:00.410847 containerd[1592]: time="2025-05-13T10:02:00.410818673Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 10:02:00.410847 containerd[1592]: time="2025-05-13T10:02:00.410829727Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 10:02:00.410933 containerd[1592]: time="2025-05-13T10:02:00.410852127Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 10:02:00.410933 containerd[1592]: time="2025-05-13T10:02:00.410862658Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 10:02:00.410933 containerd[1592]: time="2025-05-13T10:02:00.410873827Z" level=info 
msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 10:02:00.410933 containerd[1592]: time="2025-05-13T10:02:00.410887276Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 10:02:00.411034 containerd[1592]: time="2025-05-13T10:02:00.411019575Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 10:02:00.411061 containerd[1592]: time="2025-05-13T10:02:00.411039508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 10:02:00.411061 containerd[1592]: time="2025-05-13T10:02:00.411053281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 10:02:00.411122 containerd[1592]: time="2025-05-13T10:02:00.411063268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 10:02:00.411122 containerd[1592]: time="2025-05-13T10:02:00.411105642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 10:02:00.411122 containerd[1592]: time="2025-05-13T10:02:00.411117982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 10:02:00.411389 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
May 13 10:02:00.413677 containerd[1592]: time="2025-05-13T10:02:00.413646042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 10:02:00.413732 containerd[1592]: time="2025-05-13T10:02:00.413680175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 10:02:00.413732 containerd[1592]: time="2025-05-13T10:02:00.413702649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 10:02:00.413732 containerd[1592]: time="2025-05-13T10:02:00.413719821Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 10:02:00.413793 containerd[1592]: time="2025-05-13T10:02:00.413733678Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 10:02:00.415510 containerd[1592]: time="2025-05-13T10:02:00.415039108Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 10:02:00.415510 containerd[1592]: time="2025-05-13T10:02:00.415100850Z" level=info msg="Start snapshots syncer"
May 13 10:02:00.415510 containerd[1592]: time="2025-05-13T10:02:00.415137494Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 10:02:00.415922 containerd[1592]: time="2025-05-13T10:02:00.415849306Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 10:02:00.416045 containerd[1592]: time="2025-05-13T10:02:00.415944294Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 10:02:00.417509 containerd[1592]: time="2025-05-13T10:02:00.417458273Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 10:02:00.418134 containerd[1592]: time="2025-05-13T10:02:00.418115663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 10:02:00.418224 containerd[1592]: time="2025-05-13T10:02:00.418210430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 10:02:00.418296 containerd[1592]: time="2025-05-13T10:02:00.418283979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 10:02:00.418391 containerd[1592]: time="2025-05-13T10:02:00.418376519Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 10:02:00.418513 containerd[1592]: time="2025-05-13T10:02:00.418496939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 10:02:00.418606 containerd[1592]: time="2025-05-13T10:02:00.418591162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 10:02:00.418698 containerd[1592]: time="2025-05-13T10:02:00.418681255Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 10:02:00.418841 containerd[1592]: time="2025-05-13T10:02:00.418809142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 10:02:00.418941 containerd[1592]: time="2025-05-13T10:02:00.418922002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 10:02:00.419083 containerd[1592]: time="2025-05-13T10:02:00.419005652Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 10:02:00.419180 containerd[1592]: time="2025-05-13T10:02:00.419162392Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 10:02:00.419362 containerd[1592]: time="2025-05-13T10:02:00.419342662Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 10:02:00.419465 containerd[1592]: time="2025-05-13T10:02:00.419437868Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 10:02:00.419564 containerd[1592]: time="2025-05-13T10:02:00.419545541Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 10:02:00.419662 containerd[1592]: time="2025-05-13T10:02:00.419643582Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 10:02:00.419803 containerd[1592]: time="2025-05-13T10:02:00.419737126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 10:02:00.419803 containerd[1592]: time="2025-05-13T10:02:00.419763322Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 10:02:00.419919 containerd[1592]: time="2025-05-13T10:02:00.419901593Z" level=info msg="runtime interface created"
May 13 10:02:00.420058 containerd[1592]: time="2025-05-13T10:02:00.419974620Z" level=info msg="created NRI interface"
May 13 10:02:00.420058 containerd[1592]: time="2025-05-13T10:02:00.419990536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 10:02:00.420058 containerd[1592]: time="2025-05-13T10:02:00.420013835Z" level=info msg="Connect containerd service"
May 13 10:02:00.420219 containerd[1592]: time="2025-05-13T10:02:00.420200504Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 10:02:00.421812 containerd[1592]: time="2025-05-13T10:02:00.421788305Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 10:02:00.480652 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 10:02:00.490646 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 10:02:00.494525 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 13 10:02:00.496142 systemd[1]: Reached target getty.target - Login Prompts.
May 13 10:02:00.598631 tar[1573]: linux-amd64/LICENSE
May 13 10:02:00.598806 tar[1573]: linux-amd64/README.md
May 13 10:02:00.626824 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 10:02:00.637047 containerd[1592]: time="2025-05-13T10:02:00.636932650Z" level=info msg="Start subscribing containerd event"
May 13 10:02:00.637047 containerd[1592]: time="2025-05-13T10:02:00.637018371Z" level=info msg="Start recovering state"
May 13 10:02:00.637213 containerd[1592]: time="2025-05-13T10:02:00.637195514Z" level=info msg="Start event monitor"
May 13 10:02:00.637250 containerd[1592]: time="2025-05-13T10:02:00.637221722Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 10:02:00.637289 containerd[1592]: time="2025-05-13T10:02:00.637227944Z" level=info msg="Start cni network conf syncer for default"
May 13 10:02:00.637311 containerd[1592]: time="2025-05-13T10:02:00.637298376Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 10:02:00.637364 containerd[1592]: time="2025-05-13T10:02:00.637337686Z" level=info msg="Start streaming server"
May 13 10:02:00.637391 containerd[1592]: time="2025-05-13T10:02:00.637363109Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 10:02:00.637391 containerd[1592]: time="2025-05-13T10:02:00.637373263Z" level=info msg="runtime interface starting up..."
May 13 10:02:00.637391 containerd[1592]: time="2025-05-13T10:02:00.637384370Z" level=info msg="starting plugins..."
May 13 10:02:00.637468 containerd[1592]: time="2025-05-13T10:02:00.637409457Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 10:02:00.637713 systemd[1]: Started containerd.service - containerd container runtime.
May 13 10:02:00.638200 containerd[1592]: time="2025-05-13T10:02:00.638166415Z" level=info msg="containerd successfully booted in 0.256962s"
May 13 10:02:00.714666 systemd-networkd[1490]: eth0: Gained IPv6LL
May 13 10:02:00.717996 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 10:02:00.719948 systemd[1]: Reached target network-online.target - Network is Online.
May 13 10:02:00.722715 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 13 10:02:00.725873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 10:02:00.728559 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 10:02:00.760932 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 10:02:00.763205 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 10:02:00.763602 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 13 10:02:00.766220 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 10:02:01.974410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 10:02:01.976866 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 10:02:01.978368 systemd[1]: Startup finished in 2.904s (kernel) + 7.236s (initrd) + 5.133s (userspace) = 15.274s.
May 13 10:02:02.006951 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 10:02:02.623026 kubelet[1693]: E0513 10:02:02.622946 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 10:02:02.626660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 10:02:02.626872 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 10:02:02.627295 systemd[1]: kubelet.service: Consumed 1.618s CPU time, 236.4M memory peak.
May 13 10:02:02.660715 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 10:02:02.661993 systemd[1]: Started sshd@0-10.0.0.18:22-10.0.0.1:52578.service - OpenSSH per-connection server daemon (10.0.0.1:52578).
May 13 10:02:02.742242 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 52578 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:02:02.744351 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:02:02.757502 systemd-logind[1563]: New session 1 of user core.
May 13 10:02:02.758961 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 10:02:02.760305 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 10:02:02.792450 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 10:02:02.794980 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 10:02:02.815429 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 10:02:02.817843 systemd-logind[1563]: New session c1 of user core.
May 13 10:02:02.994725 systemd[1710]: Queued start job for default target default.target.
May 13 10:02:03.004682 systemd[1710]: Created slice app.slice - User Application Slice.
May 13 10:02:03.004708 systemd[1710]: Reached target paths.target - Paths.
May 13 10:02:03.004748 systemd[1710]: Reached target timers.target - Timers.
May 13 10:02:03.006304 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 10:02:03.018276 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 10:02:03.018403 systemd[1710]: Reached target sockets.target - Sockets.
May 13 10:02:03.018461 systemd[1710]: Reached target basic.target - Basic System.
May 13 10:02:03.018504 systemd[1710]: Reached target default.target - Main User Target.
May 13 10:02:03.018545 systemd[1710]: Startup finished in 193ms.
May 13 10:02:03.018836 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 10:02:03.020961 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 10:02:03.093586 systemd[1]: Started sshd@1-10.0.0.18:22-10.0.0.1:52580.service - OpenSSH per-connection server daemon (10.0.0.1:52580).
May 13 10:02:03.140049 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 52580 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:02:03.141594 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:02:03.146641 systemd-logind[1563]: New session 2 of user core.
May 13 10:02:03.160652 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 10:02:03.215568 sshd[1723]: Connection closed by 10.0.0.1 port 52580
May 13 10:02:03.216061 sshd-session[1721]: pam_unix(sshd:session): session closed for user core
May 13 10:02:03.229595 systemd[1]: sshd@1-10.0.0.18:22-10.0.0.1:52580.service: Deactivated successfully.
May 13 10:02:03.231964 systemd[1]: session-2.scope: Deactivated successfully.
May 13 10:02:03.232819 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit.
May 13 10:02:03.236421 systemd[1]: Started sshd@2-10.0.0.18:22-10.0.0.1:52592.service - OpenSSH per-connection server daemon (10.0.0.1:52592).
May 13 10:02:03.237077 systemd-logind[1563]: Removed session 2.
May 13 10:02:03.293049 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 52592 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:02:03.294726 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:02:03.299474 systemd-logind[1563]: New session 3 of user core.
May 13 10:02:03.314589 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 10:02:03.367215 sshd[1732]: Connection closed by 10.0.0.1 port 52592
May 13 10:02:03.367648 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
May 13 10:02:03.384055 systemd[1]: sshd@2-10.0.0.18:22-10.0.0.1:52592.service: Deactivated successfully.
May 13 10:02:03.385760 systemd[1]: session-3.scope: Deactivated successfully.
May 13 10:02:03.386480 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit.
May 13 10:02:03.388874 systemd[1]: Started sshd@3-10.0.0.18:22-10.0.0.1:47290.service - OpenSSH per-connection server daemon (10.0.0.1:47290).
May 13 10:02:03.389693 systemd-logind[1563]: Removed session 3.
May 13 10:02:03.449082 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 47290 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:02:03.450870 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:02:03.455769 systemd-logind[1563]: New session 4 of user core.
May 13 10:02:03.462593 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 10:02:03.517573 sshd[1740]: Connection closed by 10.0.0.1 port 47290
May 13 10:02:03.517897 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
May 13 10:02:03.534103 systemd[1]: sshd@3-10.0.0.18:22-10.0.0.1:47290.service: Deactivated successfully.
May 13 10:02:03.535917 systemd[1]: session-4.scope: Deactivated successfully.
May 13 10:02:03.536807 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit.
May 13 10:02:03.539386 systemd[1]: Started sshd@4-10.0.0.18:22-10.0.0.1:47302.service - OpenSSH per-connection server daemon (10.0.0.1:47302).
May 13 10:02:03.540238 systemd-logind[1563]: Removed session 4.
May 13 10:02:03.606980 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 47302 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:02:03.608765 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:02:03.613456 systemd-logind[1563]: New session 5 of user core.
May 13 10:02:03.623568 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 10:02:03.683814 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 10:02:03.684129 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 10:02:03.706286 sudo[1749]: pam_unix(sudo:session): session closed for user root
May 13 10:02:03.708380 sshd[1748]: Connection closed by 10.0.0.1 port 47302
May 13 10:02:03.708715 sshd-session[1746]: pam_unix(sshd:session): session closed for user core
May 13 10:02:03.720388 systemd[1]: sshd@4-10.0.0.18:22-10.0.0.1:47302.service: Deactivated successfully.
May 13 10:02:03.722724 systemd[1]: session-5.scope: Deactivated successfully.
May 13 10:02:03.723832 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit.
May 13 10:02:03.727934 systemd[1]: Started sshd@5-10.0.0.18:22-10.0.0.1:47304.service - OpenSSH per-connection server daemon (10.0.0.1:47304).
May 13 10:02:03.728726 systemd-logind[1563]: Removed session 5.
May 13 10:02:03.790240 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 47304 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:02:03.792149 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:02:03.796784 systemd-logind[1563]: New session 6 of user core.
May 13 10:02:03.807800 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 10:02:03.864091 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 10:02:03.864403 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 10:02:03.871647 sudo[1759]: pam_unix(sudo:session): session closed for user root
May 13 10:02:03.878720 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 10:02:03.879061 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 10:02:03.890690 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 10:02:03.951110 augenrules[1781]: No rules
May 13 10:02:03.953073 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 10:02:03.953356 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 10:02:03.954729 sudo[1758]: pam_unix(sudo:session): session closed for user root
May 13 10:02:03.956632 sshd[1757]: Connection closed by 10.0.0.1 port 47304
May 13 10:02:03.956988 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
May 13 10:02:03.967162 systemd[1]: sshd@5-10.0.0.18:22-10.0.0.1:47304.service: Deactivated successfully.
May 13 10:02:03.968915 systemd[1]: session-6.scope: Deactivated successfully.
May 13 10:02:03.969731 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit.
May 13 10:02:03.972596 systemd[1]: Started sshd@6-10.0.0.18:22-10.0.0.1:47312.service - OpenSSH per-connection server daemon (10.0.0.1:47312).
May 13 10:02:03.973244 systemd-logind[1563]: Removed session 6.
May 13 10:02:04.023956 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 47312 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:02:04.025468 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:02:04.030539 systemd-logind[1563]: New session 7 of user core.
May 13 10:02:04.044601 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 10:02:04.097989 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 10:02:04.098314 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 10:02:04.603826 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 10:02:04.621899 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 10:02:05.007200 dockerd[1813]: time="2025-05-13T10:02:05.007052306Z" level=info msg="Starting up"
May 13 10:02:05.009000 dockerd[1813]: time="2025-05-13T10:02:05.008975558Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 10:02:05.990420 dockerd[1813]: time="2025-05-13T10:02:05.990325472Z" level=info msg="Loading containers: start."
May 13 10:02:06.010459 kernel: Initializing XFRM netlink socket
May 13 10:02:06.281640 systemd-networkd[1490]: docker0: Link UP
May 13 10:02:06.287692 dockerd[1813]: time="2025-05-13T10:02:06.287647588Z" level=info msg="Loading containers: done."
May 13 10:02:06.306495 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck198220824-merged.mount: Deactivated successfully.
May 13 10:02:06.307084 dockerd[1813]: time="2025-05-13T10:02:06.307042517Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 10:02:06.307186 dockerd[1813]: time="2025-05-13T10:02:06.307164197Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 13 10:02:06.307335 dockerd[1813]: time="2025-05-13T10:02:06.307315073Z" level=info msg="Initializing buildkit"
May 13 10:02:06.373983 dockerd[1813]: time="2025-05-13T10:02:06.373914612Z" level=info msg="Completed buildkit initialization"
May 13 10:02:06.380836 dockerd[1813]: time="2025-05-13T10:02:06.380758989Z" level=info msg="Daemon has completed initialization"
May 13 10:02:06.381002 dockerd[1813]: time="2025-05-13T10:02:06.380868297Z" level=info msg="API listen on /run/docker.sock"
May 13 10:02:06.381151 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 10:02:07.736037 containerd[1592]: time="2025-05-13T10:02:07.735986426Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 13 10:02:08.366036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833426599.mount: Deactivated successfully.
May 13 10:02:10.793180 containerd[1592]: time="2025-05-13T10:02:10.793110453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:10.795351 containerd[1592]: time="2025-05-13T10:02:10.795283875Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
May 13 10:02:10.797687 containerd[1592]: time="2025-05-13T10:02:10.797646939Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:10.800983 containerd[1592]: time="2025-05-13T10:02:10.800948030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:10.801865 containerd[1592]: time="2025-05-13T10:02:10.801832162Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 3.065799303s"
May 13 10:02:10.801912 containerd[1592]: time="2025-05-13T10:02:10.801866902Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 13 10:02:10.803445 containerd[1592]: time="2025-05-13T10:02:10.803400914Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 13 10:02:12.776814 containerd[1592]: time="2025-05-13T10:02:12.776757415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:12.777661 containerd[1592]: time="2025-05-13T10:02:12.777608703Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
May 13 10:02:12.778906 containerd[1592]: time="2025-05-13T10:02:12.778876721Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:12.781313 containerd[1592]: time="2025-05-13T10:02:12.781284324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:12.782102 containerd[1592]: time="2025-05-13T10:02:12.782060374Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.97861198s"
May 13 10:02:12.782102 containerd[1592]: time="2025-05-13T10:02:12.782100399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 13 10:02:12.782752 containerd[1592]: time="2025-05-13T10:02:12.782634261Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 13 10:02:12.877404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 10:02:12.879116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 10:02:13.203590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 10:02:13.215766 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 10:02:13.352727 kubelet[2092]: E0513 10:02:13.352619 2092 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 10:02:13.358805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 10:02:13.359029 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 10:02:13.359447 systemd[1]: kubelet.service: Consumed 241ms CPU time, 96.1M memory peak.
May 13 10:02:15.157776 containerd[1592]: time="2025-05-13T10:02:15.157718282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:15.158576 containerd[1592]: time="2025-05-13T10:02:15.158524584Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
May 13 10:02:15.159729 containerd[1592]: time="2025-05-13T10:02:15.159684906Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:15.162199 containerd[1592]: time="2025-05-13T10:02:15.162168793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:15.163245 containerd[1592]: time="2025-05-13T10:02:15.163180524Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.380516437s"
May 13 10:02:15.163245 containerd[1592]: time="2025-05-13T10:02:15.163231187Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 13 10:02:15.163774 containerd[1592]: time="2025-05-13T10:02:15.163749267Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 13 10:02:16.264971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252652172.mount: Deactivated successfully.
May 13 10:02:17.317490 containerd[1592]: time="2025-05-13T10:02:17.317438713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:17.318257 containerd[1592]: time="2025-05-13T10:02:17.318229643Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
May 13 10:02:17.319488 containerd[1592]: time="2025-05-13T10:02:17.319455948Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:17.321330 containerd[1592]: time="2025-05-13T10:02:17.321281944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:17.321771 containerd[1592]: time="2025-05-13T10:02:17.321723347Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.157937762s"
May 13 10:02:17.321771 containerd[1592]: time="2025-05-13T10:02:17.321768062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 13 10:02:17.322232 containerd[1592]: time="2025-05-13T10:02:17.322186780Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 10:02:17.862931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount716533232.mount: Deactivated successfully.
May 13 10:02:18.911474 containerd[1592]: time="2025-05-13T10:02:18.911386785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:18.912528 containerd[1592]: time="2025-05-13T10:02:18.912493257Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 13 10:02:18.913814 containerd[1592]: time="2025-05-13T10:02:18.913771768Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:18.921804 containerd[1592]: time="2025-05-13T10:02:18.921734160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:18.923502 containerd[1592]: time="2025-05-13T10:02:18.923438164Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.601213943s"
May 13 10:02:18.923502 containerd[1592]: time="2025-05-13T10:02:18.923490298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 13 10:02:18.924094 containerd[1592]: time="2025-05-13T10:02:18.924064857Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 10:02:19.434257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69290076.mount: Deactivated successfully.
May 13 10:02:19.440570 containerd[1592]: time="2025-05-13T10:02:19.440520111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 10:02:19.441420 containerd[1592]: time="2025-05-13T10:02:19.441367974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 13 10:02:19.442760 containerd[1592]: time="2025-05-13T10:02:19.442732124Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 10:02:19.445068 containerd[1592]: time="2025-05-13T10:02:19.445037271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 10:02:19.446038 containerd[1592]: time="2025-05-13T10:02:19.445971202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 521.870192ms" May 13 10:02:19.446038 containerd[1592]: time="2025-05-13T10:02:19.446027190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 10:02:19.446597 containerd[1592]: time="2025-05-13T10:02:19.446563103Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 10:02:19.966845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751912686.mount: Deactivated successfully. May 13 10:02:21.854944 containerd[1592]: time="2025-05-13T10:02:21.854871018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:21.855894 containerd[1592]: time="2025-05-13T10:02:21.855837463Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 13 10:02:21.857380 containerd[1592]: time="2025-05-13T10:02:21.857314132Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:21.860210 containerd[1592]: time="2025-05-13T10:02:21.860160035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:21.861372 containerd[1592]: time="2025-05-13T10:02:21.861328152Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 2.41473008s" May 13 10:02:21.861372 containerd[1592]: time="2025-05-13T10:02:21.861369769Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 13 10:02:23.451005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 10:02:23.452725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:02:23.634493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:23.647925 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 10:02:23.702247 kubelet[2245]: E0513 10:02:23.702085 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 10:02:23.706372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 10:02:23.706641 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 10:02:23.707008 systemd[1]: kubelet.service: Consumed 208ms CPU time, 95.9M memory peak. May 13 10:02:24.465933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:24.466093 systemd[1]: kubelet.service: Consumed 208ms CPU time, 95.9M memory peak. May 13 10:02:24.468192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:02:24.492721 systemd[1]: Reload requested from client PID 2260 ('systemctl') (unit session-7.scope)... May 13 10:02:24.492735 systemd[1]: Reloading... May 13 10:02:24.560436 zram_generator::config[2301]: No configuration found. 
May 13 10:02:24.827147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:02:24.942740 systemd[1]: Reloading finished in 449 ms. May 13 10:02:25.014169 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 10:02:25.014268 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 10:02:25.014571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:25.014612 systemd[1]: kubelet.service: Consumed 133ms CPU time, 83.6M memory peak. May 13 10:02:25.016145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:02:25.190107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:25.210809 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 10:02:25.247274 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:02:25.247274 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 10:02:25.247274 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 10:02:25.247690 kubelet[2351]: I0513 10:02:25.247329 2351 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 10:02:25.639427 kubelet[2351]: I0513 10:02:25.639372 2351 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 10:02:25.639427 kubelet[2351]: I0513 10:02:25.639417 2351 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 10:02:25.639671 kubelet[2351]: I0513 10:02:25.639652 2351 server.go:929] "Client rotation is on, will bootstrap in background" May 13 10:02:25.667504 kubelet[2351]: I0513 10:02:25.667464 2351 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 10:02:25.667788 kubelet[2351]: E0513 10:02:25.667741 2351 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" May 13 10:02:25.677884 kubelet[2351]: I0513 10:02:25.677847 2351 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 10:02:25.686278 kubelet[2351]: I0513 10:02:25.686235 2351 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 10:02:25.687619 kubelet[2351]: I0513 10:02:25.687589 2351 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 10:02:25.687812 kubelet[2351]: I0513 10:02:25.687769 2351 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 10:02:25.687979 kubelet[2351]: I0513 10:02:25.687802 2351 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 13 10:02:25.688085 kubelet[2351]: I0513 10:02:25.687989 2351 topology_manager.go:138] "Creating topology manager with none policy" May 13 10:02:25.688085 kubelet[2351]: I0513 10:02:25.687998 2351 container_manager_linux.go:300] "Creating device plugin manager" May 13 10:02:25.688144 kubelet[2351]: I0513 10:02:25.688125 2351 state_mem.go:36] "Initialized new in-memory state store" May 13 10:02:25.689597 kubelet[2351]: I0513 10:02:25.689571 2351 kubelet.go:408] "Attempting to sync node with API server" May 13 10:02:25.689597 kubelet[2351]: I0513 10:02:25.689593 2351 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 10:02:25.689667 kubelet[2351]: I0513 10:02:25.689651 2351 kubelet.go:314] "Adding apiserver pod source" May 13 10:02:25.689690 kubelet[2351]: I0513 10:02:25.689678 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 10:02:25.697485 kubelet[2351]: W0513 10:02:25.696673 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused May 13 10:02:25.697485 kubelet[2351]: W0513 10:02:25.696729 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused May 13 10:02:25.697485 kubelet[2351]: E0513 10:02:25.696757 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" May 13 10:02:25.697485 kubelet[2351]: E0513 
10:02:25.696779 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" May 13 10:02:25.697485 kubelet[2351]: I0513 10:02:25.697041 2351 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 10:02:25.699394 kubelet[2351]: I0513 10:02:25.699043 2351 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 10:02:25.699394 kubelet[2351]: W0513 10:02:25.699123 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 10:02:25.700374 kubelet[2351]: I0513 10:02:25.700294 2351 server.go:1269] "Started kubelet" May 13 10:02:25.701369 kubelet[2351]: I0513 10:02:25.701232 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 10:02:25.701739 kubelet[2351]: I0513 10:02:25.701714 2351 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 10:02:25.701809 kubelet[2351]: I0513 10:02:25.701784 2351 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 10:02:25.701926 kubelet[2351]: I0513 10:02:25.701893 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 10:02:25.703883 kubelet[2351]: I0513 10:02:25.703865 2351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 10:02:25.705483 kubelet[2351]: I0513 10:02:25.705338 2351 server.go:460] "Adding debug handlers to kubelet server" May 13 10:02:25.706378 kubelet[2351]: I0513 10:02:25.706364 
2351 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 10:02:25.706492 kubelet[2351]: E0513 10:02:25.706470 2351 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 10:02:25.706625 kubelet[2351]: I0513 10:02:25.706611 2351 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 10:02:25.706741 kubelet[2351]: I0513 10:02:25.706730 2351 reconciler.go:26] "Reconciler: start to sync state" May 13 10:02:25.706947 kubelet[2351]: I0513 10:02:25.706923 2351 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 10:02:25.707183 kubelet[2351]: W0513 10:02:25.707149 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused May 13 10:02:25.707281 kubelet[2351]: E0513 10:02:25.707265 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" May 13 10:02:25.707796 kubelet[2351]: E0513 10:02:25.707779 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:25.707939 kubelet[2351]: E0513 10:02:25.707913 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="200ms" May 13 10:02:25.708316 
kubelet[2351]: I0513 10:02:25.708290 2351 factory.go:221] Registration of the containerd container factory successfully May 13 10:02:25.708316 kubelet[2351]: I0513 10:02:25.708303 2351 factory.go:221] Registration of the systemd container factory successfully May 13 10:02:25.709875 kubelet[2351]: E0513 10:02:25.707029 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f0df980b2ef85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 10:02:25.700269957 +0000 UTC m=+0.485299246,LastTimestamp:2025-05-13 10:02:25.700269957 +0000 UTC m=+0.485299246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 10:02:25.720079 kubelet[2351]: I0513 10:02:25.719916 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 10:02:25.721250 kubelet[2351]: I0513 10:02:25.721217 2351 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 10:02:25.721350 kubelet[2351]: I0513 10:02:25.721327 2351 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 10:02:25.721380 kubelet[2351]: I0513 10:02:25.721358 2351 kubelet.go:2321] "Starting kubelet main sync loop" May 13 10:02:25.721473 kubelet[2351]: E0513 10:02:25.721450 2351 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 10:02:25.722607 kubelet[2351]: W0513 10:02:25.722557 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused May 13 10:02:25.722652 kubelet[2351]: E0513 10:02:25.722612 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" May 13 10:02:25.725129 kubelet[2351]: I0513 10:02:25.724191 2351 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 10:02:25.725129 kubelet[2351]: I0513 10:02:25.724206 2351 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 10:02:25.725129 kubelet[2351]: I0513 10:02:25.724226 2351 state_mem.go:36] "Initialized new in-memory state store" May 13 10:02:25.808439 kubelet[2351]: E0513 10:02:25.808358 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:25.822712 kubelet[2351]: E0513 10:02:25.822590 2351 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 10:02:25.908618 kubelet[2351]: E0513 10:02:25.908483 2351 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:25.908618 kubelet[2351]: E0513 10:02:25.908548 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="400ms" May 13 10:02:26.009285 kubelet[2351]: E0513 10:02:26.009223 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:26.023500 kubelet[2351]: E0513 10:02:26.023451 2351 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 10:02:26.109951 kubelet[2351]: E0513 10:02:26.109912 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:26.210949 kubelet[2351]: E0513 10:02:26.210907 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:26.310027 kubelet[2351]: E0513 10:02:26.309961 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="800ms" May 13 10:02:26.312018 kubelet[2351]: E0513 10:02:26.311969 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:26.312175 kubelet[2351]: I0513 10:02:26.312108 2351 policy_none.go:49] "None policy: Start" May 13 10:02:26.312891 kubelet[2351]: I0513 10:02:26.312869 2351 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 10:02:26.312972 kubelet[2351]: I0513 10:02:26.312901 2351 state_mem.go:35] "Initializing new in-memory state store" May 13 10:02:26.320009 systemd[1]: Created slice 
kubepods.slice - libcontainer container kubepods.slice. May 13 10:02:26.337456 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 10:02:26.340493 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 10:02:26.349383 kubelet[2351]: I0513 10:02:26.349338 2351 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 10:02:26.349630 kubelet[2351]: I0513 10:02:26.349606 2351 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 10:02:26.349669 kubelet[2351]: I0513 10:02:26.349623 2351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 10:02:26.350092 kubelet[2351]: I0513 10:02:26.350070 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 10:02:26.352886 kubelet[2351]: E0513 10:02:26.352754 2351 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 10:02:26.432279 systemd[1]: Created slice kubepods-burstable-podd260d23ed5bc0cb24c75f2fc37298bb3.slice - libcontainer container kubepods-burstable-podd260d23ed5bc0cb24c75f2fc37298bb3.slice. May 13 10:02:26.450954 kubelet[2351]: I0513 10:02:26.450925 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 10:02:26.451322 kubelet[2351]: E0513 10:02:26.451275 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" May 13 10:02:26.464109 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. 
May 13 10:02:26.467642 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 13 10:02:26.512873 kubelet[2351]: I0513 10:02:26.512819 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:26.512948 kubelet[2351]: I0513 10:02:26.512869 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:26.512948 kubelet[2351]: I0513 10:02:26.512904 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 10:02:26.512948 kubelet[2351]: I0513 10:02:26.512920 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d260d23ed5bc0cb24c75f2fc37298bb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d260d23ed5bc0cb24c75f2fc37298bb3\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:26.512948 kubelet[2351]: I0513 10:02:26.512938 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d260d23ed5bc0cb24c75f2fc37298bb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d260d23ed5bc0cb24c75f2fc37298bb3\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:26.513061 kubelet[2351]: I0513 10:02:26.512955 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d260d23ed5bc0cb24c75f2fc37298bb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d260d23ed5bc0cb24c75f2fc37298bb3\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:26.513084 kubelet[2351]: I0513 10:02:26.513040 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:26.513106 kubelet[2351]: I0513 10:02:26.513098 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:26.513178 kubelet[2351]: I0513 10:02:26.513156 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:26.635166 kubelet[2351]: W0513 10:02:26.635073 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused May 13 10:02:26.635166 kubelet[2351]: E0513 10:02:26.635150 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" May 13 10:02:26.638730 kubelet[2351]: W0513 10:02:26.638665 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused May 13 10:02:26.638790 kubelet[2351]: E0513 10:02:26.638724 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" May 13 10:02:26.653556 kubelet[2351]: I0513 10:02:26.653493 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 10:02:26.653875 kubelet[2351]: E0513 10:02:26.653839 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" May 13 10:02:26.762301 kubelet[2351]: E0513 10:02:26.762154 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:26.762974 containerd[1592]: time="2025-05-13T10:02:26.762921486Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d260d23ed5bc0cb24c75f2fc37298bb3,Namespace:kube-system,Attempt:0,}" May 13 10:02:26.767087 kubelet[2351]: E0513 10:02:26.767048 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:26.767361 containerd[1592]: time="2025-05-13T10:02:26.767324257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 13 10:02:26.769559 kubelet[2351]: E0513 10:02:26.769533 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:26.769961 containerd[1592]: time="2025-05-13T10:02:26.769904706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 13 10:02:26.833981 kubelet[2351]: W0513 10:02:26.833873 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused May 13 10:02:26.833981 kubelet[2351]: E0513 10:02:26.833944 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" May 13 10:02:27.055903 kubelet[2351]: I0513 10:02:27.055801 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 10:02:27.056270 kubelet[2351]: E0513 10:02:27.056216 2351 kubelet_node_status.go:95] 
"Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" May 13 10:02:27.111115 kubelet[2351]: E0513 10:02:27.111060 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="1.6s" May 13 10:02:27.274937 kubelet[2351]: W0513 10:02:27.274849 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused May 13 10:02:27.274937 kubelet[2351]: E0513 10:02:27.274938 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" May 13 10:02:27.293444 containerd[1592]: time="2025-05-13T10:02:27.293172173Z" level=info msg="connecting to shim 449a1e5028bd5ec24df722580c7824a52bb3feebc3fc4e861b5ac5f9d1cbaf19" address="unix:///run/containerd/s/a890371142b676893550678b3f89c30f606bb551f7590548e33c50ee16f9eee4" namespace=k8s.io protocol=ttrpc version=3 May 13 10:02:27.296036 containerd[1592]: time="2025-05-13T10:02:27.296004016Z" level=info msg="connecting to shim adea1ecfef420998e550dc3fd932fba315ccc58122f659078220120695e1098d" address="unix:///run/containerd/s/75ed2b738717e32e7dd82126f2315a4198b287f08825446340ddac3aae6d019f" namespace=k8s.io protocol=ttrpc version=3 May 13 10:02:27.298885 containerd[1592]: time="2025-05-13T10:02:27.298834585Z" level=info msg="connecting to shim 
8ba072996b0000491c202bd4e626c70a94f8830f09c03911459855db6cde7292" address="unix:///run/containerd/s/b2bc2005cb39193c5ba03fa77e9079583dc6938a125f837613a87166063045a6" namespace=k8s.io protocol=ttrpc version=3 May 13 10:02:27.347623 systemd[1]: Started cri-containerd-adea1ecfef420998e550dc3fd932fba315ccc58122f659078220120695e1098d.scope - libcontainer container adea1ecfef420998e550dc3fd932fba315ccc58122f659078220120695e1098d. May 13 10:02:27.352568 systemd[1]: Started cri-containerd-449a1e5028bd5ec24df722580c7824a52bb3feebc3fc4e861b5ac5f9d1cbaf19.scope - libcontainer container 449a1e5028bd5ec24df722580c7824a52bb3feebc3fc4e861b5ac5f9d1cbaf19. May 13 10:02:27.354269 systemd[1]: Started cri-containerd-8ba072996b0000491c202bd4e626c70a94f8830f09c03911459855db6cde7292.scope - libcontainer container 8ba072996b0000491c202bd4e626c70a94f8830f09c03911459855db6cde7292. May 13 10:02:27.410420 containerd[1592]: time="2025-05-13T10:02:27.410335250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ba072996b0000491c202bd4e626c70a94f8830f09c03911459855db6cde7292\"" May 13 10:02:27.411773 kubelet[2351]: E0513 10:02:27.411681 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:27.412099 containerd[1592]: time="2025-05-13T10:02:27.411703901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d260d23ed5bc0cb24c75f2fc37298bb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"449a1e5028bd5ec24df722580c7824a52bb3feebc3fc4e861b5ac5f9d1cbaf19\"" May 13 10:02:27.412639 kubelet[2351]: E0513 10:02:27.412592 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 
10:02:27.415157 containerd[1592]: time="2025-05-13T10:02:27.415114006Z" level=info msg="CreateContainer within sandbox \"449a1e5028bd5ec24df722580c7824a52bb3feebc3fc4e861b5ac5f9d1cbaf19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 10:02:27.416106 containerd[1592]: time="2025-05-13T10:02:27.415666758Z" level=info msg="CreateContainer within sandbox \"8ba072996b0000491c202bd4e626c70a94f8830f09c03911459855db6cde7292\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 10:02:27.424563 containerd[1592]: time="2025-05-13T10:02:27.424519711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"adea1ecfef420998e550dc3fd932fba315ccc58122f659078220120695e1098d\"" May 13 10:02:27.425215 kubelet[2351]: E0513 10:02:27.425190 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:27.426745 containerd[1592]: time="2025-05-13T10:02:27.426695224Z" level=info msg="CreateContainer within sandbox \"adea1ecfef420998e550dc3fd932fba315ccc58122f659078220120695e1098d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 10:02:27.428986 containerd[1592]: time="2025-05-13T10:02:27.428950221Z" level=info msg="Container 48bae9327e2373198ad08a82cdf823f71bde4dad0b1d4c345ebc4fd69dce2d28: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:27.431550 containerd[1592]: time="2025-05-13T10:02:27.431526029Z" level=info msg="Container 8a57f6f024f550d3f3072786dd0235e843e2ebff52bfb6a3932eff229ffd768a: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:27.438033 containerd[1592]: time="2025-05-13T10:02:27.438000929Z" level=info msg="CreateContainer within sandbox \"449a1e5028bd5ec24df722580c7824a52bb3feebc3fc4e861b5ac5f9d1cbaf19\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"48bae9327e2373198ad08a82cdf823f71bde4dad0b1d4c345ebc4fd69dce2d28\"" May 13 10:02:27.438605 containerd[1592]: time="2025-05-13T10:02:27.438581116Z" level=info msg="StartContainer for \"48bae9327e2373198ad08a82cdf823f71bde4dad0b1d4c345ebc4fd69dce2d28\"" May 13 10:02:27.439763 containerd[1592]: time="2025-05-13T10:02:27.439725351Z" level=info msg="connecting to shim 48bae9327e2373198ad08a82cdf823f71bde4dad0b1d4c345ebc4fd69dce2d28" address="unix:///run/containerd/s/a890371142b676893550678b3f89c30f606bb551f7590548e33c50ee16f9eee4" protocol=ttrpc version=3 May 13 10:02:27.443300 containerd[1592]: time="2025-05-13T10:02:27.443262694Z" level=info msg="CreateContainer within sandbox \"8ba072996b0000491c202bd4e626c70a94f8830f09c03911459855db6cde7292\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8a57f6f024f550d3f3072786dd0235e843e2ebff52bfb6a3932eff229ffd768a\"" May 13 10:02:27.443790 containerd[1592]: time="2025-05-13T10:02:27.443712873Z" level=info msg="StartContainer for \"8a57f6f024f550d3f3072786dd0235e843e2ebff52bfb6a3932eff229ffd768a\"" May 13 10:02:27.444917 containerd[1592]: time="2025-05-13T10:02:27.444883097Z" level=info msg="connecting to shim 8a57f6f024f550d3f3072786dd0235e843e2ebff52bfb6a3932eff229ffd768a" address="unix:///run/containerd/s/b2bc2005cb39193c5ba03fa77e9079583dc6938a125f837613a87166063045a6" protocol=ttrpc version=3 May 13 10:02:27.445967 containerd[1592]: time="2025-05-13T10:02:27.445942802Z" level=info msg="Container 0e09c64b11955c6a7c09f400de7dbff3854e52af8b7f415ae8cea1883c8e43cb: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:27.453878 containerd[1592]: time="2025-05-13T10:02:27.453845275Z" level=info msg="CreateContainer within sandbox \"adea1ecfef420998e550dc3fd932fba315ccc58122f659078220120695e1098d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"0e09c64b11955c6a7c09f400de7dbff3854e52af8b7f415ae8cea1883c8e43cb\"" May 13 10:02:27.454401 containerd[1592]: time="2025-05-13T10:02:27.454368317Z" level=info msg="StartContainer for \"0e09c64b11955c6a7c09f400de7dbff3854e52af8b7f415ae8cea1883c8e43cb\"" May 13 10:02:27.455480 containerd[1592]: time="2025-05-13T10:02:27.455460231Z" level=info msg="connecting to shim 0e09c64b11955c6a7c09f400de7dbff3854e52af8b7f415ae8cea1883c8e43cb" address="unix:///run/containerd/s/75ed2b738717e32e7dd82126f2315a4198b287f08825446340ddac3aae6d019f" protocol=ttrpc version=3 May 13 10:02:27.461551 systemd[1]: Started cri-containerd-48bae9327e2373198ad08a82cdf823f71bde4dad0b1d4c345ebc4fd69dce2d28.scope - libcontainer container 48bae9327e2373198ad08a82cdf823f71bde4dad0b1d4c345ebc4fd69dce2d28. May 13 10:02:27.465924 systemd[1]: Started cri-containerd-8a57f6f024f550d3f3072786dd0235e843e2ebff52bfb6a3932eff229ffd768a.scope - libcontainer container 8a57f6f024f550d3f3072786dd0235e843e2ebff52bfb6a3932eff229ffd768a. May 13 10:02:27.480556 systemd[1]: Started cri-containerd-0e09c64b11955c6a7c09f400de7dbff3854e52af8b7f415ae8cea1883c8e43cb.scope - libcontainer container 0e09c64b11955c6a7c09f400de7dbff3854e52af8b7f415ae8cea1883c8e43cb. 
May 13 10:02:27.542504 containerd[1592]: time="2025-05-13T10:02:27.542345638Z" level=info msg="StartContainer for \"8a57f6f024f550d3f3072786dd0235e843e2ebff52bfb6a3932eff229ffd768a\" returns successfully" May 13 10:02:27.545513 containerd[1592]: time="2025-05-13T10:02:27.545461782Z" level=info msg="StartContainer for \"48bae9327e2373198ad08a82cdf823f71bde4dad0b1d4c345ebc4fd69dce2d28\" returns successfully" May 13 10:02:27.547750 containerd[1592]: time="2025-05-13T10:02:27.547565284Z" level=info msg="StartContainer for \"0e09c64b11955c6a7c09f400de7dbff3854e52af8b7f415ae8cea1883c8e43cb\" returns successfully" May 13 10:02:27.731305 kubelet[2351]: E0513 10:02:27.731267 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:27.733988 kubelet[2351]: E0513 10:02:27.733974 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:27.735555 kubelet[2351]: E0513 10:02:27.735543 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:27.858987 kubelet[2351]: I0513 10:02:27.858550 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 10:02:28.697975 kubelet[2351]: I0513 10:02:28.697920 2351 apiserver.go:52] "Watching apiserver" May 13 10:02:28.707174 kubelet[2351]: I0513 10:02:28.707151 2351 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 10:02:28.737126 kubelet[2351]: E0513 10:02:28.737103 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:28.879883 
kubelet[2351]: I0513 10:02:28.879829 2351 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 10:02:30.665217 systemd[1]: Reload requested from client PID 2621 ('systemctl') (unit session-7.scope)... May 13 10:02:30.665234 systemd[1]: Reloading... May 13 10:02:30.740468 zram_generator::config[2664]: No configuration found. May 13 10:02:30.836549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:02:30.971583 systemd[1]: Reloading finished in 305 ms. May 13 10:02:31.011693 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:02:31.031776 systemd[1]: kubelet.service: Deactivated successfully. May 13 10:02:31.032116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:31.032172 systemd[1]: kubelet.service: Consumed 919ms CPU time, 117.3M memory peak. May 13 10:02:31.034035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:02:31.226320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:31.230920 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 10:02:31.264525 kubelet[2709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:02:31.264525 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 13 10:02:31.264525 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:02:31.264916 kubelet[2709]: I0513 10:02:31.264566 2709 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 10:02:31.271188 kubelet[2709]: I0513 10:02:31.271153 2709 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 10:02:31.271188 kubelet[2709]: I0513 10:02:31.271174 2709 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 10:02:31.271385 kubelet[2709]: I0513 10:02:31.271367 2709 server.go:929] "Client rotation is on, will bootstrap in background" May 13 10:02:31.272485 kubelet[2709]: I0513 10:02:31.272463 2709 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 10:02:31.274298 kubelet[2709]: I0513 10:02:31.274273 2709 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 10:02:31.277936 kubelet[2709]: I0513 10:02:31.277913 2709 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 10:02:31.283294 kubelet[2709]: I0513 10:02:31.283244 2709 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 10:02:31.283460 kubelet[2709]: I0513 10:02:31.283377 2709 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 10:02:31.283556 kubelet[2709]: I0513 10:02:31.283511 2709 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 10:02:31.283858 kubelet[2709]: I0513 10:02:31.283544 2709 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 13 10:02:31.283965 kubelet[2709]: I0513 10:02:31.283861 2709 topology_manager.go:138] "Creating topology manager with none policy" May 13 10:02:31.283965 kubelet[2709]: I0513 10:02:31.283871 2709 container_manager_linux.go:300] "Creating device plugin manager" May 13 10:02:31.283965 kubelet[2709]: I0513 10:02:31.283906 2709 state_mem.go:36] "Initialized new in-memory state store" May 13 10:02:31.284121 kubelet[2709]: I0513 10:02:31.284063 2709 kubelet.go:408] "Attempting to sync node with API server" May 13 10:02:31.284121 kubelet[2709]: I0513 10:02:31.284096 2709 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 10:02:31.284167 kubelet[2709]: I0513 10:02:31.284131 2709 kubelet.go:314] "Adding apiserver pod source" May 13 10:02:31.284167 kubelet[2709]: I0513 10:02:31.284149 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 10:02:31.285589 kubelet[2709]: I0513 10:02:31.285545 2709 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 10:02:31.286401 kubelet[2709]: I0513 10:02:31.286318 2709 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 10:02:31.287208 kubelet[2709]: I0513 10:02:31.287192 2709 server.go:1269] "Started kubelet" May 13 10:02:31.287734 kubelet[2709]: I0513 10:02:31.287711 2709 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 10:02:31.287912 kubelet[2709]: I0513 10:02:31.287879 2709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 10:02:31.288145 kubelet[2709]: I0513 10:02:31.288132 2709 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 10:02:31.290030 kubelet[2709]: I0513 10:02:31.290016 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 
10:02:31.290137 kubelet[2709]: I0513 10:02:31.290119 2709 server.go:460] "Adding debug handlers to kubelet server" May 13 10:02:31.296435 kubelet[2709]: I0513 10:02:31.293838 2709 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 10:02:31.296435 kubelet[2709]: E0513 10:02:31.294024 2709 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:02:31.296435 kubelet[2709]: I0513 10:02:31.294785 2709 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 10:02:31.296435 kubelet[2709]: I0513 10:02:31.294918 2709 reconciler.go:26] "Reconciler: start to sync state" May 13 10:02:31.296435 kubelet[2709]: I0513 10:02:31.295092 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 10:02:31.298022 kubelet[2709]: I0513 10:02:31.297145 2709 factory.go:221] Registration of the systemd container factory successfully May 13 10:02:31.298228 kubelet[2709]: I0513 10:02:31.298210 2709 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 10:02:31.300920 kubelet[2709]: E0513 10:02:31.300866 2709 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 10:02:31.301131 kubelet[2709]: I0513 10:02:31.301116 2709 factory.go:221] Registration of the containerd container factory successfully May 13 10:02:31.306452 kubelet[2709]: I0513 10:02:31.306401 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 10:02:31.307558 kubelet[2709]: I0513 10:02:31.307538 2709 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 10:02:31.307616 kubelet[2709]: I0513 10:02:31.307608 2709 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 10:02:31.307639 kubelet[2709]: I0513 10:02:31.307628 2709 kubelet.go:2321] "Starting kubelet main sync loop" May 13 10:02:31.307709 kubelet[2709]: E0513 10:02:31.307686 2709 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 10:02:31.341192 kubelet[2709]: I0513 10:02:31.341159 2709 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 10:02:31.341192 kubelet[2709]: I0513 10:02:31.341181 2709 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 10:02:31.341192 kubelet[2709]: I0513 10:02:31.341202 2709 state_mem.go:36] "Initialized new in-memory state store" May 13 10:02:31.341379 kubelet[2709]: I0513 10:02:31.341353 2709 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 10:02:31.341379 kubelet[2709]: I0513 10:02:31.341363 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 10:02:31.341379 kubelet[2709]: I0513 10:02:31.341380 2709 policy_none.go:49] "None policy: Start" May 13 10:02:31.341881 kubelet[2709]: I0513 10:02:31.341864 2709 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 10:02:31.341916 kubelet[2709]: I0513 10:02:31.341883 2709 state_mem.go:35] "Initializing new in-memory state store" May 13 10:02:31.342010 kubelet[2709]: I0513 10:02:31.341997 2709 state_mem.go:75] "Updated machine memory state" May 13 10:02:31.346767 kubelet[2709]: I0513 10:02:31.346749 2709 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 10:02:31.347020 kubelet[2709]: I0513 10:02:31.346999 2709 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 10:02:31.347074 kubelet[2709]: I0513 10:02:31.347014 2709 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 10:02:31.347224 kubelet[2709]: I0513 10:02:31.347207 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 10:02:31.451815 kubelet[2709]: I0513 10:02:31.451761 2709 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 10:02:31.484646 kubelet[2709]: I0513 10:02:31.484534 2709 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 13 10:02:31.484646 kubelet[2709]: I0513 10:02:31.484614 2709 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 10:02:31.496463 kubelet[2709]: I0513 10:02:31.496397 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d260d23ed5bc0cb24c75f2fc37298bb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d260d23ed5bc0cb24c75f2fc37298bb3\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:31.496463 kubelet[2709]: I0513 10:02:31.496451 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:31.496463 kubelet[2709]: I0513 10:02:31.496470 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:31.496673 kubelet[2709]: I0513 10:02:31.496499 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:31.496673 kubelet[2709]: I0513 10:02:31.496538 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d260d23ed5bc0cb24c75f2fc37298bb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d260d23ed5bc0cb24c75f2fc37298bb3\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:31.496673 kubelet[2709]: I0513 10:02:31.496553 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:31.496673 kubelet[2709]: I0513 10:02:31.496568 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:02:31.496673 kubelet[2709]: I0513 10:02:31.496585 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 10:02:31.496786 kubelet[2709]: I0513 10:02:31.496602 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/d260d23ed5bc0cb24c75f2fc37298bb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d260d23ed5bc0cb24c75f2fc37298bb3\") " pod="kube-system/kube-apiserver-localhost" May 13 10:02:31.631927 sudo[2744]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 10:02:31.632244 sudo[2744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 10:02:31.784925 kubelet[2709]: E0513 10:02:31.784636 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:31.784925 kubelet[2709]: E0513 10:02:31.784731 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:31.784925 kubelet[2709]: E0513 10:02:31.784757 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:32.257103 sudo[2744]: pam_unix(sudo:session): session closed for user root May 13 10:02:32.285180 kubelet[2709]: I0513 10:02:32.285129 2709 apiserver.go:52] "Watching apiserver" May 13 10:02:32.295203 kubelet[2709]: I0513 10:02:32.295165 2709 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 10:02:32.322435 kubelet[2709]: E0513 10:02:32.321188 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:32.322435 kubelet[2709]: E0513 10:02:32.321311 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
May 13 10:02:32.327733 kubelet[2709]: E0513 10:02:32.327685 2709 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 13 10:02:32.327872 kubelet[2709]: E0513 10:02:32.327824 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:32.353248 kubelet[2709]: I0513 10:02:32.353147 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.353119375 podStartE2EDuration="1.353119375s" podCreationTimestamp="2025-05-13 10:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:32.344537864 +0000 UTC m=+1.109889723" watchObservedRunningTime="2025-05-13 10:02:32.353119375 +0000 UTC m=+1.118471233"
May 13 10:02:32.362682 kubelet[2709]: I0513 10:02:32.362602 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3623880179999999 podStartE2EDuration="1.362388018s" podCreationTimestamp="2025-05-13 10:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:32.354307046 +0000 UTC m=+1.119658904" watchObservedRunningTime="2025-05-13 10:02:32.362388018 +0000 UTC m=+1.127739876"
May 13 10:02:32.363163 kubelet[2709]: I0513 10:02:32.363078 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.363069838 podStartE2EDuration="1.363069838s" podCreationTimestamp="2025-05-13 10:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:32.361916173 +0000 UTC m=+1.127268041" watchObservedRunningTime="2025-05-13 10:02:32.363069838 +0000 UTC m=+1.128421696"
May 13 10:02:33.322012 kubelet[2709]: E0513 10:02:33.321968 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:34.295341 sudo[1793]: pam_unix(sudo:session): session closed for user root
May 13 10:02:34.297086 sshd[1792]: Connection closed by 10.0.0.1 port 47312
May 13 10:02:34.297722 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
May 13 10:02:34.300734 systemd[1]: sshd@6-10.0.0.18:22-10.0.0.1:47312.service: Deactivated successfully.
May 13 10:02:34.303188 systemd[1]: session-7.scope: Deactivated successfully.
May 13 10:02:34.303471 systemd[1]: session-7.scope: Consumed 4.799s CPU time, 265.6M memory peak.
May 13 10:02:34.305423 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit.
May 13 10:02:34.307301 systemd-logind[1563]: Removed session 7.
May 13 10:02:34.881102 kubelet[2709]: E0513 10:02:34.881050 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:34.974679 kubelet[2709]: E0513 10:02:34.974576 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:36.617753 kubelet[2709]: I0513 10:02:36.617720 2709 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 10:02:36.618213 containerd[1592]: time="2025-05-13T10:02:36.618166888Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 10:02:36.618596 kubelet[2709]: I0513 10:02:36.618558 2709 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 10:02:37.623085 systemd[1]: Created slice kubepods-besteffort-podadb9d62c_6d56_4c66_9113_3255103998fe.slice - libcontainer container kubepods-besteffort-podadb9d62c_6d56_4c66_9113_3255103998fe.slice.
May 13 10:02:37.624740 kubelet[2709]: W0513 10:02:37.624711 2709 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb9d62c_6d56_4c66_9113_3255103998fe.slice/cpu.weight": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb9d62c_6d56_4c66_9113_3255103998fe.slice/cpu.weight: no such device
May 13 10:02:37.635509 kubelet[2709]: I0513 10:02:37.635468 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-cgroup\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635565 kubelet[2709]: I0513 10:02:37.635513 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-config-path\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635565 kubelet[2709]: I0513 10:02:37.635529 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d718f05-27e4-4b02-b35f-155151f52c3e-hubble-tls\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635565 kubelet[2709]: I0513 10:02:37.635545 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbjk2\" (UniqueName: \"kubernetes.io/projected/5d718f05-27e4-4b02-b35f-155151f52c3e-kube-api-access-fbjk2\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635638 kubelet[2709]: I0513 10:02:37.635571 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62jhs\" (UniqueName: \"kubernetes.io/projected/adb9d62c-6d56-4c66-9113-3255103998fe-kube-api-access-62jhs\") pod \"kube-proxy-52s9k\" (UID: \"adb9d62c-6d56-4c66-9113-3255103998fe\") " pod="kube-system/kube-proxy-52s9k"
May 13 10:02:37.635638 kubelet[2709]: I0513 10:02:37.635586 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-hostproc\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635638 kubelet[2709]: I0513 10:02:37.635602 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-etc-cni-netd\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635713 kubelet[2709]: I0513 10:02:37.635647 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-host-proc-sys-net\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635713 kubelet[2709]: I0513 10:02:37.635687 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-xtables-lock\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635713 kubelet[2709]: I0513 10:02:37.635702 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-host-proc-sys-kernel\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635784 kubelet[2709]: I0513 10:02:37.635726 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adb9d62c-6d56-4c66-9113-3255103998fe-xtables-lock\") pod \"kube-proxy-52s9k\" (UID: \"adb9d62c-6d56-4c66-9113-3255103998fe\") " pod="kube-system/kube-proxy-52s9k"
May 13 10:02:37.635784 kubelet[2709]: I0513 10:02:37.635744 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-run\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635784 kubelet[2709]: I0513 10:02:37.635758 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/adb9d62c-6d56-4c66-9113-3255103998fe-kube-proxy\") pod \"kube-proxy-52s9k\" (UID: \"adb9d62c-6d56-4c66-9113-3255103998fe\") " pod="kube-system/kube-proxy-52s9k"
May 13 10:02:37.635856 kubelet[2709]: I0513 10:02:37.635826 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-bpf-maps\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635893 kubelet[2709]: I0513 10:02:37.635874 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cni-path\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635923 kubelet[2709]: I0513 10:02:37.635899 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-lib-modules\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635948 kubelet[2709]: I0513 10:02:37.635919 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d718f05-27e4-4b02-b35f-155151f52c3e-clustermesh-secrets\") pod \"cilium-c7nzj\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") " pod="kube-system/cilium-c7nzj"
May 13 10:02:37.635973 kubelet[2709]: I0513 10:02:37.635948 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adb9d62c-6d56-4c66-9113-3255103998fe-lib-modules\") pod \"kube-proxy-52s9k\" (UID: \"adb9d62c-6d56-4c66-9113-3255103998fe\") " pod="kube-system/kube-proxy-52s9k"
May 13 10:02:37.650863 systemd[1]: Created slice kubepods-burstable-pod5d718f05_27e4_4b02_b35f_155151f52c3e.slice - libcontainer container kubepods-burstable-pod5d718f05_27e4_4b02_b35f_155151f52c3e.slice.
May 13 10:02:38.093075 systemd[1]: Created slice kubepods-besteffort-pod1137b159_ae20_4dd5_b1aa_0dfaa75b7b84.slice - libcontainer container kubepods-besteffort-pod1137b159_ae20_4dd5_b1aa_0dfaa75b7b84.slice.
May 13 10:02:38.140108 kubelet[2709]: I0513 10:02:38.140020 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1137b159-ae20-4dd5-b1aa-0dfaa75b7b84-cilium-config-path\") pod \"cilium-operator-5d85765b45-npg4r\" (UID: \"1137b159-ae20-4dd5-b1aa-0dfaa75b7b84\") " pod="kube-system/cilium-operator-5d85765b45-npg4r"
May 13 10:02:38.140108 kubelet[2709]: I0513 10:02:38.140074 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgrrn\" (UniqueName: \"kubernetes.io/projected/1137b159-ae20-4dd5-b1aa-0dfaa75b7b84-kube-api-access-rgrrn\") pod \"cilium-operator-5d85765b45-npg4r\" (UID: \"1137b159-ae20-4dd5-b1aa-0dfaa75b7b84\") " pod="kube-system/cilium-operator-5d85765b45-npg4r"
May 13 10:02:38.249958 kubelet[2709]: E0513 10:02:38.249922 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:38.250739 containerd[1592]: time="2025-05-13T10:02:38.250674662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52s9k,Uid:adb9d62c-6d56-4c66-9113-3255103998fe,Namespace:kube-system,Attempt:0,}"
May 13 10:02:38.254016 kubelet[2709]: E0513 10:02:38.253969 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:38.254388 containerd[1592]: time="2025-05-13T10:02:38.254351939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7nzj,Uid:5d718f05-27e4-4b02-b35f-155151f52c3e,Namespace:kube-system,Attempt:0,}"
May 13 10:02:38.254945 kubelet[2709]: E0513 10:02:38.254922 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:38.328026 kubelet[2709]: E0513 10:02:38.327976 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:38.401088 kubelet[2709]: E0513 10:02:38.400968 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:38.401486 containerd[1592]: time="2025-05-13T10:02:38.401437345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-npg4r,Uid:1137b159-ae20-4dd5-b1aa-0dfaa75b7b84,Namespace:kube-system,Attempt:0,}"
May 13 10:02:38.657514 containerd[1592]: time="2025-05-13T10:02:38.657392553Z" level=info msg="connecting to shim e7f51d95d581efc7f5168e04219d7f4bd9c8fbf01b6bb9876b446e5083c63ed7" address="unix:///run/containerd/s/71bd0cf7e3a3275b6b9ef94b482d96e3f64bbda8dd7858ed5d31bf2d4ee71935" namespace=k8s.io protocol=ttrpc version=3
May 13 10:02:38.669234 containerd[1592]: time="2025-05-13T10:02:38.669144486Z" level=info msg="connecting to shim b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e" address="unix:///run/containerd/s/3cd72bc2ac54bd2e38cb74e51d6549db24c469d9c5a75e0f7ab0ab9febeda454" namespace=k8s.io protocol=ttrpc version=3
May 13 10:02:38.675008 containerd[1592]: time="2025-05-13T10:02:38.674931260Z" level=info msg="connecting to shim 5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b" address="unix:///run/containerd/s/c540ebefd842673549c28190a198e1106229ac6164db461785953f7759ceba8f" namespace=k8s.io protocol=ttrpc version=3
May 13 10:02:38.692587 systemd[1]: Started cri-containerd-e7f51d95d581efc7f5168e04219d7f4bd9c8fbf01b6bb9876b446e5083c63ed7.scope - libcontainer container e7f51d95d581efc7f5168e04219d7f4bd9c8fbf01b6bb9876b446e5083c63ed7.
May 13 10:02:38.696440 systemd[1]: Started cri-containerd-b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e.scope - libcontainer container b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e.
May 13 10:02:38.701616 systemd[1]: Started cri-containerd-5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b.scope - libcontainer container 5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b.
May 13 10:02:38.728289 containerd[1592]: time="2025-05-13T10:02:38.728210576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7nzj,Uid:5d718f05-27e4-4b02-b35f-155151f52c3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\""
May 13 10:02:38.730261 kubelet[2709]: E0513 10:02:38.730168 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:38.732070 containerd[1592]: time="2025-05-13T10:02:38.732018057Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 13 10:02:38.732339 containerd[1592]: time="2025-05-13T10:02:38.732029074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52s9k,Uid:adb9d62c-6d56-4c66-9113-3255103998fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7f51d95d581efc7f5168e04219d7f4bd9c8fbf01b6bb9876b446e5083c63ed7\""
May 13 10:02:38.732808 kubelet[2709]: E0513 10:02:38.732758 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:38.735359 containerd[1592]: time="2025-05-13T10:02:38.735285736Z" level=info msg="CreateContainer within sandbox \"e7f51d95d581efc7f5168e04219d7f4bd9c8fbf01b6bb9876b446e5083c63ed7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 10:02:38.755014 containerd[1592]: time="2025-05-13T10:02:38.754277730Z" level=info msg="Container fa0346be06894649fa969f888931f3a89dd53be0361dc77d1d4579ec9cabb314: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:38.763982 containerd[1592]: time="2025-05-13T10:02:38.763932799Z" level=info msg="CreateContainer within sandbox \"e7f51d95d581efc7f5168e04219d7f4bd9c8fbf01b6bb9876b446e5083c63ed7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fa0346be06894649fa969f888931f3a89dd53be0361dc77d1d4579ec9cabb314\""
May 13 10:02:38.764144 containerd[1592]: time="2025-05-13T10:02:38.764096147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-npg4r,Uid:1137b159-ae20-4dd5-b1aa-0dfaa75b7b84,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b\""
May 13 10:02:38.764865 kubelet[2709]: E0513 10:02:38.764811 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:38.765440 containerd[1592]: time="2025-05-13T10:02:38.765392693Z" level=info msg="StartContainer for \"fa0346be06894649fa969f888931f3a89dd53be0361dc77d1d4579ec9cabb314\""
May 13 10:02:38.766637 containerd[1592]: time="2025-05-13T10:02:38.766615842Z" level=info msg="connecting to shim fa0346be06894649fa969f888931f3a89dd53be0361dc77d1d4579ec9cabb314" address="unix:///run/containerd/s/71bd0cf7e3a3275b6b9ef94b482d96e3f64bbda8dd7858ed5d31bf2d4ee71935" protocol=ttrpc version=3
May 13 10:02:38.791667 systemd[1]: Started cri-containerd-fa0346be06894649fa969f888931f3a89dd53be0361dc77d1d4579ec9cabb314.scope - libcontainer container fa0346be06894649fa969f888931f3a89dd53be0361dc77d1d4579ec9cabb314.
May 13 10:02:38.833376 containerd[1592]: time="2025-05-13T10:02:38.833291217Z" level=info msg="StartContainer for \"fa0346be06894649fa969f888931f3a89dd53be0361dc77d1d4579ec9cabb314\" returns successfully"
May 13 10:02:39.333686 kubelet[2709]: E0513 10:02:39.333651 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:39.342319 kubelet[2709]: I0513 10:02:39.342248 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-52s9k" podStartSLOduration=2.34222667 podStartE2EDuration="2.34222667s" podCreationTimestamp="2025-05-13 10:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:39.34192508 +0000 UTC m=+8.107276948" watchObservedRunningTime="2025-05-13 10:02:39.34222667 +0000 UTC m=+8.107578518"
May 13 10:02:42.916625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4240141134.mount: Deactivated successfully.
May 13 10:02:44.705163 update_engine[1565]: I20250513 10:02:44.705029 1565 update_attempter.cc:509] Updating boot flags...
May 13 10:02:45.486246 kubelet[2709]: E0513 10:02:45.486192 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:45.585750 kubelet[2709]: E0513 10:02:45.486447 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:46.959776 containerd[1592]: time="2025-05-13T10:02:46.959692056Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:46.960808 containerd[1592]: time="2025-05-13T10:02:46.960771536Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 13 10:02:46.961930 containerd[1592]: time="2025-05-13T10:02:46.961894759Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:46.963204 containerd[1592]: time="2025-05-13T10:02:46.963165655Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.231085995s"
May 13 10:02:46.963269 containerd[1592]: time="2025-05-13T10:02:46.963206790Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 13 10:02:46.964178 containerd[1592]: time="2025-05-13T10:02:46.964143149Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 13 10:02:46.965277 containerd[1592]: time="2025-05-13T10:02:46.965246264Z" level=info msg="CreateContainer within sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 10:02:46.978821 containerd[1592]: time="2025-05-13T10:02:46.978759764Z" level=info msg="Container 49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:46.983395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29407809.mount: Deactivated successfully.
May 13 10:02:46.988860 containerd[1592]: time="2025-05-13T10:02:46.988815041Z" level=info msg="CreateContainer within sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\""
May 13 10:02:46.989480 containerd[1592]: time="2025-05-13T10:02:46.989351609Z" level=info msg="StartContainer for \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\""
May 13 10:02:46.990332 containerd[1592]: time="2025-05-13T10:02:46.990304244Z" level=info msg="connecting to shim 49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134" address="unix:///run/containerd/s/3cd72bc2ac54bd2e38cb74e51d6549db24c469d9c5a75e0f7ab0ab9febeda454" protocol=ttrpc version=3
May 13 10:02:47.036638 systemd[1]: Started cri-containerd-49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134.scope - libcontainer container 49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134.
May 13 10:02:47.070121 containerd[1592]: time="2025-05-13T10:02:47.070075212Z" level=info msg="StartContainer for \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\" returns successfully"
May 13 10:02:47.081497 systemd[1]: cri-containerd-49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134.scope: Deactivated successfully.
May 13 10:02:47.083287 containerd[1592]: time="2025-05-13T10:02:47.083239768Z" level=info msg="received exit event container_id:\"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\" id:\"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\" pid:3145 exited_at:{seconds:1747130567 nanos:82822937}"
May 13 10:02:47.083466 containerd[1592]: time="2025-05-13T10:02:47.083311814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\" id:\"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\" pid:3145 exited_at:{seconds:1747130567 nanos:82822937}"
May 13 10:02:47.103581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134-rootfs.mount: Deactivated successfully.
May 13 10:02:47.487002 kubelet[2709]: E0513 10:02:47.486963 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:47.489149 containerd[1592]: time="2025-05-13T10:02:47.489084270Z" level=info msg="CreateContainer within sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 10:02:47.745252 containerd[1592]: time="2025-05-13T10:02:47.745135148Z" level=info msg="Container 6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:47.806819 containerd[1592]: time="2025-05-13T10:02:47.806736037Z" level=info msg="CreateContainer within sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\""
May 13 10:02:47.807480 containerd[1592]: time="2025-05-13T10:02:47.807428683Z" level=info msg="StartContainer for \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\""
May 13 10:02:47.808620 containerd[1592]: time="2025-05-13T10:02:47.808591863Z" level=info msg="connecting to shim 6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3" address="unix:///run/containerd/s/3cd72bc2ac54bd2e38cb74e51d6549db24c469d9c5a75e0f7ab0ab9febeda454" protocol=ttrpc version=3
May 13 10:02:47.835722 systemd[1]: Started cri-containerd-6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3.scope - libcontainer container 6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3.
May 13 10:02:47.872106 containerd[1592]: time="2025-05-13T10:02:47.872048649Z" level=info msg="StartContainer for \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\" returns successfully"
May 13 10:02:47.886283 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 10:02:47.886583 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 10:02:47.886823 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 13 10:02:47.888711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 10:02:47.890733 systemd[1]: cri-containerd-6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3.scope: Deactivated successfully.
May 13 10:02:47.891382 containerd[1592]: time="2025-05-13T10:02:47.891198908Z" level=info msg="received exit event container_id:\"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\" id:\"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\" pid:3190 exited_at:{seconds:1747130567 nanos:890879281}"
May 13 10:02:47.891966 containerd[1592]: time="2025-05-13T10:02:47.891581670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\" id:\"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\" pid:3190 exited_at:{seconds:1747130567 nanos:890879281}"
May 13 10:02:47.928596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 10:02:48.491509 kubelet[2709]: E0513 10:02:48.491473 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:48.493891 containerd[1592]: time="2025-05-13T10:02:48.493833760Z" level=info msg="CreateContainer within sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 10:02:48.508526 containerd[1592]: time="2025-05-13T10:02:48.508477468Z" level=info msg="Container 1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:48.518759 containerd[1592]: time="2025-05-13T10:02:48.518690487Z" level=info msg="CreateContainer within sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\""
May 13 10:02:48.519299 containerd[1592]: time="2025-05-13T10:02:48.519262474Z" level=info msg="StartContainer for \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\""
May 13 10:02:48.520685 containerd[1592]: time="2025-05-13T10:02:48.520650374Z" level=info msg="connecting to shim 1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827" address="unix:///run/containerd/s/3cd72bc2ac54bd2e38cb74e51d6549db24c469d9c5a75e0f7ab0ab9febeda454" protocol=ttrpc version=3
May 13 10:02:48.544563 systemd[1]: Started cri-containerd-1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827.scope - libcontainer container 1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827.
May 13 10:02:48.584115 systemd[1]: cri-containerd-1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827.scope: Deactivated successfully.
May 13 10:02:48.584660 containerd[1592]: time="2025-05-13T10:02:48.584619682Z" level=info msg="StartContainer for \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\" returns successfully"
May 13 10:02:48.585547 containerd[1592]: time="2025-05-13T10:02:48.585518144Z" level=info msg="received exit event container_id:\"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\" id:\"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\" pid:3236 exited_at:{seconds:1747130568 nanos:585178719}"
May 13 10:02:48.585638 containerd[1592]: time="2025-05-13T10:02:48.585565302Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\" id:\"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\" pid:3236 exited_at:{seconds:1747130568 nanos:585178719}"
May 13 10:02:48.607573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827-rootfs.mount: Deactivated successfully.
May 13 10:02:49.276713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609452533.mount: Deactivated successfully.
May 13 10:02:49.499100 kubelet[2709]: E0513 10:02:49.499014 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:49.504008 containerd[1592]: time="2025-05-13T10:02:49.503721437Z" level=info msg="CreateContainer within sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 10:02:49.516697 containerd[1592]: time="2025-05-13T10:02:49.516616288Z" level=info msg="Container 34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:49.527603 containerd[1592]: time="2025-05-13T10:02:49.527478745Z" level=info msg="CreateContainer within sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\""
May 13 10:02:49.528523 containerd[1592]: time="2025-05-13T10:02:49.528387232Z" level=info msg="StartContainer for \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\""
May 13 10:02:49.529816 containerd[1592]: time="2025-05-13T10:02:49.529783175Z" level=info msg="connecting to shim 34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e" address="unix:///run/containerd/s/3cd72bc2ac54bd2e38cb74e51d6549db24c469d9c5a75e0f7ab0ab9febeda454" protocol=ttrpc version=3
May 13 10:02:49.557594 systemd[1]: Started cri-containerd-34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e.scope - libcontainer container 34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e.
May 13 10:02:49.593603 systemd[1]: cri-containerd-34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e.scope: Deactivated successfully.
May 13 10:02:49.594981 containerd[1592]: time="2025-05-13T10:02:49.594824303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\" id:\"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\" pid:3287 exited_at:{seconds:1747130569 nanos:594572031}"
May 13 10:02:49.597898 containerd[1592]: time="2025-05-13T10:02:49.597857794Z" level=info msg="received exit event container_id:\"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\" id:\"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\" pid:3287 exited_at:{seconds:1747130569 nanos:594572031}"
May 13 10:02:49.599631 containerd[1592]: time="2025-05-13T10:02:49.599601213Z" level=info msg="StartContainer for \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\" returns successfully"
May 13 10:02:49.798449 containerd[1592]: time="2025-05-13T10:02:49.798306691Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:49.801988 containerd[1592]: time="2025-05-13T10:02:49.801948822Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 13 10:02:49.803036 containerd[1592]: time="2025-05-13T10:02:49.803001016Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 10:02:49.804541 containerd[1592]: time="2025-05-13T10:02:49.804499129Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.840326463s"
May 13 10:02:49.804617 containerd[1592]: time="2025-05-13T10:02:49.804547138Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 13 10:02:49.806247 containerd[1592]: time="2025-05-13T10:02:49.806215878Z" level=info msg="CreateContainer within sandbox \"5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 10:02:49.815772 containerd[1592]: time="2025-05-13T10:02:49.815734992Z" level=info msg="Container d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:49.823258 containerd[1592]: time="2025-05-13T10:02:49.823218434Z" level=info msg="CreateContainer within sandbox \"5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\""
May 13 10:02:49.823824 containerd[1592]: time="2025-05-13T10:02:49.823749517Z" level=info msg="StartContainer for \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\""
May 13 10:02:49.824560 containerd[1592]: time="2025-05-13T10:02:49.824487599Z" level=info msg="connecting to shim d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987" address="unix:///run/containerd/s/c540ebefd842673549c28190a198e1106229ac6164db461785953f7759ceba8f" protocol=ttrpc version=3
May 13 10:02:49.848569 systemd[1]: Started cri-containerd-d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987.scope - libcontainer container d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987.
May 13 10:02:49.882939 containerd[1592]: time="2025-05-13T10:02:49.882885498Z" level=info msg="StartContainer for \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" returns successfully"
May 13 10:02:50.274364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e-rootfs.mount: Deactivated successfully.
May 13 10:02:50.502594 kubelet[2709]: E0513 10:02:50.502523 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:50.506362 kubelet[2709]: E0513 10:02:50.506320 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:50.508015 containerd[1592]: time="2025-05-13T10:02:50.507977880Z" level=info msg="CreateContainer within sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 10:02:50.843670 containerd[1592]: time="2025-05-13T10:02:50.839596433Z" level=info msg="Container fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:50.842809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896724751.mount: Deactivated successfully.
May 13 10:02:50.899502 kubelet[2709]: I0513 10:02:50.899429 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-npg4r" podStartSLOduration=2.859493246 podStartE2EDuration="13.899345053s" podCreationTimestamp="2025-05-13 10:02:37 +0000 UTC" firstStartedPulling="2025-05-13 10:02:38.765247813 +0000 UTC m=+7.530599671" lastFinishedPulling="2025-05-13 10:02:49.80509962 +0000 UTC m=+18.570451478" observedRunningTime="2025-05-13 10:02:50.898329782 +0000 UTC m=+19.663681630" watchObservedRunningTime="2025-05-13 10:02:50.899345053 +0000 UTC m=+19.664696911"
May 13 10:02:50.947176 containerd[1592]: time="2025-05-13T10:02:50.947107595Z" level=info msg="CreateContainer within sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\""
May 13 10:02:50.948108 containerd[1592]: time="2025-05-13T10:02:50.948077715Z" level=info msg="StartContainer for \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\""
May 13 10:02:50.949269 containerd[1592]: time="2025-05-13T10:02:50.949239767Z" level=info msg="connecting to shim fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df" address="unix:///run/containerd/s/3cd72bc2ac54bd2e38cb74e51d6549db24c469d9c5a75e0f7ab0ab9febeda454" protocol=ttrpc version=3
May 13 10:02:50.979768 systemd[1]: Started cri-containerd-fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df.scope - libcontainer container fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df.
May 13 10:02:51.180727 containerd[1592]: time="2025-05-13T10:02:51.180580455Z" level=info msg="StartContainer for \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" returns successfully"
May 13 10:02:51.267597 containerd[1592]: time="2025-05-13T10:02:51.267401913Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" id:\"2224b3fe8d16bc949b2d682ebafac756db6382e6eb7a0bbe844c833e7c1eee75\" pid:3405 exited_at:{seconds:1747130571 nanos:266464762}"
May 13 10:02:51.336137 kubelet[2709]: I0513 10:02:51.336064 2709 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 13 10:02:51.394697 systemd[1]: Created slice kubepods-burstable-pod6f49ee38_056c_4a49_b31c_15f8cb491386.slice - libcontainer container kubepods-burstable-pod6f49ee38_056c_4a49_b31c_15f8cb491386.slice.
May 13 10:02:51.404711 systemd[1]: Created slice kubepods-burstable-podac3a7171_7ec0_4f65_bd14_e8c3317bedcc.slice - libcontainer container kubepods-burstable-podac3a7171_7ec0_4f65_bd14_e8c3317bedcc.slice.
May 13 10:02:51.417524 kubelet[2709]: I0513 10:02:51.416462 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmfmd\" (UniqueName: \"kubernetes.io/projected/6f49ee38-056c-4a49-b31c-15f8cb491386-kube-api-access-qmfmd\") pod \"coredns-6f6b679f8f-5d5gr\" (UID: \"6f49ee38-056c-4a49-b31c-15f8cb491386\") " pod="kube-system/coredns-6f6b679f8f-5d5gr"
May 13 10:02:51.417704 kubelet[2709]: I0513 10:02:51.417558 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmpp\" (UniqueName: \"kubernetes.io/projected/ac3a7171-7ec0-4f65-bd14-e8c3317bedcc-kube-api-access-blmpp\") pod \"coredns-6f6b679f8f-cqg2l\" (UID: \"ac3a7171-7ec0-4f65-bd14-e8c3317bedcc\") " pod="kube-system/coredns-6f6b679f8f-cqg2l"
May 13 10:02:51.417704 kubelet[2709]: I0513 10:02:51.417654 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f49ee38-056c-4a49-b31c-15f8cb491386-config-volume\") pod \"coredns-6f6b679f8f-5d5gr\" (UID: \"6f49ee38-056c-4a49-b31c-15f8cb491386\") " pod="kube-system/coredns-6f6b679f8f-5d5gr"
May 13 10:02:51.417704 kubelet[2709]: I0513 10:02:51.417674 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac3a7171-7ec0-4f65-bd14-e8c3317bedcc-config-volume\") pod \"coredns-6f6b679f8f-cqg2l\" (UID: \"ac3a7171-7ec0-4f65-bd14-e8c3317bedcc\") " pod="kube-system/coredns-6f6b679f8f-cqg2l"
May 13 10:02:51.530742 kubelet[2709]: E0513 10:02:51.529583 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:51.531278 kubelet[2709]: E0513 10:02:51.530868 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:51.554849 kubelet[2709]: I0513 10:02:51.554772 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c7nzj" podStartSLOduration=6.322082271 podStartE2EDuration="14.554750615s" podCreationTimestamp="2025-05-13 10:02:37 +0000 UTC" firstStartedPulling="2025-05-13 10:02:38.731332062 +0000 UTC m=+7.496683920" lastFinishedPulling="2025-05-13 10:02:46.964000406 +0000 UTC m=+15.729352264" observedRunningTime="2025-05-13 10:02:51.549177101 +0000 UTC m=+20.314528969" watchObservedRunningTime="2025-05-13 10:02:51.554750615 +0000 UTC m=+20.320102473"
May 13 10:02:51.699292 kubelet[2709]: E0513 10:02:51.699231 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:51.700057 containerd[1592]: time="2025-05-13T10:02:51.700000035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5d5gr,Uid:6f49ee38-056c-4a49-b31c-15f8cb491386,Namespace:kube-system,Attempt:0,}"
May 13 10:02:51.712269 kubelet[2709]: E0513 10:02:51.710492 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:51.712392 containerd[1592]: time="2025-05-13T10:02:51.712000387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cqg2l,Uid:ac3a7171-7ec0-4f65-bd14-e8c3317bedcc,Namespace:kube-system,Attempt:0,}"
May 13 10:02:52.533970 kubelet[2709]: E0513 10:02:52.533927 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:53.334511 systemd-networkd[1490]: cilium_host: Link UP
May 13 10:02:53.334780 systemd-networkd[1490]: cilium_net: Link UP
May 13 10:02:53.335036 systemd-networkd[1490]: cilium_net: Gained carrier
May 13 10:02:53.335272 systemd-networkd[1490]: cilium_host: Gained carrier
May 13 10:02:53.434076 systemd-networkd[1490]: cilium_vxlan: Link UP
May 13 10:02:53.434088 systemd-networkd[1490]: cilium_vxlan: Gained carrier
May 13 10:02:53.535219 kubelet[2709]: E0513 10:02:53.535180 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:53.644435 kernel: NET: Registered PF_ALG protocol family
May 13 10:02:53.898592 systemd-networkd[1490]: cilium_host: Gained IPv6LL
May 13 10:02:54.281581 systemd-networkd[1490]: cilium_net: Gained IPv6LL
May 13 10:02:54.295318 systemd-networkd[1490]: lxc_health: Link UP
May 13 10:02:54.307305 systemd-networkd[1490]: lxc_health: Gained carrier
May 13 10:02:54.537451 kubelet[2709]: E0513 10:02:54.537304 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:54.537789 systemd-networkd[1490]: cilium_vxlan: Gained IPv6LL
May 13 10:02:54.830079 kernel: eth0: renamed from tmp464a5
May 13 10:02:54.830235 kernel: eth0: renamed from tmp0dad1
May 13 10:02:54.831634 systemd-networkd[1490]: lxc844de5f4063f: Link UP
May 13 10:02:54.832080 systemd-networkd[1490]: lxce057d3ac4b34: Link UP
May 13 10:02:54.834592 systemd-networkd[1490]: lxce057d3ac4b34: Gained carrier
May 13 10:02:54.835816 systemd-networkd[1490]: lxc844de5f4063f: Gained carrier
May 13 10:02:55.369883 systemd-networkd[1490]: lxc_health: Gained IPv6LL
May 13 10:02:55.945609 systemd-networkd[1490]: lxc844de5f4063f: Gained IPv6LL
May 13 10:02:56.255948 kubelet[2709]: E0513 10:02:56.255761 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:56.265774 systemd-networkd[1490]: lxce057d3ac4b34: Gained IPv6LL
May 13 10:02:56.541514 kubelet[2709]: E0513 10:02:56.541376 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:58.525167 containerd[1592]: time="2025-05-13T10:02:58.525108560Z" level=info msg="connecting to shim 464a517ee8f0bdfcb4555cd020b3d4a3442df7334c554d00e16a7b1ad9763f3c" address="unix:///run/containerd/s/44a72cedd7743d10aaeaa0529b476d3a699ac76ab63f046eaebf422e993d6df3" namespace=k8s.io protocol=ttrpc version=3
May 13 10:02:58.526182 containerd[1592]: time="2025-05-13T10:02:58.526153306Z" level=info msg="connecting to shim 0dad1adeee2612df3a42508a27067c0292eb3caae21dcd197b2cf16f52e9f944" address="unix:///run/containerd/s/c23ca3a97b8a28cc978f5cafec58d4c8bbc6bd0dfa7c7801750e3d767ef9e2bc" namespace=k8s.io protocol=ttrpc version=3
May 13 10:02:58.561575 systemd[1]: Started cri-containerd-0dad1adeee2612df3a42508a27067c0292eb3caae21dcd197b2cf16f52e9f944.scope - libcontainer container 0dad1adeee2612df3a42508a27067c0292eb3caae21dcd197b2cf16f52e9f944.
May 13 10:02:58.562918 systemd[1]: Started cri-containerd-464a517ee8f0bdfcb4555cd020b3d4a3442df7334c554d00e16a7b1ad9763f3c.scope - libcontainer container 464a517ee8f0bdfcb4555cd020b3d4a3442df7334c554d00e16a7b1ad9763f3c.
May 13 10:02:58.573782 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 10:02:58.579986 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 10:02:58.606819 containerd[1592]: time="2025-05-13T10:02:58.606391312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5d5gr,Uid:6f49ee38-056c-4a49-b31c-15f8cb491386,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dad1adeee2612df3a42508a27067c0292eb3caae21dcd197b2cf16f52e9f944\""
May 13 10:02:58.610650 kubelet[2709]: E0513 10:02:58.610623 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:58.612680 containerd[1592]: time="2025-05-13T10:02:58.612641941Z" level=info msg="CreateContainer within sandbox \"0dad1adeee2612df3a42508a27067c0292eb3caae21dcd197b2cf16f52e9f944\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 10:02:58.650330 containerd[1592]: time="2025-05-13T10:02:58.650270534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cqg2l,Uid:ac3a7171-7ec0-4f65-bd14-e8c3317bedcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"464a517ee8f0bdfcb4555cd020b3d4a3442df7334c554d00e16a7b1ad9763f3c\""
May 13 10:02:58.651098 kubelet[2709]: E0513 10:02:58.651034 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:58.653117 containerd[1592]: time="2025-05-13T10:02:58.653068292Z" level=info msg="CreateContainer within sandbox \"464a517ee8f0bdfcb4555cd020b3d4a3442df7334c554d00e16a7b1ad9763f3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 10:02:58.656344 containerd[1592]: time="2025-05-13T10:02:58.656105292Z" level=info msg="Container 1573f4da3b7b7d70882ad7de8f38738d0dfe059b18f8ad8e4283022f10ed41bf: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:58.664002 containerd[1592]: time="2025-05-13T10:02:58.663955425Z" level=info msg="CreateContainer within sandbox \"0dad1adeee2612df3a42508a27067c0292eb3caae21dcd197b2cf16f52e9f944\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1573f4da3b7b7d70882ad7de8f38738d0dfe059b18f8ad8e4283022f10ed41bf\""
May 13 10:02:58.664535 containerd[1592]: time="2025-05-13T10:02:58.664502115Z" level=info msg="StartContainer for \"1573f4da3b7b7d70882ad7de8f38738d0dfe059b18f8ad8e4283022f10ed41bf\""
May 13 10:02:58.665434 containerd[1592]: time="2025-05-13T10:02:58.665389555Z" level=info msg="connecting to shim 1573f4da3b7b7d70882ad7de8f38738d0dfe059b18f8ad8e4283022f10ed41bf" address="unix:///run/containerd/s/c23ca3a97b8a28cc978f5cafec58d4c8bbc6bd0dfa7c7801750e3d767ef9e2bc" protocol=ttrpc version=3
May 13 10:02:58.667927 containerd[1592]: time="2025-05-13T10:02:58.667898275Z" level=info msg="Container a31ff753079a4009e2943b1591f164a444fe1ed5987557c1166245f87ca9e47b: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:58.690546 systemd[1]: Started cri-containerd-1573f4da3b7b7d70882ad7de8f38738d0dfe059b18f8ad8e4283022f10ed41bf.scope - libcontainer container 1573f4da3b7b7d70882ad7de8f38738d0dfe059b18f8ad8e4283022f10ed41bf.
May 13 10:02:58.700095 containerd[1592]: time="2025-05-13T10:02:58.700032997Z" level=info msg="CreateContainer within sandbox \"464a517ee8f0bdfcb4555cd020b3d4a3442df7334c554d00e16a7b1ad9763f3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a31ff753079a4009e2943b1591f164a444fe1ed5987557c1166245f87ca9e47b\""
May 13 10:02:58.701292 containerd[1592]: time="2025-05-13T10:02:58.701216751Z" level=info msg="StartContainer for \"a31ff753079a4009e2943b1591f164a444fe1ed5987557c1166245f87ca9e47b\""
May 13 10:02:58.703070 containerd[1592]: time="2025-05-13T10:02:58.703014379Z" level=info msg="connecting to shim a31ff753079a4009e2943b1591f164a444fe1ed5987557c1166245f87ca9e47b" address="unix:///run/containerd/s/44a72cedd7743d10aaeaa0529b476d3a699ac76ab63f046eaebf422e993d6df3" protocol=ttrpc version=3
May 13 10:02:58.722612 systemd[1]: Started cri-containerd-a31ff753079a4009e2943b1591f164a444fe1ed5987557c1166245f87ca9e47b.scope - libcontainer container a31ff753079a4009e2943b1591f164a444fe1ed5987557c1166245f87ca9e47b.
May 13 10:02:58.731197 containerd[1592]: time="2025-05-13T10:02:58.731143137Z" level=info msg="StartContainer for \"1573f4da3b7b7d70882ad7de8f38738d0dfe059b18f8ad8e4283022f10ed41bf\" returns successfully"
May 13 10:02:58.757318 containerd[1592]: time="2025-05-13T10:02:58.757279120Z" level=info msg="StartContainer for \"a31ff753079a4009e2943b1591f164a444fe1ed5987557c1166245f87ca9e47b\" returns successfully"
May 13 10:02:59.549856 kubelet[2709]: E0513 10:02:59.549577 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:59.551422 kubelet[2709]: E0513 10:02:59.551378 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:59.838997 kubelet[2709]: I0513 10:02:59.838729 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-cqg2l" podStartSLOduration=22.83870717 podStartE2EDuration="22.83870717s" podCreationTimestamp="2025-05-13 10:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:59.837972926 +0000 UTC m=+28.603324814" watchObservedRunningTime="2025-05-13 10:02:59.83870717 +0000 UTC m=+28.604059029"
May 13 10:03:00.040396 kubelet[2709]: I0513 10:03:00.040106 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5d5gr" podStartSLOduration=23.040081318 podStartE2EDuration="23.040081318s" podCreationTimestamp="2025-05-13 10:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:59.981193484 +0000 UTC m=+28.746545342" watchObservedRunningTime="2025-05-13 10:03:00.040081318 +0000 UTC m=+28.805433176"
May 13 10:03:00.452142 systemd[1]: Started sshd@7-10.0.0.18:22-10.0.0.1:38402.service - OpenSSH per-connection server daemon (10.0.0.1:38402).
May 13 10:03:00.507605 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 38402 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:00.509230 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:00.513819 systemd-logind[1563]: New session 8 of user core.
May 13 10:03:00.524569 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 10:03:00.555478 kubelet[2709]: E0513 10:03:00.555442 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:03:00.555620 kubelet[2709]: E0513 10:03:00.555606 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:03:00.656310 sshd[4051]: Connection closed by 10.0.0.1 port 38402
May 13 10:03:00.656665 sshd-session[4049]: pam_unix(sshd:session): session closed for user core
May 13 10:03:00.660288 systemd[1]: sshd@7-10.0.0.18:22-10.0.0.1:38402.service: Deactivated successfully.
May 13 10:03:00.662328 systemd[1]: session-8.scope: Deactivated successfully.
May 13 10:03:00.663913 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit.
May 13 10:03:00.665186 systemd-logind[1563]: Removed session 8.
May 13 10:03:01.567461 kubelet[2709]: E0513 10:03:01.567400 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:03:01.567461 kubelet[2709]: E0513 10:03:01.567478 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:03:05.672113 systemd[1]: Started sshd@8-10.0.0.18:22-10.0.0.1:54306.service - OpenSSH per-connection server daemon (10.0.0.1:54306).
May 13 10:03:05.735276 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 54306 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:05.736934 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:05.741640 systemd-logind[1563]: New session 9 of user core.
May 13 10:03:05.755608 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 10:03:05.871627 sshd[4071]: Connection closed by 10.0.0.1 port 54306
May 13 10:03:05.871971 sshd-session[4069]: pam_unix(sshd:session): session closed for user core
May 13 10:03:05.876956 systemd[1]: sshd@8-10.0.0.18:22-10.0.0.1:54306.service: Deactivated successfully.
May 13 10:03:05.879445 systemd[1]: session-9.scope: Deactivated successfully.
May 13 10:03:05.880311 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit.
May 13 10:03:05.881994 systemd-logind[1563]: Removed session 9.
May 13 10:03:10.885149 systemd[1]: Started sshd@9-10.0.0.18:22-10.0.0.1:54312.service - OpenSSH per-connection server daemon (10.0.0.1:54312).
May 13 10:03:10.943000 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 54312 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:10.944652 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:10.948730 systemd-logind[1563]: New session 10 of user core.
May 13 10:03:10.958556 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 10:03:11.071279 sshd[4090]: Connection closed by 10.0.0.1 port 54312
May 13 10:03:11.071579 sshd-session[4088]: pam_unix(sshd:session): session closed for user core
May 13 10:03:11.075543 systemd[1]: sshd@9-10.0.0.18:22-10.0.0.1:54312.service: Deactivated successfully.
May 13 10:03:11.077521 systemd[1]: session-10.scope: Deactivated successfully.
May 13 10:03:11.078417 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit.
May 13 10:03:11.079656 systemd-logind[1563]: Removed session 10.
May 13 10:03:16.085962 systemd[1]: Started sshd@10-10.0.0.18:22-10.0.0.1:47110.service - OpenSSH per-connection server daemon (10.0.0.1:47110).
May 13 10:03:16.198112 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 47110 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:16.199883 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:16.204663 systemd-logind[1563]: New session 11 of user core.
May 13 10:03:16.215547 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 10:03:16.428562 sshd[4106]: Connection closed by 10.0.0.1 port 47110
May 13 10:03:16.428806 sshd-session[4104]: pam_unix(sshd:session): session closed for user core
May 13 10:03:16.432603 systemd[1]: sshd@10-10.0.0.18:22-10.0.0.1:47110.service: Deactivated successfully.
May 13 10:03:16.434612 systemd[1]: session-11.scope: Deactivated successfully.
May 13 10:03:16.435450 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit.
May 13 10:03:16.436944 systemd-logind[1563]: Removed session 11.
May 13 10:03:21.443599 systemd[1]: Started sshd@11-10.0.0.18:22-10.0.0.1:47124.service - OpenSSH per-connection server daemon (10.0.0.1:47124).
May 13 10:03:21.506838 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 47124 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:21.508346 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:21.512824 systemd-logind[1563]: New session 12 of user core.
May 13 10:03:21.521623 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 10:03:21.629951 sshd[4122]: Connection closed by 10.0.0.1 port 47124
May 13 10:03:21.630373 sshd-session[4120]: pam_unix(sshd:session): session closed for user core
May 13 10:03:21.645369 systemd[1]: sshd@11-10.0.0.18:22-10.0.0.1:47124.service: Deactivated successfully.
May 13 10:03:21.647739 systemd[1]: session-12.scope: Deactivated successfully.
May 13 10:03:21.648888 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit.
May 13 10:03:21.652714 systemd[1]: Started sshd@12-10.0.0.18:22-10.0.0.1:47128.service - OpenSSH per-connection server daemon (10.0.0.1:47128).
May 13 10:03:21.653441 systemd-logind[1563]: Removed session 12.
May 13 10:03:21.712476 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 47128 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:21.713926 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:21.718304 systemd-logind[1563]: New session 13 of user core.
May 13 10:03:21.729544 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 10:03:21.858731 sshd[4139]: Connection closed by 10.0.0.1 port 47128
May 13 10:03:21.859036 sshd-session[4137]: pam_unix(sshd:session): session closed for user core
May 13 10:03:21.869379 systemd[1]: sshd@12-10.0.0.18:22-10.0.0.1:47128.service: Deactivated successfully.
May 13 10:03:21.872629 systemd[1]: session-13.scope: Deactivated successfully.
May 13 10:03:21.874500 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit.
May 13 10:03:21.879100 systemd-logind[1563]: Removed session 13.
May 13 10:03:21.880220 systemd[1]: Started sshd@13-10.0.0.18:22-10.0.0.1:47130.service - OpenSSH per-connection server daemon (10.0.0.1:47130).
May 13 10:03:21.934363 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 47130 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:21.935753 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:21.939854 systemd-logind[1563]: New session 14 of user core.
May 13 10:03:21.950566 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 10:03:22.124717 sshd[4152]: Connection closed by 10.0.0.1 port 47130
May 13 10:03:22.124963 sshd-session[4150]: pam_unix(sshd:session): session closed for user core
May 13 10:03:22.128664 systemd[1]: sshd@13-10.0.0.18:22-10.0.0.1:47130.service: Deactivated successfully.
May 13 10:03:22.130577 systemd[1]: session-14.scope: Deactivated successfully.
May 13 10:03:22.131477 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit.
May 13 10:03:22.132639 systemd-logind[1563]: Removed session 14.
May 13 10:03:27.147991 systemd[1]: Started sshd@14-10.0.0.18:22-10.0.0.1:58610.service - OpenSSH per-connection server daemon (10.0.0.1:58610).
May 13 10:03:27.201735 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 58610 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:27.203678 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:27.209860 systemd-logind[1563]: New session 15 of user core.
May 13 10:03:27.218582 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 10:03:27.336435 sshd[4168]: Connection closed by 10.0.0.1 port 58610
May 13 10:03:27.336783 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
May 13 10:03:27.342142 systemd[1]: sshd@14-10.0.0.18:22-10.0.0.1:58610.service: Deactivated successfully.
May 13 10:03:27.344610 systemd[1]: session-15.scope: Deactivated successfully.
May 13 10:03:27.345373 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit.
May 13 10:03:27.346663 systemd-logind[1563]: Removed session 15.
May 13 10:03:32.351235 systemd[1]: Started sshd@15-10.0.0.18:22-10.0.0.1:58618.service - OpenSSH per-connection server daemon (10.0.0.1:58618).
May 13 10:03:32.409200 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 58618 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:32.410643 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:32.415708 systemd-logind[1563]: New session 16 of user core.
May 13 10:03:32.429558 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 10:03:32.534819 sshd[4186]: Connection closed by 10.0.0.1 port 58618
May 13 10:03:32.535179 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
May 13 10:03:32.539156 systemd[1]: sshd@15-10.0.0.18:22-10.0.0.1:58618.service: Deactivated successfully.
May 13 10:03:32.540920 systemd[1]: session-16.scope: Deactivated successfully.
May 13 10:03:32.541622 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit.
May 13 10:03:32.542900 systemd-logind[1563]: Removed session 16.
May 13 10:03:37.551110 systemd[1]: Started sshd@16-10.0.0.18:22-10.0.0.1:48018.service - OpenSSH per-connection server daemon (10.0.0.1:48018).
May 13 10:03:37.616226 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 48018 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:37.618329 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:37.623250 systemd-logind[1563]: New session 17 of user core.
May 13 10:03:37.633750 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 10:03:37.743301 sshd[4202]: Connection closed by 10.0.0.1 port 48018
May 13 10:03:37.743735 sshd-session[4200]: pam_unix(sshd:session): session closed for user core
May 13 10:03:37.757783 systemd[1]: sshd@16-10.0.0.18:22-10.0.0.1:48018.service: Deactivated successfully.
May 13 10:03:37.759702 systemd[1]: session-17.scope: Deactivated successfully.
May 13 10:03:37.760503 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit.
May 13 10:03:37.763799 systemd[1]: Started sshd@17-10.0.0.18:22-10.0.0.1:48032.service - OpenSSH per-connection server daemon (10.0.0.1:48032).
May 13 10:03:37.764596 systemd-logind[1563]: Removed session 17.
May 13 10:03:37.821122 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 48032 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:37.822695 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:37.827812 systemd-logind[1563]: New session 18 of user core.
May 13 10:03:37.839619 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 10:03:38.479585 sshd[4217]: Connection closed by 10.0.0.1 port 48032
May 13 10:03:38.480107 sshd-session[4215]: pam_unix(sshd:session): session closed for user core
May 13 10:03:38.493854 systemd[1]: sshd@17-10.0.0.18:22-10.0.0.1:48032.service: Deactivated successfully.
May 13 10:03:38.495823 systemd[1]: session-18.scope: Deactivated successfully.
May 13 10:03:38.496746 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit.
May 13 10:03:38.499015 systemd-logind[1563]: Removed session 18.
May 13 10:03:38.500341 systemd[1]: Started sshd@18-10.0.0.18:22-10.0.0.1:48038.service - OpenSSH per-connection server daemon (10.0.0.1:48038).
May 13 10:03:38.565106 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 48038 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:38.566794 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:38.571952 systemd-logind[1563]: New session 19 of user core.
May 13 10:03:38.581582 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 10:03:40.098631 sshd[4230]: Connection closed by 10.0.0.1 port 48038
May 13 10:03:40.099776 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
May 13 10:03:40.110488 systemd[1]: sshd@18-10.0.0.18:22-10.0.0.1:48038.service: Deactivated successfully.
May 13 10:03:40.112924 systemd[1]: session-19.scope: Deactivated successfully.
May 13 10:03:40.114152 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit.
May 13 10:03:40.117623 systemd[1]: Started sshd@19-10.0.0.18:22-10.0.0.1:48050.service - OpenSSH per-connection server daemon (10.0.0.1:48050).
May 13 10:03:40.121647 systemd-logind[1563]: Removed session 19.
May 13 10:03:40.167990 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 48050 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:40.169533 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:40.174706 systemd-logind[1563]: New session 20 of user core.
May 13 10:03:40.185668 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 10:03:40.414946 sshd[4256]: Connection closed by 10.0.0.1 port 48050
May 13 10:03:40.415604 sshd-session[4254]: pam_unix(sshd:session): session closed for user core
May 13 10:03:40.427288 systemd[1]: sshd@19-10.0.0.18:22-10.0.0.1:48050.service: Deactivated successfully.
May 13 10:03:40.429721 systemd[1]: session-20.scope: Deactivated successfully.
May 13 10:03:40.430914 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit.
May 13 10:03:40.434284 systemd[1]: Started sshd@20-10.0.0.18:22-10.0.0.1:48064.service - OpenSSH per-connection server daemon (10.0.0.1:48064).
May 13 10:03:40.435047 systemd-logind[1563]: Removed session 20.
May 13 10:03:40.494346 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 48064 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:40.496593 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:40.501914 systemd-logind[1563]: New session 21 of user core.
May 13 10:03:40.511661 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 10:03:40.620574 sshd[4269]: Connection closed by 10.0.0.1 port 48064
May 13 10:03:40.620863 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
May 13 10:03:40.624938 systemd[1]: sshd@20-10.0.0.18:22-10.0.0.1:48064.service: Deactivated successfully.
May 13 10:03:40.626732 systemd[1]: session-21.scope: Deactivated successfully.
May 13 10:03:40.627672 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit.
May 13 10:03:40.628787 systemd-logind[1563]: Removed session 21.
May 13 10:03:45.633506 systemd[1]: Started sshd@21-10.0.0.18:22-10.0.0.1:34564.service - OpenSSH per-connection server daemon (10.0.0.1:34564).
May 13 10:03:45.686495 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 34564 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:45.687896 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:45.691990 systemd-logind[1563]: New session 22 of user core.
May 13 10:03:45.701531 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 10:03:45.811059 sshd[4284]: Connection closed by 10.0.0.1 port 34564
May 13 10:03:45.811374 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
May 13 10:03:45.815722 systemd[1]: sshd@21-10.0.0.18:22-10.0.0.1:34564.service: Deactivated successfully.
May 13 10:03:45.817741 systemd[1]: session-22.scope: Deactivated successfully.
May 13 10:03:45.818716 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit.
May 13 10:03:45.820086 systemd-logind[1563]: Removed session 22.
May 13 10:03:48.309057 kubelet[2709]: E0513 10:03:48.309001 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:03:50.828179 systemd[1]: Started sshd@22-10.0.0.18:22-10.0.0.1:34568.service - OpenSSH per-connection server daemon (10.0.0.1:34568).
May 13 10:03:50.895591 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 34568 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:50.897602 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:50.903600 systemd-logind[1563]: New session 23 of user core.
May 13 10:03:50.909533 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 10:03:51.036149 sshd[4302]: Connection closed by 10.0.0.1 port 34568
May 13 10:03:51.036503 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
May 13 10:03:51.041793 systemd[1]: sshd@22-10.0.0.18:22-10.0.0.1:34568.service: Deactivated successfully.
May 13 10:03:51.044226 systemd[1]: session-23.scope: Deactivated successfully.
May 13 10:03:51.045186 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit.
May 13 10:03:51.046586 systemd-logind[1563]: Removed session 23.
May 13 10:03:54.309276 kubelet[2709]: E0513 10:03:54.309216 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:03:56.049863 systemd[1]: Started sshd@23-10.0.0.18:22-10.0.0.1:54336.service - OpenSSH per-connection server daemon (10.0.0.1:54336).
May 13 10:03:56.106723 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 54336 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:03:56.108043 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:03:56.111958 systemd-logind[1563]: New session 24 of user core.
May 13 10:03:56.124546 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 10:03:56.227257 sshd[4318]: Connection closed by 10.0.0.1 port 54336
May 13 10:03:56.227611 sshd-session[4316]: pam_unix(sshd:session): session closed for user core
May 13 10:03:56.231285 systemd[1]: sshd@23-10.0.0.18:22-10.0.0.1:54336.service: Deactivated successfully.
May 13 10:03:56.233153 systemd[1]: session-24.scope: Deactivated successfully.
May 13 10:03:56.234007 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit.
May 13 10:03:56.235281 systemd-logind[1563]: Removed session 24.
May 13 10:03:58.309214 kubelet[2709]: E0513 10:03:58.309173 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:01.248939 systemd[1]: Started sshd@24-10.0.0.18:22-10.0.0.1:54338.service - OpenSSH per-connection server daemon (10.0.0.1:54338).
May 13 10:04:01.304478 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 54338 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:04:01.306051 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:04:01.310125 kubelet[2709]: E0513 10:04:01.310019 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:01.312041 systemd-logind[1563]: New session 25 of user core.
May 13 10:04:01.321587 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 10:04:01.431342 sshd[4333]: Connection closed by 10.0.0.1 port 54338
May 13 10:04:01.431819 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
May 13 10:04:01.443433 systemd[1]: sshd@24-10.0.0.18:22-10.0.0.1:54338.service: Deactivated successfully.
May 13 10:04:01.445473 systemd[1]: session-25.scope: Deactivated successfully.
May 13 10:04:01.446329 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit.
May 13 10:04:01.449673 systemd[1]: Started sshd@25-10.0.0.18:22-10.0.0.1:54340.service - OpenSSH per-connection server daemon (10.0.0.1:54340).
May 13 10:04:01.450444 systemd-logind[1563]: Removed session 25.
May 13 10:04:01.507233 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 54340 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E
May 13 10:04:01.508737 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:04:01.512859 systemd-logind[1563]: New session 26 of user core.
May 13 10:04:01.521526 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 10:04:03.012327 containerd[1592]: time="2025-05-13T10:04:03.012265947Z" level=info msg="StopContainer for \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" with timeout 30 (s)"
May 13 10:04:03.019615 containerd[1592]: time="2025-05-13T10:04:03.019565703Z" level=info msg="Stop container \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" with signal terminated"
May 13 10:04:03.032553 systemd[1]: cri-containerd-d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987.scope: Deactivated successfully.
May 13 10:04:03.034487 containerd[1592]: time="2025-05-13T10:04:03.034445436Z" level=info msg="received exit event container_id:\"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" id:\"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" pid:3328 exited_at:{seconds:1747130643 nanos:33231517}"
May 13 10:04:03.035226 containerd[1592]: time="2025-05-13T10:04:03.035197120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" id:\"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" pid:3328 exited_at:{seconds:1747130643 nanos:33231517}"
May 13 10:04:03.045808 containerd[1592]: time="2025-05-13T10:04:03.045748805Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 10:04:03.046930 containerd[1592]: time="2025-05-13T10:04:03.046893323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" id:\"c1569d91cf8b311b84157c9db80fcc09d66c78b3396fe2df13cd18365fc4d5f9\" pid:4369 exited_at:{seconds:1747130643 nanos:46523883}"
May 13 10:04:03.049140 containerd[1592]: time="2025-05-13T10:04:03.049100613Z" level=info msg="StopContainer for \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" with timeout 2 (s)"
May 13 10:04:03.051511 containerd[1592]: time="2025-05-13T10:04:03.051422221Z" level=info msg="Stop container \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" with signal terminated"
May 13 10:04:03.057288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987-rootfs.mount: Deactivated successfully.
May 13 10:04:03.060258 systemd-networkd[1490]: lxc_health: Link DOWN
May 13 10:04:03.060266 systemd-networkd[1490]: lxc_health: Lost carrier
May 13 10:04:03.102055 systemd[1]: cri-containerd-fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df.scope: Deactivated successfully.
May 13 10:04:03.102461 systemd[1]: cri-containerd-fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df.scope: Consumed 6.769s CPU time, 123.3M memory peak, 364K read from disk, 13.3M written to disk.
May 13 10:04:03.102607 containerd[1592]: time="2025-05-13T10:04:03.102568464Z" level=info msg="received exit event container_id:\"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" id:\"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" pid:3366 exited_at:{seconds:1747130643 nanos:102237537}"
May 13 10:04:03.102873 containerd[1592]: time="2025-05-13T10:04:03.102841852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" id:\"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" pid:3366 exited_at:{seconds:1747130643 nanos:102237537}"
May 13 10:04:03.124301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df-rootfs.mount: Deactivated successfully.
May 13 10:04:03.171912 containerd[1592]: time="2025-05-13T10:04:03.171851920Z" level=info msg="StopContainer for \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" returns successfully"
May 13 10:04:03.180898 containerd[1592]: time="2025-05-13T10:04:03.180867666Z" level=info msg="StopPodSandbox for \"5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b\""
May 13 10:04:03.180995 containerd[1592]: time="2025-05-13T10:04:03.180937358Z" level=info msg="Container to stop \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 10:04:03.181609 containerd[1592]: time="2025-05-13T10:04:03.181566470Z" level=info msg="StopContainer for \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" returns successfully"
May 13 10:04:03.183457 containerd[1592]: time="2025-05-13T10:04:03.182128053Z" level=info msg="StopPodSandbox for \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\""
May 13 10:04:03.183457 containerd[1592]: time="2025-05-13T10:04:03.182207714Z" level=info msg="Container to stop \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 10:04:03.183457 containerd[1592]: time="2025-05-13T10:04:03.182217943Z" level=info msg="Container to stop \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 10:04:03.183457 containerd[1592]: time="2025-05-13T10:04:03.182226440Z" level=info msg="Container to stop \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 10:04:03.183457 containerd[1592]: time="2025-05-13T10:04:03.182236679Z" level=info msg="Container to stop \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 10:04:03.183457 containerd[1592]: time="2025-05-13T10:04:03.182244133Z" level=info msg="Container to stop \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 10:04:03.188810 systemd[1]: cri-containerd-b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e.scope: Deactivated successfully.
May 13 10:04:03.190104 systemd[1]: cri-containerd-5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b.scope: Deactivated successfully.
May 13 10:04:03.190586 containerd[1592]: time="2025-05-13T10:04:03.190541909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b\" id:\"5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b\" pid:2901 exit_status:137 exited_at:{seconds:1747130643 nanos:189865398}"
May 13 10:04:03.216796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e-rootfs.mount: Deactivated successfully.
May 13 10:04:03.221391 containerd[1592]: time="2025-05-13T10:04:03.221360320Z" level=info msg="shim disconnected" id=b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e namespace=k8s.io
May 13 10:04:03.221391 containerd[1592]: time="2025-05-13T10:04:03.221387733Z" level=warning msg="cleaning up after shim disconnected" id=b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e namespace=k8s.io
May 13 10:04:03.221548 containerd[1592]: time="2025-05-13T10:04:03.221401629Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 10:04:03.222843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b-rootfs.mount: Deactivated successfully.
May 13 10:04:03.228738 containerd[1592]: time="2025-05-13T10:04:03.228696265Z" level=info msg="shim disconnected" id=5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b namespace=k8s.io
May 13 10:04:03.228738 containerd[1592]: time="2025-05-13T10:04:03.228732263Z" level=warning msg="cleaning up after shim disconnected" id=5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b namespace=k8s.io
May 13 10:04:03.228939 containerd[1592]: time="2025-05-13T10:04:03.228740489Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 10:04:03.257703 containerd[1592]: time="2025-05-13T10:04:03.257657860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" id:\"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" pid:2893 exit_status:137 exited_at:{seconds:1747130643 nanos:194053000}"
May 13 10:04:03.258335 containerd[1592]: time="2025-05-13T10:04:03.258229212Z" level=info msg="TearDown network for sandbox \"5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b\" successfully"
May 13 10:04:03.258335 containerd[1592]: time="2025-05-13T10:04:03.258262655Z" level=info msg="StopPodSandbox for \"5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b\" returns successfully"
May 13 10:04:03.260689 containerd[1592]: time="2025-05-13T10:04:03.260646179Z" level=info msg="TearDown network for sandbox \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" successfully"
May 13 10:04:03.260782 containerd[1592]: time="2025-05-13T10:04:03.260690072Z" level=info msg="StopPodSandbox for \"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" returns successfully"
May 13 10:04:03.260952 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b-shm.mount: Deactivated successfully.
May 13 10:04:03.261113 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e-shm.mount: Deactivated successfully.
May 13 10:04:03.264753 containerd[1592]: time="2025-05-13T10:04:03.264238243Z" level=info msg="received exit event sandbox_id:\"5ef466826cd9a44728d1fe07432aca1513f418806545d87545ef063d2631549b\" exit_status:137 exited_at:{seconds:1747130643 nanos:189865398}"
May 13 10:04:03.264753 containerd[1592]: time="2025-05-13T10:04:03.264534575Z" level=info msg="received exit event sandbox_id:\"b6580089c9bf79d30047bcfef2201d7dab5abd56fbb1b557d6dbb06b3f226b8e\" exit_status:137 exited_at:{seconds:1747130643 nanos:194053000}"
May 13 10:04:03.349705 kubelet[2709]: I0513 10:04:03.349640 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-host-proc-sys-net\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.349705 kubelet[2709]: I0513 10:04:03.349684 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-xtables-lock\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.349705 kubelet[2709]: I0513 10:04:03.349705 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d718f05-27e4-4b02-b35f-155151f52c3e-clustermesh-secrets\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350210 kubelet[2709]: I0513 10:04:03.349723 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cni-path\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350210 kubelet[2709]: I0513 10:04:03.349738 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-lib-modules\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350210 kubelet[2709]: I0513 10:04:03.349757 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbjk2\" (UniqueName: \"kubernetes.io/projected/5d718f05-27e4-4b02-b35f-155151f52c3e-kube-api-access-fbjk2\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350210 kubelet[2709]: I0513 10:04:03.349770 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-bpf-maps\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350210 kubelet[2709]: I0513 10:04:03.349784 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgrrn\" (UniqueName: \"kubernetes.io/projected/1137b159-ae20-4dd5-b1aa-0dfaa75b7b84-kube-api-access-rgrrn\") pod \"1137b159-ae20-4dd5-b1aa-0dfaa75b7b84\" (UID: \"1137b159-ae20-4dd5-b1aa-0dfaa75b7b84\") "
May 13 10:04:03.350210 kubelet[2709]: I0513 10:04:03.349800 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d718f05-27e4-4b02-b35f-155151f52c3e-hubble-tls\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350352 kubelet[2709]: I0513 10:04:03.349781 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 10:04:03.350352 kubelet[2709]: I0513 10:04:03.349817 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-config-path\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350352 kubelet[2709]: I0513 10:04:03.349830 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-host-proc-sys-kernel\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350352 kubelet[2709]: I0513 10:04:03.349845 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-hostproc\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350352 kubelet[2709]: I0513 10:04:03.349852 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 10:04:03.350497 kubelet[2709]: I0513 10:04:03.349858 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-run\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350497 kubelet[2709]: I0513 10:04:03.349868 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cni-path" (OuterVolumeSpecName: "cni-path") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 10:04:03.350497 kubelet[2709]: I0513 10:04:03.349874 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1137b159-ae20-4dd5-b1aa-0dfaa75b7b84-cilium-config-path\") pod \"1137b159-ae20-4dd5-b1aa-0dfaa75b7b84\" (UID: \"1137b159-ae20-4dd5-b1aa-0dfaa75b7b84\") "
May 13 10:04:03.350497 kubelet[2709]: I0513 10:04:03.349882 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 10:04:03.350497 kubelet[2709]: I0513 10:04:03.349887 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-etc-cni-netd\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350497 kubelet[2709]: I0513 10:04:03.349902 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-cgroup\") pod \"5d718f05-27e4-4b02-b35f-155151f52c3e\" (UID: \"5d718f05-27e4-4b02-b35f-155151f52c3e\") "
May 13 10:04:03.350631 kubelet[2709]: I0513 10:04:03.349942 2709 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 10:04:03.350631 kubelet[2709]: I0513 10:04:03.349952 2709 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 13 10:04:03.350631 kubelet[2709]: I0513 10:04:03.349959 2709 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cni-path\") on node \"localhost\" DevicePath \"\""
May 13 10:04:03.350631 kubelet[2709]: I0513 10:04:03.349968 2709 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 10:04:03.350631 kubelet[2709]: I0513 10:04:03.350003 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 10:04:03.350631 kubelet[2709]: I0513 10:04:03.350330 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 10:04:03.350769 kubelet[2709]: I0513 10:04:03.350358 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-hostproc" (OuterVolumeSpecName: "hostproc") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 10:04:03.352424 kubelet[2709]: I0513 10:04:03.351432 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 10:04:03.354440 kubelet[2709]: I0513 10:04:03.354378 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1137b159-ae20-4dd5-b1aa-0dfaa75b7b84-kube-api-access-rgrrn" (OuterVolumeSpecName: "kube-api-access-rgrrn") pod "1137b159-ae20-4dd5-b1aa-0dfaa75b7b84" (UID: "1137b159-ae20-4dd5-b1aa-0dfaa75b7b84"). InnerVolumeSpecName "kube-api-access-rgrrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 10:04:03.354492 kubelet[2709]: I0513 10:04:03.354457 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 10:04:03.355254 kubelet[2709]: I0513 10:04:03.355224 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 10:04:03.355382 kubelet[2709]: I0513 10:04:03.355364 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d718f05-27e4-4b02-b35f-155151f52c3e-kube-api-access-fbjk2" (OuterVolumeSpecName: "kube-api-access-fbjk2") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "kube-api-access-fbjk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 10:04:03.355474 kubelet[2709]: I0513 10:04:03.355452 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d718f05-27e4-4b02-b35f-155151f52c3e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 10:04:03.355756 kubelet[2709]: I0513 10:04:03.355730 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1137b159-ae20-4dd5-b1aa-0dfaa75b7b84-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1137b159-ae20-4dd5-b1aa-0dfaa75b7b84" (UID: "1137b159-ae20-4dd5-b1aa-0dfaa75b7b84"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 10:04:03.356223 kubelet[2709]: I0513 10:04:03.356174 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 10:04:03.356923 kubelet[2709]: I0513 10:04:03.356894 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d718f05-27e4-4b02-b35f-155151f52c3e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5d718f05-27e4-4b02-b35f-155151f52c3e" (UID: "5d718f05-27e4-4b02-b35f-155151f52c3e"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 10:04:03.450534 kubelet[2709]: I0513 10:04:03.450483 2709 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.450534 kubelet[2709]: I0513 10:04:03.450521 2709 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d718f05-27e4-4b02-b35f-155151f52c3e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.450534 kubelet[2709]: I0513 10:04:03.450532 2709 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fbjk2\" (UniqueName: \"kubernetes.io/projected/5d718f05-27e4-4b02-b35f-155151f52c3e-kube-api-access-fbjk2\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.450534 kubelet[2709]: I0513 10:04:03.450542 2709 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rgrrn\" (UniqueName: \"kubernetes.io/projected/1137b159-ae20-4dd5-b1aa-0dfaa75b7b84-kube-api-access-rgrrn\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.450534 kubelet[2709]: I0513 10:04:03.450550 2709 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d718f05-27e4-4b02-b35f-155151f52c3e-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.450534 kubelet[2709]: I0513 10:04:03.450558 2709 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.450779 kubelet[2709]: I0513 10:04:03.450568 2709 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 
13 10:04:03.450779 kubelet[2709]: I0513 10:04:03.450577 2709 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.450779 kubelet[2709]: I0513 10:04:03.450584 2709 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.450779 kubelet[2709]: I0513 10:04:03.450591 2709 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1137b159-ae20-4dd5-b1aa-0dfaa75b7b84-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.450779 kubelet[2709]: I0513 10:04:03.450599 2709 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.450779 kubelet[2709]: I0513 10:04:03.450606 2709 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d718f05-27e4-4b02-b35f-155151f52c3e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 10:04:03.703833 kubelet[2709]: I0513 10:04:03.703804 2709 scope.go:117] "RemoveContainer" containerID="fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df" May 13 10:04:03.708230 containerd[1592]: time="2025-05-13T10:04:03.708200416Z" level=info msg="RemoveContainer for \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\"" May 13 10:04:03.711008 systemd[1]: Removed slice kubepods-burstable-pod5d718f05_27e4_4b02_b35f_155151f52c3e.slice - libcontainer container kubepods-burstable-pod5d718f05_27e4_4b02_b35f_155151f52c3e.slice. 
May 13 10:04:03.711187 systemd[1]: kubepods-burstable-pod5d718f05_27e4_4b02_b35f_155151f52c3e.slice: Consumed 6.880s CPU time, 123.6M memory peak, 376K read from disk, 13.3M written to disk. May 13 10:04:03.712836 systemd[1]: Removed slice kubepods-besteffort-pod1137b159_ae20_4dd5_b1aa_0dfaa75b7b84.slice - libcontainer container kubepods-besteffort-pod1137b159_ae20_4dd5_b1aa_0dfaa75b7b84.slice. May 13 10:04:03.716760 containerd[1592]: time="2025-05-13T10:04:03.716723147Z" level=info msg="RemoveContainer for \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" returns successfully" May 13 10:04:03.717095 kubelet[2709]: I0513 10:04:03.716951 2709 scope.go:117] "RemoveContainer" containerID="34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e" May 13 10:04:03.718493 containerd[1592]: time="2025-05-13T10:04:03.718457062Z" level=info msg="RemoveContainer for \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\"" May 13 10:04:03.723745 containerd[1592]: time="2025-05-13T10:04:03.723707036Z" level=info msg="RemoveContainer for \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\" returns successfully" May 13 10:04:03.723961 kubelet[2709]: I0513 10:04:03.723936 2709 scope.go:117] "RemoveContainer" containerID="1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827" May 13 10:04:03.725941 containerd[1592]: time="2025-05-13T10:04:03.725908926Z" level=info msg="RemoveContainer for \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\"" May 13 10:04:03.730584 containerd[1592]: time="2025-05-13T10:04:03.730519158Z" level=info msg="RemoveContainer for \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\" returns successfully" May 13 10:04:03.736793 kubelet[2709]: I0513 10:04:03.736735 2709 scope.go:117] "RemoveContainer" containerID="6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3" May 13 10:04:03.743860 containerd[1592]: time="2025-05-13T10:04:03.743807126Z" 
level=info msg="RemoveContainer for \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\"" May 13 10:04:03.747951 containerd[1592]: time="2025-05-13T10:04:03.747929784Z" level=info msg="RemoveContainer for \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\" returns successfully" May 13 10:04:03.748090 kubelet[2709]: I0513 10:04:03.748064 2709 scope.go:117] "RemoveContainer" containerID="49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134" May 13 10:04:03.749747 containerd[1592]: time="2025-05-13T10:04:03.749715597Z" level=info msg="RemoveContainer for \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\"" May 13 10:04:03.755178 containerd[1592]: time="2025-05-13T10:04:03.755149659Z" level=info msg="RemoveContainer for \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\" returns successfully" May 13 10:04:03.755299 kubelet[2709]: I0513 10:04:03.755273 2709 scope.go:117] "RemoveContainer" containerID="fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df" May 13 10:04:03.755492 containerd[1592]: time="2025-05-13T10:04:03.755454586Z" level=error msg="ContainerStatus for \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\": not found" May 13 10:04:03.759207 kubelet[2709]: E0513 10:04:03.759175 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\": not found" containerID="fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df" May 13 10:04:03.760007 kubelet[2709]: I0513 10:04:03.759906 2709 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df"} err="failed to get container status \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb29994f68117a196abfd923a6c46f541caa5f1259be7b9f3e3d0d93a3acc1df\": not found" May 13 10:04:03.760007 kubelet[2709]: I0513 10:04:03.760002 2709 scope.go:117] "RemoveContainer" containerID="34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e" May 13 10:04:03.760281 containerd[1592]: time="2025-05-13T10:04:03.760224431Z" level=error msg="ContainerStatus for \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\": not found" May 13 10:04:03.760363 kubelet[2709]: E0513 10:04:03.760344 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\": not found" containerID="34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e" May 13 10:04:03.760418 kubelet[2709]: I0513 10:04:03.760364 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e"} err="failed to get container status \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"34d9894aaae946c750d860200239e0efe0349135844431c7533ccfedf71f0f2e\": not found" May 13 10:04:03.760418 kubelet[2709]: I0513 10:04:03.760376 2709 scope.go:117] "RemoveContainer" containerID="1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827" May 13 10:04:03.760557 containerd[1592]: 
time="2025-05-13T10:04:03.760527395Z" level=error msg="ContainerStatus for \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\": not found" May 13 10:04:03.760697 kubelet[2709]: E0513 10:04:03.760671 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\": not found" containerID="1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827" May 13 10:04:03.760730 kubelet[2709]: I0513 10:04:03.760701 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827"} err="failed to get container status \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e407d1f7b5ac3e52408fd38f277df93100d35fb12977f8995f619756a006827\": not found" May 13 10:04:03.760730 kubelet[2709]: I0513 10:04:03.760713 2709 scope.go:117] "RemoveContainer" containerID="6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3" May 13 10:04:03.760914 containerd[1592]: time="2025-05-13T10:04:03.760883309Z" level=error msg="ContainerStatus for \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\": not found" May 13 10:04:03.761066 kubelet[2709]: E0513 10:04:03.761037 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\": not 
found" containerID="6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3" May 13 10:04:03.761099 kubelet[2709]: I0513 10:04:03.761076 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3"} err="failed to get container status \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"6818fd3b6491ca9d72c8516f10f7147884cacb610a8aa26179a28124e8bcd7d3\": not found" May 13 10:04:03.761134 kubelet[2709]: I0513 10:04:03.761107 2709 scope.go:117] "RemoveContainer" containerID="49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134" May 13 10:04:03.761349 containerd[1592]: time="2025-05-13T10:04:03.761314576Z" level=error msg="ContainerStatus for \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\": not found" May 13 10:04:03.761465 kubelet[2709]: E0513 10:04:03.761446 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\": not found" containerID="49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134" May 13 10:04:03.761498 kubelet[2709]: I0513 10:04:03.761462 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134"} err="failed to get container status \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\": rpc error: code = NotFound desc = an error occurred when try to find container \"49f002dfe18e6ce9b618c50e634606778a49dc83d33925eab54d5560c41a2134\": not found" May 13 
10:04:03.761498 kubelet[2709]: I0513 10:04:03.761483 2709 scope.go:117] "RemoveContainer" containerID="d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987" May 13 10:04:03.763274 containerd[1592]: time="2025-05-13T10:04:03.762833142Z" level=info msg="RemoveContainer for \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\"" May 13 10:04:03.766446 containerd[1592]: time="2025-05-13T10:04:03.766395840Z" level=info msg="RemoveContainer for \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" returns successfully" May 13 10:04:03.766555 kubelet[2709]: I0513 10:04:03.766544 2709 scope.go:117] "RemoveContainer" containerID="d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987" May 13 10:04:03.766751 containerd[1592]: time="2025-05-13T10:04:03.766716788Z" level=error msg="ContainerStatus for \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\": not found" May 13 10:04:03.766893 kubelet[2709]: E0513 10:04:03.766851 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\": not found" containerID="d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987" May 13 10:04:03.766932 kubelet[2709]: I0513 10:04:03.766891 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987"} err="failed to get container status \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0f13730a2966fabbedcdbb4ee941a44b9ec08bccf28cffa83726d724b1da987\": not found" May 13 10:04:04.057262 systemd[1]: 
var-lib-kubelet-pods-1137b159\x2dae20\x2d4dd5\x2db1aa\x2d0dfaa75b7b84-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drgrrn.mount: Deactivated successfully. May 13 10:04:04.057380 systemd[1]: var-lib-kubelet-pods-5d718f05\x2d27e4\x2d4b02\x2db35f\x2d155151f52c3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfbjk2.mount: Deactivated successfully. May 13 10:04:04.057473 systemd[1]: var-lib-kubelet-pods-5d718f05\x2d27e4\x2d4b02\x2db35f\x2d155151f52c3e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 10:04:04.057545 systemd[1]: var-lib-kubelet-pods-5d718f05\x2d27e4\x2d4b02\x2db35f\x2d155151f52c3e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 10:04:04.888128 sshd[4348]: Connection closed by 10.0.0.1 port 54340 May 13 10:04:04.888855 sshd-session[4346]: pam_unix(sshd:session): session closed for user core May 13 10:04:04.902390 systemd[1]: sshd@25-10.0.0.18:22-10.0.0.1:54340.service: Deactivated successfully. May 13 10:04:04.904369 systemd[1]: session-26.scope: Deactivated successfully. May 13 10:04:04.905166 systemd-logind[1563]: Session 26 logged out. Waiting for processes to exit. May 13 10:04:04.907893 systemd[1]: Started sshd@26-10.0.0.18:22-10.0.0.1:57144.service - OpenSSH per-connection server daemon (10.0.0.1:57144). May 13 10:04:04.908891 systemd-logind[1563]: Removed session 26. May 13 10:04:04.966229 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 57144 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:04:04.968201 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:04.972924 systemd-logind[1563]: New session 27 of user core. May 13 10:04:04.982565 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 13 10:04:05.310822 kubelet[2709]: I0513 10:04:05.310777 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1137b159-ae20-4dd5-b1aa-0dfaa75b7b84" path="/var/lib/kubelet/pods/1137b159-ae20-4dd5-b1aa-0dfaa75b7b84/volumes" May 13 10:04:05.311322 kubelet[2709]: I0513 10:04:05.311300 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d718f05-27e4-4b02-b35f-155151f52c3e" path="/var/lib/kubelet/pods/5d718f05-27e4-4b02-b35f-155151f52c3e/volumes" May 13 10:04:05.400024 sshd[4502]: Connection closed by 10.0.0.1 port 57144 May 13 10:04:05.401739 sshd-session[4500]: pam_unix(sshd:session): session closed for user core May 13 10:04:05.412001 systemd[1]: sshd@26-10.0.0.18:22-10.0.0.1:57144.service: Deactivated successfully. May 13 10:04:05.415069 systemd[1]: session-27.scope: Deactivated successfully. May 13 10:04:05.419054 systemd-logind[1563]: Session 27 logged out. Waiting for processes to exit. May 13 10:04:05.420146 kubelet[2709]: E0513 10:04:05.419778 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1137b159-ae20-4dd5-b1aa-0dfaa75b7b84" containerName="cilium-operator" May 13 10:04:05.420146 kubelet[2709]: E0513 10:04:05.419800 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d718f05-27e4-4b02-b35f-155151f52c3e" containerName="cilium-agent" May 13 10:04:05.420146 kubelet[2709]: E0513 10:04:05.419807 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d718f05-27e4-4b02-b35f-155151f52c3e" containerName="mount-cgroup" May 13 10:04:05.420146 kubelet[2709]: E0513 10:04:05.419812 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d718f05-27e4-4b02-b35f-155151f52c3e" containerName="apply-sysctl-overwrites" May 13 10:04:05.420146 kubelet[2709]: E0513 10:04:05.419818 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d718f05-27e4-4b02-b35f-155151f52c3e" containerName="mount-bpf-fs" May 13 10:04:05.420146 kubelet[2709]: 
E0513 10:04:05.419823 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d718f05-27e4-4b02-b35f-155151f52c3e" containerName="clean-cilium-state" May 13 10:04:05.420146 kubelet[2709]: I0513 10:04:05.419850 2709 memory_manager.go:354] "RemoveStaleState removing state" podUID="1137b159-ae20-4dd5-b1aa-0dfaa75b7b84" containerName="cilium-operator" May 13 10:04:05.420146 kubelet[2709]: I0513 10:04:05.419856 2709 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d718f05-27e4-4b02-b35f-155151f52c3e" containerName="cilium-agent" May 13 10:04:05.425677 systemd[1]: Started sshd@27-10.0.0.18:22-10.0.0.1:57146.service - OpenSSH per-connection server daemon (10.0.0.1:57146). May 13 10:04:05.433144 systemd-logind[1563]: Removed session 27. May 13 10:04:05.443585 systemd[1]: Created slice kubepods-burstable-podc740ac13_1147_424f_a7ce_8ae4b86f61c2.slice - libcontainer container kubepods-burstable-podc740ac13_1147_424f_a7ce_8ae4b86f61c2.slice. May 13 10:04:05.464439 kubelet[2709]: I0513 10:04:05.464129 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c740ac13-1147-424f-a7ce-8ae4b86f61c2-hubble-tls\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464439 kubelet[2709]: I0513 10:04:05.464164 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c740ac13-1147-424f-a7ce-8ae4b86f61c2-cilium-run\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464439 kubelet[2709]: I0513 10:04:05.464180 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c740ac13-1147-424f-a7ce-8ae4b86f61c2-cni-path\") pod \"cilium-gdgq8\" (UID: 
\"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464439 kubelet[2709]: I0513 10:04:05.464193 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwk59\" (UniqueName: \"kubernetes.io/projected/c740ac13-1147-424f-a7ce-8ae4b86f61c2-kube-api-access-fwk59\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464439 kubelet[2709]: I0513 10:04:05.464208 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c740ac13-1147-424f-a7ce-8ae4b86f61c2-xtables-lock\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464439 kubelet[2709]: I0513 10:04:05.464221 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c740ac13-1147-424f-a7ce-8ae4b86f61c2-bpf-maps\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464694 kubelet[2709]: I0513 10:04:05.464239 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c740ac13-1147-424f-a7ce-8ae4b86f61c2-host-proc-sys-net\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464694 kubelet[2709]: I0513 10:04:05.464253 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c740ac13-1147-424f-a7ce-8ae4b86f61c2-clustermesh-secrets\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464694 kubelet[2709]: 
I0513 10:04:05.464267 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c740ac13-1147-424f-a7ce-8ae4b86f61c2-cilium-ipsec-secrets\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464694 kubelet[2709]: I0513 10:04:05.464281 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c740ac13-1147-424f-a7ce-8ae4b86f61c2-lib-modules\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464694 kubelet[2709]: I0513 10:04:05.464296 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c740ac13-1147-424f-a7ce-8ae4b86f61c2-cilium-config-path\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464802 kubelet[2709]: I0513 10:04:05.464312 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c740ac13-1147-424f-a7ce-8ae4b86f61c2-host-proc-sys-kernel\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464802 kubelet[2709]: I0513 10:04:05.464326 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c740ac13-1147-424f-a7ce-8ae4b86f61c2-cilium-cgroup\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464802 kubelet[2709]: I0513 10:04:05.464340 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c740ac13-1147-424f-a7ce-8ae4b86f61c2-etc-cni-netd\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.464802 kubelet[2709]: I0513 10:04:05.464353 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c740ac13-1147-424f-a7ce-8ae4b86f61c2-hostproc\") pod \"cilium-gdgq8\" (UID: \"c740ac13-1147-424f-a7ce-8ae4b86f61c2\") " pod="kube-system/cilium-gdgq8" May 13 10:04:05.483402 sshd[4514]: Accepted publickey for core from 10.0.0.1 port 57146 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:04:05.485159 sshd-session[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:05.489700 systemd-logind[1563]: New session 28 of user core. May 13 10:04:05.496539 systemd[1]: Started session-28.scope - Session 28 of User core. May 13 10:04:05.547855 sshd[4517]: Connection closed by 10.0.0.1 port 57146 May 13 10:04:05.548215 sshd-session[4514]: pam_unix(sshd:session): session closed for user core May 13 10:04:05.557273 systemd[1]: sshd@27-10.0.0.18:22-10.0.0.1:57146.service: Deactivated successfully. May 13 10:04:05.559260 systemd[1]: session-28.scope: Deactivated successfully. May 13 10:04:05.560128 systemd-logind[1563]: Session 28 logged out. Waiting for processes to exit. May 13 10:04:05.563168 systemd[1]: Started sshd@28-10.0.0.18:22-10.0.0.1:57150.service - OpenSSH per-connection server daemon (10.0.0.1:57150). May 13 10:04:05.564114 systemd-logind[1563]: Removed session 28. 
May 13 10:04:05.616330 sshd[4525]: Accepted publickey for core from 10.0.0.1 port 57150 ssh2: RSA SHA256:dsih6pts0Cq5Ksp2v4gYWPaxpVnRcbJtg+nhIQHpL9E May 13 10:04:05.617802 sshd-session[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:05.622265 systemd-logind[1563]: New session 29 of user core. May 13 10:04:05.630582 systemd[1]: Started session-29.scope - Session 29 of User core. May 13 10:04:05.747393 kubelet[2709]: E0513 10:04:05.747342 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:04:05.748184 containerd[1592]: time="2025-05-13T10:04:05.747997301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gdgq8,Uid:c740ac13-1147-424f-a7ce-8ae4b86f61c2,Namespace:kube-system,Attempt:0,}" May 13 10:04:05.974856 containerd[1592]: time="2025-05-13T10:04:05.974801134Z" level=info msg="connecting to shim d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201" address="unix:///run/containerd/s/7300ed40cfedf7b26a8d30e4ccb9ef082bc0205c0570b07bee4972daf6c82bed" namespace=k8s.io protocol=ttrpc version=3 May 13 10:04:06.003584 systemd[1]: Started cri-containerd-d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201.scope - libcontainer container d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201. 
May 13 10:04:06.052706 containerd[1592]: time="2025-05-13T10:04:06.052651434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gdgq8,Uid:c740ac13-1147-424f-a7ce-8ae4b86f61c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\""
May 13 10:04:06.053453 kubelet[2709]: E0513 10:04:06.053429 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:06.055251 containerd[1592]: time="2025-05-13T10:04:06.055195860Z" level=info msg="CreateContainer within sandbox \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 10:04:06.122522 containerd[1592]: time="2025-05-13T10:04:06.122460646Z" level=info msg="Container 1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac: CDI devices from CRI Config.CDIDevices: []"
May 13 10:04:06.133077 containerd[1592]: time="2025-05-13T10:04:06.133027615Z" level=info msg="CreateContainer within sandbox \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac\""
May 13 10:04:06.133658 containerd[1592]: time="2025-05-13T10:04:06.133620753Z" level=info msg="StartContainer for \"1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac\""
May 13 10:04:06.134865 containerd[1592]: time="2025-05-13T10:04:06.134782420Z" level=info msg="connecting to shim 1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac" address="unix:///run/containerd/s/7300ed40cfedf7b26a8d30e4ccb9ef082bc0205c0570b07bee4972daf6c82bed" protocol=ttrpc version=3
May 13 10:04:06.162010 systemd[1]: Started cri-containerd-1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac.scope - libcontainer container 1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac.
May 13 10:04:06.231157 systemd[1]: cri-containerd-1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac.scope: Deactivated successfully.
May 13 10:04:06.232196 containerd[1592]: time="2025-05-13T10:04:06.232149337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac\" id:\"1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac\" pid:4596 exited_at:{seconds:1747130646 nanos:231675928}"
May 13 10:04:06.276618 containerd[1592]: time="2025-05-13T10:04:06.276586307Z" level=info msg="received exit event container_id:\"1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac\" id:\"1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac\" pid:4596 exited_at:{seconds:1747130646 nanos:231675928}"
May 13 10:04:06.277809 containerd[1592]: time="2025-05-13T10:04:06.277784022Z" level=info msg="StartContainer for \"1388a3bb546a66343d2ab2332820857a6950ce0ac4418a688ab4e40d56ef86ac\" returns successfully"
May 13 10:04:06.370006 kubelet[2709]: E0513 10:04:06.369921 2709 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 10:04:06.716345 kubelet[2709]: E0513 10:04:06.716306 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:06.718312 containerd[1592]: time="2025-05-13T10:04:06.718273093Z" level=info msg="CreateContainer within sandbox \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 10:04:06.728449 containerd[1592]: time="2025-05-13T10:04:06.727110465Z" level=info msg="Container 09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506: CDI devices from CRI Config.CDIDevices: []"
May 13 10:04:06.736004 containerd[1592]: time="2025-05-13T10:04:06.735945183Z" level=info msg="CreateContainer within sandbox \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506\""
May 13 10:04:06.737061 containerd[1592]: time="2025-05-13T10:04:06.737026197Z" level=info msg="StartContainer for \"09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506\""
May 13 10:04:06.738273 containerd[1592]: time="2025-05-13T10:04:06.738233661Z" level=info msg="connecting to shim 09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506" address="unix:///run/containerd/s/7300ed40cfedf7b26a8d30e4ccb9ef082bc0205c0570b07bee4972daf6c82bed" protocol=ttrpc version=3
May 13 10:04:06.766546 systemd[1]: Started cri-containerd-09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506.scope - libcontainer container 09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506.
May 13 10:04:06.796437 containerd[1592]: time="2025-05-13T10:04:06.796373589Z" level=info msg="StartContainer for \"09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506\" returns successfully"
May 13 10:04:06.802792 systemd[1]: cri-containerd-09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506.scope: Deactivated successfully.
May 13 10:04:06.803578 containerd[1592]: time="2025-05-13T10:04:06.803478840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506\" id:\"09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506\" pid:4640 exited_at:{seconds:1747130646 nanos:803137472}"
May 13 10:04:06.803578 containerd[1592]: time="2025-05-13T10:04:06.803487949Z" level=info msg="received exit event container_id:\"09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506\" id:\"09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506\" pid:4640 exited_at:{seconds:1747130646 nanos:803137472}"
May 13 10:04:06.825767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09bba3cc433bf5e6dc65f07bc0d2f2ba91a3aeda5bcdd967e5754f3a0cfe3506-rootfs.mount: Deactivated successfully.
May 13 10:04:07.308826 kubelet[2709]: E0513 10:04:07.308785 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:07.719686 kubelet[2709]: E0513 10:04:07.719643 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:07.721696 containerd[1592]: time="2025-05-13T10:04:07.721307872Z" level=info msg="CreateContainer within sandbox \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 10:04:07.755218 containerd[1592]: time="2025-05-13T10:04:07.755169352Z" level=info msg="Container 08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49: CDI devices from CRI Config.CDIDevices: []"
May 13 10:04:07.763321 containerd[1592]: time="2025-05-13T10:04:07.763279097Z" level=info msg="CreateContainer within sandbox \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49\""
May 13 10:04:07.763758 containerd[1592]: time="2025-05-13T10:04:07.763738782Z" level=info msg="StartContainer for \"08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49\""
May 13 10:04:07.765015 containerd[1592]: time="2025-05-13T10:04:07.764990633Z" level=info msg="connecting to shim 08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49" address="unix:///run/containerd/s/7300ed40cfedf7b26a8d30e4ccb9ef082bc0205c0570b07bee4972daf6c82bed" protocol=ttrpc version=3
May 13 10:04:07.784553 systemd[1]: Started cri-containerd-08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49.scope - libcontainer container 08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49.
May 13 10:04:07.830347 containerd[1592]: time="2025-05-13T10:04:07.830204319Z" level=info msg="StartContainer for \"08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49\" returns successfully"
May 13 10:04:07.830702 systemd[1]: cri-containerd-08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49.scope: Deactivated successfully.
May 13 10:04:07.831813 containerd[1592]: time="2025-05-13T10:04:07.831588631Z" level=info msg="received exit event container_id:\"08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49\" id:\"08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49\" pid:4683 exited_at:{seconds:1747130647 nanos:831365256}"
May 13 10:04:07.832519 containerd[1592]: time="2025-05-13T10:04:07.831774545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49\" id:\"08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49\" pid:4683 exited_at:{seconds:1747130647 nanos:831365256}"
May 13 10:04:07.853314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08d4280d94a364e214e2c2f4bb2defe1029831dcd93125007da213c4e938cd49-rootfs.mount: Deactivated successfully.
May 13 10:04:08.723885 kubelet[2709]: E0513 10:04:08.723847 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:08.725512 containerd[1592]: time="2025-05-13T10:04:08.725472854Z" level=info msg="CreateContainer within sandbox \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 10:04:08.734585 containerd[1592]: time="2025-05-13T10:04:08.733703796Z" level=info msg="Container bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85: CDI devices from CRI Config.CDIDevices: []"
May 13 10:04:08.742799 containerd[1592]: time="2025-05-13T10:04:08.742757804Z" level=info msg="CreateContainer within sandbox \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85\""
May 13 10:04:08.743384 containerd[1592]: time="2025-05-13T10:04:08.743342919Z" level=info msg="StartContainer for \"bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85\""
May 13 10:04:08.744569 containerd[1592]: time="2025-05-13T10:04:08.744532314Z" level=info msg="connecting to shim bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85" address="unix:///run/containerd/s/7300ed40cfedf7b26a8d30e4ccb9ef082bc0205c0570b07bee4972daf6c82bed" protocol=ttrpc version=3
May 13 10:04:08.769568 systemd[1]: Started cri-containerd-bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85.scope - libcontainer container bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85.
May 13 10:04:08.798625 systemd[1]: cri-containerd-bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85.scope: Deactivated successfully.
May 13 10:04:08.800249 containerd[1592]: time="2025-05-13T10:04:08.800214793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85\" id:\"bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85\" pid:4724 exited_at:{seconds:1747130648 nanos:798926069}"
May 13 10:04:08.801503 containerd[1592]: time="2025-05-13T10:04:08.801468129Z" level=info msg="received exit event container_id:\"bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85\" id:\"bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85\" pid:4724 exited_at:{seconds:1747130648 nanos:798926069}"
May 13 10:04:08.810208 containerd[1592]: time="2025-05-13T10:04:08.810162282Z" level=info msg="StartContainer for \"bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85\" returns successfully"
May 13 10:04:08.823643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbbdc3ba3cd65e50bc05c970cc7651f3d16e114a5aa92cd8411ec3d1c64cbd85-rootfs.mount: Deactivated successfully.
May 13 10:04:09.730728 kubelet[2709]: E0513 10:04:09.730685 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:09.733443 containerd[1592]: time="2025-05-13T10:04:09.732812624Z" level=info msg="CreateContainer within sandbox \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 10:04:09.808334 containerd[1592]: time="2025-05-13T10:04:09.808285330Z" level=info msg="Container 356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5: CDI devices from CRI Config.CDIDevices: []"
May 13 10:04:09.817104 containerd[1592]: time="2025-05-13T10:04:09.817058561Z" level=info msg="CreateContainer within sandbox \"d5ab358ef09e508af9aa83c6b391d72e5bd54427d35dec5c1c7871ccaac7f201\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5\""
May 13 10:04:09.817659 containerd[1592]: time="2025-05-13T10:04:09.817613549Z" level=info msg="StartContainer for \"356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5\""
May 13 10:04:09.818693 containerd[1592]: time="2025-05-13T10:04:09.818521909Z" level=info msg="connecting to shim 356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5" address="unix:///run/containerd/s/7300ed40cfedf7b26a8d30e4ccb9ef082bc0205c0570b07bee4972daf6c82bed" protocol=ttrpc version=3
May 13 10:04:09.839577 systemd[1]: Started cri-containerd-356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5.scope - libcontainer container 356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5.
May 13 10:04:09.875385 containerd[1592]: time="2025-05-13T10:04:09.875342438Z" level=info msg="StartContainer for \"356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5\" returns successfully"
May 13 10:04:09.953580 containerd[1592]: time="2025-05-13T10:04:09.953533456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5\" id:\"a3190156dab358f157d4e962c2092447a6331c7f476ba7a0970ffac0b001874f\" pid:4793 exited_at:{seconds:1747130649 nanos:953167578}"
May 13 10:04:10.301434 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 13 10:04:10.736719 kubelet[2709]: E0513 10:04:10.736672 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:10.751648 kubelet[2709]: I0513 10:04:10.751586 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gdgq8" podStartSLOduration=5.751560159 podStartE2EDuration="5.751560159s" podCreationTimestamp="2025-05-13 10:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:04:10.750935568 +0000 UTC m=+99.516287426" watchObservedRunningTime="2025-05-13 10:04:10.751560159 +0000 UTC m=+99.516912017"
May 13 10:04:11.748574 kubelet[2709]: E0513 10:04:11.748458 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:11.978704 containerd[1592]: time="2025-05-13T10:04:11.978621086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5\" id:\"caa9b7700b31da16b5ec59cc9ee9d4690217215f6a58055f656e2aaf5c67e610\" pid:4934 exit_status:1 exited_at:{seconds:1747130651 nanos:978278362}"
May 13 10:04:13.434170 systemd-networkd[1490]: lxc_health: Link UP
May 13 10:04:13.435808 systemd-networkd[1490]: lxc_health: Gained carrier
May 13 10:04:13.751175 kubelet[2709]: E0513 10:04:13.751112 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:14.097392 containerd[1592]: time="2025-05-13T10:04:14.097193676Z" level=info msg="TaskExit event in podsandbox handler container_id:\"356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5\" id:\"73679540fc8117f72f223cea68dff28fb5920bbd686a759ce628a45ce0003849\" pid:5325 exited_at:{seconds:1747130654 nanos:96712476}"
May 13 10:04:14.744132 kubelet[2709]: E0513 10:04:14.744075 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:04:15.052620 systemd-networkd[1490]: lxc_health: Gained IPv6LL
May 13 10:04:16.189224 containerd[1592]: time="2025-05-13T10:04:16.189171551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5\" id:\"9bf4da3fd23904245e99186b72e54c5f91ae3222deabf164f927a3747caa8dea\" pid:5360 exited_at:{seconds:1747130656 nanos:188743761}"
May 13 10:04:18.285059 containerd[1592]: time="2025-05-13T10:04:18.285011827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5\" id:\"a82c39efc1688f1b3951c00ce253cd251c1b2bee2a01ecf756f071a35f42856a\" pid:5390 exited_at:{seconds:1747130658 nanos:284359776}"
May 13 10:04:20.387950 containerd[1592]: time="2025-05-13T10:04:20.387874011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"356b73aef90fc4c10b823b0eec0a1c3929782ae165c054db653154d3b0ceb4d5\" id:\"70df39138d972768a5952711c1562d602e82d0f9da0d2e4f880d12dec5b72ac4\" pid:5414 exited_at:{seconds:1747130660 nanos:387493860}"
May 13 10:04:20.393984 sshd[4532]: Connection closed by 10.0.0.1 port 57150
May 13 10:04:20.394438 sshd-session[4525]: pam_unix(sshd:session): session closed for user core
May 13 10:04:20.398357 systemd[1]: sshd@28-10.0.0.18:22-10.0.0.1:57150.service: Deactivated successfully.
May 13 10:04:20.400306 systemd[1]: session-29.scope: Deactivated successfully.
May 13 10:04:20.401157 systemd-logind[1563]: Session 29 logged out. Waiting for processes to exit.
May 13 10:04:20.402471 systemd-logind[1563]: Removed session 29.