May 27 18:02:48.882358 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 18:02:48.882396 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 18:02:48.882407 kernel: BIOS-provided physical RAM map:
May 27 18:02:48.882414 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 27 18:02:48.882420 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 27 18:02:48.882427 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 18:02:48.882435 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 27 18:02:48.882448 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 27 18:02:48.882459 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 18:02:48.882483 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 18:02:48.882491 kernel: NX (Execute Disable) protection: active
May 27 18:02:48.882498 kernel: APIC: Static calls initialized
May 27 18:02:48.882505 kernel: SMBIOS 2.8 present.
May 27 18:02:48.882513 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 27 18:02:48.882525 kernel: DMI: Memory slots populated: 1/1
May 27 18:02:48.882533 kernel: Hypervisor detected: KVM
May 27 18:02:48.882544 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 18:02:48.882552 kernel: kvm-clock: using sched offset of 4131214805 cycles
May 27 18:02:48.882561 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 18:02:48.882569 kernel: tsc: Detected 2494.140 MHz processor
May 27 18:02:48.882577 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 18:02:48.882585 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 18:02:48.882594 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 27 18:02:48.882606 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 18:02:48.882614 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 18:02:48.882622 kernel: ACPI: Early table checksum verification disabled
May 27 18:02:48.882630 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 27 18:02:48.882638 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:02:48.882646 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:02:48.882654 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:02:48.882662 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 27 18:02:48.882670 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:02:48.882681 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:02:48.882689 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:02:48.882697 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001)
May 27 18:02:48.882705 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 27 18:02:48.882715 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 27 18:02:48.883756 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 27 18:02:48.883787 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 27 18:02:48.883796 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 27 18:02:48.883823 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 27 18:02:48.883836 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 27 18:02:48.883849 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 27 18:02:48.883862 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 27 18:02:48.883876 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
May 27 18:02:48.883893 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
May 27 18:02:48.883907 kernel: Zone ranges:
May 27 18:02:48.883918 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
May 27 18:02:48.883927 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdafff]
May 27 18:02:48.883936 kernel:   Normal   empty
May 27 18:02:48.883944 kernel:   Device   empty
May 27 18:02:48.883953 kernel: Movable zone start for each node
May 27 18:02:48.883962 kernel: Early memory node ranges
May 27 18:02:48.883970 kernel:   node 0: [mem 0x0000000000001000-0x000000000009efff]
May 27 18:02:48.883981 kernel:   node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 27 18:02:48.883999 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 27 18:02:48.884012 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 18:02:48.884026 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 18:02:48.884035 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 27 18:02:48.884044 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 18:02:48.884052 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 18:02:48.884070 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 18:02:48.884079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 18:02:48.884090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 18:02:48.884104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 18:02:48.884115 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 18:02:48.884124 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 18:02:48.884133 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 18:02:48.884142 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 18:02:48.884151 kernel: TSC deadline timer available
May 27 18:02:48.884160 kernel: CPU topo: Max. logical packages:   1
May 27 18:02:48.884189 kernel: CPU topo: Max. logical dies:       1
May 27 18:02:48.884198 kernel: CPU topo: Max. dies per package:   1
May 27 18:02:48.884211 kernel: CPU topo: Max. threads per core:   1
May 27 18:02:48.884220 kernel: CPU topo: Num. cores per package:     2
May 27 18:02:48.884228 kernel: CPU topo: Num. threads per package:   2
May 27 18:02:48.884237 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 18:02:48.884246 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 18:02:48.884255 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 27 18:02:48.884263 kernel: Booting paravirtualized kernel on KVM
May 27 18:02:48.884272 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 18:02:48.884281 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 18:02:48.884290 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 18:02:48.884303 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 18:02:48.884312 kernel: pcpu-alloc: [0] 0 1
May 27 18:02:48.884321 kernel: kvm-guest: PV spinlocks disabled, no host support
May 27 18:02:48.884331 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 18:02:48.884341 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 18:02:48.884349 kernel: random: crng init done
May 27 18:02:48.884358 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 18:02:48.884367 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 27 18:02:48.884379 kernel: Fallback order for Node 0: 0
May 27 18:02:48.884388 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 524153
May 27 18:02:48.884397 kernel: Policy zone: DMA32
May 27 18:02:48.884405 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 18:02:48.884414 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 18:02:48.884422 kernel: Kernel/User page tables isolation: enabled
May 27 18:02:48.884431 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 18:02:48.884439 kernel: ftrace: allocated 157 pages with 5 groups
May 27 18:02:48.884448 kernel: Dynamic Preempt: voluntary
May 27 18:02:48.884461 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 18:02:48.884472 kernel: rcu: RCU event tracing is enabled.
May 27 18:02:48.884480 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 18:02:48.884489 kernel: Trampoline variant of Tasks RCU enabled.
May 27 18:02:48.884498 kernel: Rude variant of Tasks RCU enabled.
May 27 18:02:48.885854 kernel: Tracing variant of Tasks RCU enabled.
May 27 18:02:48.885868 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 18:02:48.885878 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 18:02:48.885887 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 18:02:48.885915 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 18:02:48.885926 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 18:02:48.885936 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 18:02:48.885946 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 18:02:48.885955 kernel: Console: colour VGA+ 80x25
May 27 18:02:48.885964 kernel: printk: legacy console [tty0] enabled
May 27 18:02:48.885972 kernel: printk: legacy console [ttyS0] enabled
May 27 18:02:48.885981 kernel: ACPI: Core revision 20240827
May 27 18:02:48.885990 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 18:02:48.886019 kernel: APIC: Switch to symmetric I/O mode setup
May 27 18:02:48.886033 kernel: x2apic enabled
May 27 18:02:48.886045 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 18:02:48.886065 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 18:02:48.886080 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
May 27 18:02:48.886090 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
May 27 18:02:48.886099 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 27 18:02:48.886108 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 27 18:02:48.886118 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 18:02:48.886131 kernel: Spectre V2 : Mitigation: Retpolines
May 27 18:02:48.886140 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 18:02:48.886150 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 27 18:02:48.886159 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 18:02:48.886169 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 18:02:48.886178 kernel: MDS: Mitigation: Clear CPU buffers
May 27 18:02:48.886187 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 27 18:02:48.886200 kernel: ITS: Mitigation: Aligned branch/return thunks
May 27 18:02:48.886209 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 18:02:48.886218 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 18:02:48.886228 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 18:02:48.886237 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 18:02:48.886246 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 27 18:02:48.886255 kernel: Freeing SMP alternatives memory: 32K
May 27 18:02:48.886265 kernel: pid_max: default: 32768 minimum: 301
May 27 18:02:48.886274 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 18:02:48.886286 kernel: landlock: Up and running.
May 27 18:02:48.886295 kernel: SELinux: Initializing.
May 27 18:02:48.886321 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 27 18:02:48.886334 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 27 18:02:48.886343 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 27 18:02:48.886353 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 27 18:02:48.886362 kernel: signal: max sigframe size: 1776
May 27 18:02:48.886371 kernel: rcu: Hierarchical SRCU implementation.
May 27 18:02:48.886381 kernel: rcu: Max phase no-delay instances is 400.
May 27 18:02:48.886395 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 18:02:48.886404 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 27 18:02:48.886415 kernel: smp: Bringing up secondary CPUs ...
May 27 18:02:48.886430 kernel: smpboot: x86: Booting SMP configuration:
May 27 18:02:48.886449 kernel: .... node #0, CPUs: #1
May 27 18:02:48.886463 kernel: smp: Brought up 1 node, 2 CPUs
May 27 18:02:48.886477 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
May 27 18:02:48.886490 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 125140K reserved, 0K cma-reserved)
May 27 18:02:48.886505 kernel: devtmpfs: initialized
May 27 18:02:48.886526 kernel: x86/mm: Memory block size: 128MB
May 27 18:02:48.886542 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 18:02:48.886557 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 18:02:48.886572 kernel: pinctrl core: initialized pinctrl subsystem
May 27 18:02:48.886583 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 18:02:48.886593 kernel: audit: initializing netlink subsys (disabled)
May 27 18:02:48.886602 kernel: audit: type=2000 audit(1748368965.946:1): state=initialized audit_enabled=0 res=1
May 27 18:02:48.886612 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 18:02:48.886621 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 18:02:48.886635 kernel: cpuidle: using governor menu
May 27 18:02:48.886644 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 18:02:48.886653 kernel: dca service started, version 1.12.1
May 27 18:02:48.886662 kernel: PCI: Using configuration type 1 for base access
May 27 18:02:48.886672 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 18:02:48.886681 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 18:02:48.886690 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 18:02:48.886699 kernel: ACPI: Added _OSI(Module Device)
May 27 18:02:48.886708 kernel: ACPI: Added _OSI(Processor Device)
May 27 18:02:48.886721 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 18:02:48.887011 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 18:02:48.887025 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 18:02:48.887035 kernel: ACPI: Interpreter enabled
May 27 18:02:48.887044 kernel: ACPI: PM: (supports S0 S5)
May 27 18:02:48.887053 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 18:02:48.887063 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 18:02:48.887072 kernel: PCI: Using E820 reservations for host bridge windows
May 27 18:02:48.887081 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 27 18:02:48.887091 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 18:02:48.887385 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 27 18:02:48.887489 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 27 18:02:48.887634 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 27 18:02:48.887651 kernel: acpiphp: Slot [3] registered
May 27 18:02:48.887664 kernel: acpiphp: Slot [4] registered
May 27 18:02:48.887677 kernel: acpiphp: Slot [5] registered
May 27 18:02:48.887689 kernel: acpiphp: Slot [6] registered
May 27 18:02:48.887709 kernel: acpiphp: Slot [7] registered
May 27 18:02:48.887769 kernel: acpiphp: Slot [8] registered
May 27 18:02:48.887784 kernel: acpiphp: Slot [9] registered
May 27 18:02:48.887799 kernel: acpiphp: Slot [10] registered
May 27 18:02:48.887812 kernel: acpiphp: Slot [11] registered
May 27 18:02:48.887827 kernel: acpiphp: Slot [12] registered
May 27 18:02:48.887842 kernel: acpiphp: Slot [13] registered
May 27 18:02:48.887858 kernel: acpiphp: Slot [14] registered
May 27 18:02:48.887874 kernel: acpiphp: Slot [15] registered
May 27 18:02:48.887895 kernel: acpiphp: Slot [16] registered
May 27 18:02:48.887909 kernel: acpiphp: Slot [17] registered
May 27 18:02:48.887940 kernel: acpiphp: Slot [18] registered
May 27 18:02:48.887952 kernel: acpiphp: Slot [19] registered
May 27 18:02:48.887964 kernel: acpiphp: Slot [20] registered
May 27 18:02:48.887976 kernel: acpiphp: Slot [21] registered
May 27 18:02:48.888007 kernel: acpiphp: Slot [22] registered
May 27 18:02:48.888019 kernel: acpiphp: Slot [23] registered
May 27 18:02:48.888032 kernel: acpiphp: Slot [24] registered
May 27 18:02:48.888047 kernel: acpiphp: Slot [25] registered
May 27 18:02:48.888067 kernel: acpiphp: Slot [26] registered
May 27 18:02:48.888079 kernel: acpiphp: Slot [27] registered
May 27 18:02:48.888092 kernel: acpiphp: Slot [28] registered
May 27 18:02:48.888104 kernel: acpiphp: Slot [29] registered
May 27 18:02:48.888116 kernel: acpiphp: Slot [30] registered
May 27 18:02:48.888129 kernel: acpiphp: Slot [31] registered
May 27 18:02:48.888141 kernel: PCI host bridge to bus 0000:00
May 27 18:02:48.888365 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 18:02:48.888498 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 18:02:48.888585 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 18:02:48.888709 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 27 18:02:48.892036 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 27 18:02:48.892225 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 18:02:48.892414 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 27 18:02:48.892578 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 27 18:02:48.892703 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
May 27 18:02:48.892827 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
May 27 18:02:48.892924 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
May 27 18:02:48.893029 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
May 27 18:02:48.893167 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
May 27 18:02:48.893267 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
May 27 18:02:48.893410 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
May 27 18:02:48.893539 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
May 27 18:02:48.893675 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 27 18:02:48.894929 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 27 18:02:48.895114 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 27 18:02:48.895291 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
May 27 18:02:48.895443 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
May 27 18:02:48.895604 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
May 27 18:02:48.896846 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
May 27 18:02:48.897063 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
May 27 18:02:48.897169 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 18:02:48.897294 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 18:02:48.897406 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
May 27 18:02:48.897532 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
May 27 18:02:48.897630 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
May 27 18:02:48.898869 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 18:02:48.899070 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
May 27 18:02:48.899258 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
May 27 18:02:48.899403 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
May 27 18:02:48.899526 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 27 18:02:48.899663 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
May 27 18:02:48.899809 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
May 27 18:02:48.899907 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 27 18:02:48.900072 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 18:02:48.900172 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
May 27 18:02:48.900286 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
May 27 18:02:48.900425 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
May 27 18:02:48.900584 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 18:02:48.900716 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
May 27 18:02:48.903000 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
May 27 18:02:48.903164 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
May 27 18:02:48.903283 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
May 27 18:02:48.903381 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
May 27 18:02:48.903500 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
May 27 18:02:48.903522 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 18:02:48.903538 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 18:02:48.903550 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 18:02:48.903560 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 18:02:48.903569 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 27 18:02:48.903579 kernel: iommu: Default domain type: Translated
May 27 18:02:48.903589 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 18:02:48.903598 kernel: PCI: Using ACPI for IRQ routing
May 27 18:02:48.903613 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 18:02:48.903622 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 27 18:02:48.903631 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 27 18:02:48.903763 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 27 18:02:48.903871 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 27 18:02:48.904006 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 18:02:48.904020 kernel: vgaarb: loaded
May 27 18:02:48.904030 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 18:02:48.904046 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 18:02:48.904061 kernel: clocksource: Switched to clocksource kvm-clock
May 27 18:02:48.904075 kernel: VFS: Disk quotas dquot_6.6.0
May 27 18:02:48.904089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 18:02:48.904101 kernel: pnp: PnP ACPI init
May 27 18:02:48.904114 kernel: pnp: PnP ACPI: found 4 devices
May 27 18:02:48.904126 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 18:02:48.904141 kernel: NET: Registered PF_INET protocol family
May 27 18:02:48.904153 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 18:02:48.904172 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 27 18:02:48.904186 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 18:02:48.904201 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 18:02:48.904214 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 27 18:02:48.904227 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 27 18:02:48.904239 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 27 18:02:48.904252 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 27 18:02:48.904265 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 18:02:48.904278 kernel: NET: Registered PF_XDP protocol family
May 27 18:02:48.904440 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 18:02:48.904552 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 18:02:48.904648 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 18:02:48.906905 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 27 18:02:48.907039 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 27 18:02:48.907156 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 27 18:02:48.907260 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 27 18:02:48.907276 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 27 18:02:48.907419 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 27756 usecs
May 27 18:02:48.907437 kernel: PCI: CLS 0 bytes, default 64
May 27 18:02:48.907452 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 27 18:02:48.907465 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
May 27 18:02:48.907479 kernel: Initialise system trusted keyrings
May 27 18:02:48.907492 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 27 18:02:48.907504 kernel: Key type asymmetric registered
May 27 18:02:48.907516 kernel: Asymmetric key parser 'x509' registered
May 27 18:02:48.907529 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 18:02:48.907549 kernel: io scheduler mq-deadline registered
May 27 18:02:48.907562 kernel: io scheduler kyber registered
May 27 18:02:48.907574 kernel: io scheduler bfq registered
May 27 18:02:48.907587 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 18:02:48.907600 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 27 18:02:48.907613 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 27 18:02:48.907626 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 27 18:02:48.907640 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 18:02:48.907653 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 18:02:48.907672 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 18:02:48.907686 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 18:02:48.907699 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 18:02:48.907984 kernel: rtc_cmos 00:03: RTC can wake from S4
May 27 18:02:48.908022 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 18:02:48.908164 kernel: rtc_cmos 00:03: registered as rtc0
May 27 18:02:48.908308 kernel: rtc_cmos 00:03: setting system clock to 2025-05-27T18:02:48 UTC (1748368968)
May 27 18:02:48.908463 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 27 18:02:48.908482 kernel: intel_pstate: CPU model not supported
May 27 18:02:48.908498 kernel: NET: Registered PF_INET6 protocol family
May 27 18:02:48.908513 kernel: Segment Routing with IPv6
May 27 18:02:48.908527 kernel: In-situ OAM (IOAM) with IPv6
May 27 18:02:48.908543 kernel: NET: Registered PF_PACKET protocol family
May 27 18:02:48.908558 kernel: Key type dns_resolver registered
May 27 18:02:48.908573 kernel: IPI shorthand broadcast: enabled
May 27 18:02:48.908586 kernel: sched_clock: Marking stable (3480004734, 91479226)->(3589849282, -18365322)
May 27 18:02:48.908600 kernel: registered taskstats version 1
May 27 18:02:48.908624 kernel: Loading compiled-in X.509 certificates
May 27 18:02:48.908640 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 18:02:48.908652 kernel: Demotion targets for Node 0: null
May 27 18:02:48.908665 kernel: Key type .fscrypt registered
May 27 18:02:48.908678 kernel: Key type fscrypt-provisioning registered
May 27 18:02:48.912353 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 18:02:48.912433 kernel: ima: Allocated hash algorithm: sha1
May 27 18:02:48.912451 kernel: ima: No architecture policies found
May 27 18:02:48.912471 kernel: clk: Disabling unused clocks
May 27 18:02:48.912485 kernel: Warning: unable to open an initial console.
May 27 18:02:48.912500 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 18:02:48.912515 kernel: Write protecting the kernel read-only data: 24576k
May 27 18:02:48.912530 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 18:02:48.912543 kernel: Run /init as init process
May 27 18:02:48.912557 kernel: with arguments:
May 27 18:02:48.912571 kernel: /init
May 27 18:02:48.912584 kernel: with environment:
May 27 18:02:48.912607 kernel: HOME=/
May 27 18:02:48.912621 kernel: TERM=linux
May 27 18:02:48.912636 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 18:02:48.912655 systemd[1]: Successfully made /usr/ read-only.
May 27 18:02:48.912676 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 18:02:48.912691 systemd[1]: Detected virtualization kvm.
May 27 18:02:48.912704 systemd[1]: Detected architecture x86-64.
May 27 18:02:48.912718 systemd[1]: Running in initrd.
May 27 18:02:48.912761 systemd[1]: No hostname configured, using default hostname.
May 27 18:02:48.912776 systemd[1]: Hostname set to .
May 27 18:02:48.912790 systemd[1]: Initializing machine ID from VM UUID.
May 27 18:02:48.912804 systemd[1]: Queued start job for default target initrd.target.
May 27 18:02:48.912820 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 18:02:48.912835 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 18:02:48.912851 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 18:02:48.912867 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 18:02:48.912889 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 18:02:48.912911 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 18:02:48.912930 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 18:02:48.912951 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 18:02:48.912966 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 18:02:48.912980 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 18:02:48.912999 systemd[1]: Reached target paths.target - Path Units.
May 27 18:02:48.913013 systemd[1]: Reached target slices.target - Slice Units.
May 27 18:02:48.913028 systemd[1]: Reached target swap.target - Swaps.
May 27 18:02:48.913042 systemd[1]: Reached target timers.target - Timer Units.
May 27 18:02:48.913057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 18:02:48.913072 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 18:02:48.913093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 18:02:48.913112 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 18:02:48.913128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 18:02:48.913142 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 18:02:48.913157 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 18:02:48.913172 systemd[1]: Reached target sockets.target - Socket Units.
May 27 18:02:48.913187 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 18:02:48.913202 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 18:02:48.913223 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 18:02:48.913239 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 18:02:48.913254 systemd[1]: Starting systemd-fsck-usr.service...
May 27 18:02:48.913268 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 18:02:48.913286 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 18:02:48.913300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 18:02:48.913315 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 18:02:48.913336 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 18:02:48.913350 systemd[1]: Finished systemd-fsck-usr.service.
May 27 18:02:48.913437 systemd-journald[211]: Collecting audit messages is disabled.
May 27 18:02:48.913485 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 18:02:48.913504 systemd-journald[211]: Journal started
May 27 18:02:48.913537 systemd-journald[211]: Runtime Journal (/run/log/journal/9793a70bd9fb4fccbfd36e9d5adb3cec) is 4.9M, max 39.5M, 34.6M free.
May 27 18:02:48.900718 systemd-modules-load[213]: Inserted module 'overlay'
May 27 18:02:48.942490 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 18:02:48.942523 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 18:02:48.942539 kernel: Bridge firewalling registered
May 27 18:02:48.941947 systemd-modules-load[213]: Inserted module 'br_netfilter'
May 27 18:02:48.942590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:02:48.943630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 18:02:48.944449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 18:02:48.948799 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 18:02:48.950233 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 18:02:48.954925 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 18:02:48.958008 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 18:02:48.979224 systemd-tmpfiles[231]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 18:02:48.979370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 18:02:48.987080 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 18:02:48.989422 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 18:02:48.994225 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 18:02:48.997828 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 18:02:49.001958 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 18:02:49.032756 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 18:02:49.063995 systemd-resolved[247]: Positive Trust Anchors:
May 27 18:02:49.064815 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 18:02:49.064886 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 18:02:49.073306 systemd-resolved[247]: Defaulting to hostname 'linux'.
May 27 18:02:49.076551 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 18:02:49.077235 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 18:02:49.146785 kernel: SCSI subsystem initialized
May 27 18:02:49.157806 kernel: Loading iSCSI transport class v2.0-870.
May 27 18:02:49.170989 kernel: iscsi: registered transport (tcp)
May 27 18:02:49.196792 kernel: iscsi: registered transport (qla4xxx)
May 27 18:02:49.196898 kernel: QLogic iSCSI HBA Driver
May 27 18:02:49.223780 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 18:02:49.261435 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 18:02:49.265147 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 18:02:49.334815 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 18:02:49.337266 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 18:02:49.403804 kernel: raid6: avx2x4 gen() 16836 MB/s
May 27 18:02:49.420784 kernel: raid6: avx2x2 gen() 17305 MB/s
May 27 18:02:49.438198 kernel: raid6: avx2x1 gen() 12131 MB/s
May 27 18:02:49.438386 kernel: raid6: using algorithm avx2x2 gen() 17305 MB/s
May 27 18:02:49.455860 kernel: raid6: .... xor() 18870 MB/s, rmw enabled
May 27 18:02:49.455969 kernel: raid6: using avx2x2 recovery algorithm
May 27 18:02:49.481787 kernel: xor: automatically using best checksumming function avx
May 27 18:02:49.696800 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 18:02:49.708952 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 18:02:49.712485 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 18:02:49.752959 systemd-udevd[459]: Using default interface naming scheme 'v255'.
May 27 18:02:49.763393 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 18:02:49.767004 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 18:02:49.800591 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
May 27 18:02:49.834621 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 18:02:49.837594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 18:02:49.916915 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 18:02:49.919963 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 18:02:50.008944 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 27 18:02:50.009213 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
May 27 18:02:50.013778 kernel: scsi host0: Virtio SCSI HBA
May 27 18:02:50.052329 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 27 18:02:50.055201 kernel: cryptd: max_cpu_qlen set to 1000
May 27 18:02:50.065769 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 18:02:50.072974 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 18:02:50.073070 kernel: GPT:9289727 != 125829119
May 27 18:02:50.073084 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 18:02:50.073102 kernel: GPT:9289727 != 125829119
May 27 18:02:50.073120 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 18:02:50.073138 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 18:02:50.096896 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 27 18:02:50.097154 kernel: ACPI: bus type USB registered
May 27 18:02:50.099054 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
May 27 18:02:50.103998 kernel: usbcore: registered new interface driver usbfs
May 27 18:02:50.104074 kernel: usbcore: registered new interface driver hub
May 27 18:02:50.105243 kernel: AES CTR mode by8 optimization enabled
May 27 18:02:50.105971 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 18:02:50.107778 kernel: usbcore: registered new device driver usb
May 27 18:02:50.106163 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:02:50.106829 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 18:02:50.112091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 18:02:50.112819 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 18:02:50.132075 kernel: libata version 3.00 loaded.
May 27 18:02:50.164778 kernel: ata_piix 0000:00:01.1: version 2.13
May 27 18:02:50.186111 kernel: scsi host1: ata_piix
May 27 18:02:50.200175 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 27 18:02:50.200358 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 27 18:02:50.200485 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 27 18:02:50.200598 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 27 18:02:50.204775 kernel: scsi host2: ata_piix
May 27 18:02:50.205049 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
May 27 18:02:50.205070 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
May 27 18:02:50.210006 kernel: hub 1-0:1.0: USB hub found
May 27 18:02:50.210252 kernel: hub 1-0:1.0: 2 ports detected
May 27 18:02:50.212209 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 27 18:02:50.251913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:02:50.263590 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 27 18:02:50.288180 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 18:02:50.295756 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 27 18:02:50.296353 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 27 18:02:50.299367 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 18:02:50.317173 disk-uuid[603]: Primary Header is updated.
May 27 18:02:50.317173 disk-uuid[603]: Secondary Entries is updated.
May 27 18:02:50.317173 disk-uuid[603]: Secondary Header is updated.
May 27 18:02:50.330780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 18:02:50.463578 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 18:02:50.474952 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 18:02:50.475411 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 18:02:50.476228 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 18:02:50.477951 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 18:02:50.512466 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 18:02:51.340874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 18:02:51.342821 disk-uuid[604]: The operation has completed successfully.
May 27 18:02:51.420527 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 18:02:51.420680 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 18:02:51.456072 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 18:02:51.486907 sh[628]: Success
May 27 18:02:51.510137 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 18:02:51.510213 kernel: device-mapper: uevent: version 1.0.3
May 27 18:02:51.510228 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 18:02:51.524769 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
May 27 18:02:51.583969 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 18:02:51.588865 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 18:02:51.606643 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 18:02:51.621152 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 18:02:51.621261 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (640)
May 27 18:02:51.625789 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd
May 27 18:02:51.625877 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 18:02:51.626981 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 18:02:51.637323 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 18:02:51.638839 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 18:02:51.639885 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 18:02:51.641702 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 18:02:51.643554 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 18:02:51.680775 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (672)
May 27 18:02:51.684150 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 18:02:51.684251 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 18:02:51.685888 kernel: BTRFS info (device vda6): using free-space-tree
May 27 18:02:51.703780 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 18:02:51.706235 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 18:02:51.709572 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 18:02:51.796993 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 18:02:51.799470 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 18:02:51.848890 systemd-networkd[810]: lo: Link UP
May 27 18:02:51.848901 systemd-networkd[810]: lo: Gained carrier
May 27 18:02:51.852028 systemd-networkd[810]: Enumeration completed
May 27 18:02:51.852194 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 18:02:51.853885 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 27 18:02:51.853891 systemd-networkd[810]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 27 18:02:51.854294 systemd[1]: Reached target network.target - Network.
May 27 18:02:51.855099 systemd-networkd[810]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 18:02:51.855104 systemd-networkd[810]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 18:02:51.856843 systemd-networkd[810]: eth0: Link UP
May 27 18:02:51.856850 systemd-networkd[810]: eth0: Gained carrier
May 27 18:02:51.856866 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 27 18:02:51.862030 systemd-networkd[810]: eth1: Link UP
May 27 18:02:51.862035 systemd-networkd[810]: eth1: Gained carrier
May 27 18:02:51.862056 systemd-networkd[810]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 18:02:51.890918 systemd-networkd[810]: eth0: DHCPv4 address 137.184.189.209/20, gateway 137.184.176.1 acquired from 169.254.169.253
May 27 18:02:51.894960 systemd-networkd[810]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253
May 27 18:02:51.920857 ignition[730]: Ignition 2.21.0
May 27 18:02:51.920882 ignition[730]: Stage: fetch-offline
May 27 18:02:51.920954 ignition[730]: no configs at "/usr/lib/ignition/base.d"
May 27 18:02:51.920968 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 18:02:51.921134 ignition[730]: parsed url from cmdline: ""
May 27 18:02:51.921140 ignition[730]: no config URL provided
May 27 18:02:51.921151 ignition[730]: reading system config file "/usr/lib/ignition/user.ign"
May 27 18:02:51.921163 ignition[730]: no config at "/usr/lib/ignition/user.ign"
May 27 18:02:51.921171 ignition[730]: failed to fetch config: resource requires networking
May 27 18:02:51.925807 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 18:02:51.923804 ignition[730]: Ignition finished successfully
May 27 18:02:51.930066 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 18:02:51.967188 ignition[820]: Ignition 2.21.0
May 27 18:02:51.967203 ignition[820]: Stage: fetch
May 27 18:02:51.967379 ignition[820]: no configs at "/usr/lib/ignition/base.d"
May 27 18:02:51.967389 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 18:02:51.967500 ignition[820]: parsed url from cmdline: ""
May 27 18:02:51.967506 ignition[820]: no config URL provided
May 27 18:02:51.967515 ignition[820]: reading system config file "/usr/lib/ignition/user.ign"
May 27 18:02:51.967526 ignition[820]: no config at "/usr/lib/ignition/user.ign"
May 27 18:02:51.967576 ignition[820]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 27 18:02:51.984761 ignition[820]: GET result: OK
May 27 18:02:51.984992 ignition[820]: parsing config with SHA512: 1f6c81e091991fb098a3a795914e403740a111fc03a97fc9c80804e599d50b30c8fb27387a995d2b8f19b9ba5e54c821c8ef650e4432375f0df17e8f145cf6d5
May 27 18:02:51.991082 unknown[820]: fetched base config from "system"
May 27 18:02:51.991095 unknown[820]: fetched base config from "system"
May 27 18:02:51.991596 ignition[820]: fetch: fetch complete
May 27 18:02:51.991102 unknown[820]: fetched user config from "digitalocean"
May 27 18:02:51.991604 ignition[820]: fetch: fetch passed
May 27 18:02:51.991687 ignition[820]: Ignition finished successfully
May 27 18:02:51.994498 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 18:02:52.000565 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 18:02:52.038589 ignition[827]: Ignition 2.21.0
May 27 18:02:52.039368 ignition[827]: Stage: kargs
May 27 18:02:52.039614 ignition[827]: no configs at "/usr/lib/ignition/base.d"
May 27 18:02:52.039631 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 18:02:52.042560 ignition[827]: kargs: kargs passed
May 27 18:02:52.043183 ignition[827]: Ignition finished successfully
May 27 18:02:52.046040 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 18:02:52.048478 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 18:02:52.095510 ignition[834]: Ignition 2.21.0
May 27 18:02:52.095526 ignition[834]: Stage: disks
May 27 18:02:52.095798 ignition[834]: no configs at "/usr/lib/ignition/base.d"
May 27 18:02:52.095816 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 18:02:52.100184 ignition[834]: disks: disks passed
May 27 18:02:52.100291 ignition[834]: Ignition finished successfully
May 27 18:02:52.101867 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 18:02:52.103164 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 18:02:52.103640 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 18:02:52.104437 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 18:02:52.105266 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 18:02:52.105921 systemd[1]: Reached target basic.target - Basic System.
May 27 18:02:52.108205 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 18:02:52.136933 systemd-fsck[843]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 18:02:52.141715 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 18:02:52.144508 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 18:02:52.278770 kernel: EXT4-fs (vda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none.
May 27 18:02:52.279811 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 18:02:52.281132 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 18:02:52.283723 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 18:02:52.285868 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 18:02:52.298056 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
May 27 18:02:52.304003 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 27 18:02:52.306226 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 18:02:52.306418 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 18:02:52.312526 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (851)
May 27 18:02:52.312720 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 18:02:52.316238 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 18:02:52.316303 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 18:02:52.317239 kernel: BTRFS info (device vda6): using free-space-tree
May 27 18:02:52.323572 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 18:02:52.327410 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 18:02:52.406803 initrd-setup-root[883]: cut: /sysroot/etc/passwd: No such file or directory
May 27 18:02:52.407780 coreos-metadata[853]: May 27 18:02:52.407 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 27 18:02:52.414844 initrd-setup-root[890]: cut: /sysroot/etc/group: No such file or directory
May 27 18:02:52.417373 coreos-metadata[854]: May 27 18:02:52.417 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 27 18:02:52.422753 coreos-metadata[853]: May 27 18:02:52.422 INFO Fetch successful
May 27 18:02:52.423319 initrd-setup-root[897]: cut: /sysroot/etc/shadow: No such file or directory
May 27 18:02:52.429593 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
May 27 18:02:52.429853 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
May 27 18:02:52.433924 coreos-metadata[854]: May 27 18:02:52.431 INFO Fetch successful
May 27 18:02:52.435821 initrd-setup-root[904]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 18:02:52.439911 coreos-metadata[854]: May 27 18:02:52.439 INFO wrote hostname ci-4344.0.0-1-b2ae16c630 to /sysroot/etc/hostname
May 27 18:02:52.441683 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 18:02:52.547394 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 18:02:52.549799 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 18:02:52.551397 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 18:02:52.573826 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 18:02:52.592668 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 18:02:52.605558 ignition[975]: INFO : Ignition 2.21.0
May 27 18:02:52.605558 ignition[975]: INFO : Stage: mount
May 27 18:02:52.606577 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 18:02:52.606577 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 18:02:52.608050 ignition[975]: INFO : mount: mount passed
May 27 18:02:52.608050 ignition[975]: INFO : Ignition finished successfully
May 27 18:02:52.609193 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 18:02:52.610611 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 18:02:52.620333 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 18:02:52.636851 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 18:02:52.666386 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (986)
May 27 18:02:52.666465 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 18:02:52.667944 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 18:02:52.668927 kernel: BTRFS info (device vda6): using free-space-tree
May 27 18:02:52.673713 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 18:02:52.709586 ignition[1003]: INFO : Ignition 2.21.0
May 27 18:02:52.709586 ignition[1003]: INFO : Stage: files
May 27 18:02:52.709586 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 18:02:52.709586 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 18:02:52.712535 ignition[1003]: DEBUG : files: compiled without relabeling support, skipping
May 27 18:02:52.713484 ignition[1003]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 18:02:52.713484 ignition[1003]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 18:02:52.716512 ignition[1003]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 18:02:52.717340 ignition[1003]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 18:02:52.718080 unknown[1003]: wrote ssh authorized keys file for user: core
May 27 18:02:52.718873 ignition[1003]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 18:02:52.722765 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 18:02:52.722765 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 27 18:02:52.778318 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 18:02:52.917777 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 18:02:52.917777 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 18:02:52.917777 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 27 18:02:53.469150 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 27 18:02:53.476910 systemd-networkd[810]: eth0: Gained IPv6LL
May 27 18:02:53.529316 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 18:02:53.534415 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 27 18:02:53.861067 systemd-networkd[810]: eth1: Gained IPv6LL
May 27 18:02:54.241461 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 27 18:02:54.495324 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 18:02:54.495324 ignition[1003]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 27 18:02:54.498117 ignition[1003]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 18:02:54.499355 ignition[1003]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 18:02:54.499355 ignition[1003]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 27 18:02:54.499355 ignition[1003]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 27 18:02:54.501923 ignition[1003]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 27 18:02:54.501923 ignition[1003]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 18:02:54.501923 ignition[1003]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 18:02:54.501923 ignition[1003]: INFO : files: files passed
May 27 18:02:54.501923 ignition[1003]: INFO : Ignition finished successfully
May 27 18:02:54.503467 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 18:02:54.506215 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 18:02:54.508941 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 18:02:54.525185 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 18:02:54.525338 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 18:02:54.536766 initrd-setup-root-after-ignition[1033]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 18:02:54.536766 initrd-setup-root-after-ignition[1033]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 18:02:54.538062 initrd-setup-root-after-ignition[1037]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 18:02:54.540818 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 18:02:54.541556 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 18:02:54.543419 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 18:02:54.601695 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 18:02:54.601879 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 18:02:54.603204 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 18:02:54.603710 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 18:02:54.604799 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 18:02:54.606294 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 18:02:54.663257 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 18:02:54.665434 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 18:02:54.696885 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 18:02:54.697932 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 18:02:54.699219 systemd[1]: Stopped target timers.target - Timer Units.
May 27 18:02:54.700374 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 18:02:54.700999 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 18:02:54.702059 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 18:02:54.702902 systemd[1]: Stopped target basic.target - Basic System.
May 27 18:02:54.703851 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 18:02:54.704689 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 18:02:54.705677 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 18:02:54.706572 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 18:02:54.707593 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 18:02:54.708478 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 18:02:54.709499 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 18:02:54.710385 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 18:02:54.711390 systemd[1]: Stopped target swap.target - Swaps.
May 27 18:02:54.712079 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 18:02:54.712283 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 18:02:54.713454 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 18:02:54.714563 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 18:02:54.715674 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 18:02:54.715854 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 18:02:54.716541 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 18:02:54.716817 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 18:02:54.717993 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 18:02:54.718337 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 18:02:54.719300 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 18:02:54.719484 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 18:02:54.720698 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 27 18:02:54.720988 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 18:02:54.724928 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 18:02:54.729252 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 18:02:54.733503 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 18:02:54.733884 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 18:02:54.735815 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 18:02:54.736083 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 18:02:54.748265 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 18:02:54.748440 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 18:02:54.762649 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 18:02:54.777653 ignition[1057]: INFO : Ignition 2.21.0
May 27 18:02:54.784711 ignition[1057]: INFO : Stage: umount
May 27 18:02:54.784711 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 18:02:54.784711 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 18:02:54.784711 ignition[1057]: INFO : umount: umount passed
May 27 18:02:54.784711 ignition[1057]: INFO : Ignition finished successfully
May 27 18:02:54.787179 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 18:02:54.789826 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 18:02:54.803329 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 18:02:54.803958 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 18:02:54.805225 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 18:02:54.805709 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 18:02:54.806651 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 18:02:54.806707 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 18:02:54.807921 systemd[1]: Stopped target network.target - Network.
May 27 18:02:54.808376 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 18:02:54.808460 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 18:02:54.809201 systemd[1]: Stopped target paths.target - Path Units.
May 27 18:02:54.809968 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 18:02:54.813903 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 18:02:54.814555 systemd[1]: Stopped target slices.target - Slice Units.
May 27 18:02:54.815567 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 18:02:54.816344 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 18:02:54.816425 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 18:02:54.817114 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 18:02:54.817179 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 18:02:54.817890 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 18:02:54.817992 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 18:02:54.818712 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 18:02:54.818820 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 18:02:54.819708 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 18:02:54.820539 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 18:02:54.822318 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 18:02:54.822469 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 18:02:54.824312 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 18:02:54.824482 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 18:02:54.829710 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 18:02:54.829954 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 18:02:54.837083 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 18:02:54.837476 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 18:02:54.837618 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 18:02:54.839695 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 18:02:54.841040 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 18:02:54.841466 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 18:02:54.841509 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 18:02:54.843636 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 18:02:54.844931 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 18:02:54.845002 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 18:02:54.845813 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 18:02:54.845865 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 18:02:54.848906 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 18:02:54.848994 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 18:02:54.849535 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 18:02:54.849599 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 18:02:54.851354 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 18:02:54.855225 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 18:02:54.855314 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 18:02:54.871293 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 18:02:54.871603 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 18:02:54.872924 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 18:02:54.873046 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 18:02:54.874372 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 18:02:54.874469 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 18:02:54.875447 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 18:02:54.875498 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 18:02:54.876364 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 18:02:54.876448 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 18:02:54.877786 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 18:02:54.877877 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 18:02:54.879165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 18:02:54.879266 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 18:02:54.881552 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 18:02:54.883090 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 18:02:54.883200 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 18:02:54.885986 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 18:02:54.886091 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 18:02:54.888806 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 18:02:54.888866 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 18:02:54.889935 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 18:02:54.889991 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 18:02:54.891026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 18:02:54.891086 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:02:54.893816 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 18:02:54.893916 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 18:02:54.893961 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 18:02:54.894004 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 18:02:54.906401 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 18:02:54.906526 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 18:02:54.907230 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 18:02:54.908634 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 18:02:54.931225 systemd[1]: Switching root.
May 27 18:02:54.978583 systemd-journald[211]: Journal stopped
May 27 18:02:56.283676 systemd-journald[211]: Received SIGTERM from PID 1 (systemd).
May 27 18:02:56.285214 kernel: SELinux: policy capability network_peer_controls=1
May 27 18:02:56.285250 kernel: SELinux: policy capability open_perms=1
May 27 18:02:56.285266 kernel: SELinux: policy capability extended_socket_class=1
May 27 18:02:56.285279 kernel: SELinux: policy capability always_check_network=0
May 27 18:02:56.285290 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 18:02:56.285302 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 18:02:56.285314 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 18:02:56.285329 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 18:02:56.285340 kernel: SELinux: policy capability userspace_initial_context=0
May 27 18:02:56.285352 kernel: audit: type=1403 audit(1748368975.124:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 18:02:56.285367 systemd[1]: Successfully loaded SELinux policy in 58.973ms.
May 27 18:02:56.285406 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.185ms.
May 27 18:02:56.285429 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 18:02:56.285444 systemd[1]: Detected virtualization kvm.
May 27 18:02:56.285455 systemd[1]: Detected architecture x86-64.
May 27 18:02:56.285471 systemd[1]: Detected first boot.
May 27 18:02:56.285483 systemd[1]: Hostname set to .
May 27 18:02:56.285499 systemd[1]: Initializing machine ID from VM UUID.
May 27 18:02:56.285511 zram_generator::config[1104]: No configuration found.
May 27 18:02:56.285531 kernel: Guest personality initialized and is inactive
May 27 18:02:56.285546 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 18:02:56.285563 kernel: Initialized host personality
May 27 18:02:56.285577 kernel: NET: Registered PF_VSOCK protocol family
May 27 18:02:56.285595 systemd[1]: Populated /etc with preset unit settings.
May 27 18:02:56.285620 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 18:02:56.285633 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 18:02:56.285647 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 18:02:56.285660 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 18:02:56.285674 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 18:02:56.285693 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 18:02:56.285711 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 18:02:56.289605 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 18:02:56.289700 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 18:02:56.289723 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 18:02:56.289764 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 18:02:56.289783 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 18:02:56.289802 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 18:02:56.289821 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 18:02:56.289841 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 18:02:56.289860 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 18:02:56.289884 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 18:02:56.289903 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 18:02:56.289922 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 18:02:56.289942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 18:02:56.289962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 18:02:56.289981 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 18:02:56.290001 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 18:02:56.290021 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 18:02:56.290034 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 18:02:56.290047 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 18:02:56.290060 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 18:02:56.290073 systemd[1]: Reached target slices.target - Slice Units.
May 27 18:02:56.290085 systemd[1]: Reached target swap.target - Swaps.
May 27 18:02:56.290099 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 18:02:56.290119 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 18:02:56.290132 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 18:02:56.290151 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 18:02:56.290167 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 18:02:56.290179 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 18:02:56.290229 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 18:02:56.290244 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 18:02:56.290256 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 18:02:56.290269 systemd[1]: Mounting media.mount - External Media Directory...
May 27 18:02:56.290282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:02:56.290299 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 18:02:56.290325 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 18:02:56.290344 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 18:02:56.290364 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 18:02:56.290382 systemd[1]: Reached target machines.target - Containers.
May 27 18:02:56.290400 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 18:02:56.290419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 18:02:56.290432 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 18:02:56.290453 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 18:02:56.290476 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 18:02:56.290498 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 18:02:56.290516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 18:02:56.290533 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 18:02:56.290554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 18:02:56.290572 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 18:02:56.290589 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 18:02:56.290608 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 18:02:56.290626 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 18:02:56.290649 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 18:02:56.290672 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 18:02:56.290692 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 18:02:56.290711 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 18:02:56.292141 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 18:02:56.292201 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 18:02:56.292232 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 18:02:56.292251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 18:02:56.292265 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 18:02:56.292281 systemd[1]: Stopped verity-setup.service.
May 27 18:02:56.292305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:02:56.292319 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 18:02:56.292332 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 18:02:56.292346 systemd[1]: Mounted media.mount - External Media Directory.
May 27 18:02:56.292358 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 18:02:56.292371 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 18:02:56.292384 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 18:02:56.292401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 18:02:56.292443 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 18:02:56.292465 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 18:02:56.292485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 18:02:56.292503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 18:02:56.292517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 18:02:56.292531 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 18:02:56.292544 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 18:02:56.292575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 18:02:56.292588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 18:02:56.292602 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 18:02:56.292682 systemd-journald[1169]: Collecting audit messages is disabled.
May 27 18:02:56.292725 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 18:02:56.292798 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 18:02:56.292818 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 18:02:56.292838 systemd-journald[1169]: Journal started
May 27 18:02:56.292873 systemd-journald[1169]: Runtime Journal (/run/log/journal/9793a70bd9fb4fccbfd36e9d5adb3cec) is 4.9M, max 39.5M, 34.6M free.
May 27 18:02:56.294002 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 18:02:55.947559 systemd[1]: Queued start job for default target multi-user.target.
May 27 18:02:55.973337 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 18:02:55.974132 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 18:02:56.332077 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 18:02:56.335219 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 18:02:56.338679 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 18:02:56.340953 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 18:02:56.343603 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 18:02:56.344565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 18:02:56.351029 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 18:02:56.356066 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 18:02:56.358003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 18:02:56.366388 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 18:02:56.371774 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 18:02:56.377766 kernel: loop: module loaded
May 27 18:02:56.374826 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 18:02:56.375875 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 18:02:56.402008 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 18:02:56.403851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 18:02:56.404856 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 18:02:56.411784 kernel: loop0: detected capacity change from 0 to 224512
May 27 18:02:56.421297 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 27 18:02:56.421320 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 27 18:02:56.424755 kernel: fuse: init (API version 7.41)
May 27 18:02:56.427722 systemd-journald[1169]: Time spent on flushing to /var/log/journal/9793a70bd9fb4fccbfd36e9d5adb3cec is 125.562ms for 1011 entries.
May 27 18:02:56.427722 systemd-journald[1169]: System Journal (/var/log/journal/9793a70bd9fb4fccbfd36e9d5adb3cec) is 8M, max 195.6M, 187.6M free.
May 27 18:02:56.578055 systemd-journald[1169]: Received client request to flush runtime journal.
May 27 18:02:56.578127 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 18:02:56.578156 kernel: loop1: detected capacity change from 0 to 146240
May 27 18:02:56.578171 kernel: ACPI: bus type drm_connector registered
May 27 18:02:56.445191 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 18:02:56.445485 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 18:02:56.456819 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 18:02:56.459458 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 18:02:56.467072 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 18:02:56.469328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 18:02:56.528076 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 18:02:56.528272 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 18:02:56.535813 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 18:02:56.538253 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 18:02:56.551142 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 18:02:56.583257 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 18:02:56.586880 kernel: loop2: detected capacity change from 0 to 113872
May 27 18:02:56.630872 kernel: loop3: detected capacity change from 0 to 8
May 27 18:02:56.659775 kernel: loop4: detected capacity change from 0 to 224512
May 27 18:02:56.693273 kernel: loop5: detected capacity change from 0 to 146240
May 27 18:02:56.735832 kernel: loop6: detected capacity change from 0 to 113872
May 27 18:02:56.744012 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 18:02:56.747384 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 18:02:56.754267 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 18:02:56.764772 kernel: loop7: detected capacity change from 0 to 8
May 27 18:02:56.765588 (sd-merge)[1246]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
May 27 18:02:56.766479 (sd-merge)[1246]: Merged extensions into '/usr'.
May 27 18:02:56.783898 systemd[1]: Reload requested from client PID 1219 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 18:02:56.783930 systemd[1]: Reloading...
May 27 18:02:56.951242 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
May 27 18:02:56.951265 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
May 27 18:02:57.005949 zram_generator::config[1276]: No configuration found.
May 27 18:02:57.305276 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 18:02:57.311549 ldconfig[1214]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 18:02:57.445142 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 18:02:57.445671 systemd[1]: Reloading finished in 660 ms.
May 27 18:02:57.466120 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 18:02:57.467676 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 18:02:57.468898 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 18:02:57.478977 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 18:02:57.489990 systemd[1]: Starting ensure-sysext.service...
May 27 18:02:57.500104 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 18:02:57.514139 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 18:02:57.530956 systemd[1]: Reload requested from client PID 1321 ('systemctl') (unit ensure-sysext.service)...
May 27 18:02:57.530977 systemd[1]: Reloading...
May 27 18:02:57.572529 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 18:02:57.572572 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 18:02:57.573063 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 18:02:57.573341 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 18:02:57.574337 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 18:02:57.574661 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
May 27 18:02:57.574724 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
May 27 18:02:57.590583 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot.
May 27 18:02:57.590597 systemd-tmpfiles[1322]: Skipping /boot
May 27 18:02:57.617806 zram_generator::config[1346]: No configuration found.
May 27 18:02:57.652550 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot.
May 27 18:02:57.652570 systemd-tmpfiles[1322]: Skipping /boot
May 27 18:02:57.844986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 18:02:57.963366 systemd[1]: Reloading finished in 431 ms.
May 27 18:02:57.992682 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 18:02:58.001724 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 18:02:58.012029 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 18:02:58.015037 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 18:02:58.023322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 18:02:58.033156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 18:02:58.038108 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 18:02:58.046666 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 18:02:58.058082 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:02:58.058713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 18:02:58.070330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 18:02:58.073793 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 18:02:58.085310 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 18:02:58.087049 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 18:02:58.087347 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 18:02:58.087513 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:02:58.097207 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:02:58.097553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 18:02:58.098898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 18:02:58.099106 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 18:02:58.106967 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 18:02:58.107609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:02:58.114234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:02:58.114621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 18:02:58.131066 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 18:02:58.131814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 18:02:58.131972 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 18:02:58.132126 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:02:58.135624 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 18:02:58.137088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 18:02:58.137846 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 18:02:58.153893 systemd[1]: Finished ensure-sysext.service.
May 27 18:02:58.154895 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 18:02:58.155152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 18:02:58.159902 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 18:02:58.168380 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 18:02:58.169833 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 18:02:58.170375 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 18:02:58.170608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 18:02:58.178828 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 18:02:58.179931 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 18:02:58.193118 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 18:02:58.197041 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 18:02:58.216488 augenrules[1433]: No rules
May 27 18:02:58.220666 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 18:02:58.222563 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 18:02:58.224991 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 18:02:58.235238 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 18:02:58.260218 systemd-udevd[1400]: Using default interface naming scheme 'v255'.
May 27 18:02:58.270849 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 18:02:58.283689 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 18:02:58.324422 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 18:02:58.332135 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 18:02:58.402391 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 18:02:58.403174 systemd[1]: Reached target time-set.target - System Time Set.
May 27 18:02:58.494645 systemd-resolved[1398]: Positive Trust Anchors:
May 27 18:02:58.495857 systemd-resolved[1398]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 18:02:58.495931 systemd-resolved[1398]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 18:02:58.513718 systemd-resolved[1398]: Using system hostname 'ci-4344.0.0-1-b2ae16c630'.
May 27 18:02:58.522305 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 18:02:58.524968 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 18:02:58.525543 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 18:02:58.526163 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 18:02:58.526775 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 18:02:58.527234 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 18:02:58.527850 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 18:02:58.528323 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 18:02:58.528754 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 18:02:58.529316 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 18:02:58.529350 systemd[1]: Reached target paths.target - Path Units.
May 27 18:02:58.529646 systemd[1]: Reached target timers.target - Timer Units.
May 27 18:02:58.531230 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 18:02:58.534218 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 18:02:58.540323 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 18:02:58.542192 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 18:02:58.542972 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 18:02:58.551085 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 18:02:58.553850 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 18:02:58.555514 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 18:02:58.562787 systemd[1]: Reached target sockets.target - Socket Units.
May 27 18:02:58.564903 systemd[1]: Reached target basic.target - Basic System.
May 27 18:02:58.565453 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 18:02:58.565481 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 18:02:58.567937 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 18:02:58.573711 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 18:02:58.576151 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 18:02:58.582282 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 18:02:58.590296 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 18:02:58.590797 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 18:02:58.599477 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 18:02:58.607109 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 18:02:58.617100 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 18:02:58.621704 systemd-networkd[1451]: lo: Link UP
May 27 18:02:58.622831 systemd-networkd[1451]: lo: Gained carrier
May 27 18:02:58.625381 systemd-networkd[1451]: Enumeration completed
May 27 18:02:58.627173 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 18:02:58.633189 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 18:02:58.649182 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 18:02:58.652665 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 18:02:58.657139 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 18:02:58.663944 systemd[1]: Starting update-engine.service - Update Engine...
May 27 18:02:58.677988 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 18:02:58.679513 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 18:02:58.684476 jq[1480]: false
May 27 18:02:58.688621 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 18:02:58.689451 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 18:02:58.689794 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 18:02:58.700632 systemd[1]: Reached target network.target - Network.
May 27 18:02:58.711088 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 18:02:58.713391 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 18:02:58.722181 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 18:02:58.732540 oslogin_cache_refresh[1482]: Refreshing passwd entry cache
May 27 18:02:58.736292 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Refreshing passwd entry cache
May 27 18:02:58.738782 jq[1492]: true
May 27 18:02:58.774196 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Failure getting users, quitting
May 27 18:02:58.774196 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 18:02:58.774196 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Refreshing group entry cache
May 27 18:02:58.769769 oslogin_cache_refresh[1482]: Failure getting users, quitting
May 27 18:02:58.769806 oslogin_cache_refresh[1482]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 18:02:58.769934 oslogin_cache_refresh[1482]: Refreshing group entry cache
May 27 18:02:58.776327 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Failure getting groups, quitting
May 27 18:02:58.776327 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 18:02:58.775658 oslogin_cache_refresh[1482]: Failure getting groups, quitting
May 27 18:02:58.775682 oslogin_cache_refresh[1482]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 18:02:58.776601 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 18:02:58.778099 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 18:02:58.791936 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 18:02:58.801477 extend-filesystems[1481]: Found loop4
May 27 18:02:58.801477 extend-filesystems[1481]: Found loop5
May 27 18:02:58.801477 extend-filesystems[1481]: Found loop6
May 27 18:02:58.801477 extend-filesystems[1481]: Found loop7
May 27 18:02:58.801477 extend-filesystems[1481]: Found vda
May 27 18:02:58.801477 extend-filesystems[1481]: Found vda1
May 27 18:02:58.801477 extend-filesystems[1481]: Found vda2
May 27 18:02:58.801477 extend-filesystems[1481]: Found vda3
May 27 18:02:58.801477 extend-filesystems[1481]: Found usr
May 27 18:02:58.801477 extend-filesystems[1481]: Found vda4
May 27 18:02:58.801477 extend-filesystems[1481]: Found vda6
May 27 18:02:58.801477 extend-filesystems[1481]: Found vda7
May 27 18:02:58.801477 extend-filesystems[1481]: Found vda9
May 27 18:02:58.801477 extend-filesystems[1481]: Found vdb
May 27 18:02:58.792988 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 18:02:58.853852 coreos-metadata[1477]: May 27 18:02:58.820 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 27 18:02:58.853852 coreos-metadata[1477]: May 27 18:02:58.832 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
May 27 18:02:58.806250 dbus-daemon[1478]: [system] SELinux support is enabled
May 27 18:02:58.806716 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 18:02:58.833185 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 18:02:58.833574 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 18:02:58.860577 jq[1503]: true
May 27 18:02:58.839377 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 18:02:58.839437 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 18:02:58.840226 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 18:02:58.840259 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 18:02:58.869881 update_engine[1491]: I20250527 18:02:58.866837 1491 main.cc:92] Flatcar Update Engine starting
May 27 18:02:58.875808 systemd[1]: Started update-engine.service - Update Engine.
May 27 18:02:58.876665 update_engine[1491]: I20250527 18:02:58.876008 1491 update_check_scheduler.cc:74] Next update check in 10m44s
May 27 18:02:58.892735 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 18:02:58.894946 systemd[1]: motdgen.service: Deactivated successfully.
May 27 18:02:58.895196 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 18:02:58.908368 tar[1501]: linux-amd64/LICENSE
May 27 18:02:58.908368 tar[1501]: linux-amd64/helm
May 27 18:02:58.924371 (ntainerd)[1521]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 18:02:58.925313 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 18:02:59.078124 bash[1538]: Updated "/home/core/.ssh/authorized_keys"
May 27 18:02:59.082877 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 18:02:59.087475 systemd[1]: Starting sshkeys.service...
May 27 18:02:59.166069 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
May 27 18:02:59.168973 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
May 27 18:02:59.169919 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 18:02:59.230102 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 27 18:02:59.237449 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 27 18:02:59.302082 kernel: ISO 9660 Extensions: RRIP_1991A
May 27 18:02:59.355130 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
May 27 18:02:59.357676 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
May 27 18:02:59.429721 systemd-networkd[1451]: eth0: Configuring with /run/systemd/network/10-9e:e4:b4:66:5f:28.network.
May 27 18:02:59.439828 systemd-networkd[1451]: eth0: Link UP
May 27 18:02:59.446023 systemd-networkd[1451]: eth0: Gained carrier
May 27 18:02:59.467263 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
May 27 18:02:59.537979 coreos-metadata[1545]: May 27 18:02:59.537 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 27 18:02:59.562519 coreos-metadata[1545]: May 27 18:02:59.561 INFO Fetch successful
May 27 18:02:59.576234 unknown[1545]: wrote ssh authorized keys file for user: core
May 27 18:02:59.588427 containerd[1521]: time="2025-05-27T18:02:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 18:02:59.605018 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 18:02:59.612963 containerd[1521]: time="2025-05-27T18:02:59.612873680Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 18:02:59.635669 update-ssh-keys[1559]: Updated "/home/core/.ssh/authorized_keys"
May 27 18:02:59.639679 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 27 18:02:59.648610 systemd[1]: Finished sshkeys.service.
May 27 18:02:59.671963 containerd[1521]: time="2025-05-27T18:02:59.671874205Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="20.939µs"
May 27 18:02:59.671963 containerd[1521]: time="2025-05-27T18:02:59.671937952Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 18:02:59.671963 containerd[1521]: time="2025-05-27T18:02:59.671967766Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 18:02:59.672278 containerd[1521]: time="2025-05-27T18:02:59.672196349Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 18:02:59.672278 containerd[1521]: time="2025-05-27T18:02:59.672228346Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 18:02:59.672278 containerd[1521]: time="2025-05-27T18:02:59.672274394Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 18:02:59.672390 containerd[1521]: time="2025-05-27T18:02:59.672378043Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 18:02:59.672426 containerd[1521]: time="2025-05-27T18:02:59.672396746Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 18:02:59.675756 containerd[1521]: time="2025-05-27T18:02:59.675312895Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 18:02:59.675756 containerd[1521]: time="2025-05-27T18:02:59.675369121Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 18:02:59.675756 containerd[1521]: time="2025-05-27T18:02:59.675395399Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 18:02:59.675756 containerd[1521]: time="2025-05-27T18:02:59.675406341Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 18:02:59.675756 containerd[1521]: time="2025-05-27T18:02:59.675629324Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 18:02:59.678143 containerd[1521]: time="2025-05-27T18:02:59.676965642Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 18:02:59.678143 containerd[1521]: time="2025-05-27T18:02:59.677050990Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 18:02:59.678143 containerd[1521]: time="2025-05-27T18:02:59.677069141Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 18:02:59.678143 containerd[1521]: time="2025-05-27T18:02:59.677108521Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 18:02:59.678143 containerd[1521]: time="2025-05-27T18:02:59.677394946Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 18:02:59.678143 containerd[1521]: time="2025-05-27T18:02:59.677485948Z" level=info msg="metadata content store policy set" policy=shared
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.684813765Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.684896117Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.684911731Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.684946562Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.684965510Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.684986664Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.685006331Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.685018203Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.685029318Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.685039098Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.685049070Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.685101444Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.685312203Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 18:02:59.685902 containerd[1521]: time="2025-05-27T18:02:59.685350647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685378500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685392223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685407337Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685418638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685429613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685439585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685453394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685464423Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685474768Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685555463Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685570215Z" level=info msg="Start snapshots syncer"
May 27 18:02:59.686521 containerd[1521]: time="2025-05-27T18:02:59.685596564Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 18:02:59.690131 containerd[1521]: time="2025-05-27T18:02:59.685965638Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 18:02:59.690131 containerd[1521]: time="2025-05-27T18:02:59.686047161Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 18:02:59.690383 containerd[1521]: time="2025-05-27T18:02:59.689413708Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 18:02:59.690383 containerd[1521]: time="2025-05-27T18:02:59.689668872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.691824630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.691880429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.691904181Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.691953070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.691971768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.691984132Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.692023996Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.692040939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.692057223Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.692105877Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.692129332Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.692199290Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.692219768Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 18:02:59.693115 containerd[1521]: time="2025-05-27T18:02:59.692233286Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 18:02:59.693776 containerd[1521]: time="2025-05-27T18:02:59.692245524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 18:02:59.693776 containerd[1521]: time="2025-05-27T18:02:59.692259040Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 18:02:59.693776 containerd[1521]: time="2025-05-27T18:02:59.692280627Z" level=info msg="runtime interface created" May 27 18:02:59.693776 containerd[1521]: 
time="2025-05-27T18:02:59.692287006Z" level=info msg="created NRI interface" May 27 18:02:59.693776 containerd[1521]: time="2025-05-27T18:02:59.692295126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 18:02:59.693776 containerd[1521]: time="2025-05-27T18:02:59.692315332Z" level=info msg="Connect containerd service" May 27 18:02:59.693776 containerd[1521]: time="2025-05-27T18:02:59.692394526Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 18:02:59.701465 containerd[1521]: time="2025-05-27T18:02:59.701398864Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 18:02:59.742070 systemd-logind[1490]: New seat seat0. May 27 18:02:59.749680 systemd[1]: Started systemd-logind.service - User Login Management. May 27 18:02:59.837463 coreos-metadata[1477]: May 27 18:02:59.833 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 May 27 18:02:59.849210 coreos-metadata[1477]: May 27 18:02:59.848 INFO Fetch successful May 27 18:02:59.880655 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 18:02:59.891256 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 18:02:59.973303 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 18:02:59.997102 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 18:02:59.998327 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
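The `failed to load cni during init` error above is emitted because containerd's CRI plugin found no network configuration in `/etc/cni/net.d` (the `confDir` shown in the CRI config dump earlier in this log). On a node that is expected to run pods, this is typically resolved by a CNI plugin installing a conflist file there. As a hedged illustration only, a minimal bridge-based conflist in the standard CNI format might look like the following; the network name, bridge name, and subnet are placeholder assumptions, not values taken from this system:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```

Placed at, for example, `/etc/cni/net.d/10-example.conflist`, a file like this would let the CRI plugin's cni conf syncer (started later in this log) pick up a network config. In a real Kubernetes deployment the file is usually written by the cluster's chosen CNI plugin rather than by hand.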
May 27 18:03:00.008690 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 18:03:00.054598 containerd[1521]: time="2025-05-27T18:03:00.053826447Z" level=info msg="Start subscribing containerd event" May 27 18:03:00.054598 containerd[1521]: time="2025-05-27T18:03:00.053935581Z" level=info msg="Start recovering state" May 27 18:03:00.054598 containerd[1521]: time="2025-05-27T18:03:00.054075944Z" level=info msg="Start event monitor" May 27 18:03:00.054598 containerd[1521]: time="2025-05-27T18:03:00.054097910Z" level=info msg="Start cni network conf syncer for default" May 27 18:03:00.054598 containerd[1521]: time="2025-05-27T18:03:00.054107369Z" level=info msg="Start streaming server" May 27 18:03:00.054598 containerd[1521]: time="2025-05-27T18:03:00.054122660Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 18:03:00.054598 containerd[1521]: time="2025-05-27T18:03:00.054133196Z" level=info msg="runtime interface starting up..." May 27 18:03:00.054598 containerd[1521]: time="2025-05-27T18:03:00.054141140Z" level=info msg="starting plugins..." May 27 18:03:00.054598 containerd[1521]: time="2025-05-27T18:03:00.054173558Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 18:03:00.055278 containerd[1521]: time="2025-05-27T18:03:00.055160984Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 18:03:00.055440 containerd[1521]: time="2025-05-27T18:03:00.055367160Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 18:03:00.055643 containerd[1521]: time="2025-05-27T18:03:00.055623855Z" level=info msg="containerd successfully booted in 0.471129s" May 27 18:03:00.055907 systemd[1]: Started containerd.service - containerd container runtime. May 27 18:03:00.084320 systemd-networkd[1451]: eth1: Configuring with /run/systemd/network/10-16:07:47:fe:ce:0f.network. 
May 27 18:03:00.086387 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. May 27 18:03:00.086456 systemd-networkd[1451]: eth1: Link UP May 27 18:03:00.086830 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. May 27 18:03:00.089069 systemd-networkd[1451]: eth1: Gained carrier May 27 18:03:00.096254 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. May 27 18:03:00.098376 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. May 27 18:03:00.140839 kernel: mousedev: PS/2 mouse device common for all mice May 27 18:03:00.169768 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 18:03:00.186761 kernel: ACPI: button: Power Button [PWRF] May 27 18:03:00.214766 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 27 18:03:00.272246 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 18:03:00.687382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:03:00.706959 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 27 18:03:00.715773 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 27 18:03:00.754258 kernel: Console: switching to colour dummy device 80x25 May 27 18:03:00.754832 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 27 18:03:00.754988 kernel: [drm] features: -context_init May 27 18:03:00.764404 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 18:03:00.768143 kernel: [drm] number of scanouts: 1 May 27 18:03:00.768318 kernel: [drm] number of cap sets: 0 May 27 18:03:00.773958 systemd-networkd[1451]: eth0: Gained IPv6LL May 27 18:03:00.776034 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. 
May 27 18:03:00.787491 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 18:03:00.788020 systemd[1]: Reached target network-online.target - Network is Online. May 27 18:03:00.794930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:03:00.799890 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 18:03:00.827857 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button) May 27 18:03:00.834807 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 May 27 18:03:00.859791 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 18:03:00.902882 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 18:03:00.916999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 18:03:00.917391 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:03:00.921683 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 18:03:00.925077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:03:00.991395 kernel: EDAC MC: Ver: 3.0.0 May 27 18:03:01.006849 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 18:03:01.010718 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 18:03:01.042115 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:03:01.061659 systemd[1]: issuegen.service: Deactivated successfully. May 27 18:03:01.063009 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 18:03:01.067078 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 18:03:01.118984 tar[1501]: linux-amd64/README.md May 27 18:03:01.119868 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
May 27 18:03:01.129160 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 18:03:01.132452 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 18:03:01.134242 systemd[1]: Reached target getty.target - Login Prompts. May 27 18:03:01.148914 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 18:03:01.413167 systemd-networkd[1451]: eth1: Gained IPv6LL May 27 18:03:01.415132 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. May 27 18:03:02.219160 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 18:03:02.221572 systemd[1]: Started sshd@0-137.184.189.209:22-139.178.68.195:56156.service - OpenSSH per-connection server daemon (139.178.68.195:56156). May 27 18:03:02.336902 sshd[1649]: Accepted publickey for core from 139.178.68.195 port 56156 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:03:02.340527 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:03:02.354296 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 18:03:02.355865 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 18:03:02.371177 systemd-logind[1490]: New session 1 of user core. May 27 18:03:02.377008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:03:02.378054 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 18:03:02.388506 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:03:02.394843 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 18:03:02.399567 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 27 18:03:02.416637 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 18:03:02.423989 systemd-logind[1490]: New session c1 of user core. May 27 18:03:02.630436 systemd[1659]: Queued start job for default target default.target. May 27 18:03:02.638360 systemd[1659]: Created slice app.slice - User Application Slice. May 27 18:03:02.638407 systemd[1659]: Reached target paths.target - Paths. May 27 18:03:02.638462 systemd[1659]: Reached target timers.target - Timers. May 27 18:03:02.642944 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 18:03:02.663994 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 18:03:02.664134 systemd[1659]: Reached target sockets.target - Sockets. May 27 18:03:02.664190 systemd[1659]: Reached target basic.target - Basic System. May 27 18:03:02.664229 systemd[1659]: Reached target default.target - Main User Target. May 27 18:03:02.664267 systemd[1659]: Startup finished in 227ms. May 27 18:03:02.665790 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 18:03:02.682266 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 18:03:02.683244 systemd[1]: Startup finished in 3.558s (kernel) + 6.481s (initrd) + 7.616s (userspace) = 17.655s. May 27 18:03:02.766190 systemd[1]: Started sshd@1-137.184.189.209:22-139.178.68.195:43432.service - OpenSSH per-connection server daemon (139.178.68.195:43432). May 27 18:03:02.853964 sshd[1678]: Accepted publickey for core from 139.178.68.195 port 43432 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:03:02.857071 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:03:02.870850 systemd-logind[1490]: New session 2 of user core. May 27 18:03:02.887814 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 27 18:03:02.958854 sshd[1680]: Connection closed by 139.178.68.195 port 43432 May 27 18:03:02.959537 sshd-session[1678]: pam_unix(sshd:session): session closed for user core May 27 18:03:02.975795 systemd[1]: sshd@1-137.184.189.209:22-139.178.68.195:43432.service: Deactivated successfully. May 27 18:03:02.980015 systemd[1]: session-2.scope: Deactivated successfully. May 27 18:03:02.982798 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit. May 27 18:03:02.987627 systemd[1]: Started sshd@2-137.184.189.209:22-139.178.68.195:43438.service - OpenSSH per-connection server daemon (139.178.68.195:43438). May 27 18:03:02.991968 systemd-logind[1490]: Removed session 2. May 27 18:03:03.055372 sshd[1686]: Accepted publickey for core from 139.178.68.195 port 43438 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:03:03.056503 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:03:03.064331 systemd-logind[1490]: New session 3 of user core. May 27 18:03:03.069089 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 18:03:03.134099 sshd[1688]: Connection closed by 139.178.68.195 port 43438 May 27 18:03:03.136419 sshd-session[1686]: pam_unix(sshd:session): session closed for user core May 27 18:03:03.147274 systemd[1]: sshd@2-137.184.189.209:22-139.178.68.195:43438.service: Deactivated successfully. May 27 18:03:03.150440 systemd[1]: session-3.scope: Deactivated successfully. May 27 18:03:03.153132 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit. May 27 18:03:03.159108 systemd[1]: Started sshd@3-137.184.189.209:22-139.178.68.195:43442.service - OpenSSH per-connection server daemon (139.178.68.195:43442). May 27 18:03:03.161033 systemd-logind[1490]: Removed session 3. 
May 27 18:03:03.204597 kubelet[1657]: E0527 18:03:03.204518 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:03:03.210639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:03:03.210855 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:03:03.211916 systemd[1]: kubelet.service: Consumed 1.452s CPU time, 263.3M memory peak. May 27 18:03:03.224701 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 43442 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:03:03.226617 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:03:03.233242 systemd-logind[1490]: New session 4 of user core. May 27 18:03:03.241029 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 18:03:03.308131 sshd[1698]: Connection closed by 139.178.68.195 port 43442 May 27 18:03:03.308950 sshd-session[1695]: pam_unix(sshd:session): session closed for user core May 27 18:03:03.321597 systemd[1]: sshd@3-137.184.189.209:22-139.178.68.195:43442.service: Deactivated successfully. May 27 18:03:03.324503 systemd[1]: session-4.scope: Deactivated successfully. May 27 18:03:03.326014 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit. May 27 18:03:03.332046 systemd[1]: Started sshd@4-137.184.189.209:22-139.178.68.195:43452.service - OpenSSH per-connection server daemon (139.178.68.195:43452). May 27 18:03:03.333375 systemd-logind[1490]: Removed session 4. 
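The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`) indicates the node has not yet been joined to a cluster: that file is normally written by `kubeadm init`/`kubeadm join`, so the unit exiting with status 1 at this stage is expected on a fresh droplet. For illustration only, a minimal `KubeletConfiguration` in that file follows the `kubelet.config.k8s.io/v1beta1` schema; the field values below are placeholder assumptions, not taken from this host:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Placeholder values for illustration; kubeadm generates the real file.
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
```

Note that `cgroupDriver: systemd` would be consistent with the `SystemdCgroup: true` runc option visible in the containerd CRI config dump earlier in this log, which is the usual pairing on systemd-managed hosts.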
May 27 18:03:03.395297 sshd[1704]: Accepted publickey for core from 139.178.68.195 port 43452 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:03:03.397308 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:03:03.404986 systemd-logind[1490]: New session 5 of user core. May 27 18:03:03.416369 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 18:03:03.493278 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 18:03:03.493651 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:03:03.512321 sudo[1707]: pam_unix(sudo:session): session closed for user root May 27 18:03:03.517533 sshd[1706]: Connection closed by 139.178.68.195 port 43452 May 27 18:03:03.518846 sshd-session[1704]: pam_unix(sshd:session): session closed for user core May 27 18:03:03.536506 systemd[1]: sshd@4-137.184.189.209:22-139.178.68.195:43452.service: Deactivated successfully. May 27 18:03:03.539658 systemd[1]: session-5.scope: Deactivated successfully. May 27 18:03:03.541278 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit. May 27 18:03:03.546882 systemd[1]: Started sshd@5-137.184.189.209:22-139.178.68.195:43458.service - OpenSSH per-connection server daemon (139.178.68.195:43458). May 27 18:03:03.548870 systemd-logind[1490]: Removed session 5. May 27 18:03:03.617025 sshd[1713]: Accepted publickey for core from 139.178.68.195 port 43458 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:03:03.619316 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:03:03.626676 systemd-logind[1490]: New session 6 of user core. May 27 18:03:03.638222 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 27 18:03:03.704333 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 18:03:03.705505 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:03:03.713447 sudo[1717]: pam_unix(sudo:session): session closed for user root May 27 18:03:03.723993 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 18:03:03.724989 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:03:03.746469 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 18:03:03.807264 augenrules[1739]: No rules May 27 18:03:03.809273 systemd[1]: audit-rules.service: Deactivated successfully. May 27 18:03:03.809700 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 18:03:03.811934 sudo[1716]: pam_unix(sudo:session): session closed for user root May 27 18:03:03.817909 sshd[1715]: Connection closed by 139.178.68.195 port 43458 May 27 18:03:03.819087 sshd-session[1713]: pam_unix(sshd:session): session closed for user core May 27 18:03:03.831760 systemd[1]: sshd@5-137.184.189.209:22-139.178.68.195:43458.service: Deactivated successfully. May 27 18:03:03.834584 systemd[1]: session-6.scope: Deactivated successfully. May 27 18:03:03.836020 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit. May 27 18:03:03.841939 systemd[1]: Started sshd@6-137.184.189.209:22-139.178.68.195:43466.service - OpenSSH per-connection server daemon (139.178.68.195:43466). May 27 18:03:03.843644 systemd-logind[1490]: Removed session 6. 
May 27 18:03:03.906839 sshd[1748]: Accepted publickey for core from 139.178.68.195 port 43466 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:03:03.908899 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:03:03.918088 systemd-logind[1490]: New session 7 of user core. May 27 18:03:03.929190 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 18:03:03.992429 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 18:03:03.992918 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:03:04.569966 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 18:03:04.599547 (dockerd)[1771]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 18:03:04.981145 dockerd[1771]: time="2025-05-27T18:03:04.980969870Z" level=info msg="Starting up" May 27 18:03:04.984219 dockerd[1771]: time="2025-05-27T18:03:04.984056448Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 18:03:05.024894 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1513373591-merged.mount: Deactivated successfully. May 27 18:03:05.102433 dockerd[1771]: time="2025-05-27T18:03:05.102343512Z" level=info msg="Loading containers: start." May 27 18:03:05.113765 kernel: Initializing XFRM netlink socket May 27 18:03:05.412077 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. May 27 18:03:05.416430 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. May 27 18:03:05.430991 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. 
May 27 18:03:05.485393 systemd-networkd[1451]: docker0: Link UP May 27 18:03:05.486222 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. May 27 18:03:05.489881 dockerd[1771]: time="2025-05-27T18:03:05.488903350Z" level=info msg="Loading containers: done." May 27 18:03:05.512356 dockerd[1771]: time="2025-05-27T18:03:05.512288250Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 18:03:05.512792 dockerd[1771]: time="2025-05-27T18:03:05.512726134Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 18:03:05.513078 dockerd[1771]: time="2025-05-27T18:03:05.513053359Z" level=info msg="Initializing buildkit" May 27 18:03:05.551298 dockerd[1771]: time="2025-05-27T18:03:05.551235130Z" level=info msg="Completed buildkit initialization" May 27 18:03:05.558445 dockerd[1771]: time="2025-05-27T18:03:05.558387326Z" level=info msg="Daemon has completed initialization" May 27 18:03:05.558786 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 18:03:05.559678 dockerd[1771]: time="2025-05-27T18:03:05.559570821Z" level=info msg="API listen on /run/docker.sock" May 27 18:03:06.022863 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3255634323-merged.mount: Deactivated successfully. May 27 18:03:06.506102 containerd[1521]: time="2025-05-27T18:03:06.505594424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 27 18:03:07.052036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1644728077.mount: Deactivated successfully. 
May 27 18:03:08.328966 containerd[1521]: time="2025-05-27T18:03:08.328884210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:08.330769 containerd[1521]: time="2025-05-27T18:03:08.330535913Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 27 18:03:08.331759 containerd[1521]: time="2025-05-27T18:03:08.331657612Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:08.335572 containerd[1521]: time="2025-05-27T18:03:08.335484181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:08.337666 containerd[1521]: time="2025-05-27T18:03:08.337209376Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.831552186s" May 27 18:03:08.337666 containerd[1521]: time="2025-05-27T18:03:08.337277049Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 27 18:03:08.338255 containerd[1521]: time="2025-05-27T18:03:08.338208878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 27 18:03:09.891670 containerd[1521]: time="2025-05-27T18:03:09.890694995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:09.892246 containerd[1521]: time="2025-05-27T18:03:09.892120129Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 27 18:03:09.893151 containerd[1521]: time="2025-05-27T18:03:09.892835621Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:09.896088 containerd[1521]: time="2025-05-27T18:03:09.896000704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:09.897413 containerd[1521]: time="2025-05-27T18:03:09.897346077Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.559086801s" May 27 18:03:09.897413 containerd[1521]: time="2025-05-27T18:03:09.897406504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 27 18:03:09.898898 containerd[1521]: time="2025-05-27T18:03:09.898781971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 27 18:03:11.123623 containerd[1521]: time="2025-05-27T18:03:11.123477436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:11.125257 containerd[1521]: time="2025-05-27T18:03:11.125172636Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 27 18:03:11.125700 containerd[1521]: time="2025-05-27T18:03:11.125656957Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:11.129755 containerd[1521]: time="2025-05-27T18:03:11.129652894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:11.130725 containerd[1521]: time="2025-05-27T18:03:11.130334760Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.231505496s" May 27 18:03:11.130725 containerd[1521]: time="2025-05-27T18:03:11.130382645Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 27 18:03:11.131129 containerd[1521]: time="2025-05-27T18:03:11.131068403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 27 18:03:12.321160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount682833830.mount: Deactivated successfully. 
May 27 18:03:13.022403 containerd[1521]: time="2025-05-27T18:03:13.022321863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:13.023600 containerd[1521]: time="2025-05-27T18:03:13.023366518Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 27 18:03:13.024333 containerd[1521]: time="2025-05-27T18:03:13.024292016Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:13.026296 containerd[1521]: time="2025-05-27T18:03:13.026251755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:13.026973 containerd[1521]: time="2025-05-27T18:03:13.026937948Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.895839914s" May 27 18:03:13.027122 containerd[1521]: time="2025-05-27T18:03:13.027089230Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 27 18:03:13.027881 containerd[1521]: time="2025-05-27T18:03:13.027796770Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 18:03:13.029532 systemd-resolved[1398]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
May 27 18:03:13.463802 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 18:03:13.467700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:03:13.496078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665537587.mount: Deactivated successfully. May 27 18:03:13.745026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:03:13.760014 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:03:13.912924 kubelet[2071]: E0527 18:03:13.912556 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:03:13.919573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:03:13.919775 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:03:13.921918 systemd[1]: kubelet.service: Consumed 288ms CPU time, 109.7M memory peak. 
May 27 18:03:14.571760 containerd[1521]: time="2025-05-27T18:03:14.571681146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:14.576761 containerd[1521]: time="2025-05-27T18:03:14.575924505Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 27 18:03:14.576761 containerd[1521]: time="2025-05-27T18:03:14.576608975Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:14.583702 containerd[1521]: time="2025-05-27T18:03:14.583635844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:14.585064 containerd[1521]: time="2025-05-27T18:03:14.585011576Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.557164393s" May 27 18:03:14.585064 containerd[1521]: time="2025-05-27T18:03:14.585059141Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 18:03:14.586507 containerd[1521]: time="2025-05-27T18:03:14.586467463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 18:03:15.004535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251494806.mount: Deactivated successfully. 
May 27 18:03:15.008580 containerd[1521]: time="2025-05-27T18:03:15.008524365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:03:15.009940 containerd[1521]: time="2025-05-27T18:03:15.009894380Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 18:03:15.010447 containerd[1521]: time="2025-05-27T18:03:15.010421701Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:03:15.013330 containerd[1521]: time="2025-05-27T18:03:15.013287292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:03:15.014165 containerd[1521]: time="2025-05-27T18:03:15.014125589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 427.438718ms" May 27 18:03:15.014165 containerd[1521]: time="2025-05-27T18:03:15.014162125Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 18:03:15.015210 containerd[1521]: time="2025-05-27T18:03:15.015080563Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 18:03:15.471006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342161513.mount: 
Deactivated successfully. May 27 18:03:16.132992 systemd-resolved[1398]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. May 27 18:03:17.406541 containerd[1521]: time="2025-05-27T18:03:17.406482472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:17.408329 containerd[1521]: time="2025-05-27T18:03:17.408264951Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 27 18:03:17.408789 containerd[1521]: time="2025-05-27T18:03:17.408759560Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:17.412296 containerd[1521]: time="2025-05-27T18:03:17.412240683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:03:17.414007 containerd[1521]: time="2025-05-27T18:03:17.413877369Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.398571713s" May 27 18:03:17.414129 containerd[1521]: time="2025-05-27T18:03:17.414012276Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 27 18:03:20.971418 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:03:20.971913 systemd[1]: kubelet.service: Consumed 288ms CPU time, 109.7M memory peak. 
May 27 18:03:20.975032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:03:21.032375 systemd[1]: Reload requested from client PID 2204 ('systemctl') (unit session-7.scope)... May 27 18:03:21.032401 systemd[1]: Reloading... May 27 18:03:21.207790 zram_generator::config[2247]: No configuration found. May 27 18:03:21.348658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:03:21.536571 systemd[1]: Reloading finished in 503 ms. May 27 18:03:21.600703 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 18:03:21.601065 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 18:03:21.602073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:03:21.602135 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.5M memory peak. May 27 18:03:21.604935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:03:21.809992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:03:21.822719 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 18:03:21.892963 kubelet[2301]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:03:21.892963 kubelet[2301]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 27 18:03:21.892963 kubelet[2301]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:03:21.892963 kubelet[2301]: I0527 18:03:21.891422 2301 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 18:03:22.580444 kubelet[2301]: I0527 18:03:22.580382 2301 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 18:03:22.580840 kubelet[2301]: I0527 18:03:22.580806 2301 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 18:03:22.582239 kubelet[2301]: I0527 18:03:22.582196 2301 server.go:954] "Client rotation is on, will bootstrap in background" May 27 18:03:22.616659 kubelet[2301]: I0527 18:03:22.616575 2301 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 18:03:22.618255 kubelet[2301]: E0527 18:03:22.618157 2301 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.189.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.189.209:6443: connect: connection refused" logger="UnhandledError" May 27 18:03:22.629309 kubelet[2301]: I0527 18:03:22.629263 2301 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 18:03:22.634564 kubelet[2301]: I0527 18:03:22.634145 2301 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 18:03:22.640951 kubelet[2301]: I0527 18:03:22.640846 2301 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 18:03:22.641342 kubelet[2301]: I0527 18:03:22.641126 2301 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-1-b2ae16c630","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 18:03:22.641571 kubelet[2301]: I0527 18:03:22.641558 2301 topology_manager.go:138] "Creating topology manager 
with none policy" May 27 18:03:22.641623 kubelet[2301]: I0527 18:03:22.641617 2301 container_manager_linux.go:304] "Creating device plugin manager" May 27 18:03:22.643311 kubelet[2301]: I0527 18:03:22.643113 2301 state_mem.go:36] "Initialized new in-memory state store" May 27 18:03:22.647297 kubelet[2301]: I0527 18:03:22.647086 2301 kubelet.go:446] "Attempting to sync node with API server" May 27 18:03:22.647297 kubelet[2301]: I0527 18:03:22.647147 2301 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 18:03:22.647297 kubelet[2301]: I0527 18:03:22.647179 2301 kubelet.go:352] "Adding apiserver pod source" May 27 18:03:22.647297 kubelet[2301]: I0527 18:03:22.647191 2301 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 18:03:22.654450 kubelet[2301]: W0527 18:03:22.653709 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.189.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-1-b2ae16c630&limit=500&resourceVersion=0": dial tcp 137.184.189.209:6443: connect: connection refused May 27 18:03:22.654450 kubelet[2301]: E0527 18:03:22.653951 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.189.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-1-b2ae16c630&limit=500&resourceVersion=0\": dial tcp 137.184.189.209:6443: connect: connection refused" logger="UnhandledError" May 27 18:03:22.655110 kubelet[2301]: W0527 18:03:22.655060 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.189.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.189.209:6443: connect: connection refused May 27 18:03:22.655330 kubelet[2301]: E0527 18:03:22.655309 2301 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.189.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.189.209:6443: connect: connection refused" logger="UnhandledError" May 27 18:03:22.657098 kubelet[2301]: I0527 18:03:22.657051 2301 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 18:03:22.661645 kubelet[2301]: I0527 18:03:22.661589 2301 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 18:03:22.662527 kubelet[2301]: W0527 18:03:22.662477 2301 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 18:03:22.664405 kubelet[2301]: I0527 18:03:22.664361 2301 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 18:03:22.664405 kubelet[2301]: I0527 18:03:22.664416 2301 server.go:1287] "Started kubelet" May 27 18:03:22.676529 kubelet[2301]: I0527 18:03:22.675774 2301 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 18:03:22.676529 kubelet[2301]: I0527 18:03:22.676392 2301 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 18:03:22.677420 kubelet[2301]: I0527 18:03:22.677387 2301 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 18:03:22.677940 kubelet[2301]: I0527 18:03:22.677816 2301 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 18:03:22.679525 kubelet[2301]: I0527 18:03:22.679488 2301 server.go:479] "Adding debug handlers to kubelet server" May 27 18:03:22.685146 kubelet[2301]: E0527 18:03:22.681556 2301 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.189.209:6443/api/v1/namespaces/default/events\": 
dial tcp 137.184.189.209:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.0.0-1-b2ae16c630.184374583c1d6913 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.0.0-1-b2ae16c630,UID:ci-4344.0.0-1-b2ae16c630,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.0.0-1-b2ae16c630,},FirstTimestamp:2025-05-27 18:03:22.664388883 +0000 UTC m=+0.835768594,LastTimestamp:2025-05-27 18:03:22.664388883 +0000 UTC m=+0.835768594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-1-b2ae16c630,}" May 27 18:03:22.690226 kubelet[2301]: I0527 18:03:22.688566 2301 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 18:03:22.690226 kubelet[2301]: I0527 18:03:22.688799 2301 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 18:03:22.690226 kubelet[2301]: E0527 18:03:22.689062 2301 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" May 27 18:03:22.693406 kubelet[2301]: I0527 18:03:22.693360 2301 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 18:03:22.697269 kubelet[2301]: I0527 18:03:22.697249 2301 reconciler.go:26] "Reconciler: start to sync state" May 27 18:03:22.699462 kubelet[2301]: E0527 18:03:22.699412 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-1-b2ae16c630?timeout=10s\": dial tcp 137.184.189.209:6443: connect: connection refused" interval="200ms" May 27 18:03:22.700352 kubelet[2301]: W0527 18:03:22.700296 2301 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.189.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.189.209:6443: connect: connection refused May 27 18:03:22.702529 kubelet[2301]: E0527 18:03:22.702144 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.189.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.189.209:6443: connect: connection refused" logger="UnhandledError" May 27 18:03:22.703949 kubelet[2301]: I0527 18:03:22.703921 2301 factory.go:221] Registration of the systemd container factory successfully May 27 18:03:22.704488 kubelet[2301]: I0527 18:03:22.704287 2301 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 18:03:22.708168 kubelet[2301]: I0527 18:03:22.706910 2301 factory.go:221] Registration of the containerd container factory successfully May 27 18:03:22.719243 kubelet[2301]: I0527 18:03:22.715507 2301 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 18:03:22.719243 kubelet[2301]: I0527 18:03:22.717466 2301 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 18:03:22.719243 kubelet[2301]: I0527 18:03:22.717497 2301 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 18:03:22.719243 kubelet[2301]: I0527 18:03:22.717527 2301 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 18:03:22.719243 kubelet[2301]: I0527 18:03:22.717535 2301 kubelet.go:2382] "Starting kubelet main sync loop" May 27 18:03:22.719243 kubelet[2301]: E0527 18:03:22.717600 2301 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 18:03:22.730950 kubelet[2301]: E0527 18:03:22.730899 2301 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 18:03:22.731613 kubelet[2301]: W0527 18:03:22.731547 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.189.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.189.209:6443: connect: connection refused May 27 18:03:22.731777 kubelet[2301]: E0527 18:03:22.731619 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.189.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.189.209:6443: connect: connection refused" logger="UnhandledError" May 27 18:03:22.744567 kubelet[2301]: I0527 18:03:22.744510 2301 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 18:03:22.745240 kubelet[2301]: I0527 18:03:22.744883 2301 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 18:03:22.745240 kubelet[2301]: I0527 18:03:22.744919 2301 state_mem.go:36] "Initialized new in-memory state store" May 27 18:03:22.746278 kubelet[2301]: I0527 18:03:22.746252 2301 policy_none.go:49] "None policy: Start" May 27 18:03:22.746423 kubelet[2301]: I0527 18:03:22.746408 2301 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 18:03:22.746502 kubelet[2301]: I0527 18:03:22.746490 2301 state_mem.go:35] "Initializing new in-memory state store" 
May 27 18:03:22.757440 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 18:03:22.771016 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 18:03:22.775497 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 18:03:22.789412 kubelet[2301]: E0527 18:03:22.789258 2301 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" May 27 18:03:22.795339 kubelet[2301]: I0527 18:03:22.795269 2301 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 18:03:22.795682 kubelet[2301]: I0527 18:03:22.795499 2301 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 18:03:22.795682 kubelet[2301]: I0527 18:03:22.795520 2301 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 18:03:22.798536 kubelet[2301]: I0527 18:03:22.797052 2301 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 18:03:22.798536 kubelet[2301]: E0527 18:03:22.798416 2301 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 18:03:22.798536 kubelet[2301]: E0527 18:03:22.798476 2301 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.0.0-1-b2ae16c630\" not found" May 27 18:03:22.833693 systemd[1]: Created slice kubepods-burstable-pod35ac41c3da898d8e1dcf94541ffcd3db.slice - libcontainer container kubepods-burstable-pod35ac41c3da898d8e1dcf94541ffcd3db.slice. 
May 27 18:03:22.847600 kubelet[2301]: E0527 18:03:22.847528 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" node="ci-4344.0.0-1-b2ae16c630" May 27 18:03:22.852192 systemd[1]: Created slice kubepods-burstable-pod2a1b518880fa18f3ff38e9ffeaba7fa6.slice - libcontainer container kubepods-burstable-pod2a1b518880fa18f3ff38e9ffeaba7fa6.slice. May 27 18:03:22.862571 kubelet[2301]: E0527 18:03:22.862159 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" node="ci-4344.0.0-1-b2ae16c630" May 27 18:03:22.867022 systemd[1]: Created slice kubepods-burstable-podf5a1b5959f961647e2058ec01f350273.slice - libcontainer container kubepods-burstable-podf5a1b5959f961647e2058ec01f350273.slice. May 27 18:03:22.873037 kubelet[2301]: E0527 18:03:22.872980 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" node="ci-4344.0.0-1-b2ae16c630" May 27 18:03:22.897192 kubelet[2301]: I0527 18:03:22.897147 2301 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-1-b2ae16c630" May 27 18:03:22.898567 kubelet[2301]: E0527 18:03:22.898515 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.189.209:6443/api/v1/nodes\": dial tcp 137.184.189.209:6443: connect: connection refused" node="ci-4344.0.0-1-b2ae16c630" May 27 18:03:22.901393 kubelet[2301]: E0527 18:03:22.901330 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-1-b2ae16c630?timeout=10s\": dial tcp 137.184.189.209:6443: connect: connection refused" interval="400ms" May 27 18:03:22.999577 kubelet[2301]: I0527 18:03:22.999314 2301 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5a1b5959f961647e2058ec01f350273-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-1-b2ae16c630\" (UID: \"f5a1b5959f961647e2058ec01f350273\") " pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:22.999577 kubelet[2301]: I0527 18:03:22.999370 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5a1b5959f961647e2058ec01f350273-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-1-b2ae16c630\" (UID: \"f5a1b5959f961647e2058ec01f350273\") " pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:22.999577 kubelet[2301]: I0527 18:03:22.999398 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a1b518880fa18f3ff38e9ffeaba7fa6-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" (UID: \"2a1b518880fa18f3ff38e9ffeaba7fa6\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:22.999577 kubelet[2301]: I0527 18:03:22.999414 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2a1b518880fa18f3ff38e9ffeaba7fa6-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" (UID: \"2a1b518880fa18f3ff38e9ffeaba7fa6\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:22.999577 kubelet[2301]: I0527 18:03:22.999432 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a1b518880fa18f3ff38e9ffeaba7fa6-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" (UID: \"2a1b518880fa18f3ff38e9ffeaba7fa6\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:22.999910 kubelet[2301]: I0527 18:03:22.999448 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35ac41c3da898d8e1dcf94541ffcd3db-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-1-b2ae16c630\" (UID: \"35ac41c3da898d8e1dcf94541ffcd3db\") " pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:22.999910 kubelet[2301]: I0527 18:03:22.999464 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5a1b5959f961647e2058ec01f350273-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-1-b2ae16c630\" (UID: \"f5a1b5959f961647e2058ec01f350273\") " pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:22.999910 kubelet[2301]: I0527 18:03:22.999480 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a1b518880fa18f3ff38e9ffeaba7fa6-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" (UID: \"2a1b518880fa18f3ff38e9ffeaba7fa6\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:22.999910 kubelet[2301]: I0527 18:03:22.999496 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a1b518880fa18f3ff38e9ffeaba7fa6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" (UID: \"2a1b518880fa18f3ff38e9ffeaba7fa6\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:23.100935 kubelet[2301]: I0527 18:03:23.100578 2301 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:23.101337 kubelet[2301]: E0527 18:03:23.101271 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.189.209:6443/api/v1/nodes\": dial tcp 137.184.189.209:6443: connect: connection refused" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:23.149429 kubelet[2301]: E0527 18:03:23.148989 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:23.152071 containerd[1521]: time="2025-05-27T18:03:23.152006125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-1-b2ae16c630,Uid:35ac41c3da898d8e1dcf94541ffcd3db,Namespace:kube-system,Attempt:0,}"
May 27 18:03:23.164761 kubelet[2301]: E0527 18:03:23.163631 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:23.174612 containerd[1521]: time="2025-05-27T18:03:23.174554573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-1-b2ae16c630,Uid:2a1b518880fa18f3ff38e9ffeaba7fa6,Namespace:kube-system,Attempt:0,}"
May 27 18:03:23.177215 kubelet[2301]: E0527 18:03:23.177148 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:23.178268 containerd[1521]: time="2025-05-27T18:03:23.177798078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-1-b2ae16c630,Uid:f5a1b5959f961647e2058ec01f350273,Namespace:kube-system,Attempt:0,}"
May 27 18:03:23.302672 kubelet[2301]: E0527 18:03:23.302524 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.189.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-1-b2ae16c630?timeout=10s\": dial tcp 137.184.189.209:6443: connect: connection refused" interval="800ms"
May 27 18:03:23.307192 containerd[1521]: time="2025-05-27T18:03:23.307076456Z" level=info msg="connecting to shim 2f538aabf30c6d19521df3b25045c20592d27e8121595d0eb2ada107c07e0bfc" address="unix:///run/containerd/s/14bceac71263a9672f725cc2146ecef4d6e0947bffc7823ebd232f6374b212c7" namespace=k8s.io protocol=ttrpc version=3
May 27 18:03:23.307988 containerd[1521]: time="2025-05-27T18:03:23.307882726Z" level=info msg="connecting to shim 5862746873e497600dd306abf48aec28be9d73f437627a1dd901bf63c848e3b9" address="unix:///run/containerd/s/102ed6d81670db3f9ae54c928c6a89b9a0d0de1f9ead08025493ad3e26563d8a" namespace=k8s.io protocol=ttrpc version=3
May 27 18:03:23.313069 containerd[1521]: time="2025-05-27T18:03:23.312891211Z" level=info msg="connecting to shim ee6d013dcb9e8602d477daeab2e88e273a5eeb6f52e1cbfce04f51825534fbdc" address="unix:///run/containerd/s/ee14559ad740eb84c9284eb581abd387aafb7d4bbef5e1fabd3d03b5de1ac283" namespace=k8s.io protocol=ttrpc version=3
May 27 18:03:23.422642 systemd[1]: Started cri-containerd-2f538aabf30c6d19521df3b25045c20592d27e8121595d0eb2ada107c07e0bfc.scope - libcontainer container 2f538aabf30c6d19521df3b25045c20592d27e8121595d0eb2ada107c07e0bfc.
May 27 18:03:23.438125 systemd[1]: Started cri-containerd-5862746873e497600dd306abf48aec28be9d73f437627a1dd901bf63c848e3b9.scope - libcontainer container 5862746873e497600dd306abf48aec28be9d73f437627a1dd901bf63c848e3b9.
May 27 18:03:23.442282 systemd[1]: Started cri-containerd-ee6d013dcb9e8602d477daeab2e88e273a5eeb6f52e1cbfce04f51825534fbdc.scope - libcontainer container ee6d013dcb9e8602d477daeab2e88e273a5eeb6f52e1cbfce04f51825534fbdc.
May 27 18:03:23.509609 kubelet[2301]: W0527 18:03:23.509531 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.189.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-1-b2ae16c630&limit=500&resourceVersion=0": dial tcp 137.184.189.209:6443: connect: connection refused
May 27 18:03:23.509609 kubelet[2301]: E0527 18:03:23.509603 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.189.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-1-b2ae16c630&limit=500&resourceVersion=0\": dial tcp 137.184.189.209:6443: connect: connection refused" logger="UnhandledError"
May 27 18:03:23.510347 kubelet[2301]: I0527 18:03:23.510324 2301 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:23.510652 kubelet[2301]: E0527 18:03:23.510625 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.189.209:6443/api/v1/nodes\": dial tcp 137.184.189.209:6443: connect: connection refused" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:23.515417 kubelet[2301]: W0527 18:03:23.515339 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.189.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.189.209:6443: connect: connection refused
May 27 18:03:23.515417 kubelet[2301]: E0527 18:03:23.515420 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.189.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.189.209:6443: connect: connection refused" logger="UnhandledError"
May 27 18:03:23.562040 containerd[1521]: time="2025-05-27T18:03:23.561814986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-1-b2ae16c630,Uid:35ac41c3da898d8e1dcf94541ffcd3db,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f538aabf30c6d19521df3b25045c20592d27e8121595d0eb2ada107c07e0bfc\""
May 27 18:03:23.564025 kubelet[2301]: E0527 18:03:23.563981 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:23.568858 containerd[1521]: time="2025-05-27T18:03:23.568379237Z" level=info msg="CreateContainer within sandbox \"2f538aabf30c6d19521df3b25045c20592d27e8121595d0eb2ada107c07e0bfc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 27 18:03:23.576642 containerd[1521]: time="2025-05-27T18:03:23.576522162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-1-b2ae16c630,Uid:f5a1b5959f961647e2058ec01f350273,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee6d013dcb9e8602d477daeab2e88e273a5eeb6f52e1cbfce04f51825534fbdc\""
May 27 18:03:23.577530 kubelet[2301]: E0527 18:03:23.577474 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:23.580593 containerd[1521]: time="2025-05-27T18:03:23.580437418Z" level=info msg="CreateContainer within sandbox \"ee6d013dcb9e8602d477daeab2e88e273a5eeb6f52e1cbfce04f51825534fbdc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 27 18:03:23.598820 containerd[1521]: time="2025-05-27T18:03:23.598683442Z" level=info msg="Container dc0492063beb5c941a2dea25edd9adc48bc1388d05792be17ec15d5c6d700389: CDI devices from CRI Config.CDIDevices: []"
May 27 18:03:23.601773 containerd[1521]: time="2025-05-27T18:03:23.601561650Z" level=info msg="Container 4116c389cf068cd964abbd2b1531c099cb1e5bbfd31c229884acf361846e24b1: CDI devices from CRI Config.CDIDevices: []"
May 27 18:03:23.609221 containerd[1521]: time="2025-05-27T18:03:23.609073360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-1-b2ae16c630,Uid:2a1b518880fa18f3ff38e9ffeaba7fa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5862746873e497600dd306abf48aec28be9d73f437627a1dd901bf63c848e3b9\""
May 27 18:03:23.610821 kubelet[2301]: E0527 18:03:23.610767 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:23.611147 containerd[1521]: time="2025-05-27T18:03:23.611000247Z" level=info msg="CreateContainer within sandbox \"2f538aabf30c6d19521df3b25045c20592d27e8121595d0eb2ada107c07e0bfc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4116c389cf068cd964abbd2b1531c099cb1e5bbfd31c229884acf361846e24b1\""
May 27 18:03:23.614043 containerd[1521]: time="2025-05-27T18:03:23.613411091Z" level=info msg="StartContainer for \"4116c389cf068cd964abbd2b1531c099cb1e5bbfd31c229884acf361846e24b1\""
May 27 18:03:23.618603 containerd[1521]: time="2025-05-27T18:03:23.618185904Z" level=info msg="CreateContainer within sandbox \"ee6d013dcb9e8602d477daeab2e88e273a5eeb6f52e1cbfce04f51825534fbdc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc0492063beb5c941a2dea25edd9adc48bc1388d05792be17ec15d5c6d700389\""
May 27 18:03:23.619765 containerd[1521]: time="2025-05-27T18:03:23.619633494Z" level=info msg="StartContainer for \"dc0492063beb5c941a2dea25edd9adc48bc1388d05792be17ec15d5c6d700389\""
May 27 18:03:23.620270 containerd[1521]: time="2025-05-27T18:03:23.620149831Z" level=info msg="connecting to shim 4116c389cf068cd964abbd2b1531c099cb1e5bbfd31c229884acf361846e24b1" address="unix:///run/containerd/s/14bceac71263a9672f725cc2146ecef4d6e0947bffc7823ebd232f6374b212c7" protocol=ttrpc version=3
May 27 18:03:23.620917 containerd[1521]: time="2025-05-27T18:03:23.620859075Z" level=info msg="connecting to shim dc0492063beb5c941a2dea25edd9adc48bc1388d05792be17ec15d5c6d700389" address="unix:///run/containerd/s/ee14559ad740eb84c9284eb581abd387aafb7d4bbef5e1fabd3d03b5de1ac283" protocol=ttrpc version=3
May 27 18:03:23.622557 containerd[1521]: time="2025-05-27T18:03:23.622499461Z" level=info msg="CreateContainer within sandbox \"5862746873e497600dd306abf48aec28be9d73f437627a1dd901bf63c848e3b9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 27 18:03:23.633695 containerd[1521]: time="2025-05-27T18:03:23.633646907Z" level=info msg="Container 11da477103cb82d2c091b8f299e4155f5bbf8ff6a4eaa2be9eea0a91dd931c67: CDI devices from CRI Config.CDIDevices: []"
May 27 18:03:23.647818 containerd[1521]: time="2025-05-27T18:03:23.647688238Z" level=info msg="CreateContainer within sandbox \"5862746873e497600dd306abf48aec28be9d73f437627a1dd901bf63c848e3b9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"11da477103cb82d2c091b8f299e4155f5bbf8ff6a4eaa2be9eea0a91dd931c67\""
May 27 18:03:23.648247 systemd[1]: Started cri-containerd-4116c389cf068cd964abbd2b1531c099cb1e5bbfd31c229884acf361846e24b1.scope - libcontainer container 4116c389cf068cd964abbd2b1531c099cb1e5bbfd31c229884acf361846e24b1.
May 27 18:03:23.650342 containerd[1521]: time="2025-05-27T18:03:23.650294601Z" level=info msg="StartContainer for \"11da477103cb82d2c091b8f299e4155f5bbf8ff6a4eaa2be9eea0a91dd931c67\""
May 27 18:03:23.656132 containerd[1521]: time="2025-05-27T18:03:23.655653365Z" level=info msg="connecting to shim 11da477103cb82d2c091b8f299e4155f5bbf8ff6a4eaa2be9eea0a91dd931c67" address="unix:///run/containerd/s/102ed6d81670db3f9ae54c928c6a89b9a0d0de1f9ead08025493ad3e26563d8a" protocol=ttrpc version=3
May 27 18:03:23.671091 systemd[1]: Started cri-containerd-dc0492063beb5c941a2dea25edd9adc48bc1388d05792be17ec15d5c6d700389.scope - libcontainer container dc0492063beb5c941a2dea25edd9adc48bc1388d05792be17ec15d5c6d700389.
May 27 18:03:23.698039 systemd[1]: Started cri-containerd-11da477103cb82d2c091b8f299e4155f5bbf8ff6a4eaa2be9eea0a91dd931c67.scope - libcontainer container 11da477103cb82d2c091b8f299e4155f5bbf8ff6a4eaa2be9eea0a91dd931c67.
May 27 18:03:23.734312 kubelet[2301]: W0527 18:03:23.733576 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.189.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.189.209:6443: connect: connection refused
May 27 18:03:23.735130 kubelet[2301]: E0527 18:03:23.734553 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.189.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.189.209:6443: connect: connection refused" logger="UnhandledError"
May 27 18:03:23.794523 containerd[1521]: time="2025-05-27T18:03:23.794319318Z" level=info msg="StartContainer for \"4116c389cf068cd964abbd2b1531c099cb1e5bbfd31c229884acf361846e24b1\" returns successfully"
May 27 18:03:23.810886 containerd[1521]: time="2025-05-27T18:03:23.810834157Z" level=info msg="StartContainer for \"dc0492063beb5c941a2dea25edd9adc48bc1388d05792be17ec15d5c6d700389\" returns successfully"
May 27 18:03:23.834932 containerd[1521]: time="2025-05-27T18:03:23.834892499Z" level=info msg="StartContainer for \"11da477103cb82d2c091b8f299e4155f5bbf8ff6a4eaa2be9eea0a91dd931c67\" returns successfully"
May 27 18:03:23.875416 kubelet[2301]: W0527 18:03:23.875330 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.189.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.189.209:6443: connect: connection refused
May 27 18:03:23.875587 kubelet[2301]: E0527 18:03:23.875431 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.189.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.189.209:6443: connect: connection refused" logger="UnhandledError"
May 27 18:03:24.312817 kubelet[2301]: I0527 18:03:24.312346 2301 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:24.764938 kubelet[2301]: E0527 18:03:24.764700 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:24.766111 kubelet[2301]: E0527 18:03:24.766015 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:24.769912 kubelet[2301]: E0527 18:03:24.769795 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:24.770335 kubelet[2301]: E0527 18:03:24.770080 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:24.773547 kubelet[2301]: E0527 18:03:24.773509 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:24.773901 kubelet[2301]: E0527 18:03:24.773658 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:25.778330 kubelet[2301]: E0527 18:03:25.778080 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:25.778330 kubelet[2301]: E0527 18:03:25.778308 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:25.779363 kubelet[2301]: E0527 18:03:25.778837 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:25.779363 kubelet[2301]: E0527 18:03:25.779023 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:25.779363 kubelet[2301]: E0527 18:03:25.779294 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-1-b2ae16c630\" not found" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:25.779514 kubelet[2301]: E0527 18:03:25.779436 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:26.371511 kubelet[2301]: E0527 18:03:26.371436 2301 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.0.0-1-b2ae16c630\" not found" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.447576 kubelet[2301]: I0527 18:03:26.447462 2301 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.490135 kubelet[2301]: I0527 18:03:26.490075 2301 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.517310 kubelet[2301]: E0527 18:03:26.517225 2301 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-1-b2ae16c630\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.517310 kubelet[2301]: I0527 18:03:26.517310 2301 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.520764 kubelet[2301]: E0527 18:03:26.520698 2301 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.520764 kubelet[2301]: I0527 18:03:26.520759 2301 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.524627 kubelet[2301]: E0527 18:03:26.524568 2301 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-1-b2ae16c630\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.657163 kubelet[2301]: I0527 18:03:26.656346 2301 apiserver.go:52] "Watching apiserver"
May 27 18:03:26.698240 kubelet[2301]: I0527 18:03:26.698163 2301 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 18:03:26.778993 kubelet[2301]: I0527 18:03:26.778928 2301 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.779552 kubelet[2301]: I0527 18:03:26.779436 2301 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.782078 kubelet[2301]: I0527 18:03:26.779645 2301 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.782746 kubelet[2301]: E0527 18:03:26.782696 2301 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-1-b2ae16c630\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.782996 kubelet[2301]: E0527 18:03:26.782952 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:26.784637 kubelet[2301]: E0527 18:03:26.784598 2301 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.784990 kubelet[2301]: E0527 18:03:26.784950 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:26.786674 kubelet[2301]: E0527 18:03:26.786349 2301 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-1-b2ae16c630\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:26.786674 kubelet[2301]: E0527 18:03:26.786595 2301 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:28.791968 systemd[1]: Reload requested from client PID 2575 ('systemctl') (unit session-7.scope)...
May 27 18:03:28.791994 systemd[1]: Reloading...
May 27 18:03:28.964797 zram_generator::config[2622]: No configuration found.
May 27 18:03:29.141671 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 18:03:29.368194 systemd[1]: Reloading finished in 575 ms.
May 27 18:03:29.402112 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 18:03:29.422867 systemd[1]: kubelet.service: Deactivated successfully.
May 27 18:03:29.423545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 18:03:29.423890 systemd[1]: kubelet.service: Consumed 1.383s CPU time, 127.1M memory peak.
May 27 18:03:29.428000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 18:03:29.633448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 18:03:29.649849 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 18:03:29.728875 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 18:03:29.728875 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 18:03:29.728875 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 18:03:29.729811 kubelet[2669]: I0527 18:03:29.729247 2669 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 18:03:29.741919 kubelet[2669]: I0527 18:03:29.741336 2669 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 27 18:03:29.741919 kubelet[2669]: I0527 18:03:29.741410 2669 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 18:03:29.741919 kubelet[2669]: I0527 18:03:29.741925 2669 server.go:954] "Client rotation is on, will bootstrap in background"
May 27 18:03:29.746106 kubelet[2669]: I0527 18:03:29.746057 2669 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 27 18:03:29.758242 kubelet[2669]: I0527 18:03:29.758184 2669 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 18:03:29.768036 kubelet[2669]: I0527 18:03:29.767974 2669 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 18:03:29.773614 kubelet[2669]: I0527 18:03:29.773451 2669 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 18:03:29.774516 kubelet[2669]: I0527 18:03:29.774083 2669 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 18:03:29.774516 kubelet[2669]: I0527 18:03:29.774124 2669 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-1-b2ae16c630","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 18:03:29.774516 kubelet[2669]: I0527 18:03:29.774395 2669 topology_manager.go:138] "Creating topology manager with none policy"
May 27 18:03:29.774516 kubelet[2669]: I0527 18:03:29.774410 2669 container_manager_linux.go:304] "Creating device plugin manager"
May 27 18:03:29.774872 kubelet[2669]: I0527 18:03:29.774472 2669 state_mem.go:36] "Initialized new in-memory state store"
May 27 18:03:29.775333 kubelet[2669]: I0527 18:03:29.775108 2669 kubelet.go:446] "Attempting to sync node with API server"
May 27 18:03:29.775443 kubelet[2669]: I0527 18:03:29.775432 2669 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 18:03:29.775521 kubelet[2669]: I0527 18:03:29.775513 2669 kubelet.go:352] "Adding apiserver pod source"
May 27 18:03:29.775574 kubelet[2669]: I0527 18:03:29.775567 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 18:03:29.777602 kubelet[2669]: I0527 18:03:29.777564 2669 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 18:03:29.778340 kubelet[2669]: I0527 18:03:29.778315 2669 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 18:03:29.778842 kubelet[2669]: I0527 18:03:29.778817 2669 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 18:03:29.778924 kubelet[2669]: I0527 18:03:29.778856 2669 server.go:1287] "Started kubelet"
May 27 18:03:29.789619 kubelet[2669]: I0527 18:03:29.789580 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 18:03:29.793226 kubelet[2669]: I0527 18:03:29.793159 2669 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 27 18:03:29.798297 kubelet[2669]: I0527 18:03:29.798210 2669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 18:03:29.798867 kubelet[2669]: I0527 18:03:29.798825 2669 server.go:479] "Adding debug handlers to kubelet server"
May 27 18:03:29.803651 kubelet[2669]: I0527 18:03:29.802693 2669 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 18:03:29.810496 kubelet[2669]: I0527 18:03:29.810460 2669 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 18:03:29.820890 kubelet[2669]: I0527 18:03:29.820848 2669 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 18:03:29.822296 kubelet[2669]: E0527 18:03:29.821344 2669 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-1-b2ae16c630\" not found"
May 27 18:03:29.823805 kubelet[2669]: I0527 18:03:29.822785 2669 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 18:03:29.824143 kubelet[2669]: E0527 18:03:29.824124 2669 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 18:03:29.827016 kubelet[2669]: I0527 18:03:29.826998 2669 reconciler.go:26] "Reconciler: start to sync state"
May 27 18:03:29.831270 kubelet[2669]: I0527 18:03:29.831218 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 18:03:29.835550 kubelet[2669]: I0527 18:03:29.832820 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 18:03:29.835550 kubelet[2669]: I0527 18:03:29.832870 2669 status_manager.go:227] "Starting to sync pod status with apiserver"
May 27 18:03:29.835550 kubelet[2669]: I0527 18:03:29.832895 2669 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 18:03:29.835550 kubelet[2669]: I0527 18:03:29.832903 2669 kubelet.go:2382] "Starting kubelet main sync loop"
May 27 18:03:29.835550 kubelet[2669]: E0527 18:03:29.832963 2669 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 18:03:29.834411 sudo[2684]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 27 18:03:29.834798 sudo[2684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 27 18:03:29.843326 kubelet[2669]: I0527 18:03:29.843271 2669 factory.go:221] Registration of the systemd container factory successfully
May 27 18:03:29.843829 kubelet[2669]: I0527 18:03:29.843406 2669 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 18:03:29.852854 kubelet[2669]: I0527 18:03:29.851302 2669 factory.go:221] Registration of the containerd container factory successfully
May 27 18:03:29.934867 kubelet[2669]: I0527 18:03:29.934827 2669 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 18:03:29.935158 kubelet[2669]: I0527 18:03:29.935127 2669 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 18:03:29.935276 kubelet[2669]: I0527 18:03:29.935263 2669 state_mem.go:36] "Initialized new in-memory state store"
May 27 18:03:29.935704 kubelet[2669]: I0527 18:03:29.935676 2669 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 27 18:03:29.935880 kubelet[2669]: I0527 18:03:29.935832 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 27 18:03:29.935983 kubelet[2669]: I0527 18:03:29.935972 2669 policy_none.go:49] "None policy: Start"
May 27 18:03:29.936066 kubelet[2669]: I0527 18:03:29.936057 2669 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 18:03:29.936175 kubelet[2669]: I0527 18:03:29.936164 2669 state_mem.go:35] "Initializing new in-memory state store"
May 27 18:03:29.936767 kubelet[2669]: I0527 18:03:29.936524 2669 state_mem.go:75] "Updated machine memory state"
May 27 18:03:29.939484 kubelet[2669]: E0527 18:03:29.939457 2669 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 18:03:29.950532 kubelet[2669]: I0527 18:03:29.950500 2669 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 18:03:29.950902 kubelet[2669]: I0527 18:03:29.950888 2669 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 18:03:29.951282 kubelet[2669]: I0527 18:03:29.951241 2669 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 18:03:29.952129 kubelet[2669]: I0527 18:03:29.952108 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 18:03:29.962926 kubelet[2669]: E0527 18:03:29.962313 2669 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 18:03:30.064079 kubelet[2669]: I0527 18:03:30.062181 2669 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:30.080915 kubelet[2669]: I0527 18:03:30.079770 2669 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:30.080915 kubelet[2669]: I0527 18:03:30.079969 2669 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-1-b2ae16c630"
May 27 18:03:30.142531 kubelet[2669]: I0527 18:03:30.142466 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:30.145074 kubelet[2669]: I0527 18:03:30.145015 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:30.147079 kubelet[2669]: I0527 18:03:30.146936 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:30.156630 kubelet[2669]: W0527 18:03:30.155858 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 27 18:03:30.158676 kubelet[2669]: W0527 18:03:30.158536 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 27 18:03:30.160286 kubelet[2669]: W0527 18:03:30.160155 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 27 18:03:30.233169 kubelet[2669]: I0527 18:03:30.232978 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName:
\"kubernetes.io/host-path/2a1b518880fa18f3ff38e9ffeaba7fa6-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" (UID: \"2a1b518880fa18f3ff38e9ffeaba7fa6\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.235718 kubelet[2669]: I0527 18:03:30.235607 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35ac41c3da898d8e1dcf94541ffcd3db-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-1-b2ae16c630\" (UID: \"35ac41c3da898d8e1dcf94541ffcd3db\") " pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.236034 kubelet[2669]: I0527 18:03:30.235920 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5a1b5959f961647e2058ec01f350273-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-1-b2ae16c630\" (UID: \"f5a1b5959f961647e2058ec01f350273\") " pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.236034 kubelet[2669]: I0527 18:03:30.236005 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a1b518880fa18f3ff38e9ffeaba7fa6-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" (UID: \"2a1b518880fa18f3ff38e9ffeaba7fa6\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.236485 kubelet[2669]: I0527 18:03:30.236274 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a1b518880fa18f3ff38e9ffeaba7fa6-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" (UID: \"2a1b518880fa18f3ff38e9ffeaba7fa6\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.236485 kubelet[2669]: I0527 18:03:30.236328 2669 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a1b518880fa18f3ff38e9ffeaba7fa6-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" (UID: \"2a1b518880fa18f3ff38e9ffeaba7fa6\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.236485 kubelet[2669]: I0527 18:03:30.236364 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a1b518880fa18f3ff38e9ffeaba7fa6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-1-b2ae16c630\" (UID: \"2a1b518880fa18f3ff38e9ffeaba7fa6\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.236485 kubelet[2669]: I0527 18:03:30.236396 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5a1b5959f961647e2058ec01f350273-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-1-b2ae16c630\" (UID: \"f5a1b5959f961647e2058ec01f350273\") " pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.236485 kubelet[2669]: I0527 18:03:30.236424 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5a1b5959f961647e2058ec01f350273-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-1-b2ae16c630\" (UID: \"f5a1b5959f961647e2058ec01f350273\") " pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.457365 kubelet[2669]: E0527 18:03:30.456700 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:30.460550 kubelet[2669]: E0527 18:03:30.460161 2669 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:30.461832 kubelet[2669]: E0527 18:03:30.460387 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:30.631301 sudo[2684]: pam_unix(sudo:session): session closed for user root May 27 18:03:30.777568 kubelet[2669]: I0527 18:03:30.776975 2669 apiserver.go:52] "Watching apiserver" May 27 18:03:30.824908 kubelet[2669]: I0527 18:03:30.824826 2669 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 18:03:30.908848 kubelet[2669]: E0527 18:03:30.908234 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:30.908848 kubelet[2669]: I0527 18:03:30.908425 2669 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.909339 kubelet[2669]: E0527 18:03:30.909214 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:30.922885 kubelet[2669]: W0527 18:03:30.922849 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 18:03:30.923169 kubelet[2669]: E0527 18:03:30.922977 2669 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-1-b2ae16c630\" already exists" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630" May 27 18:03:30.923851 kubelet[2669]: E0527 18:03:30.923721 2669 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:30.970224 kubelet[2669]: I0527 18:03:30.969371 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" podStartSLOduration=0.969341932 podStartE2EDuration="969.341932ms" podCreationTimestamp="2025-05-27 18:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:03:30.956001091 +0000 UTC m=+1.299408033" watchObservedRunningTime="2025-05-27 18:03:30.969341932 +0000 UTC m=+1.312748872" May 27 18:03:30.985330 kubelet[2669]: I0527 18:03:30.984652 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630" podStartSLOduration=0.984611238 podStartE2EDuration="984.611238ms" podCreationTimestamp="2025-05-27 18:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:03:30.970937374 +0000 UTC m=+1.314344331" watchObservedRunningTime="2025-05-27 18:03:30.984611238 +0000 UTC m=+1.328018178" May 27 18:03:31.910375 kubelet[2669]: E0527 18:03:31.910292 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:31.912264 kubelet[2669]: E0527 18:03:31.910406 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:32.515221 sudo[1751]: pam_unix(sudo:session): session closed for user root May 27 18:03:32.517982 sshd[1750]: Connection closed by 139.178.68.195 
port 43466 May 27 18:03:32.519056 sshd-session[1748]: pam_unix(sshd:session): session closed for user core May 27 18:03:32.524211 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. May 27 18:03:32.525206 systemd[1]: sshd@6-137.184.189.209:22-139.178.68.195:43466.service: Deactivated successfully. May 27 18:03:32.528065 systemd[1]: session-7.scope: Deactivated successfully. May 27 18:03:32.528435 systemd[1]: session-7.scope: Consumed 6.195s CPU time, 224.4M memory peak. May 27 18:03:32.532424 systemd-logind[1490]: Removed session 7. May 27 18:03:32.913039 kubelet[2669]: E0527 18:03:32.911547 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:33.914171 kubelet[2669]: E0527 18:03:33.914134 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:34.862331 kubelet[2669]: I0527 18:03:34.862241 2669 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 18:03:34.862855 containerd[1521]: time="2025-05-27T18:03:34.862574812Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 27 18:03:34.863820 kubelet[2669]: I0527 18:03:34.863426 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 18:03:35.634237 kubelet[2669]: I0527 18:03:35.634160 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630" podStartSLOduration=5.634120914 podStartE2EDuration="5.634120914s" podCreationTimestamp="2025-05-27 18:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:03:30.988252822 +0000 UTC m=+1.331659758" watchObservedRunningTime="2025-05-27 18:03:35.634120914 +0000 UTC m=+5.977527833" May 27 18:03:35.655950 systemd[1]: Created slice kubepods-besteffort-podd9836831_1029_463b_8933_10edbcef47b1.slice - libcontainer container kubepods-besteffort-podd9836831_1029_463b_8933_10edbcef47b1.slice. May 27 18:03:35.671890 systemd-resolved[1398]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. May 27 18:03:35.674466 systemd[1]: Created slice kubepods-burstable-pod5c7d5090_2acf_417a_ba26_4d3b35648ee4.slice - libcontainer container kubepods-burstable-pod5c7d5090_2acf_417a_ba26_4d3b35648ee4.slice. 
May 27 18:03:35.675361 kubelet[2669]: I0527 18:03:35.674226 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-run\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.675361 kubelet[2669]: I0527 18:03:35.674276 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cni-path\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.675361 kubelet[2669]: I0527 18:03:35.674308 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c7d5090-2acf-417a-ba26-4d3b35648ee4-hubble-tls\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.675361 kubelet[2669]: I0527 18:03:35.674338 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf496\" (UniqueName: \"kubernetes.io/projected/d9836831-1029-463b-8933-10edbcef47b1-kube-api-access-jf496\") pod \"kube-proxy-chjkm\" (UID: \"d9836831-1029-463b-8933-10edbcef47b1\") " pod="kube-system/kube-proxy-chjkm" May 27 18:03:35.675361 kubelet[2669]: I0527 18:03:35.674363 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9836831-1029-463b-8933-10edbcef47b1-lib-modules\") pod \"kube-proxy-chjkm\" (UID: \"d9836831-1029-463b-8933-10edbcef47b1\") " pod="kube-system/kube-proxy-chjkm" May 27 18:03:35.675361 kubelet[2669]: I0527 18:03:35.674378 2669 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-xtables-lock\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.675590 kubelet[2669]: I0527 18:03:35.674405 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-config-path\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.675590 kubelet[2669]: I0527 18:03:35.674428 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-etc-cni-netd\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.679243 kubelet[2669]: I0527 18:03:35.675835 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-lib-modules\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.679243 kubelet[2669]: I0527 18:03:35.676387 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbgv2\" (UniqueName: \"kubernetes.io/projected/5c7d5090-2acf-417a-ba26-4d3b35648ee4-kube-api-access-jbgv2\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.679243 kubelet[2669]: I0527 18:03:35.676413 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/d9836831-1029-463b-8933-10edbcef47b1-kube-proxy\") pod \"kube-proxy-chjkm\" (UID: \"d9836831-1029-463b-8933-10edbcef47b1\") " pod="kube-system/kube-proxy-chjkm" May 27 18:03:35.679243 kubelet[2669]: I0527 18:03:35.676429 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-host-proc-sys-net\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.679243 kubelet[2669]: I0527 18:03:35.676456 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-host-proc-sys-kernel\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.679243 kubelet[2669]: I0527 18:03:35.676473 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-bpf-maps\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.679572 kubelet[2669]: I0527 18:03:35.676500 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-cgroup\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.679572 kubelet[2669]: I0527 18:03:35.676718 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c7d5090-2acf-417a-ba26-4d3b35648ee4-clustermesh-secrets\") pod \"cilium-fcfvc\" (UID: 
\"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.679572 kubelet[2669]: I0527 18:03:35.676778 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9836831-1029-463b-8933-10edbcef47b1-xtables-lock\") pod \"kube-proxy-chjkm\" (UID: \"d9836831-1029-463b-8933-10edbcef47b1\") " pod="kube-system/kube-proxy-chjkm" May 27 18:03:35.679572 kubelet[2669]: I0527 18:03:35.676797 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-hostproc\") pod \"cilium-fcfvc\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " pod="kube-system/cilium-fcfvc" May 27 18:03:35.846969 kubelet[2669]: E0527 18:03:35.846920 2669 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 18:03:35.846969 kubelet[2669]: E0527 18:03:35.846961 2669 projected.go:194] Error preparing data for projected volume kube-api-access-jf496 for pod kube-system/kube-proxy-chjkm: configmap "kube-root-ca.crt" not found May 27 18:03:35.847224 kubelet[2669]: E0527 18:03:35.847036 2669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9836831-1029-463b-8933-10edbcef47b1-kube-api-access-jf496 podName:d9836831-1029-463b-8933-10edbcef47b1 nodeName:}" failed. No retries permitted until 2025-05-27 18:03:36.34700731 +0000 UTC m=+6.690414227 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jf496" (UniqueName: "kubernetes.io/projected/d9836831-1029-463b-8933-10edbcef47b1-kube-api-access-jf496") pod "kube-proxy-chjkm" (UID: "d9836831-1029-463b-8933-10edbcef47b1") : configmap "kube-root-ca.crt" not found May 27 18:03:35.848827 kubelet[2669]: E0527 18:03:35.846942 2669 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 18:03:35.849199 kubelet[2669]: E0527 18:03:35.849042 2669 projected.go:194] Error preparing data for projected volume kube-api-access-jbgv2 for pod kube-system/cilium-fcfvc: configmap "kube-root-ca.crt" not found May 27 18:03:35.849199 kubelet[2669]: E0527 18:03:35.849158 2669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5c7d5090-2acf-417a-ba26-4d3b35648ee4-kube-api-access-jbgv2 podName:5c7d5090-2acf-417a-ba26-4d3b35648ee4 nodeName:}" failed. No retries permitted until 2025-05-27 18:03:36.349125555 +0000 UTC m=+6.692532493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jbgv2" (UniqueName: "kubernetes.io/projected/5c7d5090-2acf-417a-ba26-4d3b35648ee4-kube-api-access-jbgv2") pod "cilium-fcfvc" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4") : configmap "kube-root-ca.crt" not found May 27 18:03:35.963200 systemd[1]: Created slice kubepods-besteffort-pod741b0771_3993_406f_aea3_2a2f4befd27e.slice - libcontainer container kubepods-besteffort-pod741b0771_3993_406f_aea3_2a2f4befd27e.slice. 
May 27 18:03:35.979921 kubelet[2669]: I0527 18:03:35.979853 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/741b0771-3993-406f-aea3-2a2f4befd27e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-5pm2x\" (UID: \"741b0771-3993-406f-aea3-2a2f4befd27e\") " pod="kube-system/cilium-operator-6c4d7847fc-5pm2x" May 27 18:03:35.980440 kubelet[2669]: I0527 18:03:35.980178 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdxd6\" (UniqueName: \"kubernetes.io/projected/741b0771-3993-406f-aea3-2a2f4befd27e-kube-api-access-cdxd6\") pod \"cilium-operator-6c4d7847fc-5pm2x\" (UID: \"741b0771-3993-406f-aea3-2a2f4befd27e\") " pod="kube-system/cilium-operator-6c4d7847fc-5pm2x" May 27 18:03:37.118493 systemd-resolved[1398]: Clock change detected. Flushing caches. May 27 18:03:37.118750 systemd-timesyncd[1425]: Contacted time server 66.118.230.14:123 (2.flatcar.pool.ntp.org). May 27 18:03:37.118817 systemd-timesyncd[1425]: Initial clock synchronization to Tue 2025-05-27 18:03:37.118239 UTC. 
May 27 18:03:37.199982 kubelet[2669]: E0527 18:03:37.199540 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:37.201129 containerd[1521]: time="2025-05-27T18:03:37.201080771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5pm2x,Uid:741b0771-3993-406f-aea3-2a2f4befd27e,Namespace:kube-system,Attempt:0,}" May 27 18:03:37.237424 containerd[1521]: time="2025-05-27T18:03:37.237252848Z" level=info msg="connecting to shim b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b" address="unix:///run/containerd/s/c73288b7a392a390b8037457852c4fffe29ef75c2f8a7a336b1625fa3f733c2a" namespace=k8s.io protocol=ttrpc version=3 May 27 18:03:37.282191 systemd[1]: Started cri-containerd-b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b.scope - libcontainer container b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b. 
May 27 18:03:37.352306 containerd[1521]: time="2025-05-27T18:03:37.352188495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5pm2x,Uid:741b0771-3993-406f-aea3-2a2f4befd27e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\"" May 27 18:03:37.354073 kubelet[2669]: E0527 18:03:37.354036 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:37.357584 containerd[1521]: time="2025-05-27T18:03:37.357477330Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 18:03:37.505540 kubelet[2669]: E0527 18:03:37.505270 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:37.506373 containerd[1521]: time="2025-05-27T18:03:37.506117405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chjkm,Uid:d9836831-1029-463b-8933-10edbcef47b1,Namespace:kube-system,Attempt:0,}" May 27 18:03:37.514715 kubelet[2669]: E0527 18:03:37.514625 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:37.516574 containerd[1521]: time="2025-05-27T18:03:37.516105280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcfvc,Uid:5c7d5090-2acf-417a-ba26-4d3b35648ee4,Namespace:kube-system,Attempt:0,}" May 27 18:03:37.537354 containerd[1521]: time="2025-05-27T18:03:37.536944869Z" level=info msg="connecting to shim 1f0927cf051c738e473103f0dbbf762d5d28d6e29988e54aeb6243ee1a0b3de4" 
address="unix:///run/containerd/s/7abfeb3fd596a293294d9869ef614ed3b6bab2353a84eaef570cd247ef59fbd6" namespace=k8s.io protocol=ttrpc version=3 May 27 18:03:37.543940 containerd[1521]: time="2025-05-27T18:03:37.543505240Z" level=info msg="connecting to shim f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125" address="unix:///run/containerd/s/605784722212810f250af225db330b1a90973f034c280b158885f87fdcfcc95c" namespace=k8s.io protocol=ttrpc version=3 May 27 18:03:37.578169 systemd[1]: Started cri-containerd-1f0927cf051c738e473103f0dbbf762d5d28d6e29988e54aeb6243ee1a0b3de4.scope - libcontainer container 1f0927cf051c738e473103f0dbbf762d5d28d6e29988e54aeb6243ee1a0b3de4. May 27 18:03:37.584642 systemd[1]: Started cri-containerd-f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125.scope - libcontainer container f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125. May 27 18:03:37.631284 containerd[1521]: time="2025-05-27T18:03:37.631231926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcfvc,Uid:5c7d5090-2acf-417a-ba26-4d3b35648ee4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\"" May 27 18:03:37.632914 kubelet[2669]: E0527 18:03:37.632819 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:37.643727 containerd[1521]: time="2025-05-27T18:03:37.643662510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chjkm,Uid:d9836831-1029-463b-8933-10edbcef47b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f0927cf051c738e473103f0dbbf762d5d28d6e29988e54aeb6243ee1a0b3de4\"" May 27 18:03:37.645140 kubelet[2669]: E0527 18:03:37.645100 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:37.649326 containerd[1521]: time="2025-05-27T18:03:37.649276161Z" level=info msg="CreateContainer within sandbox \"1f0927cf051c738e473103f0dbbf762d5d28d6e29988e54aeb6243ee1a0b3de4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 18:03:37.660311 containerd[1521]: time="2025-05-27T18:03:37.660267927Z" level=info msg="Container 427f1f7ad11b2ec38ef4e1a8c6ce0b4f2c6daaf5f7b4c6f6850d9150cc82b1a2: CDI devices from CRI Config.CDIDevices: []" May 27 18:03:37.668206 containerd[1521]: time="2025-05-27T18:03:37.668148453Z" level=info msg="CreateContainer within sandbox \"1f0927cf051c738e473103f0dbbf762d5d28d6e29988e54aeb6243ee1a0b3de4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"427f1f7ad11b2ec38ef4e1a8c6ce0b4f2c6daaf5f7b4c6f6850d9150cc82b1a2\"" May 27 18:03:37.669437 containerd[1521]: time="2025-05-27T18:03:37.669398037Z" level=info msg="StartContainer for \"427f1f7ad11b2ec38ef4e1a8c6ce0b4f2c6daaf5f7b4c6f6850d9150cc82b1a2\"" May 27 18:03:37.672239 containerd[1521]: time="2025-05-27T18:03:37.672197581Z" level=info msg="connecting to shim 427f1f7ad11b2ec38ef4e1a8c6ce0b4f2c6daaf5f7b4c6f6850d9150cc82b1a2" address="unix:///run/containerd/s/7abfeb3fd596a293294d9869ef614ed3b6bab2353a84eaef570cd247ef59fbd6" protocol=ttrpc version=3 May 27 18:03:37.701230 systemd[1]: Started cri-containerd-427f1f7ad11b2ec38ef4e1a8c6ce0b4f2c6daaf5f7b4c6f6850d9150cc82b1a2.scope - libcontainer container 427f1f7ad11b2ec38ef4e1a8c6ce0b4f2c6daaf5f7b4c6f6850d9150cc82b1a2. 
May 27 18:03:37.779187 containerd[1521]: time="2025-05-27T18:03:37.778812519Z" level=info msg="StartContainer for \"427f1f7ad11b2ec38ef4e1a8c6ce0b4f2c6daaf5f7b4c6f6850d9150cc82b1a2\" returns successfully" May 27 18:03:37.858761 kubelet[2669]: E0527 18:03:37.858721 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:39.079432 kubelet[2669]: E0527 18:03:39.079125 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:03:39.109672 kubelet[2669]: I0527 18:03:39.109420 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-chjkm" podStartSLOduration=4.109396762 podStartE2EDuration="4.109396762s" podCreationTimestamp="2025-05-27 18:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:03:37.8809364 +0000 UTC m=+7.293202912" watchObservedRunningTime="2025-05-27 18:03:39.109396762 +0000 UTC m=+8.521663277" May 27 18:03:39.319101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1174742090.mount: Deactivated successfully. 
May 27 18:03:39.591114 kubelet[2669]: E0527 18:03:39.590834 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:39.869044 kubelet[2669]: E0527 18:03:39.868238 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:39.869487 kubelet[2669]: E0527 18:03:39.869458 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:40.874628 kubelet[2669]: E0527 18:03:40.874576 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:41.262216 containerd[1521]: time="2025-05-27T18:03:41.262024773Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:03:41.263207 containerd[1521]: time="2025-05-27T18:03:41.263152236Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 27 18:03:41.264085 containerd[1521]: time="2025-05-27T18:03:41.264021038Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:03:41.265939 containerd[1521]: time="2025-05-27T18:03:41.265312036Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.90767747s"
May 27 18:03:41.265939 containerd[1521]: time="2025-05-27T18:03:41.265351474Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 27 18:03:41.268198 containerd[1521]: time="2025-05-27T18:03:41.268157531Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 27 18:03:41.271928 containerd[1521]: time="2025-05-27T18:03:41.271742894Z" level=info msg="CreateContainer within sandbox \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 27 18:03:41.282225 containerd[1521]: time="2025-05-27T18:03:41.281504251Z" level=info msg="Container 07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c: CDI devices from CRI Config.CDIDevices: []"
May 27 18:03:41.301321 containerd[1521]: time="2025-05-27T18:03:41.301257580Z" level=info msg="CreateContainer within sandbox \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\""
May 27 18:03:41.302529 containerd[1521]: time="2025-05-27T18:03:41.302314974Z" level=info msg="StartContainer for \"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\""
May 27 18:03:41.306056 containerd[1521]: time="2025-05-27T18:03:41.306003352Z" level=info msg="connecting to shim 07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c" address="unix:///run/containerd/s/c73288b7a392a390b8037457852c4fffe29ef75c2f8a7a336b1625fa3f733c2a" protocol=ttrpc version=3
May 27 18:03:41.336457 systemd[1]: Started cri-containerd-07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c.scope - libcontainer container 07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c.
May 27 18:03:41.390697 containerd[1521]: time="2025-05-27T18:03:41.390605459Z" level=info msg="StartContainer for \"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\" returns successfully"
May 27 18:03:41.882033 kubelet[2669]: E0527 18:03:41.881925 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:42.893364 kubelet[2669]: E0527 18:03:42.893320 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:42.938513 kubelet[2669]: E0527 18:03:42.938433 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:42.956888 kubelet[2669]: I0527 18:03:42.955144 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-5pm2x" podStartSLOduration=4.044385838 podStartE2EDuration="7.955111603s" podCreationTimestamp="2025-05-27 18:03:35 +0000 UTC" firstStartedPulling="2025-05-27 18:03:37.356380362 +0000 UTC m=+6.768646867" lastFinishedPulling="2025-05-27 18:03:41.267106123 +0000 UTC m=+10.679372632" observedRunningTime="2025-05-27 18:03:42.067639217 +0000 UTC m=+11.479905735" watchObservedRunningTime="2025-05-27 18:03:42.955111603 +0000 UTC m=+12.367378111"
May 27 18:03:43.899142 kubelet[2669]: E0527 18:03:43.898159 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:44.962664 update_engine[1491]: I20250527 18:03:44.961925 1491 update_attempter.cc:509] Updating boot flags...
May 27 18:03:47.327988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098655299.mount: Deactivated successfully.
May 27 18:03:49.936113 containerd[1521]: time="2025-05-27T18:03:49.936018789Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:03:49.938519 containerd[1521]: time="2025-05-27T18:03:49.938224541Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 27 18:03:49.939479 containerd[1521]: time="2025-05-27T18:03:49.939421829Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:03:49.943071 containerd[1521]: time="2025-05-27T18:03:49.942903673Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.674478195s"
May 27 18:03:49.943071 containerd[1521]: time="2025-05-27T18:03:49.942998012Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 27 18:03:49.947257 containerd[1521]: time="2025-05-27T18:03:49.947218104Z" level=info msg="CreateContainer within sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 18:03:49.982908 containerd[1521]: time="2025-05-27T18:03:49.981665760Z" level=info msg="Container 3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6: CDI devices from CRI Config.CDIDevices: []"
May 27 18:03:49.986223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622310746.mount: Deactivated successfully.
May 27 18:03:50.001580 containerd[1521]: time="2025-05-27T18:03:50.001494209Z" level=info msg="CreateContainer within sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\""
May 27 18:03:50.002906 containerd[1521]: time="2025-05-27T18:03:50.002829740Z" level=info msg="StartContainer for \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\""
May 27 18:03:50.004482 containerd[1521]: time="2025-05-27T18:03:50.004422336Z" level=info msg="connecting to shim 3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6" address="unix:///run/containerd/s/605784722212810f250af225db330b1a90973f034c280b158885f87fdcfcc95c" protocol=ttrpc version=3
May 27 18:03:50.067321 systemd[1]: Started cri-containerd-3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6.scope - libcontainer container 3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6.
May 27 18:03:50.121897 containerd[1521]: time="2025-05-27T18:03:50.121818801Z" level=info msg="StartContainer for \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\" returns successfully"
May 27 18:03:50.154300 systemd[1]: cri-containerd-3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6.scope: Deactivated successfully.
May 27 18:03:50.156290 systemd[1]: cri-containerd-3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6.scope: Consumed 36ms CPU time, 6.3M memory peak, 182K read from disk, 3.2M written to disk.
May 27 18:03:50.236272 containerd[1521]: time="2025-05-27T18:03:50.235574814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\" id:\"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\" pid:3149 exited_at:{seconds:1748369030 nanos:152212750}"
May 27 18:03:50.238635 containerd[1521]: time="2025-05-27T18:03:50.238559530Z" level=info msg="received exit event container_id:\"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\" id:\"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\" pid:3149 exited_at:{seconds:1748369030 nanos:152212750}"
May 27 18:03:50.275425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6-rootfs.mount: Deactivated successfully.
May 27 18:03:50.925812 kubelet[2669]: E0527 18:03:50.925587 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:50.933896 containerd[1521]: time="2025-05-27T18:03:50.933782420Z" level=info msg="CreateContainer within sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 18:03:50.944243 containerd[1521]: time="2025-05-27T18:03:50.944198806Z" level=info msg="Container dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1: CDI devices from CRI Config.CDIDevices: []"
May 27 18:03:50.952124 containerd[1521]: time="2025-05-27T18:03:50.952048454Z" level=info msg="CreateContainer within sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\""
May 27 18:03:50.954530 containerd[1521]: time="2025-05-27T18:03:50.952861746Z" level=info msg="StartContainer for \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\""
May 27 18:03:50.956017 containerd[1521]: time="2025-05-27T18:03:50.955920690Z" level=info msg="connecting to shim dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1" address="unix:///run/containerd/s/605784722212810f250af225db330b1a90973f034c280b158885f87fdcfcc95c" protocol=ttrpc version=3
May 27 18:03:51.007185 systemd[1]: Started cri-containerd-dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1.scope - libcontainer container dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1.
May 27 18:03:51.067449 containerd[1521]: time="2025-05-27T18:03:51.067331319Z" level=info msg="StartContainer for \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\" returns successfully"
May 27 18:03:51.072626 kubelet[2669]: I0527 18:03:51.072541 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:03:51.072626 kubelet[2669]: I0527 18:03:51.072621 2669 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:03:51.077301 kubelet[2669]: I0527 18:03:51.077257 2669 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 18:03:51.098391 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 18:03:51.099795 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 18:03:51.100084 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 27 18:03:51.103498 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 18:03:51.108030 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 18:03:51.109760 systemd[1]: cri-containerd-dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1.scope: Deactivated successfully.
May 27 18:03:51.119141 containerd[1521]: time="2025-05-27T18:03:51.119067932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\" id:\"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\" pid:3195 exited_at:{seconds:1748369031 nanos:115685075}"
May 27 18:03:51.119491 containerd[1521]: time="2025-05-27T18:03:51.119409572Z" level=info msg="received exit event container_id:\"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\" id:\"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\" pid:3195 exited_at:{seconds:1748369031 nanos:115685075}"
May 27 18:03:51.131161 kubelet[2669]: I0527 18:03:51.131050 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:03:51.131161 kubelet[2669]: I0527 18:03:51.131131 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-fcfvc","kube-system/cilium-operator-6c4d7847fc-5pm2x","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"]
May 27 18:03:51.131942 kubelet[2669]: E0527 18:03:51.131180 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-fcfvc"
May 27 18:03:51.131942 kubelet[2669]: E0527 18:03:51.131196 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-5pm2x"
May 27 18:03:51.131942 kubelet[2669]: E0527 18:03:51.131206 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:51.131942 kubelet[2669]: E0527 18:03:51.131216 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm"
May 27 18:03:51.131942 kubelet[2669]: E0527 18:03:51.131225 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:51.131942 kubelet[2669]: E0527 18:03:51.131234 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:03:51.131942 kubelet[2669]: I0527 18:03:51.131244 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 27 18:03:51.150943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 18:03:51.169199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1-rootfs.mount: Deactivated successfully.
May 27 18:03:51.932507 kubelet[2669]: E0527 18:03:51.931544 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:51.936923 containerd[1521]: time="2025-05-27T18:03:51.936814020Z" level=info msg="CreateContainer within sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 18:03:51.972892 containerd[1521]: time="2025-05-27T18:03:51.972812972Z" level=info msg="Container 66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6: CDI devices from CRI Config.CDIDevices: []"
May 27 18:03:51.985075 containerd[1521]: time="2025-05-27T18:03:51.984943679Z" level=info msg="CreateContainer within sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\""
May 27 18:03:51.986968 containerd[1521]: time="2025-05-27T18:03:51.986319253Z" level=info msg="StartContainer for \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\""
May 27 18:03:51.990065 containerd[1521]: time="2025-05-27T18:03:51.989674610Z" level=info msg="connecting to shim 66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6" address="unix:///run/containerd/s/605784722212810f250af225db330b1a90973f034c280b158885f87fdcfcc95c" protocol=ttrpc version=3
May 27 18:03:52.025216 systemd[1]: Started cri-containerd-66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6.scope - libcontainer container 66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6.
May 27 18:03:52.091757 systemd[1]: cri-containerd-66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6.scope: Deactivated successfully.
May 27 18:03:52.095937 containerd[1521]: time="2025-05-27T18:03:52.094893297Z" level=info msg="received exit event container_id:\"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\" id:\"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\" pid:3245 exited_at:{seconds:1748369032 nanos:94471138}"
May 27 18:03:52.095937 containerd[1521]: time="2025-05-27T18:03:52.095493115Z" level=info msg="StartContainer for \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\" returns successfully"
May 27 18:03:52.097816 containerd[1521]: time="2025-05-27T18:03:52.097536733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\" id:\"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\" pid:3245 exited_at:{seconds:1748369032 nanos:94471138}"
May 27 18:03:52.132225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6-rootfs.mount: Deactivated successfully.
May 27 18:03:52.937867 kubelet[2669]: E0527 18:03:52.937815 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:52.943465 containerd[1521]: time="2025-05-27T18:03:52.943413434Z" level=info msg="CreateContainer within sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 18:03:52.976785 containerd[1521]: time="2025-05-27T18:03:52.976739744Z" level=info msg="Container 2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26: CDI devices from CRI Config.CDIDevices: []"
May 27 18:03:52.990100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309158718.mount: Deactivated successfully.
May 27 18:03:52.997902 containerd[1521]: time="2025-05-27T18:03:52.997768709Z" level=info msg="CreateContainer within sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\""
May 27 18:03:53.000899 containerd[1521]: time="2025-05-27T18:03:53.000823878Z" level=info msg="StartContainer for \"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\""
May 27 18:03:53.002251 containerd[1521]: time="2025-05-27T18:03:53.002059179Z" level=info msg="connecting to shim 2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26" address="unix:///run/containerd/s/605784722212810f250af225db330b1a90973f034c280b158885f87fdcfcc95c" protocol=ttrpc version=3
May 27 18:03:53.028250 systemd[1]: Started cri-containerd-2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26.scope - libcontainer container 2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26.
May 27 18:03:53.072150 systemd[1]: cri-containerd-2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26.scope: Deactivated successfully.
May 27 18:03:53.075969 containerd[1521]: time="2025-05-27T18:03:53.075933937Z" level=info msg="received exit event container_id:\"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\" id:\"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\" pid:3285 exited_at:{seconds:1748369033 nanos:74657088}"
May 27 18:03:53.077672 containerd[1521]: time="2025-05-27T18:03:53.077632290Z" level=info msg="StartContainer for \"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\" returns successfully"
May 27 18:03:53.078853 containerd[1521]: time="2025-05-27T18:03:53.078822533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\" id:\"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\" pid:3285 exited_at:{seconds:1748369033 nanos:74657088}"
May 27 18:03:53.110139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26-rootfs.mount: Deactivated successfully.
May 27 18:03:53.946464 kubelet[2669]: E0527 18:03:53.946399 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:53.952370 containerd[1521]: time="2025-05-27T18:03:53.951660290Z" level=info msg="CreateContainer within sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 18:03:53.978642 containerd[1521]: time="2025-05-27T18:03:53.978303649Z" level=info msg="Container d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977: CDI devices from CRI Config.CDIDevices: []"
May 27 18:03:53.989682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262579305.mount: Deactivated successfully.
May 27 18:03:53.993672 containerd[1521]: time="2025-05-27T18:03:53.993606612Z" level=info msg="CreateContainer within sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\""
May 27 18:03:53.994736 containerd[1521]: time="2025-05-27T18:03:53.994690186Z" level=info msg="StartContainer for \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\""
May 27 18:03:53.996909 containerd[1521]: time="2025-05-27T18:03:53.996738085Z" level=info msg="connecting to shim d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977" address="unix:///run/containerd/s/605784722212810f250af225db330b1a90973f034c280b158885f87fdcfcc95c" protocol=ttrpc version=3
May 27 18:03:54.033262 systemd[1]: Started cri-containerd-d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977.scope - libcontainer container d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977.
May 27 18:03:54.084288 containerd[1521]: time="2025-05-27T18:03:54.084237313Z" level=info msg="StartContainer for \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" returns successfully"
May 27 18:03:54.234429 containerd[1521]: time="2025-05-27T18:03:54.231765211Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" id:\"5a003e023704ba6b83e861af088fb95b5519a1a0ae6ba7427f8417673f413be4\" pid:3352 exited_at:{seconds:1748369034 nanos:230984133}"
May 27 18:03:54.254120 kubelet[2669]: I0527 18:03:54.254068 2669 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 27 18:03:54.968887 kubelet[2669]: E0527 18:03:54.968439 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:54.992486 kubelet[2669]: I0527 18:03:54.991989 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fcfvc" podStartSLOduration=7.68274709 podStartE2EDuration="19.991946899s" podCreationTimestamp="2025-05-27 18:03:35 +0000 UTC" firstStartedPulling="2025-05-27 18:03:37.635587874 +0000 UTC m=+7.047854371" lastFinishedPulling="2025-05-27 18:03:49.944787669 +0000 UTC m=+19.357054180" observedRunningTime="2025-05-27 18:03:54.989415098 +0000 UTC m=+24.401681632" watchObservedRunningTime="2025-05-27 18:03:54.991946899 +0000 UTC m=+24.404213459"
May 27 18:03:55.971575 kubelet[2669]: E0527 18:03:55.971445 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:56.298209 systemd-networkd[1451]: cilium_host: Link UP
May 27 18:03:56.299297 systemd-networkd[1451]: cilium_net: Link UP
May 27 18:03:56.300663 systemd-networkd[1451]: cilium_host: Gained carrier
May 27 18:03:56.300914 systemd-networkd[1451]: cilium_net: Gained carrier
May 27 18:03:56.312153 systemd-networkd[1451]: cilium_host: Gained IPv6LL
May 27 18:03:56.465176 systemd-networkd[1451]: cilium_vxlan: Link UP
May 27 18:03:56.465185 systemd-networkd[1451]: cilium_vxlan: Gained carrier
May 27 18:03:56.855778 kernel: NET: Registered PF_ALG protocol family
May 27 18:03:56.974687 kubelet[2669]: E0527 18:03:56.974639 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:57.192112 systemd-networkd[1451]: cilium_net: Gained IPv6LL
May 27 18:03:57.821225 systemd-networkd[1451]: lxc_health: Link UP
May 27 18:03:57.832673 systemd-networkd[1451]: lxc_health: Gained carrier
May 27 18:03:58.216197 systemd-networkd[1451]: cilium_vxlan: Gained IPv6LL
May 27 18:03:58.556412 kubelet[2669]: E0527 18:03:58.556253 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:59.497189 systemd-networkd[1451]: lxc_health: Gained IPv6LL
May 27 18:03:59.519305 kubelet[2669]: E0527 18:03:59.519250 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:03:59.985899 kubelet[2669]: E0527 18:03:59.985245 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:04:01.159632 kubelet[2669]: I0527 18:04:01.159559 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:04:01.160971 kubelet[2669]: I0527 18:04:01.160236 2669 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:04:01.167292 kubelet[2669]: I0527 18:04:01.167100 2669 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 18:04:01.195266 kubelet[2669]: I0527 18:04:01.194920 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:04:01.195266 kubelet[2669]: I0527 18:04:01.195096 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-5pm2x","kube-system/cilium-fcfvc","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"]
May 27 18:04:01.195266 kubelet[2669]: E0527 18:04:01.195153 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-5pm2x"
May 27 18:04:01.195266 kubelet[2669]: E0527 18:04:01.195168 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-fcfvc"
May 27 18:04:01.195266 kubelet[2669]: E0527 18:04:01.195181 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:04:01.195266 kubelet[2669]: E0527 18:04:01.195193 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm"
May 27 18:04:01.195266 kubelet[2669]: E0527 18:04:01.195207 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:04:01.195266 kubelet[2669]: E0527 18:04:01.195221 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:04:01.195266 kubelet[2669]: I0527 18:04:01.195233 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 27 18:04:11.221259 kubelet[2669]: I0527 18:04:11.221199 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:04:11.222859 kubelet[2669]: I0527 18:04:11.221968 2669 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:04:11.228442 kubelet[2669]: I0527 18:04:11.228384 2669 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 18:04:11.248956 kubelet[2669]: I0527 18:04:11.248909 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:04:11.249165 kubelet[2669]: I0527 18:04:11.249135 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-5pm2x","kube-system/cilium-fcfvc","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"]
May 27 18:04:11.249220 kubelet[2669]: E0527 18:04:11.249189 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-5pm2x"
May 27 18:04:11.249220 kubelet[2669]: E0527 18:04:11.249203 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-fcfvc"
May 27 18:04:11.249220 kubelet[2669]: E0527 18:04:11.249213 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:04:11.249220 kubelet[2669]: E0527 18:04:11.249223 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm"
May 27 18:04:11.249403 kubelet[2669]: E0527 18:04:11.249237 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:04:11.249403 kubelet[2669]: E0527 18:04:11.249249 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:04:11.249403 kubelet[2669]: I0527 18:04:11.249264 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 27 18:04:16.330342 systemd[1]: Started sshd@7-137.184.189.209:22-139.178.68.195:39070.service - OpenSSH per-connection server daemon (139.178.68.195:39070).
May 27 18:04:16.433280 sshd[3800]: Accepted publickey for core from 139.178.68.195 port 39070 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g
May 27 18:04:16.435753 sshd-session[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:04:16.445627 systemd-logind[1490]: New session 8 of user core.
May 27 18:04:16.452180 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 18:04:17.113603 sshd[3802]: Connection closed by 139.178.68.195 port 39070
May 27 18:04:17.114632 sshd-session[3800]: pam_unix(sshd:session): session closed for user core
May 27 18:04:17.121296 systemd[1]: sshd@7-137.184.189.209:22-139.178.68.195:39070.service: Deactivated successfully.
May 27 18:04:17.124597 systemd[1]: session-8.scope: Deactivated successfully.
May 27 18:04:17.126646 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit.
May 27 18:04:17.129783 systemd-logind[1490]: Removed session 8.
May 27 18:04:21.274304 kubelet[2669]: I0527 18:04:21.274191 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 18:04:21.274304 kubelet[2669]: I0527 18:04:21.274241 2669 container_gc.go:86] "Attempting to delete unused containers" May 27 18:04:21.280073 kubelet[2669]: I0527 18:04:21.280008 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 18:04:21.302885 kubelet[2669]: I0527 18:04:21.302835 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 18:04:21.303084 kubelet[2669]: I0527 18:04:21.302964 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-5pm2x","kube-system/cilium-fcfvc","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"] May 27 18:04:21.303084 kubelet[2669]: E0527 18:04:21.303003 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-5pm2x" May 27 18:04:21.303084 kubelet[2669]: E0527 18:04:21.303018 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-fcfvc" May 27 18:04:21.303084 kubelet[2669]: E0527 18:04:21.303061 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:04:21.303084 kubelet[2669]: E0527 18:04:21.303070 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm" May 27 18:04:21.303084 kubelet[2669]: E0527 18:04:21.303080 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630" May 27 18:04:21.303084 kubelet[2669]: E0527 18:04:21.303087 
2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630" May 27 18:04:21.303375 kubelet[2669]: I0527 18:04:21.303116 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 18:04:22.139935 systemd[1]: Started sshd@8-137.184.189.209:22-139.178.68.195:39086.service - OpenSSH per-connection server daemon (139.178.68.195:39086). May 27 18:04:22.212098 sshd[3815]: Accepted publickey for core from 139.178.68.195 port 39086 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:22.214722 sshd-session[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:22.222488 systemd-logind[1490]: New session 9 of user core. May 27 18:04:22.232654 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 18:04:22.423757 sshd[3817]: Connection closed by 139.178.68.195 port 39086 May 27 18:04:22.424266 sshd-session[3815]: pam_unix(sshd:session): session closed for user core May 27 18:04:22.432154 systemd[1]: sshd@8-137.184.189.209:22-139.178.68.195:39086.service: Deactivated successfully. May 27 18:04:22.437912 systemd[1]: session-9.scope: Deactivated successfully. May 27 18:04:22.439940 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. May 27 18:04:22.444076 systemd-logind[1490]: Removed session 9. May 27 18:04:27.444930 systemd[1]: Started sshd@9-137.184.189.209:22-139.178.68.195:60628.service - OpenSSH per-connection server daemon (139.178.68.195:60628). May 27 18:04:27.540727 sshd[3830]: Accepted publickey for core from 139.178.68.195 port 60628 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:27.544181 sshd-session[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:27.551385 systemd-logind[1490]: New session 10 of user core. 
May 27 18:04:27.562300 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 18:04:27.727363 sshd[3832]: Connection closed by 139.178.68.195 port 60628 May 27 18:04:27.726691 sshd-session[3830]: pam_unix(sshd:session): session closed for user core May 27 18:04:27.733088 systemd[1]: sshd@9-137.184.189.209:22-139.178.68.195:60628.service: Deactivated successfully. May 27 18:04:27.735704 systemd[1]: session-10.scope: Deactivated successfully. May 27 18:04:27.736761 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. May 27 18:04:27.739152 systemd-logind[1490]: Removed session 10. May 27 18:04:31.335097 kubelet[2669]: I0527 18:04:31.334997 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 18:04:31.335097 kubelet[2669]: I0527 18:04:31.335103 2669 container_gc.go:86] "Attempting to delete unused containers" May 27 18:04:31.342512 kubelet[2669]: I0527 18:04:31.342453 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 18:04:31.366036 kubelet[2669]: I0527 18:04:31.365982 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 18:04:31.366323 kubelet[2669]: I0527 18:04:31.366301 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-5pm2x","kube-system/cilium-fcfvc","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"] May 27 18:04:31.366554 kubelet[2669]: E0527 18:04:31.366404 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-5pm2x" May 27 18:04:31.366701 kubelet[2669]: E0527 18:04:31.366422 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-fcfvc" May 27 
18:04:31.366701 kubelet[2669]: E0527 18:04:31.366663 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:04:31.366701 kubelet[2669]: E0527 18:04:31.366684 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm" May 27 18:04:31.366995 kubelet[2669]: E0527 18:04:31.366941 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630" May 27 18:04:31.366995 kubelet[2669]: E0527 18:04:31.366961 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630" May 27 18:04:31.366995 kubelet[2669]: I0527 18:04:31.366973 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 18:04:32.743984 systemd[1]: Started sshd@10-137.184.189.209:22-139.178.68.195:60642.service - OpenSSH per-connection server daemon (139.178.68.195:60642). May 27 18:04:32.821182 sshd[3847]: Accepted publickey for core from 139.178.68.195 port 60642 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:32.823660 sshd-session[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:32.834487 systemd-logind[1490]: New session 11 of user core. May 27 18:04:32.839284 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 18:04:32.988249 sshd[3849]: Connection closed by 139.178.68.195 port 60642 May 27 18:04:32.988830 sshd-session[3847]: pam_unix(sshd:session): session closed for user core May 27 18:04:33.006693 systemd[1]: sshd@10-137.184.189.209:22-139.178.68.195:60642.service: Deactivated successfully. May 27 18:04:33.011865 systemd[1]: session-11.scope: Deactivated successfully. May 27 18:04:33.013425 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. 
May 27 18:04:33.019114 systemd[1]: Started sshd@11-137.184.189.209:22-139.178.68.195:60646.service - OpenSSH per-connection server daemon (139.178.68.195:60646). May 27 18:04:33.020822 systemd-logind[1490]: Removed session 11. May 27 18:04:33.084530 sshd[3862]: Accepted publickey for core from 139.178.68.195 port 60646 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:33.087287 sshd-session[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:33.095989 systemd-logind[1490]: New session 12 of user core. May 27 18:04:33.108265 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 18:04:33.341779 sshd[3864]: Connection closed by 139.178.68.195 port 60646 May 27 18:04:33.346860 sshd-session[3862]: pam_unix(sshd:session): session closed for user core May 27 18:04:33.364458 systemd[1]: sshd@11-137.184.189.209:22-139.178.68.195:60646.service: Deactivated successfully. May 27 18:04:33.367641 systemd[1]: session-12.scope: Deactivated successfully. May 27 18:04:33.372833 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. May 27 18:04:33.382299 systemd[1]: Started sshd@12-137.184.189.209:22-139.178.68.195:60648.service - OpenSSH per-connection server daemon (139.178.68.195:60648). May 27 18:04:33.386529 systemd-logind[1490]: Removed session 12. May 27 18:04:33.471950 sshd[3874]: Accepted publickey for core from 139.178.68.195 port 60648 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:33.474368 sshd-session[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:33.482406 systemd-logind[1490]: New session 13 of user core. May 27 18:04:33.492313 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 27 18:04:33.680606 sshd[3876]: Connection closed by 139.178.68.195 port 60648 May 27 18:04:33.682461 sshd-session[3874]: pam_unix(sshd:session): session closed for user core May 27 18:04:33.692274 systemd[1]: sshd@12-137.184.189.209:22-139.178.68.195:60648.service: Deactivated successfully. May 27 18:04:33.696890 systemd[1]: session-13.scope: Deactivated successfully. May 27 18:04:33.698903 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit. May 27 18:04:33.702380 systemd-logind[1490]: Removed session 13. May 27 18:04:38.700539 systemd[1]: Started sshd@13-137.184.189.209:22-139.178.68.195:41744.service - OpenSSH per-connection server daemon (139.178.68.195:41744). May 27 18:04:38.771364 sshd[3891]: Accepted publickey for core from 139.178.68.195 port 41744 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:38.773319 sshd-session[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:38.780748 systemd-logind[1490]: New session 14 of user core. May 27 18:04:38.787140 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 18:04:38.951576 sshd[3893]: Connection closed by 139.178.68.195 port 41744 May 27 18:04:38.952610 sshd-session[3891]: pam_unix(sshd:session): session closed for user core May 27 18:04:38.959767 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit. May 27 18:04:38.960742 systemd[1]: sshd@13-137.184.189.209:22-139.178.68.195:41744.service: Deactivated successfully. May 27 18:04:38.963836 systemd[1]: session-14.scope: Deactivated successfully. May 27 18:04:38.966601 systemd-logind[1490]: Removed session 14. 
May 27 18:04:41.390904 kubelet[2669]: I0527 18:04:41.390059 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 18:04:41.390904 kubelet[2669]: I0527 18:04:41.390130 2669 container_gc.go:86] "Attempting to delete unused containers" May 27 18:04:41.400223 kubelet[2669]: I0527 18:04:41.400168 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 18:04:41.425994 kubelet[2669]: I0527 18:04:41.425938 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 18:04:41.426481 kubelet[2669]: I0527 18:04:41.426446 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-5pm2x","kube-system/cilium-fcfvc","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"] May 27 18:04:41.426795 kubelet[2669]: E0527 18:04:41.426768 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-5pm2x" May 27 18:04:41.426934 kubelet[2669]: E0527 18:04:41.426920 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-fcfvc" May 27 18:04:41.427024 kubelet[2669]: E0527 18:04:41.427009 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:04:41.427183 kubelet[2669]: E0527 18:04:41.427111 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm" May 27 18:04:41.427183 kubelet[2669]: E0527 18:04:41.427129 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630" May 27 18:04:41.427183 kubelet[2669]: E0527 18:04:41.427143 
2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630" May 27 18:04:41.427183 kubelet[2669]: I0527 18:04:41.427162 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 18:04:43.982036 systemd[1]: Started sshd@14-137.184.189.209:22-139.178.68.195:48074.service - OpenSSH per-connection server daemon (139.178.68.195:48074). May 27 18:04:44.055110 sshd[3912]: Accepted publickey for core from 139.178.68.195 port 48074 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:44.057174 sshd-session[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:44.065041 systemd-logind[1490]: New session 15 of user core. May 27 18:04:44.072195 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 18:04:44.230016 sshd[3914]: Connection closed by 139.178.68.195 port 48074 May 27 18:04:44.230417 sshd-session[3912]: pam_unix(sshd:session): session closed for user core May 27 18:04:44.238257 systemd[1]: sshd@14-137.184.189.209:22-139.178.68.195:48074.service: Deactivated successfully. May 27 18:04:44.241898 systemd[1]: session-15.scope: Deactivated successfully. May 27 18:04:44.243252 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit. May 27 18:04:44.246702 systemd-logind[1490]: Removed session 15. May 27 18:04:49.247290 systemd[1]: Started sshd@15-137.184.189.209:22-139.178.68.195:48086.service - OpenSSH per-connection server daemon (139.178.68.195:48086). May 27 18:04:49.323206 sshd[3926]: Accepted publickey for core from 139.178.68.195 port 48086 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:49.325321 sshd-session[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:49.331639 systemd-logind[1490]: New session 16 of user core. 
May 27 18:04:49.341213 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 18:04:49.519971 sshd[3928]: Connection closed by 139.178.68.195 port 48086 May 27 18:04:49.522497 sshd-session[3926]: pam_unix(sshd:session): session closed for user core May 27 18:04:49.540136 systemd[1]: sshd@15-137.184.189.209:22-139.178.68.195:48086.service: Deactivated successfully. May 27 18:04:49.545429 systemd[1]: session-16.scope: Deactivated successfully. May 27 18:04:49.547275 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit. May 27 18:04:49.553126 systemd[1]: Started sshd@16-137.184.189.209:22-139.178.68.195:48100.service - OpenSSH per-connection server daemon (139.178.68.195:48100). May 27 18:04:49.555498 systemd-logind[1490]: Removed session 16. May 27 18:04:49.642231 sshd[3940]: Accepted publickey for core from 139.178.68.195 port 48100 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:49.644625 sshd-session[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:49.652711 systemd-logind[1490]: New session 17 of user core. May 27 18:04:49.664266 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 18:04:49.766279 kubelet[2669]: E0527 18:04:49.766103 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:04:50.143410 sshd[3942]: Connection closed by 139.178.68.195 port 48100 May 27 18:04:50.144363 sshd-session[3940]: pam_unix(sshd:session): session closed for user core May 27 18:04:50.158085 systemd[1]: sshd@16-137.184.189.209:22-139.178.68.195:48100.service: Deactivated successfully. May 27 18:04:50.161527 systemd[1]: session-17.scope: Deactivated successfully. May 27 18:04:50.163502 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit. 
May 27 18:04:50.167047 systemd-logind[1490]: Removed session 17. May 27 18:04:50.170211 systemd[1]: Started sshd@17-137.184.189.209:22-139.178.68.195:48110.service - OpenSSH per-connection server daemon (139.178.68.195:48110). May 27 18:04:50.258767 sshd[3952]: Accepted publickey for core from 139.178.68.195 port 48110 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:50.261310 sshd-session[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:50.267906 systemd-logind[1490]: New session 18 of user core. May 27 18:04:50.276201 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 18:04:51.382816 sshd[3954]: Connection closed by 139.178.68.195 port 48110 May 27 18:04:51.383470 sshd-session[3952]: pam_unix(sshd:session): session closed for user core May 27 18:04:51.398152 systemd[1]: sshd@17-137.184.189.209:22-139.178.68.195:48110.service: Deactivated successfully. May 27 18:04:51.405435 systemd[1]: session-18.scope: Deactivated successfully. May 27 18:04:51.409328 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit. May 27 18:04:51.419714 systemd[1]: Started sshd@18-137.184.189.209:22-139.178.68.195:48118.service - OpenSSH per-connection server daemon (139.178.68.195:48118). May 27 18:04:51.424340 systemd-logind[1490]: Removed session 18. 
May 27 18:04:51.484131 kubelet[2669]: I0527 18:04:51.484051 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 18:04:51.484131 kubelet[2669]: I0527 18:04:51.484098 2669 container_gc.go:86] "Attempting to delete unused containers" May 27 18:04:51.492243 kubelet[2669]: I0527 18:04:51.492110 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 18:04:51.520650 kubelet[2669]: I0527 18:04:51.520254 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 18:04:51.520650 kubelet[2669]: I0527 18:04:51.520444 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-5pm2x","kube-system/cilium-fcfvc","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"] May 27 18:04:51.520650 kubelet[2669]: E0527 18:04:51.520503 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-5pm2x" May 27 18:04:51.520650 kubelet[2669]: E0527 18:04:51.520527 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-fcfvc" May 27 18:04:51.520650 kubelet[2669]: E0527 18:04:51.520548 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:04:51.520650 kubelet[2669]: E0527 18:04:51.520563 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm" May 27 18:04:51.520650 kubelet[2669]: E0527 18:04:51.520578 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630" May 27 18:04:51.520650 kubelet[2669]: E0527 18:04:51.520592 
2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630" May 27 18:04:51.520650 kubelet[2669]: I0527 18:04:51.520608 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 18:04:51.544866 sshd[3969]: Accepted publickey for core from 139.178.68.195 port 48118 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:51.547699 sshd-session[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:51.559347 systemd-logind[1490]: New session 19 of user core. May 27 18:04:51.569367 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 18:04:51.889675 sshd[3973]: Connection closed by 139.178.68.195 port 48118 May 27 18:04:51.889551 sshd-session[3969]: pam_unix(sshd:session): session closed for user core May 27 18:04:51.903122 systemd[1]: sshd@18-137.184.189.209:22-139.178.68.195:48118.service: Deactivated successfully. May 27 18:04:51.908151 systemd[1]: session-19.scope: Deactivated successfully. May 27 18:04:51.909995 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit. May 27 18:04:51.916226 systemd[1]: Started sshd@19-137.184.189.209:22-139.178.68.195:48128.service - OpenSSH per-connection server daemon (139.178.68.195:48128). May 27 18:04:51.918557 systemd-logind[1490]: Removed session 19. May 27 18:04:52.001110 sshd[3983]: Accepted publickey for core from 139.178.68.195 port 48128 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:52.004385 sshd-session[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:52.012169 systemd-logind[1490]: New session 20 of user core. May 27 18:04:52.019207 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 27 18:04:52.185841 sshd[3985]: Connection closed by 139.178.68.195 port 48128 May 27 18:04:52.187625 sshd-session[3983]: pam_unix(sshd:session): session closed for user core May 27 18:04:52.193786 systemd[1]: sshd@19-137.184.189.209:22-139.178.68.195:48128.service: Deactivated successfully. May 27 18:04:52.196679 systemd[1]: session-20.scope: Deactivated successfully. May 27 18:04:52.198379 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit. May 27 18:04:52.200837 systemd-logind[1490]: Removed session 20. May 27 18:04:56.766183 kubelet[2669]: E0527 18:04:56.765308 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:04:57.209663 systemd[1]: Started sshd@20-137.184.189.209:22-139.178.68.195:39638.service - OpenSSH per-connection server daemon (139.178.68.195:39638). May 27 18:04:57.284628 sshd[3998]: Accepted publickey for core from 139.178.68.195 port 39638 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:04:57.287688 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:04:57.296127 systemd-logind[1490]: New session 21 of user core. May 27 18:04:57.303395 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 18:04:57.457140 sshd[4000]: Connection closed by 139.178.68.195 port 39638 May 27 18:04:57.458070 sshd-session[3998]: pam_unix(sshd:session): session closed for user core May 27 18:04:57.465189 systemd[1]: sshd@20-137.184.189.209:22-139.178.68.195:39638.service: Deactivated successfully. May 27 18:04:57.469110 systemd[1]: session-21.scope: Deactivated successfully. May 27 18:04:57.470461 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit. May 27 18:04:57.474635 systemd-logind[1490]: Removed session 21. 
May 27 18:05:01.550064 kubelet[2669]: I0527 18:05:01.549787 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 18:05:01.550064 kubelet[2669]: I0527 18:05:01.549957 2669 container_gc.go:86] "Attempting to delete unused containers" May 27 18:05:01.555428 kubelet[2669]: I0527 18:05:01.555042 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 18:05:01.583545 kubelet[2669]: I0527 18:05:01.583138 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 18:05:01.583545 kubelet[2669]: I0527 18:05:01.583329 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-5pm2x","kube-system/cilium-fcfvc","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"] May 27 18:05:01.583545 kubelet[2669]: E0527 18:05:01.583390 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-5pm2x" May 27 18:05:01.583545 kubelet[2669]: E0527 18:05:01.583413 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-fcfvc" May 27 18:05:01.583545 kubelet[2669]: E0527 18:05:01.583428 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:05:01.583545 kubelet[2669]: E0527 18:05:01.583444 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm" May 27 18:05:01.583545 kubelet[2669]: E0527 18:05:01.583458 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630" May 27 18:05:01.583545 kubelet[2669]: E0527 18:05:01.583491 
2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630" May 27 18:05:01.583545 kubelet[2669]: I0527 18:05:01.583509 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 18:05:01.776926 kubelet[2669]: E0527 18:05:01.776258 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:05:02.482625 systemd[1]: Started sshd@21-137.184.189.209:22-139.178.68.195:39652.service - OpenSSH per-connection server daemon (139.178.68.195:39652). May 27 18:05:02.600847 sshd[4013]: Accepted publickey for core from 139.178.68.195 port 39652 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:05:02.604663 sshd-session[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:05:02.623006 systemd-logind[1490]: New session 22 of user core. May 27 18:05:02.628214 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 18:05:02.768803 kubelet[2669]: E0527 18:05:02.766990 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:05:02.856246 sshd[4015]: Connection closed by 139.178.68.195 port 39652 May 27 18:05:02.856269 sshd-session[4013]: pam_unix(sshd:session): session closed for user core May 27 18:05:02.862812 systemd[1]: sshd@21-137.184.189.209:22-139.178.68.195:39652.service: Deactivated successfully. May 27 18:05:02.867655 systemd[1]: session-22.scope: Deactivated successfully. May 27 18:05:02.873817 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit. May 27 18:05:02.882220 systemd-logind[1490]: Removed session 22. 
May 27 18:05:07.877273 systemd[1]: Started sshd@22-137.184.189.209:22-139.178.68.195:52680.service - OpenSSH per-connection server daemon (139.178.68.195:52680). May 27 18:05:07.947436 sshd[4027]: Accepted publickey for core from 139.178.68.195 port 52680 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:05:07.949560 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:05:07.958977 systemd-logind[1490]: New session 23 of user core. May 27 18:05:07.964658 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 18:05:08.158089 sshd[4029]: Connection closed by 139.178.68.195 port 52680 May 27 18:05:08.159545 sshd-session[4027]: pam_unix(sshd:session): session closed for user core May 27 18:05:08.168375 systemd[1]: sshd@22-137.184.189.209:22-139.178.68.195:52680.service: Deactivated successfully. May 27 18:05:08.172314 systemd[1]: session-23.scope: Deactivated successfully. May 27 18:05:08.174301 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit. May 27 18:05:08.177482 systemd-logind[1490]: Removed session 23. 
May 27 18:05:10.769180 kubelet[2669]: E0527 18:05:10.769089 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:05:11.609896 kubelet[2669]: I0527 18:05:11.609818 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 18:05:11.610775 kubelet[2669]: I0527 18:05:11.610132 2669 container_gc.go:86] "Attempting to delete unused containers" May 27 18:05:11.615913 kubelet[2669]: I0527 18:05:11.615828 2669 image_gc_manager.go:431] "Attempting to delete unused images" May 27 18:05:11.644190 kubelet[2669]: I0527 18:05:11.644108 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 18:05:11.644571 kubelet[2669]: I0527 18:05:11.644498 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-5pm2x","kube-system/cilium-fcfvc","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"] May 27 18:05:11.644789 kubelet[2669]: E0527 18:05:11.644616 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-5pm2x" May 27 18:05:11.644789 kubelet[2669]: E0527 18:05:11.644640 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-fcfvc" May 27 18:05:11.644789 kubelet[2669]: E0527 18:05:11.644656 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630" May 27 18:05:11.644789 kubelet[2669]: E0527 18:05:11.644686 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm" May 27 
18:05:11.644789 kubelet[2669]: E0527 18:05:11.644699 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630" May 27 18:05:11.644789 kubelet[2669]: E0527 18:05:11.644712 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630" May 27 18:05:11.644789 kubelet[2669]: I0527 18:05:11.644728 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 18:05:11.766597 kubelet[2669]: E0527 18:05:11.766033 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:05:13.184564 systemd[1]: Started sshd@23-137.184.189.209:22-139.178.68.195:52692.service - OpenSSH per-connection server daemon (139.178.68.195:52692). May 27 18:05:13.279083 sshd[4043]: Accepted publickey for core from 139.178.68.195 port 52692 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:05:13.281900 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:05:13.290950 systemd-logind[1490]: New session 24 of user core. May 27 18:05:13.293280 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 18:05:13.461519 sshd[4045]: Connection closed by 139.178.68.195 port 52692 May 27 18:05:13.462531 sshd-session[4043]: pam_unix(sshd:session): session closed for user core May 27 18:05:13.474737 systemd[1]: sshd@23-137.184.189.209:22-139.178.68.195:52692.service: Deactivated successfully. May 27 18:05:13.479376 systemd[1]: session-24.scope: Deactivated successfully. May 27 18:05:13.483025 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit. 
May 27 18:05:13.490458 systemd[1]: Started sshd@24-137.184.189.209:22-139.178.68.195:52698.service - OpenSSH per-connection server daemon (139.178.68.195:52698). May 27 18:05:13.493966 systemd-logind[1490]: Removed session 24. May 27 18:05:13.559977 sshd[4057]: Accepted publickey for core from 139.178.68.195 port 52698 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:05:13.562687 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:05:13.572085 systemd-logind[1490]: New session 25 of user core. May 27 18:05:13.580273 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 18:05:15.076910 containerd[1521]: time="2025-05-27T18:05:15.075994724Z" level=info msg="StopContainer for \"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\" with timeout 30 (s)" May 27 18:05:15.076910 containerd[1521]: time="2025-05-27T18:05:15.076514600Z" level=info msg="Stop container \"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\" with signal terminated" May 27 18:05:15.106351 systemd[1]: cri-containerd-07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c.scope: Deactivated successfully. 
May 27 18:05:15.112687 containerd[1521]: time="2025-05-27T18:05:15.112627890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\" id:\"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\" pid:3071 exited_at:{seconds:1748369115 nanos:110243852}" May 27 18:05:15.115951 containerd[1521]: time="2025-05-27T18:05:15.115851330Z" level=info msg="received exit event container_id:\"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\" id:\"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\" pid:3071 exited_at:{seconds:1748369115 nanos:110243852}" May 27 18:05:15.116133 containerd[1521]: time="2025-05-27T18:05:15.116063445Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 18:05:15.125224 containerd[1521]: time="2025-05-27T18:05:15.125167142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" id:\"62f46ee713afc0d6ac9275cfc3deaeaaa6c03583fd44f1520d2e169b919d5ea5\" pid:4079 exited_at:{seconds:1748369115 nanos:124588645}" May 27 18:05:15.129042 containerd[1521]: time="2025-05-27T18:05:15.128995533Z" level=info msg="StopContainer for \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" with timeout 2 (s)" May 27 18:05:15.129759 containerd[1521]: time="2025-05-27T18:05:15.129721035Z" level=info msg="Stop container \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" with signal terminated" May 27 18:05:15.145698 systemd-networkd[1451]: lxc_health: Link DOWN May 27 18:05:15.145708 systemd-networkd[1451]: lxc_health: Lost carrier May 27 18:05:15.170528 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c-rootfs.mount: Deactivated successfully. May 27 18:05:15.175555 systemd[1]: cri-containerd-d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977.scope: Deactivated successfully. May 27 18:05:15.176670 systemd[1]: cri-containerd-d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977.scope: Consumed 9.198s CPU time, 146.6M memory peak, 28.7M read from disk, 13.3M written to disk. May 27 18:05:15.179533 containerd[1521]: time="2025-05-27T18:05:15.179483493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" id:\"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" pid:3323 exited_at:{seconds:1748369115 nanos:178649211}" May 27 18:05:15.180005 containerd[1521]: time="2025-05-27T18:05:15.179970386Z" level=info msg="received exit event container_id:\"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" id:\"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" pid:3323 exited_at:{seconds:1748369115 nanos:178649211}" May 27 18:05:15.182055 containerd[1521]: time="2025-05-27T18:05:15.182017316Z" level=info msg="StopContainer for \"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\" returns successfully" May 27 18:05:15.183124 containerd[1521]: time="2025-05-27T18:05:15.182982895Z" level=info msg="StopPodSandbox for \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\"" May 27 18:05:15.183326 containerd[1521]: time="2025-05-27T18:05:15.183300360Z" level=info msg="Container to stop \"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:05:15.208448 systemd[1]: cri-containerd-b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b.scope: Deactivated successfully. 
May 27 18:05:15.220589 containerd[1521]: time="2025-05-27T18:05:15.220250352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" id:\"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" pid:2777 exit_status:137 exited_at:{seconds:1748369115 nanos:218643660}" May 27 18:05:15.235351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977-rootfs.mount: Deactivated successfully. May 27 18:05:15.248642 containerd[1521]: time="2025-05-27T18:05:15.248565919Z" level=info msg="StopContainer for \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" returns successfully" May 27 18:05:15.250899 containerd[1521]: time="2025-05-27T18:05:15.250709243Z" level=info msg="StopPodSandbox for \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\"" May 27 18:05:15.250899 containerd[1521]: time="2025-05-27T18:05:15.250787044Z" level=info msg="Container to stop \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:05:15.250899 containerd[1521]: time="2025-05-27T18:05:15.250800042Z" level=info msg="Container to stop \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:05:15.250899 containerd[1521]: time="2025-05-27T18:05:15.250809842Z" level=info msg="Container to stop \"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:05:15.250899 containerd[1521]: time="2025-05-27T18:05:15.250821050Z" level=info msg="Container to stop \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:05:15.250899 containerd[1521]: 
time="2025-05-27T18:05:15.250834347Z" level=info msg="Container to stop \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:05:15.264780 systemd[1]: cri-containerd-f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125.scope: Deactivated successfully. May 27 18:05:15.280236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b-rootfs.mount: Deactivated successfully. May 27 18:05:15.282809 containerd[1521]: time="2025-05-27T18:05:15.282740742Z" level=info msg="shim disconnected" id=b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b namespace=k8s.io May 27 18:05:15.283269 containerd[1521]: time="2025-05-27T18:05:15.282783922Z" level=warning msg="cleaning up after shim disconnected" id=b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b namespace=k8s.io May 27 18:05:15.283676 containerd[1521]: time="2025-05-27T18:05:15.283457641Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 18:05:15.315228 containerd[1521]: time="2025-05-27T18:05:15.315148221Z" level=info msg="received exit event sandbox_id:\"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" exit_status:137 exited_at:{seconds:1748369115 nanos:218643660}" May 27 18:05:15.319757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b-shm.mount: Deactivated successfully. 
May 27 18:05:15.322624 containerd[1521]: time="2025-05-27T18:05:15.315179789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" id:\"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" pid:2865 exit_status:137 exited_at:{seconds:1748369115 nanos:271410648}" May 27 18:05:15.326101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125-rootfs.mount: Deactivated successfully. May 27 18:05:15.333376 containerd[1521]: time="2025-05-27T18:05:15.333224597Z" level=info msg="shim disconnected" id=f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125 namespace=k8s.io May 27 18:05:15.333791 containerd[1521]: time="2025-05-27T18:05:15.333760882Z" level=warning msg="cleaning up after shim disconnected" id=f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125 namespace=k8s.io May 27 18:05:15.333963 containerd[1521]: time="2025-05-27T18:05:15.333927117Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 18:05:15.335452 containerd[1521]: time="2025-05-27T18:05:15.335400982Z" level=info msg="received exit event sandbox_id:\"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" exit_status:137 exited_at:{seconds:1748369115 nanos:271410648}" May 27 18:05:15.341444 containerd[1521]: time="2025-05-27T18:05:15.341074547Z" level=info msg="TearDown network for sandbox \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" successfully" May 27 18:05:15.341444 containerd[1521]: time="2025-05-27T18:05:15.341134053Z" level=info msg="StopPodSandbox for \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" returns successfully" May 27 18:05:15.343662 containerd[1521]: time="2025-05-27T18:05:15.341685922Z" level=info msg="TearDown network for sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" successfully" May 27 
18:05:15.343662 containerd[1521]: time="2025-05-27T18:05:15.341720198Z" level=info msg="StopPodSandbox for \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" returns successfully" May 27 18:05:15.499917 kubelet[2669]: I0527 18:05:15.499733 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-xtables-lock\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.499917 kubelet[2669]: I0527 18:05:15.499804 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-config-path\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.499917 kubelet[2669]: I0527 18:05:15.499833 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cni-path\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.499917 kubelet[2669]: I0527 18:05:15.499883 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:05:15.502658 kubelet[2669]: I0527 18:05:15.499865 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-etc-cni-netd\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.502658 kubelet[2669]: I0527 18:05:15.500494 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdxd6\" (UniqueName: \"kubernetes.io/projected/741b0771-3993-406f-aea3-2a2f4befd27e-kube-api-access-cdxd6\") pod \"741b0771-3993-406f-aea3-2a2f4befd27e\" (UID: \"741b0771-3993-406f-aea3-2a2f4befd27e\") " May 27 18:05:15.502658 kubelet[2669]: I0527 18:05:15.500533 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-lib-modules\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.502658 kubelet[2669]: I0527 18:05:15.500551 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbgv2\" (UniqueName: \"kubernetes.io/projected/5c7d5090-2acf-417a-ba26-4d3b35648ee4-kube-api-access-jbgv2\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.502658 kubelet[2669]: I0527 18:05:15.500569 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-host-proc-sys-kernel\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.502658 kubelet[2669]: I0527 18:05:15.500583 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-cgroup\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.502972 kubelet[2669]: I0527 18:05:15.500615 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-hostproc\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.502972 kubelet[2669]: I0527 18:05:15.500629 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-host-proc-sys-net\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.502972 kubelet[2669]: I0527 18:05:15.500651 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/741b0771-3993-406f-aea3-2a2f4befd27e-cilium-config-path\") pod \"741b0771-3993-406f-aea3-2a2f4befd27e\" (UID: \"741b0771-3993-406f-aea3-2a2f4befd27e\") " May 27 18:05:15.502972 kubelet[2669]: I0527 18:05:15.500682 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c7d5090-2acf-417a-ba26-4d3b35648ee4-hubble-tls\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.502972 kubelet[2669]: I0527 18:05:15.500700 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-bpf-maps\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.502972 kubelet[2669]: I0527 
18:05:15.500717 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c7d5090-2acf-417a-ba26-4d3b35648ee4-clustermesh-secrets\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.503171 kubelet[2669]: I0527 18:05:15.500782 2669 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-run\") pod \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\" (UID: \"5c7d5090-2acf-417a-ba26-4d3b35648ee4\") " May 27 18:05:15.503171 kubelet[2669]: I0527 18:05:15.500833 2669 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-xtables-lock\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.503171 kubelet[2669]: I0527 18:05:15.500883 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:05:15.503171 kubelet[2669]: I0527 18:05:15.500908 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cni-path" (OuterVolumeSpecName: "cni-path") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:05:15.503171 kubelet[2669]: I0527 18:05:15.500921 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:05:15.503337 kubelet[2669]: I0527 18:05:15.502312 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 18:05:15.503337 kubelet[2669]: I0527 18:05:15.502393 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-hostproc" (OuterVolumeSpecName: "hostproc") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:05:15.503337 kubelet[2669]: I0527 18:05:15.502413 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:05:15.511529 kubelet[2669]: I0527 18:05:15.511321 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:05:15.511529 kubelet[2669]: I0527 18:05:15.511392 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:05:15.513058 kubelet[2669]: I0527 18:05:15.513015 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:05:15.519137 kubelet[2669]: I0527 18:05:15.516227 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:05:15.523499 kubelet[2669]: I0527 18:05:15.522608 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/741b0771-3993-406f-aea3-2a2f4befd27e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "741b0771-3993-406f-aea3-2a2f4befd27e" (UID: "741b0771-3993-406f-aea3-2a2f4befd27e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 18:05:15.525095 kubelet[2669]: I0527 18:05:15.525037 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7d5090-2acf-417a-ba26-4d3b35648ee4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 18:05:15.525340 kubelet[2669]: I0527 18:05:15.525083 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7d5090-2acf-417a-ba26-4d3b35648ee4-kube-api-access-jbgv2" (OuterVolumeSpecName: "kube-api-access-jbgv2") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "kube-api-access-jbgv2". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 18:05:15.525529 kubelet[2669]: I0527 18:05:15.525109 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/741b0771-3993-406f-aea3-2a2f4befd27e-kube-api-access-cdxd6" (OuterVolumeSpecName: "kube-api-access-cdxd6") pod "741b0771-3993-406f-aea3-2a2f4befd27e" (UID: "741b0771-3993-406f-aea3-2a2f4befd27e"). InnerVolumeSpecName "kube-api-access-cdxd6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 18:05:15.525700 kubelet[2669]: I0527 18:05:15.525483 2669 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7d5090-2acf-417a-ba26-4d3b35648ee4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5c7d5090-2acf-417a-ba26-4d3b35648ee4" (UID: "5c7d5090-2acf-417a-ba26-4d3b35648ee4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 18:05:15.601232 kubelet[2669]: I0527 18:05:15.601066 2669 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-lib-modules\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.601738 kubelet[2669]: I0527 18:05:15.601474 2669 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jbgv2\" (UniqueName: \"kubernetes.io/projected/5c7d5090-2acf-417a-ba26-4d3b35648ee4-kube-api-access-jbgv2\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.601738 kubelet[2669]: I0527 18:05:15.601502 2669 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-host-proc-sys-kernel\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.601738 kubelet[2669]: I0527 18:05:15.601515 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-cgroup\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.601738 kubelet[2669]: I0527 18:05:15.601526 2669 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-hostproc\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.601738 kubelet[2669]: I0527 18:05:15.601541 2669 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-host-proc-sys-net\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.601738 kubelet[2669]: I0527 18:05:15.601558 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/741b0771-3993-406f-aea3-2a2f4befd27e-cilium-config-path\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.601738 kubelet[2669]: I0527 18:05:15.601574 2669 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-bpf-maps\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.601738 kubelet[2669]: I0527 18:05:15.601601 2669 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c7d5090-2acf-417a-ba26-4d3b35648ee4-clustermesh-secrets\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.602384 kubelet[2669]: I0527 18:05:15.601617 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-run\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.602384 kubelet[2669]: I0527 18:05:15.601640 2669 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c7d5090-2acf-417a-ba26-4d3b35648ee4-hubble-tls\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.602384 kubelet[2669]: I0527 18:05:15.601652 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cilium-config-path\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.602384 kubelet[2669]: I0527 18:05:15.601667 
2669 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-cni-path\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.602384 kubelet[2669]: I0527 18:05:15.601681 2669 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c7d5090-2acf-417a-ba26-4d3b35648ee4-etc-cni-netd\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.602384 kubelet[2669]: I0527 18:05:15.601696 2669 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cdxd6\" (UniqueName: \"kubernetes.io/projected/741b0771-3993-406f-aea3-2a2f4befd27e-kube-api-access-cdxd6\") on node \"ci-4344.0.0-1-b2ae16c630\" DevicePath \"\"" May 27 18:05:15.918565 kubelet[2669]: E0527 18:05:15.918475 2669 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 18:05:16.168137 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125-shm.mount: Deactivated successfully. May 27 18:05:16.168276 systemd[1]: var-lib-kubelet-pods-5c7d5090\x2d2acf\x2d417a\x2dba26\x2d4d3b35648ee4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djbgv2.mount: Deactivated successfully. May 27 18:05:16.168369 systemd[1]: var-lib-kubelet-pods-741b0771\x2d3993\x2d406f\x2daea3\x2d2a2f4befd27e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcdxd6.mount: Deactivated successfully. May 27 18:05:16.168481 systemd[1]: var-lib-kubelet-pods-5c7d5090\x2d2acf\x2d417a\x2dba26\x2d4d3b35648ee4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 27 18:05:16.168703 systemd[1]: var-lib-kubelet-pods-5c7d5090\x2d2acf\x2d417a\x2dba26\x2d4d3b35648ee4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 18:05:16.242937 kubelet[2669]: I0527 18:05:16.241381 2669 scope.go:117] "RemoveContainer" containerID="d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977" May 27 18:05:16.253950 containerd[1521]: time="2025-05-27T18:05:16.253835720Z" level=info msg="RemoveContainer for \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\"" May 27 18:05:16.266450 containerd[1521]: time="2025-05-27T18:05:16.266349466Z" level=info msg="RemoveContainer for \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" returns successfully" May 27 18:05:16.273106 systemd[1]: Removed slice kubepods-burstable-pod5c7d5090_2acf_417a_ba26_4d3b35648ee4.slice - libcontainer container kubepods-burstable-pod5c7d5090_2acf_417a_ba26_4d3b35648ee4.slice. May 27 18:05:16.273286 systemd[1]: kubepods-burstable-pod5c7d5090_2acf_417a_ba26_4d3b35648ee4.slice: Consumed 9.330s CPU time, 147M memory peak, 29.1M read from disk, 16.6M written to disk. May 27 18:05:16.275534 kubelet[2669]: I0527 18:05:16.275460 2669 scope.go:117] "RemoveContainer" containerID="2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26" May 27 18:05:16.276511 systemd[1]: Removed slice kubepods-besteffort-pod741b0771_3993_406f_aea3_2a2f4befd27e.slice - libcontainer container kubepods-besteffort-pod741b0771_3993_406f_aea3_2a2f4befd27e.slice. 
May 27 18:05:16.280496 containerd[1521]: time="2025-05-27T18:05:16.280419952Z" level=info msg="RemoveContainer for \"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\"" May 27 18:05:16.292203 containerd[1521]: time="2025-05-27T18:05:16.292080701Z" level=info msg="RemoveContainer for \"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\" returns successfully" May 27 18:05:16.292983 kubelet[2669]: I0527 18:05:16.292768 2669 scope.go:117] "RemoveContainer" containerID="66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6" May 27 18:05:16.300532 containerd[1521]: time="2025-05-27T18:05:16.300153394Z" level=info msg="RemoveContainer for \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\"" May 27 18:05:16.311126 containerd[1521]: time="2025-05-27T18:05:16.310670122Z" level=info msg="RemoveContainer for \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\" returns successfully" May 27 18:05:16.311291 kubelet[2669]: I0527 18:05:16.310967 2669 scope.go:117] "RemoveContainer" containerID="dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1" May 27 18:05:16.314621 containerd[1521]: time="2025-05-27T18:05:16.314566954Z" level=info msg="RemoveContainer for \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\"" May 27 18:05:16.320956 containerd[1521]: time="2025-05-27T18:05:16.320846060Z" level=info msg="RemoveContainer for \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\" returns successfully" May 27 18:05:16.322140 kubelet[2669]: I0527 18:05:16.321232 2669 scope.go:117] "RemoveContainer" containerID="3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6" May 27 18:05:16.328509 containerd[1521]: time="2025-05-27T18:05:16.328462207Z" level=info msg="RemoveContainer for \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\"" May 27 18:05:16.331786 containerd[1521]: time="2025-05-27T18:05:16.331684787Z" level=info msg="RemoveContainer 
for \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\" returns successfully" May 27 18:05:16.332595 kubelet[2669]: I0527 18:05:16.332234 2669 scope.go:117] "RemoveContainer" containerID="d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977" May 27 18:05:16.346104 containerd[1521]: time="2025-05-27T18:05:16.333066471Z" level=error msg="ContainerStatus for \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\": not found" May 27 18:05:16.346734 kubelet[2669]: E0527 18:05:16.346667 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\": not found" containerID="d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977" May 27 18:05:16.348606 kubelet[2669]: I0527 18:05:16.346848 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977"} err="failed to get container status \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1a8cc239a74e56b71a00b9744fc11726e4fd0895e78259a3f3778632e2a9977\": not found" May 27 18:05:16.348785 kubelet[2669]: I0527 18:05:16.348762 2669 scope.go:117] "RemoveContainer" containerID="2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26" May 27 18:05:16.349452 containerd[1521]: time="2025-05-27T18:05:16.349341013Z" level=error msg="ContainerStatus for \"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\": not found" May 27 18:05:16.349767 kubelet[2669]: E0527 18:05:16.349708 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\": not found" containerID="2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26" May 27 18:05:16.349849 kubelet[2669]: I0527 18:05:16.349754 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26"} err="failed to get container status \"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\": rpc error: code = NotFound desc = an error occurred when try to find container \"2430b70ea1c0042cf539b39a12c4dbd4db5447a5c7888b63dfa90642ea359d26\": not found" May 27 18:05:16.349849 kubelet[2669]: I0527 18:05:16.349788 2669 scope.go:117] "RemoveContainer" containerID="66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6" May 27 18:05:16.350309 containerd[1521]: time="2025-05-27T18:05:16.350240831Z" level=error msg="ContainerStatus for \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\": not found" May 27 18:05:16.350620 kubelet[2669]: E0527 18:05:16.350577 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\": not found" containerID="66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6" May 27 18:05:16.350700 kubelet[2669]: I0527 18:05:16.350619 2669 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6"} err="failed to get container status \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"66857d3f8657da58a9e0598513b7cd15ef1f9a1843b907895bdfd536f62438e6\": not found" May 27 18:05:16.350700 kubelet[2669]: I0527 18:05:16.350645 2669 scope.go:117] "RemoveContainer" containerID="dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1" May 27 18:05:16.351184 containerd[1521]: time="2025-05-27T18:05:16.351134612Z" level=error msg="ContainerStatus for \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\": not found" May 27 18:05:16.351513 kubelet[2669]: E0527 18:05:16.351399 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\": not found" containerID="dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1" May 27 18:05:16.351513 kubelet[2669]: I0527 18:05:16.351442 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1"} err="failed to get container status \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc380cca665818b2c5022aeb570ec4322b496ab832c63f5cb536a8051c6623c1\": not found" May 27 18:05:16.351513 kubelet[2669]: I0527 18:05:16.351480 2669 scope.go:117] "RemoveContainer" containerID="3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6" May 27 18:05:16.352911 kubelet[2669]: E0527 
18:05:16.352269 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\": not found" containerID="3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6" May 27 18:05:16.352911 kubelet[2669]: I0527 18:05:16.352320 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6"} err="failed to get container status \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\": not found" May 27 18:05:16.352911 kubelet[2669]: I0527 18:05:16.352347 2669 scope.go:117] "RemoveContainer" containerID="07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c" May 27 18:05:16.353108 containerd[1521]: time="2025-05-27T18:05:16.352032191Z" level=error msg="ContainerStatus for \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ae3b70e14d8781684dbe13f2bde8625cdf209ea9ade1a10edd609763342cbc6\": not found" May 27 18:05:16.355173 containerd[1521]: time="2025-05-27T18:05:16.355131851Z" level=info msg="RemoveContainer for \"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\"" May 27 18:05:16.362079 containerd[1521]: time="2025-05-27T18:05:16.360004784Z" level=info msg="RemoveContainer for \"07427b1aefed196d61ef24ba38b9767aaaac5bd6935661d0fd7279f7ad5c9f3c\" returns successfully" May 27 18:05:16.770937 kubelet[2669]: I0527 18:05:16.770010 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7d5090-2acf-417a-ba26-4d3b35648ee4" path="/var/lib/kubelet/pods/5c7d5090-2acf-417a-ba26-4d3b35648ee4/volumes" May 
27 18:05:16.771729 kubelet[2669]: I0527 18:05:16.771694 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="741b0771-3993-406f-aea3-2a2f4befd27e" path="/var/lib/kubelet/pods/741b0771-3993-406f-aea3-2a2f4befd27e/volumes" May 27 18:05:16.997087 sshd[4059]: Connection closed by 139.178.68.195 port 52698 May 27 18:05:16.998106 sshd-session[4057]: pam_unix(sshd:session): session closed for user core May 27 18:05:17.013428 systemd[1]: sshd@24-137.184.189.209:22-139.178.68.195:52698.service: Deactivated successfully. May 27 18:05:17.017546 systemd[1]: session-25.scope: Deactivated successfully. May 27 18:05:17.018982 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit. May 27 18:05:17.025416 systemd[1]: Started sshd@25-137.184.189.209:22-139.178.68.195:42458.service - OpenSSH per-connection server daemon (139.178.68.195:42458). May 27 18:05:17.027609 systemd-logind[1490]: Removed session 25. May 27 18:05:17.122784 sshd[4210]: Accepted publickey for core from 139.178.68.195 port 42458 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:05:17.124831 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:05:17.133522 systemd-logind[1490]: New session 26 of user core. May 27 18:05:17.149183 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 18:05:18.004425 sshd[4212]: Connection closed by 139.178.68.195 port 42458 May 27 18:05:18.006477 sshd-session[4210]: pam_unix(sshd:session): session closed for user core May 27 18:05:18.026500 systemd[1]: sshd@25-137.184.189.209:22-139.178.68.195:42458.service: Deactivated successfully. May 27 18:05:18.034922 systemd[1]: session-26.scope: Deactivated successfully. May 27 18:05:18.037619 systemd-logind[1490]: Session 26 logged out. Waiting for processes to exit. May 27 18:05:18.046987 systemd-logind[1490]: Removed session 26. 
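The removal sequence above follows a consistent pattern: for each container ID, kubelet issues RemoveContainer (which containerd reports as successful), and a later ContainerStatus query for the same ID fails with `code = NotFound`. Kubelet logs the NotFound as an error but proceeds, because a missing container is the desired end state. A minimal sketch of that idempotent-delete pattern, with entirely hypothetical names standing in for the CRI calls:

```python
# Hypothetical stand-ins for the CRI RemoveContainer / ContainerStatus calls
# seen in the log; NotFound after removal is treated as success.

class NotFoundError(Exception):
    """Stand-in for the gRPC 'code = NotFound' error in the log."""

class FakeRuntime:
    def __init__(self, containers):
        self.containers = set(containers)

    def remove_container(self, cid):
        self.containers.discard(cid)  # removing an absent container is a no-op

    def container_status(self, cid):
        if cid not in self.containers:
            raise NotFoundError(f"container {cid!r}: not found")
        return {"id": cid, "state": "exited"}

def delete_container(runtime, cid):
    """Delete, then verify; NotFound on verification means 'already gone'."""
    runtime.remove_container(cid)
    try:
        runtime.container_status(cid)
        return False  # still present: deletion did not take effect
    except NotFoundError:
        return True   # gone, which is what the caller wanted

runtime = FakeRuntime({"d1a8cc23", "2430b70e"})
assert delete_container(runtime, "d1a8cc23")
assert delete_container(runtime, "d1a8cc23")  # repeating the delete still succeeds
```

This is why the "DeleteContainer returned error" kubelet entries above are followed by normal cleanup ("Cleaned up orphaned pod volumes dir") rather than a retry loop.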
May 27 18:05:18.051097 kubelet[2669]: I0527 18:05:18.051052 2669 memory_manager.go:355] "RemoveStaleState removing state" podUID="741b0771-3993-406f-aea3-2a2f4befd27e" containerName="cilium-operator" May 27 18:05:18.051097 kubelet[2669]: I0527 18:05:18.051081 2669 memory_manager.go:355] "RemoveStaleState removing state" podUID="5c7d5090-2acf-417a-ba26-4d3b35648ee4" containerName="cilium-agent" May 27 18:05:18.053282 systemd[1]: Started sshd@26-137.184.189.209:22-139.178.68.195:42468.service - OpenSSH per-connection server daemon (139.178.68.195:42468). May 27 18:05:18.087958 systemd[1]: Created slice kubepods-burstable-podde951c5a_b95d_4984_8420_9eac2ed740ac.slice - libcontainer container kubepods-burstable-podde951c5a_b95d_4984_8420_9eac2ed740ac.slice. May 27 18:05:18.124839 kubelet[2669]: I0527 18:05:18.124215 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de951c5a-b95d-4984-8420-9eac2ed740ac-hubble-tls\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.124839 kubelet[2669]: I0527 18:05:18.124285 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de951c5a-b95d-4984-8420-9eac2ed740ac-bpf-maps\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.124839 kubelet[2669]: I0527 18:05:18.124320 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de951c5a-b95d-4984-8420-9eac2ed740ac-cni-path\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.124839 kubelet[2669]: I0527 18:05:18.124350 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de951c5a-b95d-4984-8420-9eac2ed740ac-clustermesh-secrets\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.124839 kubelet[2669]: I0527 18:05:18.124403 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de951c5a-b95d-4984-8420-9eac2ed740ac-cilium-ipsec-secrets\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.124839 kubelet[2669]: I0527 18:05:18.124434 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de951c5a-b95d-4984-8420-9eac2ed740ac-etc-cni-netd\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.125355 kubelet[2669]: I0527 18:05:18.124462 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de951c5a-b95d-4984-8420-9eac2ed740ac-hostproc\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.125355 kubelet[2669]: I0527 18:05:18.124510 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de951c5a-b95d-4984-8420-9eac2ed740ac-lib-modules\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.128914 kubelet[2669]: I0527 18:05:18.125952 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/de951c5a-b95d-4984-8420-9eac2ed740ac-cilium-config-path\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.128914 kubelet[2669]: I0527 18:05:18.126077 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvtb7\" (UniqueName: \"kubernetes.io/projected/de951c5a-b95d-4984-8420-9eac2ed740ac-kube-api-access-jvtb7\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.128914 kubelet[2669]: I0527 18:05:18.126124 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de951c5a-b95d-4984-8420-9eac2ed740ac-cilium-run\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.128914 kubelet[2669]: I0527 18:05:18.126149 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de951c5a-b95d-4984-8420-9eac2ed740ac-cilium-cgroup\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.128914 kubelet[2669]: I0527 18:05:18.126185 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de951c5a-b95d-4984-8420-9eac2ed740ac-host-proc-sys-net\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.129611 kubelet[2669]: I0527 18:05:18.126210 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de951c5a-b95d-4984-8420-9eac2ed740ac-host-proc-sys-kernel\") pod 
\"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.129611 kubelet[2669]: I0527 18:05:18.126255 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de951c5a-b95d-4984-8420-9eac2ed740ac-xtables-lock\") pod \"cilium-rwxdk\" (UID: \"de951c5a-b95d-4984-8420-9eac2ed740ac\") " pod="kube-system/cilium-rwxdk" May 27 18:05:18.199923 sshd[4223]: Accepted publickey for core from 139.178.68.195 port 42468 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:05:18.203550 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:05:18.213779 systemd-logind[1490]: New session 27 of user core. May 27 18:05:18.228066 systemd[1]: Started session-27.scope - Session 27 of User core. May 27 18:05:18.309049 sshd[4228]: Connection closed by 139.178.68.195 port 42468 May 27 18:05:18.311920 sshd-session[4223]: pam_unix(sshd:session): session closed for user core May 27 18:05:18.327219 systemd[1]: sshd@26-137.184.189.209:22-139.178.68.195:42468.service: Deactivated successfully. May 27 18:05:18.332864 systemd[1]: session-27.scope: Deactivated successfully. May 27 18:05:18.335073 systemd-logind[1490]: Session 27 logged out. Waiting for processes to exit. May 27 18:05:18.342059 systemd[1]: Started sshd@27-137.184.189.209:22-139.178.68.195:42480.service - OpenSSH per-connection server daemon (139.178.68.195:42480). May 27 18:05:18.345841 systemd-logind[1490]: Removed session 27. May 27 18:05:18.423137 sshd[4236]: Accepted publickey for core from 139.178.68.195 port 42480 ssh2: RSA SHA256:UaI/c683jLOK1O5O7/zMQdDqmv0Givfij3uff0Smr7g May 27 18:05:18.425239 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:05:18.436273 systemd-logind[1490]: New session 28 of user core. 
May 27 18:05:18.445308 systemd[1]: Started session-28.scope - Session 28 of User core. May 27 18:05:18.448715 kubelet[2669]: E0527 18:05:18.446010 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:05:18.450096 containerd[1521]: time="2025-05-27T18:05:18.450021635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwxdk,Uid:de951c5a-b95d-4984-8420-9eac2ed740ac,Namespace:kube-system,Attempt:0,}" May 27 18:05:18.478547 containerd[1521]: time="2025-05-27T18:05:18.478478267Z" level=info msg="connecting to shim 27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a" address="unix:///run/containerd/s/f1a287ad55fc6bc99f6cd41d2bfc08e4cbc60087d8e08c0bf9b428e37466c059" namespace=k8s.io protocol=ttrpc version=3 May 27 18:05:18.528524 systemd[1]: Started cri-containerd-27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a.scope - libcontainer container 27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a. 
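The recurring "Nameserver limits exceeded" warning above reflects kubelet capping the pod's resolv.conf, conventionally at three nameservers (the classic glibc resolver limit); entries beyond the cap are dropped and logged as omitted. A sketch of that capping step, under that assumption — the fourth address in the example list is hypothetical, and the duplicate `67.207.67.2` matches the applied line shown in the log:

```python
# Assumed limit of 3 nameservers per resolv.conf, matching the warning above.
MAX_NAMESERVERS = 3

def apply_nameserver_limit(nameservers):
    """Return (applied, omitted) after capping at MAX_NAMESERVERS."""
    return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

# First three entries reproduce the "applied nameserver line" from the log;
# the fourth is a made-up extra to trigger the warning path.
source = ["67.207.67.2", "67.207.67.3", "67.207.67.2", "203.0.113.53"]
applied, omitted = apply_nameserver_limit(source)
assert applied == ["67.207.67.2", "67.207.67.3", "67.207.67.2"]
assert omitted == ["203.0.113.53"]
```

Note the applied line in the log repeats `67.207.67.2`; the cap keeps the first N entries as given, it does not deduplicate.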
May 27 18:05:18.608863 containerd[1521]: time="2025-05-27T18:05:18.608790722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwxdk,Uid:de951c5a-b95d-4984-8420-9eac2ed740ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\"" May 27 18:05:18.610742 kubelet[2669]: E0527 18:05:18.610707 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:05:18.620490 containerd[1521]: time="2025-05-27T18:05:18.620435663Z" level=info msg="CreateContainer within sandbox \"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 18:05:18.630851 containerd[1521]: time="2025-05-27T18:05:18.630797222Z" level=info msg="Container c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042: CDI devices from CRI Config.CDIDevices: []" May 27 18:05:18.645248 containerd[1521]: time="2025-05-27T18:05:18.645181175Z" level=info msg="CreateContainer within sandbox \"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042\"" May 27 18:05:18.647502 containerd[1521]: time="2025-05-27T18:05:18.647462657Z" level=info msg="StartContainer for \"c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042\"" May 27 18:05:18.651364 containerd[1521]: time="2025-05-27T18:05:18.651282851Z" level=info msg="connecting to shim c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042" address="unix:///run/containerd/s/f1a287ad55fc6bc99f6cd41d2bfc08e4cbc60087d8e08c0bf9b428e37466c059" protocol=ttrpc version=3 May 27 18:05:18.679240 systemd[1]: Started cri-containerd-c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042.scope - 
libcontainer container c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042. May 27 18:05:18.732849 containerd[1521]: time="2025-05-27T18:05:18.732720664Z" level=info msg="StartContainer for \"c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042\" returns successfully" May 27 18:05:18.747459 systemd[1]: cri-containerd-c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042.scope: Deactivated successfully. May 27 18:05:18.749825 containerd[1521]: time="2025-05-27T18:05:18.749317686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042\" id:\"c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042\" pid:4307 exited_at:{seconds:1748369118 nanos:748623517}" May 27 18:05:18.750355 containerd[1521]: time="2025-05-27T18:05:18.750042145Z" level=info msg="received exit event container_id:\"c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042\" id:\"c28cc0b893c235581bed93c6672d8e33e43af4708017e94bb58e9edf09026042\" pid:4307 exited_at:{seconds:1748369118 nanos:748623517}" May 27 18:05:19.284209 kubelet[2669]: E0527 18:05:19.283529 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:05:19.289803 containerd[1521]: time="2025-05-27T18:05:19.289546088Z" level=info msg="CreateContainer within sandbox \"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 18:05:19.302386 containerd[1521]: time="2025-05-27T18:05:19.299988201Z" level=info msg="Container 1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07: CDI devices from CRI Config.CDIDevices: []" May 27 18:05:19.310276 containerd[1521]: time="2025-05-27T18:05:19.310225454Z" level=info msg="CreateContainer within sandbox 
\"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07\"" May 27 18:05:19.313997 containerd[1521]: time="2025-05-27T18:05:19.313298008Z" level=info msg="StartContainer for \"1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07\"" May 27 18:05:19.318830 containerd[1521]: time="2025-05-27T18:05:19.317124527Z" level=info msg="connecting to shim 1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07" address="unix:///run/containerd/s/f1a287ad55fc6bc99f6cd41d2bfc08e4cbc60087d8e08c0bf9b428e37466c059" protocol=ttrpc version=3 May 27 18:05:19.354178 systemd[1]: Started cri-containerd-1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07.scope - libcontainer container 1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07. May 27 18:05:19.402716 containerd[1521]: time="2025-05-27T18:05:19.402664678Z" level=info msg="StartContainer for \"1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07\" returns successfully" May 27 18:05:19.415248 systemd[1]: cri-containerd-1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07.scope: Deactivated successfully. 
May 27 18:05:19.420946 containerd[1521]: time="2025-05-27T18:05:19.420837716Z" level=info msg="received exit event container_id:\"1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07\" id:\"1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07\" pid:4353 exited_at:{seconds:1748369119 nanos:420497568}" May 27 18:05:19.421284 containerd[1521]: time="2025-05-27T18:05:19.420846364Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07\" id:\"1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07\" pid:4353 exited_at:{seconds:1748369119 nanos:420497568}" May 27 18:05:19.453001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c53847980fc58514713e1b9ce1d43b4552c500d4f4cec4e3d5ba5855e9c9b07-rootfs.mount: Deactivated successfully. May 27 18:05:20.289904 kubelet[2669]: E0527 18:05:20.289120 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:05:20.296562 containerd[1521]: time="2025-05-27T18:05:20.295804970Z" level=info msg="CreateContainer within sandbox \"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 18:05:20.329917 containerd[1521]: time="2025-05-27T18:05:20.328048884Z" level=info msg="Container 4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e: CDI devices from CRI Config.CDIDevices: []" May 27 18:05:20.335866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330943207.mount: Deactivated successfully. 
May 27 18:05:20.344924 containerd[1521]: time="2025-05-27T18:05:20.344296959Z" level=info msg="CreateContainer within sandbox \"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e\"" May 27 18:05:20.347261 containerd[1521]: time="2025-05-27T18:05:20.347208135Z" level=info msg="StartContainer for \"4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e\"" May 27 18:05:20.350244 containerd[1521]: time="2025-05-27T18:05:20.350176751Z" level=info msg="connecting to shim 4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e" address="unix:///run/containerd/s/f1a287ad55fc6bc99f6cd41d2bfc08e4cbc60087d8e08c0bf9b428e37466c059" protocol=ttrpc version=3 May 27 18:05:20.397764 systemd[1]: Started cri-containerd-4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e.scope - libcontainer container 4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e. May 27 18:05:20.457106 containerd[1521]: time="2025-05-27T18:05:20.457034100Z" level=info msg="StartContainer for \"4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e\" returns successfully" May 27 18:05:20.461751 systemd[1]: cri-containerd-4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e.scope: Deactivated successfully. 
May 27 18:05:20.468208 containerd[1521]: time="2025-05-27T18:05:20.468144371Z" level=info msg="received exit event container_id:\"4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e\" id:\"4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e\" pid:4395 exited_at:{seconds:1748369120 nanos:467830676}" May 27 18:05:20.469374 containerd[1521]: time="2025-05-27T18:05:20.469320615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e\" id:\"4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e\" pid:4395 exited_at:{seconds:1748369120 nanos:467830676}" May 27 18:05:20.498264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bc19cb0fb31a39c794c5d2627029bc1586ba226bf0771fbb4f0daa3aa85f83e-rootfs.mount: Deactivated successfully. May 27 18:05:20.920721 kubelet[2669]: E0527 18:05:20.920633 2669 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 18:05:21.299164 kubelet[2669]: E0527 18:05:21.299007 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 27 18:05:21.309739 containerd[1521]: time="2025-05-27T18:05:21.309455213Z" level=info msg="CreateContainer within sandbox \"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 18:05:21.329936 containerd[1521]: time="2025-05-27T18:05:21.328018834Z" level=info msg="Container 8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3: CDI devices from CRI Config.CDIDevices: []" May 27 18:05:21.341670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395160221.mount: Deactivated successfully. 
May 27 18:05:21.348734 containerd[1521]: time="2025-05-27T18:05:21.348668187Z" level=info msg="CreateContainer within sandbox \"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3\"" May 27 18:05:21.351230 containerd[1521]: time="2025-05-27T18:05:21.351086909Z" level=info msg="StartContainer for \"8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3\"" May 27 18:05:21.356561 containerd[1521]: time="2025-05-27T18:05:21.356109357Z" level=info msg="connecting to shim 8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3" address="unix:///run/containerd/s/f1a287ad55fc6bc99f6cd41d2bfc08e4cbc60087d8e08c0bf9b428e37466c059" protocol=ttrpc version=3 May 27 18:05:21.420170 systemd[1]: Started cri-containerd-8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3.scope - libcontainer container 8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3. May 27 18:05:21.473516 systemd[1]: cri-containerd-8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3.scope: Deactivated successfully. 
May 27 18:05:21.476174 containerd[1521]: time="2025-05-27T18:05:21.475932449Z" level=info msg="received exit event container_id:\"8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3\" id:\"8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3\" pid:4436 exited_at:{seconds:1748369121 nanos:475392316}" May 27 18:05:21.477131 containerd[1521]: time="2025-05-27T18:05:21.477074755Z" level=info msg="StartContainer for \"8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3\" returns successfully" May 27 18:05:21.477302 containerd[1521]: time="2025-05-27T18:05:21.477159258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3\" id:\"8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3\" pid:4436 exited_at:{seconds:1748369121 nanos:475392316}" May 27 18:05:21.510746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b9c00ca6be2a134a1020c04d9aaf5a7c691942d03625f917431241c21daa0d3-rootfs.mount: Deactivated successfully. 
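The entries from sandbox creation onward show Cilium's init containers running strictly in sequence: `mount-cgroup`, `apply-sysctl-overwrites`, `mount-bpf-fs`, then `clean-cilium-state` are each created in the same sandbox, started, and observed to exit (scope deactivated, rootfs unmounted) before the next is created. A small sketch of that sequential, fail-fast ordering — container names are from the log, the runner itself is hypothetical:

```python
# Init containers run one at a time; a non-zero exit blocks everything after it.
def run_init_containers(steps, run_step):
    """Run steps in order; return (completed, first_failed_or_None)."""
    completed = []
    for name in steps:
        exit_code = run_step(name)
        if exit_code != 0:
            return completed, name  # later steps are never created
        completed.append(name)
    return completed, None

steps = ["mount-cgroup", "apply-sysctl-overwrites",
         "mount-bpf-fs", "clean-cilium-state"]

# All steps succeed, as in the log above:
done, failed = run_init_containers(steps, run_step=lambda name: 0)
assert done == steps and failed is None

# A hypothetical failure mid-sequence leaves later steps unrun:
done, failed = run_init_containers(
    steps, run_step=lambda n: 1 if n == "mount-bpf-fs" else 0)
assert done == ["mount-cgroup", "apply-sysctl-overwrites"]
assert failed == "mount-bpf-fs"
```

That ordering is why each `TaskExit` event above appears before the next `CreateContainer` entry: the kubelet will not start `cilium-agent` itself until every init step has exited successfully.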
May 27 18:05:21.676742 kubelet[2669]: I0527 18:05:21.676684 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:05:21.676742 kubelet[2669]: I0527 18:05:21.676741 2669 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:05:21.680143 containerd[1521]: time="2025-05-27T18:05:21.680029617Z" level=info msg="StopPodSandbox for \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\""
May 27 18:05:21.680605 containerd[1521]: time="2025-05-27T18:05:21.680353524Z" level=info msg="TearDown network for sandbox \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" successfully"
May 27 18:05:21.680605 containerd[1521]: time="2025-05-27T18:05:21.680384132Z" level=info msg="StopPodSandbox for \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" returns successfully"
May 27 18:05:21.681546 containerd[1521]: time="2025-05-27T18:05:21.681493093Z" level=info msg="RemovePodSandbox for \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\""
May 27 18:05:21.682900 containerd[1521]: time="2025-05-27T18:05:21.681555260Z" level=info msg="Forcibly stopping sandbox \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\""
May 27 18:05:21.682900 containerd[1521]: time="2025-05-27T18:05:21.681732585Z" level=info msg="TearDown network for sandbox \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" successfully"
May 27 18:05:21.683914 containerd[1521]: time="2025-05-27T18:05:21.683846233Z" level=info msg="Ensure that sandbox b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b in task-service has been cleanup successfully"
May 27 18:05:21.687160 containerd[1521]: time="2025-05-27T18:05:21.687071219Z" level=info msg="RemovePodSandbox \"b70b763a828ca8c19d8a8af02398ebed32c996b32d9b1a6897912c5f63172d0b\" returns successfully"
May 27 18:05:21.688250 containerd[1521]: time="2025-05-27T18:05:21.688209045Z" level=info msg="StopPodSandbox for \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\""
May 27 18:05:21.688462 containerd[1521]: time="2025-05-27T18:05:21.688384363Z" level=info msg="TearDown network for sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" successfully"
May 27 18:05:21.688462 containerd[1521]: time="2025-05-27T18:05:21.688403019Z" level=info msg="StopPodSandbox for \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" returns successfully"
May 27 18:05:21.689263 containerd[1521]: time="2025-05-27T18:05:21.689208540Z" level=info msg="RemovePodSandbox for \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\""
May 27 18:05:21.689263 containerd[1521]: time="2025-05-27T18:05:21.689246481Z" level=info msg="Forcibly stopping sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\""
May 27 18:05:21.689424 containerd[1521]: time="2025-05-27T18:05:21.689375775Z" level=info msg="TearDown network for sandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" successfully"
May 27 18:05:21.690956 containerd[1521]: time="2025-05-27T18:05:21.690919650Z" level=info msg="Ensure that sandbox f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125 in task-service has been cleanup successfully"
May 27 18:05:21.694607 containerd[1521]: time="2025-05-27T18:05:21.694517872Z" level=info msg="RemovePodSandbox \"f09d8b9d384471b0d8c2da0bfa3d514de75dac7649ff07360ace697bff830125\" returns successfully"
May 27 18:05:21.695900 kubelet[2669]: I0527 18:05:21.695729 2669 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 18:05:21.720837 kubelet[2669]: I0527 18:05:21.720788 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:05:21.721269 kubelet[2669]: I0527 18:05:21.721143 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-rwxdk","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"]
May 27 18:05:21.721438 kubelet[2669]: E0527 18:05:21.721418 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rwxdk"
May 27 18:05:21.721675 kubelet[2669]: E0527 18:05:21.721584 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:05:21.721675 kubelet[2669]: E0527 18:05:21.721612 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm"
May 27 18:05:21.721675 kubelet[2669]: E0527 18:05:21.721628 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:05:21.721675 kubelet[2669]: E0527 18:05:21.721641 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:05:21.721675 kubelet[2669]: I0527 18:05:21.721657 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 27 18:05:22.311812 kubelet[2669]: E0527 18:05:22.311126 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:05:22.318133 containerd[1521]: time="2025-05-27T18:05:22.317700080Z" level=info msg="CreateContainer within sandbox \"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 18:05:22.334797 containerd[1521]: time="2025-05-27T18:05:22.334747960Z" level=info msg="Container 15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e: CDI devices from CRI Config.CDIDevices: []"
May 27 18:05:22.347745 containerd[1521]: time="2025-05-27T18:05:22.347626914Z" level=info msg="CreateContainer within sandbox \"27740b54e441610d8546889caaae9f31616bcc47132e6c7142de5f637e62ee4a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e\""
May 27 18:05:22.348902 containerd[1521]: time="2025-05-27T18:05:22.348794278Z" level=info msg="StartContainer for \"15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e\""
May 27 18:05:22.352037 containerd[1521]: time="2025-05-27T18:05:22.351996433Z" level=info msg="connecting to shim 15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e" address="unix:///run/containerd/s/f1a287ad55fc6bc99f6cd41d2bfc08e4cbc60087d8e08c0bf9b428e37466c059" protocol=ttrpc version=3
May 27 18:05:22.398094 systemd[1]: Started cri-containerd-15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e.scope - libcontainer container 15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e.
May 27 18:05:22.453227 containerd[1521]: time="2025-05-27T18:05:22.452902027Z" level=info msg="StartContainer for \"15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e\" returns successfully"
May 27 18:05:22.556516 containerd[1521]: time="2025-05-27T18:05:22.556472220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e\" id:\"c9562cb0337d39bf71c62b4549a863d3158aa7dec825c479208bbd1ffe990033\" pid:4505 exited_at:{seconds:1748369122 nanos:556127725}"
May 27 18:05:22.995184 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 27 18:05:23.322180 kubelet[2669]: E0527 18:05:23.320924 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:05:23.496359 kubelet[2669]: I0527 18:05:23.496284 2669 setters.go:602] "Node became not ready" node="ci-4344.0.0-1-b2ae16c630" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T18:05:23Z","lastTransitionTime":"2025-05-27T18:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 27 18:05:24.449100 kubelet[2669]: E0527 18:05:24.449051 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:05:25.081808 containerd[1521]: time="2025-05-27T18:05:25.081743273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e\" id:\"273096424c695b2fd03a80a7fa30549491c1d07af8cc5ea8918dcd8c4b577cef\" pid:4668 exit_status:1 exited_at:{seconds:1748369125 nanos:81345127}"
May 27 18:05:26.603650 systemd-networkd[1451]: lxc_health: Link UP
May 27 18:05:26.622463 systemd-networkd[1451]: lxc_health: Gained carrier
May 27 18:05:27.436036 containerd[1521]: time="2025-05-27T18:05:27.435848296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e\" id:\"f2aab1f7d0fcc4130c0c178513f1be5d4a369a82a31332420a0ae0f915f87099\" pid:5011 exited_at:{seconds:1748369127 nanos:434727900}"
May 27 18:05:27.816095 systemd-networkd[1451]: lxc_health: Gained IPv6LL
May 27 18:05:28.451846 kubelet[2669]: E0527 18:05:28.451793 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:05:28.483653 kubelet[2669]: I0527 18:05:28.483553 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rwxdk" podStartSLOduration=10.483520998 podStartE2EDuration="10.483520998s" podCreationTimestamp="2025-05-27 18:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:05:23.347452709 +0000 UTC m=+112.759719246" watchObservedRunningTime="2025-05-27 18:05:28.483520998 +0000 UTC m=+117.895787525"
May 27 18:05:29.344802 kubelet[2669]: E0527 18:05:29.344653 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:05:29.624838 containerd[1521]: time="2025-05-27T18:05:29.624706906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e\" id:\"3b987821ccde8976dcdaa57b769b6d5d992141ee9a650b3d9d37852a2024e3bb\" pid:5048 exited_at:{seconds:1748369129 nanos:624122824}"
May 27 18:05:30.347861 kubelet[2669]: E0527 18:05:30.347145 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 27 18:05:31.768828 kubelet[2669]: I0527 18:05:31.768423 2669 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:05:31.768828 kubelet[2669]: I0527 18:05:31.768496 2669 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:05:31.777068 kubelet[2669]: I0527 18:05:31.776378 2669 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 18:05:31.781736 kubelet[2669]: I0527 18:05:31.781454 2669 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler=""
May 27 18:05:31.782757 containerd[1521]: time="2025-05-27T18:05:31.782702129Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 27 18:05:31.786413 containerd[1521]: time="2025-05-27T18:05:31.786316300Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\""
May 27 18:05:31.788271 containerd[1521]: time="2025-05-27T18:05:31.788179114Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\""
May 27 18:05:31.790011 containerd[1521]: time="2025-05-27T18:05:31.789180109Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully"
May 27 18:05:31.790011 containerd[1521]: time="2025-05-27T18:05:31.789406598Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 27 18:05:31.792699 kubelet[2669]: I0527 18:05:31.792623 2669 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler=""
May 27 18:05:31.793929 containerd[1521]: time="2025-05-27T18:05:31.793754195Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 27 18:05:31.795644 containerd[1521]: time="2025-05-27T18:05:31.795506339Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 18:05:31.796900 containerd[1521]: time="2025-05-27T18:05:31.796820964Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\""
May 27 18:05:31.798823 containerd[1521]: time="2025-05-27T18:05:31.798448300Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully"
May 27 18:05:31.798823 containerd[1521]: time="2025-05-27T18:05:31.798589093Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 27 18:05:31.837959 kubelet[2669]: I0527 18:05:31.837860 2669 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:05:31.839058 kubelet[2669]: I0527 18:05:31.838669 2669 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-rwxdk","kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630","kube-system/kube-proxy-chjkm","kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630","kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"]
May 27 18:05:31.839763 kubelet[2669]: E0527 18:05:31.839677 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rwxdk"
May 27 18:05:31.842100 kubelet[2669]: E0527 18:05:31.839722 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-1-b2ae16c630"
May 27 18:05:31.842407 kubelet[2669]: E0527 18:05:31.842270 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-chjkm"
May 27 18:05:31.842407 kubelet[2669]: E0527 18:05:31.842326 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-1-b2ae16c630"
May 27 18:05:31.842407 kubelet[2669]: E0527 18:05:31.842350 2669 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-1-b2ae16c630"
May 27 18:05:31.842407 kubelet[2669]: I0527 18:05:31.842385 2669 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
May 27 18:05:31.912667 containerd[1521]: time="2025-05-27T18:05:31.912293054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15c82ef16e955eb394bee9f4e5fa2ba21fa58ba2951dacd7ecb2655d8c96972e\" id:\"efbf9a124b6ef971269ac543626be57580e1897b313014656a151fd3e0da1284\" pid:5076 exited_at:{seconds:1748369131 nanos:911030228}"
May 27 18:05:31.929740 sshd[4238]: Connection closed by 139.178.68.195 port 42480
May 27 18:05:31.931328 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
May 27 18:05:31.941228 systemd[1]: sshd@27-137.184.189.209:22-139.178.68.195:42480.service: Deactivated successfully.
May 27 18:05:31.946069 systemd[1]: session-28.scope: Deactivated successfully.
May 27 18:05:31.951662 systemd-logind[1490]: Session 28 logged out. Waiting for processes to exit.
May 27 18:05:31.954000 systemd-logind[1490]: Removed session 28.