May 27 17:45:47.908534 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025 May 27 17:45:47.908602 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 17:45:47.908617 kernel: BIOS-provided physical RAM map: May 27 17:45:47.908627 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 27 17:45:47.908636 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 27 17:45:47.908646 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 27 17:45:47.908658 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable May 27 17:45:47.908672 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved May 27 17:45:47.908690 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 27 17:45:47.908700 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 27 17:45:47.908711 kernel: NX (Execute Disable) protection: active May 27 17:45:47.908718 kernel: APIC: Static calls initialized May 27 17:45:47.908725 kernel: SMBIOS 2.8 present. 
May 27 17:45:47.908732 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 May 27 17:45:47.908747 kernel: DMI: Memory slots populated: 1/1 May 27 17:45:47.908755 kernel: Hypervisor detected: KVM May 27 17:45:47.908765 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 27 17:45:47.908773 kernel: kvm-clock: using sched offset of 4740802757 cycles May 27 17:45:47.908782 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 27 17:45:47.908790 kernel: tsc: Detected 2494.140 MHz processor May 27 17:45:47.908799 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 27 17:45:47.908807 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 27 17:45:47.908816 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 May 27 17:45:47.908831 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 27 17:45:47.908839 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 27 17:45:47.908847 kernel: ACPI: Early table checksum verification disabled May 27 17:45:47.908855 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) May 27 17:45:47.908863 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:45:47.908872 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:45:47.908880 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:45:47.908888 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 27 17:45:47.908897 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:45:47.908910 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:45:47.908919 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 17:45:47.908927 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) May 27 17:45:47.908935 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] May 27 17:45:47.908943 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] May 27 17:45:47.908951 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 27 17:45:47.908960 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] May 27 17:45:47.908968 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] May 27 17:45:47.908988 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] May 27 17:45:47.908996 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] May 27 17:45:47.909005 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 27 17:45:47.909013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 27 17:45:47.909022 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] May 27 17:45:47.909031 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] May 27 17:45:47.909045 kernel: Zone ranges: May 27 17:45:47.909054 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 27 17:45:47.909062 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] May 27 17:45:47.909071 kernel: Normal empty May 27 17:45:47.909080 kernel: Device empty May 27 17:45:47.909088 kernel: Movable zone start for each node May 27 17:45:47.909096 kernel: Early memory node ranges May 27 17:45:47.909105 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 27 17:45:47.909113 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] May 27 17:45:47.909127 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] May 27 17:45:47.909136 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 27 17:45:47.909144 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 27 17:45:47.909153 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges May 27 
17:45:47.909162 kernel: ACPI: PM-Timer IO Port: 0x608 May 27 17:45:47.909171 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 27 17:45:47.909181 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 27 17:45:47.909190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 27 17:45:47.909200 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 27 17:45:47.909215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 27 17:45:47.909225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 27 17:45:47.909234 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 27 17:45:47.909243 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 27 17:45:47.909251 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 27 17:45:47.909260 kernel: TSC deadline timer available May 27 17:45:47.909268 kernel: CPU topo: Max. logical packages: 1 May 27 17:45:47.909277 kernel: CPU topo: Max. logical dies: 1 May 27 17:45:47.909285 kernel: CPU topo: Max. dies per package: 1 May 27 17:45:47.909294 kernel: CPU topo: Max. threads per core: 1 May 27 17:45:47.909308 kernel: CPU topo: Num. cores per package: 2 May 27 17:45:47.909316 kernel: CPU topo: Num. 
threads per package: 2 May 27 17:45:47.909325 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs May 27 17:45:47.909334 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 27 17:45:47.909346 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices May 27 17:45:47.909359 kernel: Booting paravirtualized kernel on KVM May 27 17:45:47.909387 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 27 17:45:47.909401 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 27 17:45:47.909414 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 May 27 17:45:47.909435 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 May 27 17:45:47.909449 kernel: pcpu-alloc: [0] 0 1 May 27 17:45:47.909462 kernel: kvm-guest: PV spinlocks disabled, no host support May 27 17:45:47.909475 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 17:45:47.909487 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 27 17:45:47.909499 kernel: random: crng init done May 27 17:45:47.909510 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 17:45:47.909521 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 27 17:45:47.909541 kernel: Fallback order for Node 0: 0 May 27 17:45:47.909554 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524153 May 27 17:45:47.909566 kernel: Policy zone: DMA32 May 27 17:45:47.909579 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 17:45:47.909590 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 27 17:45:47.909602 kernel: Kernel/User page tables isolation: enabled May 27 17:45:47.909614 kernel: ftrace: allocating 40081 entries in 157 pages May 27 17:45:47.909626 kernel: ftrace: allocated 157 pages with 5 groups May 27 17:45:47.909638 kernel: Dynamic Preempt: voluntary May 27 17:45:47.909660 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 17:45:47.909675 kernel: rcu: RCU event tracing is enabled. May 27 17:45:47.909688 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 27 17:45:47.909702 kernel: Trampoline variant of Tasks RCU enabled. May 27 17:45:47.909716 kernel: Rude variant of Tasks RCU enabled. May 27 17:45:47.909728 kernel: Tracing variant of Tasks RCU enabled. May 27 17:45:47.909742 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 27 17:45:47.909755 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 27 17:45:47.909767 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 17:45:47.909791 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 17:45:47.909805 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 27 17:45:47.909818 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 27 17:45:47.909831 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 27 17:45:47.909843 kernel: Console: colour VGA+ 80x25 May 27 17:45:47.909856 kernel: printk: legacy console [tty0] enabled May 27 17:45:47.909870 kernel: printk: legacy console [ttyS0] enabled May 27 17:45:47.909883 kernel: ACPI: Core revision 20240827 May 27 17:45:47.909897 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 27 17:45:47.909936 kernel: APIC: Switch to symmetric I/O mode setup May 27 17:45:47.909965 kernel: x2apic enabled May 27 17:45:47.909980 kernel: APIC: Switched APIC routing to: physical x2apic May 27 17:45:47.910002 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 27 17:45:47.910018 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns May 27 17:45:47.910032 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) May 27 17:45:47.910050 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 27 17:45:47.910063 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 27 17:45:47.910077 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 27 17:45:47.910100 kernel: Spectre V2 : Mitigation: Retpolines May 27 17:45:47.910114 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 27 17:45:47.910127 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 27 17:45:47.910139 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 27 17:45:47.910148 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 27 17:45:47.910157 kernel: MDS: Mitigation: Clear CPU buffers May 27 17:45:47.910166 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 27 17:45:47.910182 kernel: ITS: Mitigation: Aligned branch/return thunks May 27 17:45:47.910191 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 
floating point registers' May 27 17:45:47.910200 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 27 17:45:47.910209 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 27 17:45:47.910218 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 27 17:45:47.910228 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 27 17:45:47.910237 kernel: Freeing SMP alternatives memory: 32K May 27 17:45:47.910245 kernel: pid_max: default: 32768 minimum: 301 May 27 17:45:47.910254 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 17:45:47.910269 kernel: landlock: Up and running. May 27 17:45:47.910278 kernel: SELinux: Initializing. May 27 17:45:47.910286 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 27 17:45:47.910295 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 27 17:45:47.910305 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) May 27 17:45:47.910314 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. May 27 17:45:47.910323 kernel: signal: max sigframe size: 1776 May 27 17:45:47.910332 kernel: rcu: Hierarchical SRCU implementation. May 27 17:45:47.910341 kernel: rcu: Max phase no-delay instances is 400. May 27 17:45:47.910356 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 27 17:45:47.912428 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 27 17:45:47.912470 kernel: smp: Bringing up secondary CPUs ... May 27 17:45:47.912483 kernel: smpboot: x86: Booting SMP configuration: May 27 17:45:47.912504 kernel: .... 
node #0, CPUs: #1 May 27 17:45:47.912519 kernel: smp: Brought up 1 node, 2 CPUs May 27 17:45:47.912535 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) May 27 17:45:47.912551 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 125140K reserved, 0K cma-reserved) May 27 17:45:47.912567 kernel: devtmpfs: initialized May 27 17:45:47.912597 kernel: x86/mm: Memory block size: 128MB May 27 17:45:47.912610 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 17:45:47.912624 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 27 17:45:47.912637 kernel: pinctrl core: initialized pinctrl subsystem May 27 17:45:47.912649 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 17:45:47.912662 kernel: audit: initializing netlink subsys (disabled) May 27 17:45:47.912678 kernel: audit: type=2000 audit(1748367944.469:1): state=initialized audit_enabled=0 res=1 May 27 17:45:47.912694 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 17:45:47.912708 kernel: thermal_sys: Registered thermal governor 'user_space' May 27 17:45:47.912732 kernel: cpuidle: using governor menu May 27 17:45:47.912746 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 17:45:47.912759 kernel: dca service started, version 1.12.1 May 27 17:45:47.912770 kernel: PCI: Using configuration type 1 for base access May 27 17:45:47.912779 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 27 17:45:47.912788 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 17:45:47.912797 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 27 17:45:47.912806 kernel: ACPI: Added _OSI(Module Device) May 27 17:45:47.912815 kernel: ACPI: Added _OSI(Processor Device) May 27 17:45:47.912831 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 17:45:47.912846 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 17:45:47.912861 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 27 17:45:47.912877 kernel: ACPI: Interpreter enabled May 27 17:45:47.912892 kernel: ACPI: PM: (supports S0 S5) May 27 17:45:47.912907 kernel: ACPI: Using IOAPIC for interrupt routing May 27 17:45:47.912921 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 27 17:45:47.912936 kernel: PCI: Using E820 reservations for host bridge windows May 27 17:45:47.912950 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 27 17:45:47.912969 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 27 17:45:47.913224 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 27 17:45:47.913328 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 27 17:45:47.914558 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 27 17:45:47.914600 kernel: acpiphp: Slot [3] registered May 27 17:45:47.914617 kernel: acpiphp: Slot [4] registered May 27 17:45:47.914633 kernel: acpiphp: Slot [5] registered May 27 17:45:47.914649 kernel: acpiphp: Slot [6] registered May 27 17:45:47.914680 kernel: acpiphp: Slot [7] registered May 27 17:45:47.914696 kernel: acpiphp: Slot [8] registered May 27 17:45:47.914711 kernel: acpiphp: Slot [9] registered May 27 17:45:47.914727 kernel: acpiphp: Slot [10] registered May 27 17:45:47.914743 kernel: acpiphp: Slot [11] 
registered May 27 17:45:47.914758 kernel: acpiphp: Slot [12] registered May 27 17:45:47.914774 kernel: acpiphp: Slot [13] registered May 27 17:45:47.914789 kernel: acpiphp: Slot [14] registered May 27 17:45:47.914805 kernel: acpiphp: Slot [15] registered May 27 17:45:47.914826 kernel: acpiphp: Slot [16] registered May 27 17:45:47.914840 kernel: acpiphp: Slot [17] registered May 27 17:45:47.914853 kernel: acpiphp: Slot [18] registered May 27 17:45:47.914862 kernel: acpiphp: Slot [19] registered May 27 17:45:47.914871 kernel: acpiphp: Slot [20] registered May 27 17:45:47.914880 kernel: acpiphp: Slot [21] registered May 27 17:45:47.914890 kernel: acpiphp: Slot [22] registered May 27 17:45:47.914899 kernel: acpiphp: Slot [23] registered May 27 17:45:47.914908 kernel: acpiphp: Slot [24] registered May 27 17:45:47.914922 kernel: acpiphp: Slot [25] registered May 27 17:45:47.914931 kernel: acpiphp: Slot [26] registered May 27 17:45:47.914940 kernel: acpiphp: Slot [27] registered May 27 17:45:47.914949 kernel: acpiphp: Slot [28] registered May 27 17:45:47.914958 kernel: acpiphp: Slot [29] registered May 27 17:45:47.914967 kernel: acpiphp: Slot [30] registered May 27 17:45:47.914976 kernel: acpiphp: Slot [31] registered May 27 17:45:47.914985 kernel: PCI host bridge to bus 0000:00 May 27 17:45:47.915127 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 27 17:45:47.915224 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 27 17:45:47.915309 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 27 17:45:47.915874 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 27 17:45:47.916000 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] May 27 17:45:47.916087 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 27 17:45:47.916237 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint May 27 17:45:47.916354 
kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint May 27 17:45:47.916574 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint May 27 17:45:47.916723 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] May 27 17:45:47.916862 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk May 27 17:45:47.916962 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk May 27 17:45:47.917059 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk May 27 17:45:47.917153 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk May 27 17:45:47.917279 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint May 27 17:45:47.917518 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] May 27 17:45:47.917652 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint May 27 17:45:47.917749 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 27 17:45:47.917882 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 27 17:45:47.918010 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint May 27 17:45:47.918119 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] May 27 17:45:47.918218 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] May 27 17:45:47.918313 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] May 27 17:45:47.918432 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] May 27 17:45:47.918530 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 27 17:45:47.918635 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 27 17:45:47.918781 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] May 27 17:45:47.918924 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] May 27 17:45:47.919034 kernel: pci 0000:00:03.0: 
BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] May 27 17:45:47.919195 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 27 17:45:47.919304 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] May 27 17:45:47.919422 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] May 27 17:45:47.919555 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] May 27 17:45:47.919732 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint May 27 17:45:47.919915 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] May 27 17:45:47.920081 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] May 27 17:45:47.920247 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] May 27 17:45:47.920482 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 27 17:45:47.920648 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] May 27 17:45:47.920787 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] May 27 17:45:47.920920 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] May 27 17:45:47.921098 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 27 17:45:47.921254 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] May 27 17:45:47.921435 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] May 27 17:45:47.921598 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] May 27 17:45:47.921801 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint May 27 17:45:47.921962 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] May 27 17:45:47.922140 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] May 27 17:45:47.922164 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 27 17:45:47.922183 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 27 17:45:47.922201 kernel: ACPI: PCI: Interrupt link LNKC 
configured for IRQ 11 May 27 17:45:47.922219 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 27 17:45:47.922236 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 27 17:45:47.922252 kernel: iommu: Default domain type: Translated May 27 17:45:47.922271 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 27 17:45:47.922288 kernel: PCI: Using ACPI for IRQ routing May 27 17:45:47.922319 kernel: PCI: pci_cache_line_size set to 64 bytes May 27 17:45:47.922335 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 27 17:45:47.922352 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] May 27 17:45:47.922546 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 27 17:45:47.922699 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 27 17:45:47.922839 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 27 17:45:47.922863 kernel: vgaarb: loaded May 27 17:45:47.922881 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 27 17:45:47.922918 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 27 17:45:47.922936 kernel: clocksource: Switched to clocksource kvm-clock May 27 17:45:47.922954 kernel: VFS: Disk quotas dquot_6.6.0 May 27 17:45:47.922972 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 17:45:47.922991 kernel: pnp: PnP ACPI init May 27 17:45:47.923005 kernel: pnp: PnP ACPI: found 4 devices May 27 17:45:47.923021 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 27 17:45:47.923039 kernel: NET: Registered PF_INET protocol family May 27 17:45:47.923055 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 27 17:45:47.923079 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 27 17:45:47.923098 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 27 17:45:47.923116 
kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 27 17:45:47.923134 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 27 17:45:47.923152 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 27 17:45:47.923171 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 27 17:45:47.923189 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 27 17:45:47.923207 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 27 17:45:47.923225 kernel: NET: Registered PF_XDP protocol family May 27 17:45:47.924082 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 27 17:45:47.924258 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 27 17:45:47.924477 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 27 17:45:47.924688 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 27 17:45:47.924873 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] May 27 17:45:47.925055 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 27 17:45:47.925235 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 27 17:45:47.925260 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 27 17:45:47.925480 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 31413 usecs May 27 17:45:47.925508 kernel: PCI: CLS 0 bytes, default 64 May 27 17:45:47.925524 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 27 17:45:47.925542 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns May 27 17:45:47.925558 kernel: Initialise system trusted keyrings May 27 17:45:47.925574 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 27 17:45:47.925590 kernel: Key type asymmetric registered May 27 17:45:47.925605 kernel: Asymmetric key parser 'x509' registered May 27 
17:45:47.925621 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 27 17:45:47.925652 kernel: io scheduler mq-deadline registered May 27 17:45:47.925669 kernel: io scheduler kyber registered May 27 17:45:47.925685 kernel: io scheduler bfq registered May 27 17:45:47.925702 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 27 17:45:47.925719 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 27 17:45:47.925736 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 27 17:45:47.925752 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 27 17:45:47.925768 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 17:45:47.925785 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 27 17:45:47.925811 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 27 17:45:47.925827 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 27 17:45:47.925841 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 27 17:45:47.926067 kernel: rtc_cmos 00:03: RTC can wake from S4 May 27 17:45:47.926092 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 27 17:45:47.926217 kernel: rtc_cmos 00:03: registered as rtc0 May 27 17:45:47.926341 kernel: rtc_cmos 00:03: setting system clock to 2025-05-27T17:45:47 UTC (1748367947) May 27 17:45:47.926508 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 27 17:45:47.926525 kernel: intel_pstate: CPU model not supported May 27 17:45:47.926539 kernel: NET: Registered PF_INET6 protocol family May 27 17:45:47.926551 kernel: Segment Routing with IPv6 May 27 17:45:47.926564 kernel: In-situ OAM (IOAM) with IPv6 May 27 17:45:47.926578 kernel: NET: Registered PF_PACKET protocol family May 27 17:45:47.926591 kernel: Key type dns_resolver registered May 27 17:45:47.926604 kernel: IPI shorthand broadcast: enabled May 27 17:45:47.926618 kernel: sched_clock: Marking stable (3725006817, 
99778067)->(3843676424, -18891540) May 27 17:45:47.926641 kernel: registered taskstats version 1 May 27 17:45:47.926654 kernel: Loading compiled-in X.509 certificates May 27 17:45:47.926666 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c' May 27 17:45:47.926679 kernel: Demotion targets for Node 0: null May 27 17:45:47.926691 kernel: Key type .fscrypt registered May 27 17:45:47.926703 kernel: Key type fscrypt-provisioning registered May 27 17:45:47.926765 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 17:45:47.926784 kernel: ima: Allocated hash algorithm: sha1 May 27 17:45:47.926797 kernel: ima: No architecture policies found May 27 17:45:47.926823 kernel: clk: Disabling unused clocks May 27 17:45:47.926837 kernel: Warning: unable to open an initial console. May 27 17:45:47.926850 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 17:45:47.926864 kernel: Write protecting the kernel read-only data: 24576k May 27 17:45:47.926880 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 17:45:47.926897 kernel: Run /init as init process May 27 17:45:47.926914 kernel: with arguments: May 27 17:45:47.926933 kernel: /init May 27 17:45:47.926951 kernel: with environment: May 27 17:45:47.926978 kernel: HOME=/ May 27 17:45:47.926996 kernel: TERM=linux May 27 17:45:47.927013 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 17:45:47.927035 systemd[1]: Successfully made /usr/ read-only. May 27 17:45:47.927060 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:45:47.927082 systemd[1]: Detected virtualization kvm. 
May 27 17:45:47.927101 systemd[1]: Detected architecture x86-64. May 27 17:45:47.927122 systemd[1]: Running in initrd. May 27 17:45:47.927149 systemd[1]: No hostname configured, using default hostname. May 27 17:45:47.927169 systemd[1]: Hostname set to . May 27 17:45:47.927189 systemd[1]: Initializing machine ID from VM UUID. May 27 17:45:47.927207 systemd[1]: Queued start job for default target initrd.target. May 27 17:45:47.927227 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:45:47.927247 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:45:47.927267 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 17:45:47.927287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:45:47.927316 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 17:45:47.927343 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 17:45:47.927388 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 17:45:47.927417 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 17:45:47.927437 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:45:47.927457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:45:47.927477 systemd[1]: Reached target paths.target - Path Units. May 27 17:45:47.927497 systemd[1]: Reached target slices.target - Slice Units. May 27 17:45:47.927517 systemd[1]: Reached target swap.target - Swaps. May 27 17:45:47.927537 systemd[1]: Reached target timers.target - Timer Units. 
May 27 17:45:47.927557 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:45:47.927576 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:45:47.927603 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:45:47.927623 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 17:45:47.927642 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:45:47.927660 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:45:47.927680 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:45:47.927699 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:45:47.927719 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 17:45:47.927738 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:45:47.927765 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 17:45:47.927784 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 17:45:47.927803 systemd[1]: Starting systemd-fsck-usr.service...
May 27 17:45:47.927821 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:45:47.927841 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:45:47.927859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:47.927879 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 17:45:47.927908 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:45:47.927928 systemd[1]: Finished systemd-fsck-usr.service.
May 27 17:45:47.927947 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:45:47.928038 systemd-journald[211]: Collecting audit messages is disabled.
May 27 17:45:47.928096 systemd-journald[211]: Journal started
May 27 17:45:47.928136 systemd-journald[211]: Runtime Journal (/run/log/journal/7ac7d7e803b54049b03dd2f7e33bfcbb) is 4.9M, max 39.5M, 34.6M free.
May 27 17:45:47.904851 systemd-modules-load[213]: Inserted module 'overlay'
May 27 17:45:47.930525 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:45:47.939748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:45:47.950431 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:45:47.955613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:45:47.996152 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 17:45:47.996213 kernel: Bridge firewalling registered
May 27 17:45:47.968098 systemd-modules-load[213]: Inserted module 'br_netfilter'
May 27 17:45:47.997397 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:45:47.998198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:48.002625 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 17:45:48.011709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:45:48.018132 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 17:45:48.023443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:45:48.028595 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:45:48.042293 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:45:48.044138 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 17:45:48.046653 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:45:48.058213 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:45:48.078191 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:45:48.119195 systemd-resolved[251]: Positive Trust Anchors:
May 27 17:45:48.120126 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:45:48.120197 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:45:48.127909 systemd-resolved[251]: Defaulting to hostname 'linux'.
May 27 17:45:48.131113 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:45:48.131763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:45:48.214455 kernel: SCSI subsystem initialized
May 27 17:45:48.226439 kernel: Loading iSCSI transport class v2.0-870.
May 27 17:45:48.240522 kernel: iscsi: registered transport (tcp)
May 27 17:45:48.268476 kernel: iscsi: registered transport (qla4xxx)
May 27 17:45:48.268590 kernel: QLogic iSCSI HBA Driver
May 27 17:45:48.297865 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:45:48.319150 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:45:48.320159 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:45:48.393731 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 17:45:48.395926 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 17:45:48.457428 kernel: raid6: avx2x4 gen() 15525 MB/s
May 27 17:45:48.474416 kernel: raid6: avx2x2 gen() 16028 MB/s
May 27 17:45:48.491679 kernel: raid6: avx2x1 gen() 11579 MB/s
May 27 17:45:48.491782 kernel: raid6: using algorithm avx2x2 gen() 16028 MB/s
May 27 17:45:48.509547 kernel: raid6: .... xor() 19866 MB/s, rmw enabled
May 27 17:45:48.509651 kernel: raid6: using avx2x2 recovery algorithm
May 27 17:45:48.533414 kernel: xor: automatically using best checksumming function avx
May 27 17:45:48.728421 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 17:45:48.737593 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:45:48.740096 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:45:48.773790 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 27 17:45:48.783143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:45:48.787589 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 17:45:48.826871 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
May 27 17:45:48.867975 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:45:48.871592 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:45:48.951422 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:45:48.955599 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 17:45:49.042395 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 27 17:45:49.052505 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 27 17:45:49.064424 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
May 27 17:45:49.067323 kernel: scsi host0: Virtio SCSI HBA
May 27 17:45:49.075436 kernel: cryptd: max_cpu_qlen set to 1000
May 27 17:45:49.081242 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 17:45:49.081313 kernel: GPT:9289727 != 125829119
May 27 17:45:49.081353 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 17:45:49.081392 kernel: GPT:9289727 != 125829119
May 27 17:45:49.081408 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 17:45:49.081421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:45:49.088511 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 27 17:45:49.101704 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
May 27 17:45:49.117396 kernel: AES CTR mode by8 optimization enabled
May 27 17:45:49.121403 kernel: libata version 3.00 loaded.
May 27 17:45:49.138776 kernel: ata_piix 0000:00:01.1: version 2.13
May 27 17:45:49.142918 kernel: scsi host1: ata_piix
May 27 17:45:49.143176 kernel: scsi host2: ata_piix
May 27 17:45:49.144151 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
May 27 17:45:49.145386 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
May 27 17:45:49.156181 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:45:49.156329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:49.159887 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:49.164539 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 17:45:49.161944 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:49.164877 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:45:49.179458 kernel: ACPI: bus type USB registered
May 27 17:45:49.181910 kernel: usbcore: registered new interface driver usbfs
May 27 17:45:49.181973 kernel: usbcore: registered new interface driver hub
May 27 17:45:49.187660 kernel: usbcore: registered new device driver usb
May 27 17:45:49.247324 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 27 17:45:49.261656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:49.276511 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 27 17:45:49.285861 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 27 17:45:49.286321 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 27 17:45:49.296624 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:45:49.298242 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 17:45:49.320985 disk-uuid[610]: Primary Header is updated.
May 27 17:45:49.320985 disk-uuid[610]: Secondary Entries is updated.
May 27 17:45:49.320985 disk-uuid[610]: Secondary Header is updated.
May 27 17:45:49.335529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:45:49.337314 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 27 17:45:49.337603 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 27 17:45:49.339592 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 27 17:45:49.339930 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 27 17:45:49.342390 kernel: hub 1-0:1.0: USB hub found
May 27 17:45:49.343398 kernel: hub 1-0:1.0: 2 ports detected
May 27 17:45:49.466788 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 17:45:49.482693 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:45:49.483589 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:45:49.484977 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:45:49.486537 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 17:45:49.533115 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:45:50.346412 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:45:50.348068 disk-uuid[611]: The operation has completed successfully.
May 27 17:45:50.399542 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 17:45:50.399652 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 17:45:50.432774 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 17:45:50.459748 sh[635]: Success
May 27 17:45:50.481525 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 17:45:50.481624 kernel: device-mapper: uevent: version 1.0.3
May 27 17:45:50.483244 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 17:45:50.494450 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
May 27 17:45:50.564729 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 17:45:50.570561 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 17:45:50.583077 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 17:45:50.595406 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 17:45:50.598452 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (647)
May 27 17:45:50.600536 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd
May 27 17:45:50.600629 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 17:45:50.601731 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 17:45:50.608843 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 17:45:50.609765 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:45:50.610561 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 17:45:50.612604 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 17:45:50.614028 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 17:45:50.649413 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (678)
May 27 17:45:50.652462 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:45:50.654814 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:45:50.655135 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:45:50.664507 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:45:50.666710 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 17:45:50.669223 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 17:45:50.806045 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:45:50.811884 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:45:50.873844 systemd-networkd[817]: lo: Link UP
May 27 17:45:50.873859 systemd-networkd[817]: lo: Gained carrier
May 27 17:45:50.877568 systemd-networkd[817]: Enumeration completed
May 27 17:45:50.877707 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:45:50.878931 systemd[1]: Reached target network.target - Network.
May 27 17:45:50.879068 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 27 17:45:50.879073 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 27 17:45:50.880047 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:45:50.880054 systemd-networkd[817]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:45:50.880881 systemd-networkd[817]: eth0: Link UP
May 27 17:45:50.880885 systemd-networkd[817]: eth0: Gained carrier
May 27 17:45:50.880896 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 27 17:45:50.883697 systemd-networkd[817]: eth1: Link UP
May 27 17:45:50.883702 systemd-networkd[817]: eth1: Gained carrier
May 27 17:45:50.883718 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:45:50.897485 systemd-networkd[817]: eth0: DHCPv4 address 143.198.147.228/20, gateway 143.198.144.1 acquired from 169.254.169.253
May 27 17:45:50.897503 ignition[723]: Ignition 2.21.0
May 27 17:45:50.897512 ignition[723]: Stage: fetch-offline
May 27 17:45:50.897550 ignition[723]: no configs at "/usr/lib/ignition/base.d"
May 27 17:45:50.900107 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:45:50.897560 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 17:45:50.897654 ignition[723]: parsed url from cmdline: ""
May 27 17:45:50.902512 systemd-networkd[817]: eth1: DHCPv4 address 10.124.0.22/20 acquired from 169.254.169.253
May 27 17:45:50.897658 ignition[723]: no config URL provided
May 27 17:45:50.897664 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:45:50.897671 ignition[723]: no config at "/usr/lib/ignition/user.ign"
May 27 17:45:50.897677 ignition[723]: failed to fetch config: resource requires networking
May 27 17:45:50.897871 ignition[723]: Ignition finished successfully
May 27 17:45:50.907617 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 17:45:50.949559 ignition[829]: Ignition 2.21.0
May 27 17:45:50.949580 ignition[829]: Stage: fetch
May 27 17:45:50.949826 ignition[829]: no configs at "/usr/lib/ignition/base.d"
May 27 17:45:50.949841 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 17:45:50.949957 ignition[829]: parsed url from cmdline: ""
May 27 17:45:50.949961 ignition[829]: no config URL provided
May 27 17:45:50.949967 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:45:50.949976 ignition[829]: no config at "/usr/lib/ignition/user.ign"
May 27 17:45:50.950013 ignition[829]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 27 17:45:50.964834 ignition[829]: GET result: OK
May 27 17:45:50.965578 ignition[829]: parsing config with SHA512: f3c16bec08598858e1007feabe6eca0379beb65784e358903a699cdefd2cb15c335b9521a6127ab7d631873329e6791c76883c2837c6cf635e614881ac4ee68a
May 27 17:45:50.979484 unknown[829]: fetched base config from "system"
May 27 17:45:50.979498 unknown[829]: fetched base config from "system"
May 27 17:45:50.979883 ignition[829]: fetch: fetch complete
May 27 17:45:50.979504 unknown[829]: fetched user config from "digitalocean"
May 27 17:45:50.979888 ignition[829]: fetch: fetch passed
May 27 17:45:50.979961 ignition[829]: Ignition finished successfully
May 27 17:45:50.983083 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 17:45:50.986606 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 17:45:51.034146 ignition[836]: Ignition 2.21.0
May 27 17:45:51.034165 ignition[836]: Stage: kargs
May 27 17:45:51.034354 ignition[836]: no configs at "/usr/lib/ignition/base.d"
May 27 17:45:51.034385 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 17:45:51.035205 ignition[836]: kargs: kargs passed
May 27 17:45:51.035277 ignition[836]: Ignition finished successfully
May 27 17:45:51.036728 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 17:45:51.038995 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 17:45:51.078986 ignition[843]: Ignition 2.21.0
May 27 17:45:51.079008 ignition[843]: Stage: disks
May 27 17:45:51.079244 ignition[843]: no configs at "/usr/lib/ignition/base.d"
May 27 17:45:51.079259 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 17:45:51.081226 ignition[843]: disks: disks passed
May 27 17:45:51.081334 ignition[843]: Ignition finished successfully
May 27 17:45:51.084894 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 17:45:51.085606 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 17:45:51.086117 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 17:45:51.087084 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:45:51.087931 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:45:51.088894 systemd[1]: Reached target basic.target - Basic System.
May 27 17:45:51.090908 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 17:45:51.126511 systemd-fsck[852]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 17:45:51.129334 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 17:45:51.131534 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 17:45:51.272620 kernel: EXT4-fs (vda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none.
May 27 17:45:51.273823 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 17:45:51.274999 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 17:45:51.277432 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:45:51.280575 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 17:45:51.282550 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
May 27 17:45:51.293443 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 27 17:45:51.294840 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 17:45:51.295696 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:45:51.301496 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 17:45:51.304143 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 17:45:51.338807 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (860)
May 27 17:45:51.346410 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:45:51.348574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:45:51.348670 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:45:51.379640 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:45:51.397999 initrd-setup-root[891]: cut: /sysroot/etc/passwd: No such file or directory
May 27 17:45:51.408180 coreos-metadata[862]: May 27 17:45:51.407 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 27 17:45:51.415405 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory
May 27 17:45:51.418084 coreos-metadata[863]: May 27 17:45:51.417 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 27 17:45:51.422132 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory
May 27 17:45:51.424712 coreos-metadata[862]: May 27 17:45:51.423 INFO Fetch successful
May 27 17:45:51.434050 initrd-setup-root[912]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 17:45:51.434775 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
May 27 17:45:51.436819 coreos-metadata[863]: May 27 17:45:51.434 INFO Fetch successful
May 27 17:45:51.436498 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
May 27 17:45:51.444585 coreos-metadata[863]: May 27 17:45:51.444 INFO wrote hostname ci-4344.0.0-f-2f5fe7c465 to /sysroot/etc/hostname
May 27 17:45:51.446257 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 17:45:51.564957 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 17:45:51.567832 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 17:45:51.569926 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 17:45:51.591472 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:45:51.598121 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 17:45:51.626066 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 17:45:51.635854 ignition[982]: INFO : Ignition 2.21.0
May 27 17:45:51.635854 ignition[982]: INFO : Stage: mount
May 27 17:45:51.635854 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:45:51.635854 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 17:45:51.638524 ignition[982]: INFO : mount: mount passed
May 27 17:45:51.638524 ignition[982]: INFO : Ignition finished successfully
May 27 17:45:51.639882 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 17:45:51.642309 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 17:45:51.669488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:45:51.696721 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (994)
May 27 17:45:51.699532 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:45:51.699631 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:45:51.700554 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:45:51.706167 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:45:51.746115 ignition[1011]: INFO : Ignition 2.21.0
May 27 17:45:51.746115 ignition[1011]: INFO : Stage: files
May 27 17:45:51.747671 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:45:51.747671 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 17:45:51.750724 ignition[1011]: DEBUG : files: compiled without relabeling support, skipping
May 27 17:45:51.750724 ignition[1011]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 17:45:51.750724 ignition[1011]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 17:45:51.753978 ignition[1011]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 17:45:51.753978 ignition[1011]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 17:45:51.755266 ignition[1011]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 17:45:51.754453 unknown[1011]: wrote ssh authorized keys file for user: core
May 27 17:45:51.758252 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 27 17:45:51.759362 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 27 17:45:51.915581 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 17:45:52.195600 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 27 17:45:52.195600 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 17:45:52.197551 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 27 17:45:52.364656 systemd-networkd[817]: eth1: Gained IPv6LL
May 27 17:45:52.684938 systemd-networkd[817]: eth0: Gained IPv6LL
May 27 17:45:52.691928 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 27 17:45:52.757847 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 17:45:52.757847 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 17:45:52.764291 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 27 17:45:53.268293 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 27 17:45:53.564849 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 17:45:53.564849 ignition[1011]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 27 17:45:53.567634 ignition[1011]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:45:53.571718 ignition[1011]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:45:53.571718 ignition[1011]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 27 17:45:53.571718 ignition[1011]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 27 17:45:53.571718 ignition[1011]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 27 17:45:53.571718 ignition[1011]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:45:53.571718 ignition[1011]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:45:53.571718 ignition[1011]: INFO : files: files passed
May 27 17:45:53.571718 ignition[1011]: INFO : Ignition finished successfully
May 27 17:45:53.574843 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 17:45:53.578498 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 17:45:53.581620 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 17:45:53.598749 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 17:45:53.598928 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 17:45:53.611815 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:45:53.611815 initrd-setup-root-after-ignition[1041]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:45:53.613584 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:45:53.615623 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:45:53.617147 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 17:45:53.618628 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 17:45:53.680614 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 17:45:53.680787 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 17:45:53.682077 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 17:45:53.682845 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 17:45:53.683668 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 17:45:53.685030 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 17:45:53.712750 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:45:53.715813 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 17:45:53.747389 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 17:45:53.748637 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:45:53.749242 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:45:53.749976 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:45:53.750156 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:45:53.750984 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:45:53.751927 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:45:53.752635 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:45:53.753619 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:45:53.754311 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:45:53.755048 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:45:53.755962 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:45:53.756880 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:45:53.757848 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:45:53.758591 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:45:53.759204 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:45:53.759771 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:45:53.759961 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:45:53.760873 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:45:53.761842 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:45:53.762609 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:45:53.762840 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:45:53.763586 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:45:53.763729 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:45:53.765448 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:45:53.765606 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:45:53.766613 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:45:53.766785 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:45:53.767596 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 27 17:45:53.767732 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 17:45:53.769814 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:45:53.772571 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:45:53.774618 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:45:53.774837 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:45:53.783182 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 17:45:53.783993 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:45:53.796749 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 17:45:53.796930 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 17:45:53.812163 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 17:45:53.820543 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 17:45:53.820682 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 17:45:53.827875 ignition[1065]: INFO : Ignition 2.21.0
May 27 17:45:53.827875 ignition[1065]: INFO : Stage: umount
May 27 17:45:53.829515 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:45:53.829515 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 17:45:53.832436 ignition[1065]: INFO : umount: umount passed
May 27 17:45:53.832436 ignition[1065]: INFO : Ignition finished successfully
May 27 17:45:53.832969 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 17:45:53.833149 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 17:45:53.834982 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 17:45:53.835191 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 17:45:53.836553 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 17:45:53.836656 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 17:45:53.837433 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 17:45:53.837512 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 17:45:53.838223 systemd[1]: Stopped target network.target - Network.
May 27 17:45:53.838936 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 17:45:53.839011 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:45:53.839964 systemd[1]: Stopped target paths.target - Path Units.
May 27 17:45:53.841007 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 17:45:53.842487 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:45:53.843360 systemd[1]: Stopped target slices.target - Slice Units.
May 27 17:45:53.844439 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 17:45:53.845431 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 17:45:53.845493 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:45:53.846109 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 17:45:53.846146 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:45:53.846881 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 17:45:53.846989 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 17:45:53.847604 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 17:45:53.847648 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 17:45:53.848508 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 17:45:53.848576 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 17:45:53.849279 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 17:45:53.850010 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 17:45:53.854836 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 17:45:53.854982 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 17:45:53.860890 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 17:45:53.861213 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 17:45:53.861326 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 17:45:53.863064 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 17:45:53.864137 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 17:45:53.864944 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 17:45:53.865015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:45:53.867151 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 17:45:53.867654 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 17:45:53.867733 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:45:53.868453 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:45:53.868527 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:45:53.869546 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 17:45:53.869593 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 17:45:53.869962 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 17:45:53.870005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:45:53.871021 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:45:53.874777 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:45:53.874863 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 17:45:53.888201 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 17:45:53.888686 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:45:53.890813 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 17:45:53.890901 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 17:45:53.891544 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 17:45:53.891592 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:45:53.892822 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 17:45:53.892900 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:45:53.893790 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 17:45:53.893860 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 17:45:53.894401 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 17:45:53.894460 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:45:53.899671 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 17:45:53.900222 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 17:45:53.900351 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:45:53.901879 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 17:45:53.901972 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:45:53.905230 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 17:45:53.905328 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:45:53.908846 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:45:53.908939 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:45:53.909907 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:45:53.909991 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:53.914029 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 17:45:53.914140 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 17:45:53.914199 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 17:45:53.914262 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:45:53.914857 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 17:45:53.919545 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 17:45:53.930517 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 17:45:53.930675 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 17:45:53.931964 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 17:45:53.934206 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 17:45:53.961626 systemd[1]: Switching root.
May 27 17:45:54.004743 systemd-journald[211]: Journal stopped
May 27 17:45:55.303068 systemd-journald[211]: Received SIGTERM from PID 1 (systemd).
May 27 17:45:55.303138 kernel: SELinux: policy capability network_peer_controls=1
May 27 17:45:55.303175 kernel: SELinux: policy capability open_perms=1
May 27 17:45:55.303191 kernel: SELinux: policy capability extended_socket_class=1
May 27 17:45:55.303213 kernel: SELinux: policy capability always_check_network=0
May 27 17:45:55.303229 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 17:45:55.303256 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 17:45:55.303283 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 17:45:55.303309 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 17:45:55.303326 kernel: SELinux: policy capability userspace_initial_context=0
May 27 17:45:55.303343 kernel: audit: type=1403 audit(1748367954.176:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 17:45:55.309454 systemd[1]: Successfully loaded SELinux policy in 52.856ms.
May 27 17:45:55.309519 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.611ms.
May 27 17:45:55.309541 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:45:55.309561 systemd[1]: Detected virtualization kvm.
May 27 17:45:55.309581 systemd[1]: Detected architecture x86-64.
May 27 17:45:55.309600 systemd[1]: Detected first boot.
May 27 17:45:55.309620 systemd[1]: Hostname set to .
May 27 17:45:55.309639 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:45:55.309674 zram_generator::config[1108]: No configuration found.
May 27 17:45:55.309700 kernel: Guest personality initialized and is inactive
May 27 17:45:55.309723 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 17:45:55.309741 kernel: Initialized host personality
May 27 17:45:55.309767 kernel: NET: Registered PF_VSOCK protocol family
May 27 17:45:55.309786 systemd[1]: Populated /etc with preset unit settings.
May 27 17:45:55.309808 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 17:45:55.309829 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 17:45:55.309850 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 17:45:55.309888 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 17:45:55.309910 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 17:45:55.309930 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 17:45:55.309951 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 17:45:55.309978 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 17:45:55.310006 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 17:45:55.310027 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 17:45:55.310047 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 17:45:55.310076 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 17:45:55.310097 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:45:55.310119 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:45:55.310140 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 17:45:55.310161 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 17:45:55.310182 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 17:45:55.310212 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:45:55.310234 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 17:45:55.310255 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:45:55.310277 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:45:55.310300 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 17:45:55.310323 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 17:45:55.310347 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 17:45:55.310385 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 17:45:55.310407 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:45:55.310435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:45:55.310455 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:45:55.310476 systemd[1]: Reached target swap.target - Swaps.
May 27 17:45:55.310495 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 17:45:55.310533 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 17:45:55.310554 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 17:45:55.310576 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:45:55.310597 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:45:55.310618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:45:55.310651 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 17:45:55.310681 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 17:45:55.310702 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 17:45:55.310724 systemd[1]: Mounting media.mount - External Media Directory...
May 27 17:45:55.310745 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:55.310767 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 17:45:55.310786 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 17:45:55.310808 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 17:45:55.310832 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 17:45:55.310886 systemd[1]: Reached target machines.target - Containers.
May 27 17:45:55.310908 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 17:45:55.310930 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:45:55.310951 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:45:55.310974 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 17:45:55.310995 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:45:55.311017 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:45:55.311038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:45:55.311059 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 17:45:55.311086 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:45:55.311109 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 17:45:55.311130 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 17:45:55.311151 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 17:45:55.311172 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 17:45:55.311194 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 17:45:55.311224 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:45:55.311246 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:45:55.311267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:45:55.311288 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:45:55.311310 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 17:45:55.311333 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 17:45:55.311361 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:45:55.320521 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 17:45:55.320561 systemd[1]: Stopped verity-setup.service.
May 27 17:45:55.320583 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:55.320603 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 17:45:55.320622 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 17:45:55.320654 systemd[1]: Mounted media.mount - External Media Directory.
May 27 17:45:55.320674 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 17:45:55.320694 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 17:45:55.320714 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 17:45:55.320734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:45:55.320754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:45:55.320773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:45:55.320791 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:45:55.320810 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:45:55.320841 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:45:55.320860 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 17:45:55.320879 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:45:55.320898 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 17:45:55.320918 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 17:45:55.320939 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:45:55.320959 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 17:45:55.320980 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 17:45:55.320999 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:45:55.321024 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 17:45:55.321043 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 17:45:55.321061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:45:55.321080 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:45:55.321099 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:45:55.321195 systemd-journald[1178]: Collecting audit messages is disabled.
May 27 17:45:55.321245 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:45:55.321268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:45:55.321296 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:45:55.321314 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:45:55.321337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:45:55.321359 systemd-journald[1178]: Journal started
May 27 17:45:55.321427 systemd-journald[1178]: Runtime Journal (/run/log/journal/7ac7d7e803b54049b03dd2f7e33bfcbb) is 4.9M, max 39.5M, 34.6M free.
May 27 17:45:55.328008 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:45:54.911248 systemd[1]: Queued start job for default target multi-user.target.
May 27 17:45:54.937633 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 17:45:54.938125 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 17:45:55.333548 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:45:55.384423 kernel: loop: module loaded
May 27 17:45:55.383559 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 17:45:55.394414 kernel: loop0: detected capacity change from 0 to 146240
May 27 17:45:55.402902 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:45:55.403251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:45:55.404216 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:45:55.431666 systemd-journald[1178]: Time spent on flushing to /var/log/journal/7ac7d7e803b54049b03dd2f7e33bfcbb is 116.587ms for 1007 entries.
May 27 17:45:55.431666 systemd-journald[1178]: System Journal (/var/log/journal/7ac7d7e803b54049b03dd2f7e33bfcbb) is 8M, max 195.6M, 187.6M free.
May 27 17:45:55.616891 systemd-journald[1178]: Received client request to flush runtime journal.
May 27 17:45:55.617045 kernel: fuse: init (API version 7.41)
May 27 17:45:55.617080 kernel: ACPI: bus type drm_connector registered
May 27 17:45:55.617104 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:45:55.617140 kernel: loop1: detected capacity change from 0 to 8
May 27 17:45:55.617168 kernel: loop2: detected capacity change from 0 to 221472
May 27 17:45:55.428494 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:45:55.448834 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:45:55.455827 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 17:45:55.458513 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 17:45:55.471066 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:45:55.472188 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 27 17:45:55.472210 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 27 17:45:55.494288 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:45:55.511995 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:45:55.512339 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:45:55.528504 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 17:45:55.558763 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:45:55.568782 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:45:55.611924 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:45:55.621488 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:45:55.661011 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:45:55.671620 kernel: loop3: detected capacity change from 0 to 113872
May 27 17:45:55.737363 kernel: loop4: detected capacity change from 0 to 146240
May 27 17:45:55.778422 kernel: loop5: detected capacity change from 0 to 8
May 27 17:45:55.778506 kernel: loop6: detected capacity change from 0 to 221472
May 27 17:45:55.776851 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:45:55.785612 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:45:55.804435 kernel: loop7: detected capacity change from 0 to 113872
May 27 17:45:55.821724 (sd-merge)[1255]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
May 27 17:45:55.822287 (sd-merge)[1255]: Merged extensions into '/usr'.
May 27 17:45:55.830496 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
May 27 17:45:55.830516 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
May 27 17:45:55.834081 systemd[1]: Reload requested from client PID 1208 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:45:55.834107 systemd[1]: Reloading...
May 27 17:45:56.040423 zram_generator::config[1285]: No configuration found.
May 27 17:45:56.312467 ldconfig[1205]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:45:56.334275 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:45:56.458884 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:45:56.460182 systemd[1]: Reloading finished in 625 ms.
May 27 17:45:56.486632 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:45:56.487707 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:45:56.488959 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:45:56.498890 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 17:45:56.511597 systemd[1]: Starting ensure-sysext.service...
May 27 17:45:56.520400 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:45:56.540967 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:45:56.560701 systemd[1]: Reload requested from client PID 1330 ('systemctl') (unit ensure-sysext.service)...
May 27 17:45:56.560722 systemd[1]: Reloading...
May 27 17:45:56.617117 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:45:56.619156 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:45:56.619594 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:45:56.619889 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:45:56.623403 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:45:56.623960 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
May 27 17:45:56.624179 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
May 27 17:45:56.637291 systemd-tmpfiles[1331]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:45:56.638434 systemd-tmpfiles[1331]: Skipping /boot
May 27 17:45:56.697752 systemd-tmpfiles[1331]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:45:56.698444 systemd-tmpfiles[1331]: Skipping /boot
May 27 17:45:56.763423 zram_generator::config[1362]: No configuration found.
May 27 17:45:56.939117 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:45:57.043700 systemd[1]: Reloading finished in 482 ms.
May 27 17:45:57.069718 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:45:57.077013 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:45:57.086628 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:45:57.090845 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:45:57.094670 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:45:57.104653 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:45:57.109726 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:45:57.115749 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:45:57.128292 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:57.129759 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:45:57.134870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:45:57.140487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:45:57.151906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:45:57.153896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:45:57.154069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:45:57.159277 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:45:57.160090 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:57.167286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:57.168691 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:45:57.168995 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:45:57.169157 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:45:57.169319 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:57.182640 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:45:57.183014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:45:57.190185 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:57.190714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:45:57.199970 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:45:57.200948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:45:57.201549 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:45:57.202322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:45:57.204846 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:45:57.217691 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:45:57.219530 systemd[1]: Finished ensure-sysext.service.
May 27 17:45:57.220391 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:45:57.236899 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 17:45:57.249768 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:45:57.250507 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:45:57.257597 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:45:57.265227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:45:57.265529 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:45:57.266115 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:45:57.275005 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:45:57.278782 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:45:57.291722 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:45:57.292934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:45:57.294007 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:45:57.298641 augenrules[1450]: No rules
May 27 17:45:57.303532 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:45:57.305050 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:45:57.314827 systemd-udevd[1408]: Using default interface naming scheme 'v255'.
May 27 17:45:57.336760 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:45:57.366520 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:45:57.373715 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:45:57.414724 systemd-resolved[1407]: Positive Trust Anchors:
May 27 17:45:57.414740 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:45:57.414779 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:45:57.421112 systemd-resolved[1407]: Using system hostname 'ci-4344.0.0-f-2f5fe7c465'.
May 27 17:45:57.424473 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:45:57.425942 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:45:57.454477 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 17:45:57.455661 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:45:57.456812 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:45:57.457867 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:45:57.459344 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 17:45:57.459757 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:45:57.460695 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:45:57.460749 systemd[1]: Reached target paths.target - Path Units.
May 27 17:45:57.461182 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:45:57.462100 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:45:57.463084 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:45:57.463960 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:45:57.466162 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:45:57.470362 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:45:57.478146 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:45:57.480769 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:45:57.481265 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:45:57.490834 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 17:45:57.492264 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:45:57.495763 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:45:57.501749 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:45:57.503544 systemd[1]: Reached target basic.target - Basic System.
May 27 17:45:57.504124 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:45:57.504160 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:45:57.508062 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 17:45:57.514754 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:45:57.519720 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:45:57.523861 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:45:57.529965 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:45:57.530915 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:45:57.547279 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 17:45:57.554707 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:45:57.568402 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:45:57.577750 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:45:57.595674 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:45:57.602704 jq[1488]: false
May 27 17:45:57.607425 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:45:57.608885 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 17:45:57.612803 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:45:57.614225 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:45:57.618943 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:45:57.620918 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:45:57.623290 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:45:57.625767 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:45:57.628418 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:45:57.632191 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 17:45:57.679120 systemd-networkd[1461]: lo: Link UP
May 27 17:45:57.679131 systemd-networkd[1461]: lo: Gained carrier
May 27 17:45:57.680019 systemd-networkd[1461]: Enumeration completed
May 27 17:45:57.680198 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:45:57.680875 systemd[1]: Reached target network.target - Network.
May 27 17:45:57.683247 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:45:57.686896 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:45:57.692858 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Refreshing passwd entry cache
May 27 17:45:57.692879 oslogin_cache_refresh[1490]: Refreshing passwd entry cache
May 27 17:45:57.708155 jq[1501]: true
May 27 17:45:57.704151 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:45:57.715379 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Failure getting users, quitting
May 27 17:45:57.715379 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:45:57.715379 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Refreshing group entry cache
May 27 17:45:57.715062 oslogin_cache_refresh[1490]: Failure getting users, quitting
May 27 17:45:57.715088 oslogin_cache_refresh[1490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:45:57.715183 oslogin_cache_refresh[1490]: Refreshing group entry cache
May 27 17:45:57.734496 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Failure getting groups, quitting
May 27 17:45:57.734496 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:45:57.731880 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 17:45:57.727652 oslogin_cache_refresh[1490]: Failure getting groups, quitting
May 27 17:45:57.733317 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 17:45:57.727667 oslogin_cache_refresh[1490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:45:57.746300 update_engine[1500]: I20250527 17:45:57.746191 1500 main.cc:92] Flatcar Update Engine starting
May 27 17:45:57.746931 extend-filesystems[1489]: Found loop4
May 27 17:45:57.746931 extend-filesystems[1489]: Found loop5
May 27 17:45:57.746931 extend-filesystems[1489]: Found loop6
May 27 17:45:57.746931 extend-filesystems[1489]: Found loop7
May 27 17:45:57.746931 extend-filesystems[1489]: Found vda
May 27 17:45:57.746931 extend-filesystems[1489]: Found vda1
May 27 17:45:57.746931 extend-filesystems[1489]: Found vda2
May 27 17:45:57.746931 extend-filesystems[1489]: Found vda3
May 27 17:45:57.776638 extend-filesystems[1489]: Found usr
May 27 17:45:57.776638 extend-filesystems[1489]: Found vda4
May 27 17:45:57.776638 extend-filesystems[1489]: Found vda6
May 27 17:45:57.776638 extend-filesystems[1489]: Found vda7
May 27 17:45:57.776638 extend-filesystems[1489]: Found vda9
May 27 17:45:57.776638 extend-filesystems[1489]: Found vdb
May 27 17:45:57.747958 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:45:57.794884 tar[1502]: linux-amd64/helm
May 27 17:45:57.757276 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:45:57.797262 jq[1516]: true
May 27 17:45:57.800311 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:45:57.799752 dbus-daemon[1484]: [system] SELinux support is enabled
May 27 17:45:57.807096 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:45:57.818748 update_engine[1500]: I20250527 17:45:57.813836 1500 update_check_scheduler.cc:74] Next update check in 8m3s
May 27 17:45:57.807148 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:45:57.809171 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:45:57.823728 coreos-metadata[1483]: May 27 17:45:57.820 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 27 17:45:57.809206 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:45:57.813313 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:45:57.828417 coreos-metadata[1483]: May 27 17:45:57.826 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
May 27 17:45:57.838260 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:45:57.841869 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:45:57.843984 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:45:57.844406 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:45:57.846224 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:45:57.938201 bash[1548]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:45:57.942483 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:45:57.949689 systemd[1]: Starting sshkeys.service...
May 27 17:45:58.029599 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 27 17:45:58.038280 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 27 17:45:58.075720 systemd-logind[1499]: New seat seat0.
May 27 17:45:58.079712 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:45:58.213356 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 17:45:58.236178 systemd-networkd[1461]: eth1: Configuring with /run/systemd/network/10-56:37:50:e0:9e:51.network.
May 27 17:45:58.239042 systemd-networkd[1461]: eth1: Link UP
May 27 17:45:58.239707 systemd-networkd[1461]: eth1: Gained carrier
May 27 17:45:58.246660 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
May 27 17:45:58.284590 coreos-metadata[1551]: May 27 17:45:58.284 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 27 17:45:58.287461 coreos-metadata[1551]: May 27 17:45:58.286 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
May 27 17:45:58.439081 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
May 27 17:45:58.448932 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
May 27 17:45:58.449489 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:45:58.453754 containerd[1527]: time="2025-05-27T17:45:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 17:45:58.459404 containerd[1527]: time="2025-05-27T17:45:58.456137503Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 17:45:58.513160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:45:58.541285 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:45:58.562187 kernel: ISO 9660 Extensions: RRIP_1991A
May 27 17:45:58.563328 containerd[1527]: time="2025-05-27T17:45:58.563267038Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.821µs"
May 27 17:45:58.563328 containerd[1527]: time="2025-05-27T17:45:58.563314014Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 17:45:58.563328 containerd[1527]: time="2025-05-27T17:45:58.563337153Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 17:45:58.564715 containerd[1527]: time="2025-05-27T17:45:58.563821752Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 17:45:58.564715 containerd[1527]: time="2025-05-27T17:45:58.563862413Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 17:45:58.564715 containerd[1527]: time="2025-05-27T17:45:58.563897091Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:45:58.564715 containerd[1527]: time="2025-05-27T17:45:58.563982377Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:45:58.564715 containerd[1527]: time="2025-05-27T17:45:58.563999938Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:45:58.564715 containerd[1527]: time="2025-05-27T17:45:58.564360834Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:45:58.564715 containerd[1527]: time="2025-05-27T17:45:58.564401313Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:45:58.564715 containerd[1527]: time="2025-05-27T17:45:58.564421062Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:45:58.564715 containerd[1527]: time="2025-05-27T17:45:58.564433498Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 17:45:58.564715 containerd[1527]: time="2025-05-27T17:45:58.564572095Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 17:45:58.565019 containerd[1527]: time="2025-05-27T17:45:58.564824403Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:45:58.565019 containerd[1527]: time="2025-05-27T17:45:58.564855779Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:45:58.565019 containerd[1527]: time="2025-05-27T17:45:58.564885714Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 17:45:58.573016 containerd[1527]: time="2025-05-27T17:45:58.566928073Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 17:45:58.573016 containerd[1527]: time="2025-05-27T17:45:58.568052960Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 17:45:58.573016 containerd[1527]: time="2025-05-27T17:45:58.568167733Z" level=info msg="metadata content store policy set" policy=shared
May 27 17:45:58.567089 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
May 27 17:45:58.569067 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.577732374Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578083826Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578155354Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578172829Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578185801Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578197162Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578224929Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578243220Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578258426Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578271854Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578290878Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 17:45:58.579413 containerd[1527]: time="2025-05-27T17:45:58.578309354Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 17:45:58.578276 systemd-networkd[1461]: eth0: Configuring with /run/systemd/network/10-d6:54:0a:bd:d6:f1.network.
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579575099Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579649940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579673860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579685713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579697849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579708394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579727170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579739463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579750740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579761743Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579773017Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579849585Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 17:45:58.579880 containerd[1527]: time="2025-05-27T17:45:58.579863036Z" level=info msg="Start snapshots syncer"
May 27 17:45:58.580176 containerd[1527]: time="2025-05-27T17:45:58.579994004Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 17:45:58.585762 containerd[1527]: time="2025-05-27T17:45:58.581658262Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 17:45:58.585762 containerd[1527]: time="2025-05-27T17:45:58.581743013Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.581859716Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582035619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582056656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582078635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582093617Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582117243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582128333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582139933Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582166822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582176826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582186210Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582226556Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582245579Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:45:58.586025 containerd[1527]: time="2025-05-27T17:45:58.582254555Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:45:58.586314 containerd[1527]: time="2025-05-27T17:45:58.582262951Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:45:58.586314 containerd[1527]: time="2025-05-27T17:45:58.582311404Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 17:45:58.586314 containerd[1527]: time="2025-05-27T17:45:58.582321482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 17:45:58.586314 containerd[1527]: time="2025-05-27T17:45:58.582331255Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 17:45:58.586314 containerd[1527]: time="2025-05-27T17:45:58.582350785Z" level=info msg="runtime interface created"
May 27 17:45:58.586314 containerd[1527]: time="2025-05-27T17:45:58.582356311Z" level=info msg="created NRI interface"
May 27 17:45:58.586314 containerd[1527]: time="2025-05-27T17:45:58.582443550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 17:45:58.586314 containerd[1527]: time="2025-05-27T17:45:58.582461706Z" level=info msg="Connect containerd service"
May 27 17:45:58.586314 containerd[1527]: time="2025-05-27T17:45:58.582489085Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 17:45:58.593511 containerd[1527]: time="2025-05-27T17:45:58.586644044Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 17:45:58.588409 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
May 27 17:45:58.588617 systemd-networkd[1461]: eth0: Link UP
May 27 17:45:58.590146 systemd-networkd[1461]: eth0: Gained carrier
May 27 17:45:58.592345 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
May 27 17:45:58.600774 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
May 27 17:45:58.618415 locksmithd[1532]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 17:45:58.665409 kernel: mousedev: PS/2 mouse device common for all mice
May 27 17:45:58.692389 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:45:58.752538 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 27 17:45:58.762489 kernel: ACPI: button: Power Button [PWRF]
May 27 17:45:58.834145 coreos-metadata[1483]: May 27 17:45:58.828 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
May 27 17:45:58.852417 coreos-metadata[1483]: May 27 17:45:58.850 INFO Fetch successful
May 27 17:45:58.876417 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 27 17:45:58.881417 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 27 17:45:58.954552 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 17:45:58.955785 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 17:45:58.959601 containerd[1527]: time="2025-05-27T17:45:58.959441569Z" level=info msg="Start subscribing containerd event"
May 27 17:45:58.960833 containerd[1527]: time="2025-05-27T17:45:58.960745232Z" level=info msg="Start recovering state"
May 27 17:45:58.961069 containerd[1527]: time="2025-05-27T17:45:58.961029951Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 17:45:58.965833 containerd[1527]: time="2025-05-27T17:45:58.962955979Z" level=info msg="Start event monitor"
May 27 17:45:58.965833 containerd[1527]: time="2025-05-27T17:45:58.965672962Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 17:45:58.965833 containerd[1527]: time="2025-05-27T17:45:58.965686118Z" level=info msg="Start cni network conf syncer for default"
May 27 17:45:58.965833 containerd[1527]: time="2025-05-27T17:45:58.965751151Z" level=info msg="Start streaming server"
May 27 17:45:58.965833 containerd[1527]: time="2025-05-27T17:45:58.965766879Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 17:45:58.965833 containerd[1527]: time="2025-05-27T17:45:58.965775765Z" level=info msg="runtime interface starting up..."
May 27 17:45:58.965833 containerd[1527]: time="2025-05-27T17:45:58.965781926Z" level=info msg="starting plugins..."
May 27 17:45:58.965833 containerd[1527]: time="2025-05-27T17:45:58.965809308Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 17:45:58.966109 containerd[1527]: time="2025-05-27T17:45:58.965955783Z" level=info msg="containerd successfully booted in 0.514217s"
May 27 17:45:58.966273 systemd[1]: Started containerd.service - containerd container runtime.
May 27 17:45:58.979569 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 17:45:59.057518 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 17:45:59.066391 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 17:45:59.100063 systemd[1]: issuegen.service: Deactivated successfully.
May 27 17:45:59.103533 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 17:45:59.110873 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 17:45:59.139526 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 27 17:45:59.139627 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 27 17:45:59.168163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:59.182739 kernel: Console: switching to colour dummy device 80x25
May 27 17:45:59.191068 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 27 17:45:59.191173 kernel: [drm] features: -context_init
May 27 17:45:59.203515 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 17:45:59.204355 systemd-logind[1499]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 17:45:59.209427 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 17:45:59.212498 systemd-logind[1499]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 17:45:59.214164 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 17:45:59.215075 systemd[1]: Reached target getty.target - Login Prompts.
May 27 17:45:59.231591 kernel: [drm] number of scanouts: 1
May 27 17:45:59.240422 kernel: [drm] number of cap sets: 0
May 27 17:45:59.241398 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
May 27 17:45:59.268726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:45:59.269456 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:59.280427 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:45:59.287804 coreos-metadata[1551]: May 27 17:45:59.287 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
May 27 17:45:59.309578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:59.322946 coreos-metadata[1551]: May 27 17:45:59.322 INFO Fetch successful
May 27 17:45:59.331596 unknown[1551]: wrote ssh authorized keys file for user: core
May 27 17:45:59.388330 update-ssh-keys[1630]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:45:59.392207 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 27 17:45:59.394988 systemd[1]: Finished sshkeys.service.
May 27 17:45:59.492678 kernel: EDAC MC: Ver: 3.0.0
May 27 17:45:59.513569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:59.550681 tar[1502]: linux-amd64/LICENSE
May 27 17:45:59.551224 tar[1502]: linux-amd64/README.md
May 27 17:45:59.581691 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 17:45:59.660762 systemd-networkd[1461]: eth1: Gained IPv6LL
May 27 17:45:59.663628 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
May 27 17:45:59.666680 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 17:45:59.667432 systemd[1]: Reached target network-online.target - Network is Online.
May 27 17:45:59.670259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:45:59.674752 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 17:45:59.712221 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 17:46:00.557201 systemd-networkd[1461]: eth0: Gained IPv6LL
May 27 17:46:00.557939 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
May 27 17:46:01.038598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:46:01.039610 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 17:46:01.042572 systemd[1]: Startup finished in 3.798s (kernel) + 6.544s (initrd) + 6.916s (userspace) = 17.259s.
May 27 17:46:01.049810 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:46:01.768913 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 17:46:01.775406 systemd[1]: Started sshd@0-143.198.147.228:22-139.178.68.195:52404.service - OpenSSH per-connection server daemon (139.178.68.195:52404).
May 27 17:46:01.855409 kubelet[1659]: E0527 17:46:01.855294 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:46:01.860572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:46:01.860805 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:46:01.861533 systemd[1]: kubelet.service: Consumed 1.472s CPU time, 265.7M memory peak.
May 27 17:46:01.907851 sshd[1670]: Accepted publickey for core from 139.178.68.195 port 52404 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8
May 27 17:46:01.911083 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:46:01.922618 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 17:46:01.925108 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 17:46:01.937775 systemd-logind[1499]: New session 1 of user core.
May 27 17:46:01.963521 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 17:46:01.969985 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 17:46:01.989609 (systemd)[1675]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 17:46:01.993844 systemd-logind[1499]: New session c1 of user core.
May 27 17:46:02.185303 systemd[1675]: Queued start job for default target default.target.
May 27 17:46:02.205803 systemd[1675]: Created slice app.slice - User Application Slice.
May 27 17:46:02.206036 systemd[1675]: Reached target paths.target - Paths.
May 27 17:46:02.206152 systemd[1675]: Reached target timers.target - Timers.
May 27 17:46:02.208489 systemd[1675]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 17:46:02.252706 systemd[1675]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 17:46:02.252880 systemd[1675]: Reached target sockets.target - Sockets.
May 27 17:46:02.252956 systemd[1675]: Reached target basic.target - Basic System.
May 27 17:46:02.253015 systemd[1675]: Reached target default.target - Main User Target.
May 27 17:46:02.253061 systemd[1675]: Startup finished in 247ms.
May 27 17:46:02.253261 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 17:46:02.263163 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 17:46:02.335701 systemd[1]: Started sshd@1-143.198.147.228:22-139.178.68.195:52412.service - OpenSSH per-connection server daemon (139.178.68.195:52412).
May 27 17:46:02.399667 sshd[1686]: Accepted publickey for core from 139.178.68.195 port 52412 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8
May 27 17:46:02.402126 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:46:02.410242 systemd-logind[1499]: New session 2 of user core.
May 27 17:46:02.420074 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 17:46:02.484495 sshd[1688]: Connection closed by 139.178.68.195 port 52412
May 27 17:46:02.484981 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
May 27 17:46:02.497148 systemd[1]: sshd@1-143.198.147.228:22-139.178.68.195:52412.service: Deactivated successfully.
May 27 17:46:02.499452 systemd[1]: session-2.scope: Deactivated successfully.
May 27 17:46:02.500920 systemd-logind[1499]: Session 2 logged out. Waiting for processes to exit.
May 27 17:46:02.505211 systemd[1]: Started sshd@2-143.198.147.228:22-139.178.68.195:52426.service - OpenSSH per-connection server daemon (139.178.68.195:52426).
May 27 17:46:02.506748 systemd-logind[1499]: Removed session 2.
May 27 17:46:02.573586 sshd[1694]: Accepted publickey for core from 139.178.68.195 port 52426 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8
May 27 17:46:02.575648 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:46:02.584682 systemd-logind[1499]: New session 3 of user core.
May 27 17:46:02.593699 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 17:46:02.656435 sshd[1696]: Connection closed by 139.178.68.195 port 52426
May 27 17:46:02.656503 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
May 27 17:46:02.671532 systemd[1]: sshd@2-143.198.147.228:22-139.178.68.195:52426.service: Deactivated successfully.
May 27 17:46:02.674315 systemd[1]: session-3.scope: Deactivated successfully.
May 27 17:46:02.675671 systemd-logind[1499]: Session 3 logged out. Waiting for processes to exit.
May 27 17:46:02.679959 systemd[1]: Started sshd@3-143.198.147.228:22-139.178.68.195:52434.service - OpenSSH per-connection server daemon (139.178.68.195:52434).
May 27 17:46:02.681457 systemd-logind[1499]: Removed session 3.
May 27 17:46:02.740031 sshd[1702]: Accepted publickey for core from 139.178.68.195 port 52434 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8
May 27 17:46:02.742888 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:46:02.751257 systemd-logind[1499]: New session 4 of user core.
May 27 17:46:02.759739 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 17:46:02.826264 sshd[1704]: Connection closed by 139.178.68.195 port 52434
May 27 17:46:02.826779 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
May 27 17:46:02.844492 systemd[1]: sshd@3-143.198.147.228:22-139.178.68.195:52434.service: Deactivated successfully.
May 27 17:46:02.847712 systemd[1]: session-4.scope: Deactivated successfully.
May 27 17:46:02.849452 systemd-logind[1499]: Session 4 logged out. Waiting for processes to exit.
May 27 17:46:02.855049 systemd[1]: Started sshd@4-143.198.147.228:22-139.178.68.195:38440.service - OpenSSH per-connection server daemon (139.178.68.195:38440).
May 27 17:46:02.857563 systemd-logind[1499]: Removed session 4.
May 27 17:46:02.919871 sshd[1710]: Accepted publickey for core from 139.178.68.195 port 38440 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8
May 27 17:46:02.921957 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:46:02.929868 systemd-logind[1499]: New session 5 of user core.
May 27 17:46:02.936697 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 17:46:03.011558 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 17:46:03.012799 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:46:03.025735 sudo[1713]: pam_unix(sudo:session): session closed for user root
May 27 17:46:03.030411 sshd[1712]: Connection closed by 139.178.68.195 port 38440
May 27 17:46:03.030337 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
May 27 17:46:03.043789 systemd[1]: sshd@4-143.198.147.228:22-139.178.68.195:38440.service: Deactivated successfully.
May 27 17:46:03.046252 systemd[1]: session-5.scope: Deactivated successfully.
May 27 17:46:03.047478 systemd-logind[1499]: Session 5 logged out. Waiting for processes to exit.
May 27 17:46:03.052803 systemd[1]: Started sshd@5-143.198.147.228:22-139.178.68.195:38452.service - OpenSSH per-connection server daemon (139.178.68.195:38452).
May 27 17:46:03.054462 systemd-logind[1499]: Removed session 5.
May 27 17:46:03.119955 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 38452 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8
May 27 17:46:03.122224 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:46:03.129697 systemd-logind[1499]: New session 6 of user core.
May 27 17:46:03.135695 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 17:46:03.199995 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 17:46:03.200511 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:46:03.206936 sudo[1723]: pam_unix(sudo:session): session closed for user root
May 27 17:46:03.214924 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 17:46:03.215266 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:46:03.233686 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:46:03.303512 augenrules[1745]: No rules
May 27 17:46:03.305265 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:46:03.305560 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:46:03.306819 sudo[1722]: pam_unix(sudo:session): session closed for user root
May 27 17:46:03.310838 sshd[1721]: Connection closed by 139.178.68.195 port 38452
May 27 17:46:03.311428 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
May 27 17:46:03.323177 systemd[1]: sshd@5-143.198.147.228:22-139.178.68.195:38452.service: Deactivated successfully.
May 27 17:46:03.325614 systemd[1]: session-6.scope: Deactivated successfully.
May 27 17:46:03.326654 systemd-logind[1499]: Session 6 logged out. Waiting for processes to exit.
May 27 17:46:03.330941 systemd[1]: Started sshd@6-143.198.147.228:22-139.178.68.195:38464.service - OpenSSH per-connection server daemon (139.178.68.195:38464).
May 27 17:46:03.332529 systemd-logind[1499]: Removed session 6.
May 27 17:46:03.389171 sshd[1754]: Accepted publickey for core from 139.178.68.195 port 38464 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8
May 27 17:46:03.390864 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:46:03.397299 systemd-logind[1499]: New session 7 of user core.
May 27 17:46:03.402711 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 17:46:03.465512 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 17:46:03.465932 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:46:03.995903 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 17:46:04.026048 (dockerd)[1775]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 17:46:04.390885 dockerd[1775]: time="2025-05-27T17:46:04.390309776Z" level=info msg="Starting up"
May 27 17:46:04.391582 dockerd[1775]: time="2025-05-27T17:46:04.391550196Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 17:46:04.438254 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2575964439-merged.mount: Deactivated successfully.
May 27 17:46:04.510804 dockerd[1775]: time="2025-05-27T17:46:04.510743568Z" level=info msg="Loading containers: start."
May 27 17:46:04.522403 kernel: Initializing XFRM netlink socket
May 27 17:46:04.821743 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
May 27 17:46:04.822976 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
May 27 17:46:04.836537 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
May 27 17:46:04.878651 systemd-networkd[1461]: docker0: Link UP
May 27 17:46:04.879155 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
May 27 17:46:04.881898 dockerd[1775]: time="2025-05-27T17:46:04.881841053Z" level=info msg="Loading containers: done."
May 27 17:46:04.899794 dockerd[1775]: time="2025-05-27T17:46:04.899710187Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 17:46:04.900069 dockerd[1775]: time="2025-05-27T17:46:04.899818491Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 17:46:04.900069 dockerd[1775]: time="2025-05-27T17:46:04.899943684Z" level=info msg="Initializing buildkit"
May 27 17:46:04.931277 dockerd[1775]: time="2025-05-27T17:46:04.931200456Z" level=info msg="Completed buildkit initialization"
May 27 17:46:04.942605 dockerd[1775]: time="2025-05-27T17:46:04.942451616Z" level=info msg="Daemon has completed initialization"
May 27 17:46:04.942985 dockerd[1775]: time="2025-05-27T17:46:04.942785092Z" level=info msg="API listen on /run/docker.sock"
May 27 17:46:04.943257 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 17:46:05.957153 containerd[1527]: time="2025-05-27T17:46:05.957070618Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 27 17:46:06.521953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700831764.mount: Deactivated successfully.
May 27 17:46:07.686783 containerd[1527]: time="2025-05-27T17:46:07.686715093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:07.687856 containerd[1527]: time="2025-05-27T17:46:07.687799839Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845"
May 27 17:46:07.689430 containerd[1527]: time="2025-05-27T17:46:07.689144026Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:07.694466 containerd[1527]: time="2025-05-27T17:46:07.692974581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:07.694466 containerd[1527]: time="2025-05-27T17:46:07.694148280Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.737034929s"
May 27 17:46:07.694466 containerd[1527]: time="2025-05-27T17:46:07.694193176Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 27 17:46:07.695216 containerd[1527]: time="2025-05-27T17:46:07.695184607Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 27 17:46:09.279105 containerd[1527]: time="2025-05-27T17:46:09.277903948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:09.279105 containerd[1527]: time="2025-05-27T17:46:09.278931571Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522"
May 27 17:46:09.279105 containerd[1527]: time="2025-05-27T17:46:09.279027172Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:09.281947 containerd[1527]: time="2025-05-27T17:46:09.281887386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:09.283011 containerd[1527]: time="2025-05-27T17:46:09.282960976Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.587607161s"
May 27 17:46:09.283011 containerd[1527]: time="2025-05-27T17:46:09.283005749Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 27 17:46:09.283558 containerd[1527]: time="2025-05-27T17:46:09.283517145Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 27 17:46:10.581772 containerd[1527]: time="2025-05-27T17:46:10.581001913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:10.582316 containerd[1527]: time="2025-05-27T17:46:10.581999203Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311"
May 27 17:46:10.583047 containerd[1527]: time="2025-05-27T17:46:10.582986376Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:10.587642 containerd[1527]: time="2025-05-27T17:46:10.587574458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:10.589385 containerd[1527]: time="2025-05-27T17:46:10.589218126Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.305668058s"
May 27 17:46:10.589385 containerd[1527]: time="2025-05-27T17:46:10.589261457Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 27 17:46:10.590540 containerd[1527]: time="2025-05-27T17:46:10.590453395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 27 17:46:11.690209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1571288783.mount: Deactivated successfully.
May 27 17:46:12.111062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 17:46:12.115663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:46:12.340226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:46:12.356016 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:46:12.374432 containerd[1527]: time="2025-05-27T17:46:12.373492557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:12.375618 containerd[1527]: time="2025-05-27T17:46:12.375504532Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 27 17:46:12.378967 containerd[1527]: time="2025-05-27T17:46:12.378902595Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:12.405404 containerd[1527]: time="2025-05-27T17:46:12.404277036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:12.405404 containerd[1527]: time="2025-05-27T17:46:12.405224531Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.814677792s" May 27 17:46:12.405404 containerd[1527]: time="2025-05-27T17:46:12.405276781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 27 17:46:12.405837 containerd[1527]: time="2025-05-27T17:46:12.405723760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 17:46:12.407436 systemd-resolved[1407]: Using 
degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. May 27 17:46:12.432723 kubelet[2062]: E0527 17:46:12.432662 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:46:12.438723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:46:12.439137 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:46:12.440073 systemd[1]: kubelet.service: Consumed 226ms CPU time, 108.5M memory peak. May 27 17:46:12.902955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351689035.mount: Deactivated successfully. May 27 17:46:13.813785 containerd[1527]: time="2025-05-27T17:46:13.813711684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:13.814864 containerd[1527]: time="2025-05-27T17:46:13.814816693Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 27 17:46:13.815800 containerd[1527]: time="2025-05-27T17:46:13.815404074Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:13.817881 containerd[1527]: time="2025-05-27T17:46:13.817836102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:13.819174 containerd[1527]: time="2025-05-27T17:46:13.819131438Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.413380007s" May 27 17:46:13.819174 containerd[1527]: time="2025-05-27T17:46:13.819172604Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 17:46:13.819806 containerd[1527]: time="2025-05-27T17:46:13.819650759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 17:46:14.280898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914661677.mount: Deactivated successfully. May 27 17:46:14.286801 containerd[1527]: time="2025-05-27T17:46:14.286722716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:46:14.288489 containerd[1527]: time="2025-05-27T17:46:14.288422999Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 17:46:14.289227 containerd[1527]: time="2025-05-27T17:46:14.289070737Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:46:14.293393 containerd[1527]: time="2025-05-27T17:46:14.292511236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:46:14.293393 containerd[1527]: time="2025-05-27T17:46:14.292974776Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.279116ms" May 27 17:46:14.293393 containerd[1527]: time="2025-05-27T17:46:14.293012969Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 17:46:14.293943 containerd[1527]: time="2025-05-27T17:46:14.293859874Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 27 17:46:14.778283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036757677.mount: Deactivated successfully. May 27 17:46:15.468789 systemd-resolved[1407]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. May 27 17:46:16.682697 containerd[1527]: time="2025-05-27T17:46:16.682632870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:16.683946 containerd[1527]: time="2025-05-27T17:46:16.683885294Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 27 17:46:16.684811 containerd[1527]: time="2025-05-27T17:46:16.684764156Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:16.689101 containerd[1527]: time="2025-05-27T17:46:16.689035794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:16.690943 containerd[1527]: time="2025-05-27T17:46:16.690881272Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.396958968s" May 27 17:46:16.691172 containerd[1527]: time="2025-05-27T17:46:16.691146106Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 27 17:46:19.862166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:46:19.862860 systemd[1]: kubelet.service: Consumed 226ms CPU time, 108.5M memory peak. May 27 17:46:19.866509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:46:19.918612 systemd[1]: Reload requested from client PID 2206 ('systemctl') (unit session-7.scope)... May 27 17:46:19.918642 systemd[1]: Reloading... May 27 17:46:20.085404 zram_generator::config[2248]: No configuration found. May 27 17:46:20.229025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:46:20.374636 systemd[1]: Reloading finished in 455 ms. May 27 17:46:20.450539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:46:20.456580 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:46:20.457184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:46:20.457286 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.2M memory peak. May 27 17:46:20.459875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:46:20.667212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:46:20.679009 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:46:20.741735 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:46:20.741735 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 27 17:46:20.741735 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:46:20.742175 kubelet[2305]: I0527 17:46:20.741843 2305 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:46:21.187389 kubelet[2305]: I0527 17:46:21.187274 2305 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 27 17:46:21.187389 kubelet[2305]: I0527 17:46:21.187328 2305 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:46:21.187844 kubelet[2305]: I0527 17:46:21.187800 2305 server.go:934] "Client rotation is on, will bootstrap in background" May 27 17:46:21.218542 kubelet[2305]: I0527 17:46:21.218498 2305 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:46:21.224531 kubelet[2305]: E0527 17:46:21.224445 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://143.198.147.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.147.228:6443: connect: connection refused" logger="UnhandledError" May 27 17:46:21.236419 kubelet[2305]: I0527 17:46:21.236246 2305 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:46:21.242631 kubelet[2305]: I0527 17:46:21.242580 2305 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 17:46:21.243529 kubelet[2305]: I0527 17:46:21.243474 2305 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 27 17:46:21.243785 kubelet[2305]: I0527 17:46:21.243741 2305 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:46:21.244164 kubelet[2305]: I0527 17:46:21.243782 2305 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-f-2f5fe7c465","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":nu
ll},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:46:21.244164 kubelet[2305]: I0527 17:46:21.244160 2305 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:46:21.244401 kubelet[2305]: I0527 17:46:21.244177 2305 container_manager_linux.go:300] "Creating device plugin manager" May 27 17:46:21.244401 kubelet[2305]: I0527 17:46:21.244334 2305 state_mem.go:36] "Initialized new in-memory state store" May 27 17:46:21.248357 kubelet[2305]: I0527 17:46:21.247856 2305 kubelet.go:408] "Attempting to sync node with API server" May 27 17:46:21.248357 kubelet[2305]: I0527 17:46:21.247919 2305 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:46:21.248357 kubelet[2305]: I0527 17:46:21.248078 2305 kubelet.go:314] "Adding apiserver pod source" May 27 17:46:21.248357 kubelet[2305]: I0527 17:46:21.248122 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:46:21.249986 kubelet[2305]: W0527 17:46:21.249916 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.147.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-f-2f5fe7c465&limit=500&resourceVersion=0": dial tcp 143.198.147.228:6443: connect: connection refused May 27 17:46:21.250186 kubelet[2305]: E0527 17:46:21.250159 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://143.198.147.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-f-2f5fe7c465&limit=500&resourceVersion=0\": dial tcp 143.198.147.228:6443: connect: connection refused" logger="UnhandledError" May 27 17:46:21.251399 kubelet[2305]: W0527 17:46:21.251334 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.147.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.147.228:6443: connect: connection refused May 27 17:46:21.251674 kubelet[2305]: E0527 17:46:21.251640 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.147.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.147.228:6443: connect: connection refused" logger="UnhandledError" May 27 17:46:21.252172 kubelet[2305]: I0527 17:46:21.251940 2305 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:46:21.255503 kubelet[2305]: I0527 17:46:21.255461 2305 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:46:21.256352 kubelet[2305]: W0527 17:46:21.256310 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 27 17:46:21.257519 kubelet[2305]: I0527 17:46:21.257491 2305 server.go:1274] "Started kubelet" May 27 17:46:21.258904 kubelet[2305]: I0527 17:46:21.258860 2305 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:46:21.260455 kubelet[2305]: I0527 17:46:21.260415 2305 server.go:449] "Adding debug handlers to kubelet server" May 27 17:46:21.264237 kubelet[2305]: I0527 17:46:21.264126 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:46:21.264859 kubelet[2305]: I0527 17:46:21.264828 2305 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:46:21.265856 kubelet[2305]: I0527 17:46:21.265826 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:46:21.266787 kubelet[2305]: E0527 17:46:21.265251 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.147.228:6443/api/v1/namespaces/default/events\": dial tcp 143.198.147.228:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.0.0-f-2f5fe7c465.1843736a6b848c8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.0.0-f-2f5fe7c465,UID:ci-4344.0.0-f-2f5fe7c465,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.0.0-f-2f5fe7c465,},FirstTimestamp:2025-05-27 17:46:21.257460877 +0000 UTC m=+0.572620092,LastTimestamp:2025-05-27 17:46:21.257460877 +0000 UTC m=+0.572620092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-f-2f5fe7c465,}" May 27 17:46:21.268738 kubelet[2305]: I0527 17:46:21.268711 2305 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:46:21.278189 kubelet[2305]: E0527 17:46:21.278117 2305 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:46:21.278341 kubelet[2305]: I0527 17:46:21.278308 2305 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 17:46:21.278874 kubelet[2305]: I0527 17:46:21.278446 2305 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 17:46:21.278874 kubelet[2305]: I0527 17:46:21.278502 2305 reconciler.go:26] "Reconciler: start to sync state" May 27 17:46:21.278971 kubelet[2305]: W0527 17:46:21.278928 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.147.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.147.228:6443: connect: connection refused May 27 17:46:21.279003 kubelet[2305]: E0527 17:46:21.278985 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.147.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.147.228:6443: connect: connection refused" logger="UnhandledError" May 27 17:46:21.279208 kubelet[2305]: E0527 17:46:21.279175 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.0.0-f-2f5fe7c465\" not found" May 27 17:46:21.279383 kubelet[2305]: E0527 17:46:21.279286 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.147.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-f-2f5fe7c465?timeout=10s\": dial tcp 143.198.147.228:6443: connect: connection refused" interval="200ms" May 27 17:46:21.282239 kubelet[2305]: I0527 
17:46:21.282179 2305 factory.go:221] Registration of the containerd container factory successfully May 27 17:46:21.282239 kubelet[2305]: I0527 17:46:21.282224 2305 factory.go:221] Registration of the systemd container factory successfully May 27 17:46:21.282550 kubelet[2305]: I0527 17:46:21.282519 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:46:21.307298 kubelet[2305]: I0527 17:46:21.307175 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:46:21.315110 kubelet[2305]: I0527 17:46:21.314568 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 17:46:21.315110 kubelet[2305]: I0527 17:46:21.314610 2305 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 17:46:21.315110 kubelet[2305]: I0527 17:46:21.314631 2305 kubelet.go:2321] "Starting kubelet main sync loop" May 27 17:46:21.315110 kubelet[2305]: E0527 17:46:21.314686 2305 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:46:21.320158 kubelet[2305]: W0527 17:46:21.320084 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.147.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.147.228:6443: connect: connection refused May 27 17:46:21.320158 kubelet[2305]: E0527 17:46:21.320159 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.147.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.147.228:6443: connect: connection refused" logger="UnhandledError" May 27 17:46:21.323867 
kubelet[2305]: I0527 17:46:21.323794 2305 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 17:46:21.323867 kubelet[2305]: I0527 17:46:21.323825 2305 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 17:46:21.323867 kubelet[2305]: I0527 17:46:21.323853 2305 state_mem.go:36] "Initialized new in-memory state store" May 27 17:46:21.325502 kubelet[2305]: I0527 17:46:21.325468 2305 policy_none.go:49] "None policy: Start" May 27 17:46:21.326545 kubelet[2305]: I0527 17:46:21.326497 2305 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 17:46:21.326816 kubelet[2305]: I0527 17:46:21.326743 2305 state_mem.go:35] "Initializing new in-memory state store" May 27 17:46:21.336891 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 17:46:21.354849 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 17:46:21.360438 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 27 17:46:21.375399 kubelet[2305]: I0527 17:46:21.374901 2305 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:46:21.375399 kubelet[2305]: I0527 17:46:21.375200 2305 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:46:21.375399 kubelet[2305]: I0527 17:46:21.375221 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:46:21.376850 kubelet[2305]: I0527 17:46:21.376822 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:46:21.381115 kubelet[2305]: E0527 17:46:21.381079 2305 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.0.0-f-2f5fe7c465\" not found" May 27 17:46:21.428815 systemd[1]: Created slice kubepods-burstable-pod2eca70ee20233388b32956a3dbae6c1d.slice - libcontainer container kubepods-burstable-pod2eca70ee20233388b32956a3dbae6c1d.slice. May 27 17:46:21.451758 systemd[1]: Created slice kubepods-burstable-podcef9443ebd388831bf5512042e63c70e.slice - libcontainer container kubepods-burstable-podcef9443ebd388831bf5512042e63c70e.slice. May 27 17:46:21.477503 kubelet[2305]: I0527 17:46:21.477382 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.478198 systemd[1]: Created slice kubepods-burstable-podef7a6847049986414a539d0df1542ade.slice - libcontainer container kubepods-burstable-podef7a6847049986414a539d0df1542ade.slice. 
May 27 17:46:21.478898 kubelet[2305]: E0527 17:46:21.478634 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.147.228:6443/api/v1/nodes\": dial tcp 143.198.147.228:6443: connect: connection refused" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.480795 kubelet[2305]: E0527 17:46:21.480750 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.147.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-f-2f5fe7c465?timeout=10s\": dial tcp 143.198.147.228:6443: connect: connection refused" interval="400ms" May 27 17:46:21.579461 kubelet[2305]: I0527 17:46:21.578990 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2eca70ee20233388b32956a3dbae6c1d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-f-2f5fe7c465\" (UID: \"2eca70ee20233388b32956a3dbae6c1d\") " pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.579461 kubelet[2305]: I0527 17:46:21.579063 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cef9443ebd388831bf5512042e63c70e-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-f-2f5fe7c465\" (UID: \"cef9443ebd388831bf5512042e63c70e\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.579461 kubelet[2305]: I0527 17:46:21.579103 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2eca70ee20233388b32956a3dbae6c1d-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-f-2f5fe7c465\" (UID: \"2eca70ee20233388b32956a3dbae6c1d\") " pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.579461 kubelet[2305]: I0527 17:46:21.579130 2305 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2eca70ee20233388b32956a3dbae6c1d-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-f-2f5fe7c465\" (UID: \"2eca70ee20233388b32956a3dbae6c1d\") " pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.579461 kubelet[2305]: I0527 17:46:21.579156 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cef9443ebd388831bf5512042e63c70e-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-f-2f5fe7c465\" (UID: \"cef9443ebd388831bf5512042e63c70e\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.579897 kubelet[2305]: I0527 17:46:21.579180 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cef9443ebd388831bf5512042e63c70e-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-f-2f5fe7c465\" (UID: \"cef9443ebd388831bf5512042e63c70e\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.579897 kubelet[2305]: I0527 17:46:21.579210 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cef9443ebd388831bf5512042e63c70e-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-f-2f5fe7c465\" (UID: \"cef9443ebd388831bf5512042e63c70e\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.579897 kubelet[2305]: I0527 17:46:21.579236 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cef9443ebd388831bf5512042e63c70e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-f-2f5fe7c465\" (UID: 
\"cef9443ebd388831bf5512042e63c70e\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.580700 kubelet[2305]: I0527 17:46:21.579288 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef7a6847049986414a539d0df1542ade-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-f-2f5fe7c465\" (UID: \"ef7a6847049986414a539d0df1542ade\") " pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.680264 kubelet[2305]: I0527 17:46:21.680202 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.680655 kubelet[2305]: E0527 17:46:21.680622 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.147.228:6443/api/v1/nodes\": dial tcp 143.198.147.228:6443: connect: connection refused" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:21.747506 kubelet[2305]: E0527 17:46:21.747291 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:21.752415 containerd[1527]: time="2025-05-27T17:46:21.752131712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-f-2f5fe7c465,Uid:2eca70ee20233388b32956a3dbae6c1d,Namespace:kube-system,Attempt:0,}" May 27 17:46:21.775800 kubelet[2305]: E0527 17:46:21.775747 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:21.783158 kubelet[2305]: E0527 17:46:21.783060 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:21.786138 containerd[1527]: 
time="2025-05-27T17:46:21.785346052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-f-2f5fe7c465,Uid:cef9443ebd388831bf5512042e63c70e,Namespace:kube-system,Attempt:0,}" May 27 17:46:21.786138 containerd[1527]: time="2025-05-27T17:46:21.785676726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-f-2f5fe7c465,Uid:ef7a6847049986414a539d0df1542ade,Namespace:kube-system,Attempt:0,}" May 27 17:46:21.867395 containerd[1527]: time="2025-05-27T17:46:21.867275130Z" level=info msg="connecting to shim a00750ec91a7352b00c84727e797ecab494e382574ff94084118253e85be853a" address="unix:///run/containerd/s/2bcc26ed24b7082ec9f9431c56a568591b13c973f612457a258d91892c6402af" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:21.872846 containerd[1527]: time="2025-05-27T17:46:21.872775532Z" level=info msg="connecting to shim 19a93cac6b587b789e18b1256c9e09d7c100cc2ba29af06183e224a48861f554" address="unix:///run/containerd/s/e234a8fdc435360a44619f686395d79c97391bfbfafa6d8aa6d1575318b0b013" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:21.878100 containerd[1527]: time="2025-05-27T17:46:21.877619412Z" level=info msg="connecting to shim 36bb744f0f6eb5c578510304728ed043176769002fc84a179f0a5fd30d104e22" address="unix:///run/containerd/s/87f6f08b22a4ab2955d41a574f151c02b8664de4462f51f3963812292ce75f91" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:21.881966 kubelet[2305]: E0527 17:46:21.881918 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.147.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-f-2f5fe7c465?timeout=10s\": dial tcp 143.198.147.228:6443: connect: connection refused" interval="800ms" May 27 17:46:22.008731 systemd[1]: Started cri-containerd-19a93cac6b587b789e18b1256c9e09d7c100cc2ba29af06183e224a48861f554.scope - libcontainer container 
19a93cac6b587b789e18b1256c9e09d7c100cc2ba29af06183e224a48861f554. May 27 17:46:22.020744 systemd[1]: Started cri-containerd-36bb744f0f6eb5c578510304728ed043176769002fc84a179f0a5fd30d104e22.scope - libcontainer container 36bb744f0f6eb5c578510304728ed043176769002fc84a179f0a5fd30d104e22. May 27 17:46:22.024255 systemd[1]: Started cri-containerd-a00750ec91a7352b00c84727e797ecab494e382574ff94084118253e85be853a.scope - libcontainer container a00750ec91a7352b00c84727e797ecab494e382574ff94084118253e85be853a. May 27 17:46:22.084325 kubelet[2305]: I0527 17:46:22.084267 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:22.087462 kubelet[2305]: E0527 17:46:22.085159 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.147.228:6443/api/v1/nodes\": dial tcp 143.198.147.228:6443: connect: connection refused" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:22.098004 kubelet[2305]: W0527 17:46:22.097921 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.147.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.147.228:6443: connect: connection refused May 27 17:46:22.098275 kubelet[2305]: E0527 17:46:22.098234 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.147.228:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.147.228:6443: connect: connection refused" logger="UnhandledError" May 27 17:46:22.144012 containerd[1527]: time="2025-05-27T17:46:22.143917249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-f-2f5fe7c465,Uid:ef7a6847049986414a539d0df1542ade,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a00750ec91a7352b00c84727e797ecab494e382574ff94084118253e85be853a\"" May 27 17:46:22.148282 kubelet[2305]: E0527 17:46:22.147856 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:22.151972 containerd[1527]: time="2025-05-27T17:46:22.151895954Z" level=info msg="CreateContainer within sandbox \"a00750ec91a7352b00c84727e797ecab494e382574ff94084118253e85be853a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:46:22.173433 containerd[1527]: time="2025-05-27T17:46:22.173301574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-f-2f5fe7c465,Uid:cef9443ebd388831bf5512042e63c70e,Namespace:kube-system,Attempt:0,} returns sandbox id \"36bb744f0f6eb5c578510304728ed043176769002fc84a179f0a5fd30d104e22\"" May 27 17:46:22.175592 kubelet[2305]: E0527 17:46:22.175551 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:22.183022 containerd[1527]: time="2025-05-27T17:46:22.182962966Z" level=info msg="CreateContainer within sandbox \"36bb744f0f6eb5c578510304728ed043176769002fc84a179f0a5fd30d104e22\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 17:46:22.185003 containerd[1527]: time="2025-05-27T17:46:22.184896981Z" level=info msg="Container 507280e7f2bb458ca72f942534020044f6e9ff8ee89942027c0520f4308cecb2: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:22.195575 containerd[1527]: time="2025-05-27T17:46:22.195510465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-f-2f5fe7c465,Uid:2eca70ee20233388b32956a3dbae6c1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"19a93cac6b587b789e18b1256c9e09d7c100cc2ba29af06183e224a48861f554\"" 
May 27 17:46:22.196995 kubelet[2305]: E0527 17:46:22.196961 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:22.198756 containerd[1527]: time="2025-05-27T17:46:22.198666205Z" level=info msg="Container 99e88a0b9fb74616fe7e9fdfc357a48076209ea5c5460641f72f1cd1a1113fda: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:22.200128 containerd[1527]: time="2025-05-27T17:46:22.200076177Z" level=info msg="CreateContainer within sandbox \"a00750ec91a7352b00c84727e797ecab494e382574ff94084118253e85be853a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"507280e7f2bb458ca72f942534020044f6e9ff8ee89942027c0520f4308cecb2\"" May 27 17:46:22.201134 kubelet[2305]: W0527 17:46:22.201065 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.147.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.147.228:6443: connect: connection refused May 27 17:46:22.201258 kubelet[2305]: E0527 17:46:22.201143 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.147.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.147.228:6443: connect: connection refused" logger="UnhandledError" May 27 17:46:22.208394 containerd[1527]: time="2025-05-27T17:46:22.208278148Z" level=info msg="StartContainer for \"507280e7f2bb458ca72f942534020044f6e9ff8ee89942027c0520f4308cecb2\"" May 27 17:46:22.215593 containerd[1527]: time="2025-05-27T17:46:22.215548089Z" level=info msg="CreateContainer within sandbox \"19a93cac6b587b789e18b1256c9e09d7c100cc2ba29af06183e224a48861f554\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:46:22.223014 containerd[1527]: 
time="2025-05-27T17:46:22.222889207Z" level=info msg="connecting to shim 507280e7f2bb458ca72f942534020044f6e9ff8ee89942027c0520f4308cecb2" address="unix:///run/containerd/s/2bcc26ed24b7082ec9f9431c56a568591b13c973f612457a258d91892c6402af" protocol=ttrpc version=3 May 27 17:46:22.234064 containerd[1527]: time="2025-05-27T17:46:22.233994659Z" level=info msg="Container b38d9ea5d8ceee1fc846775ead4bbeb194d31636d19de46b1ea97d1aea39324e: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:22.238399 containerd[1527]: time="2025-05-27T17:46:22.238026432Z" level=info msg="CreateContainer within sandbox \"36bb744f0f6eb5c578510304728ed043176769002fc84a179f0a5fd30d104e22\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"99e88a0b9fb74616fe7e9fdfc357a48076209ea5c5460641f72f1cd1a1113fda\"" May 27 17:46:22.245904 containerd[1527]: time="2025-05-27T17:46:22.245845483Z" level=info msg="StartContainer for \"99e88a0b9fb74616fe7e9fdfc357a48076209ea5c5460641f72f1cd1a1113fda\"" May 27 17:46:22.247638 containerd[1527]: time="2025-05-27T17:46:22.247582414Z" level=info msg="connecting to shim 99e88a0b9fb74616fe7e9fdfc357a48076209ea5c5460641f72f1cd1a1113fda" address="unix:///run/containerd/s/87f6f08b22a4ab2955d41a574f151c02b8664de4462f51f3963812292ce75f91" protocol=ttrpc version=3 May 27 17:46:22.253254 containerd[1527]: time="2025-05-27T17:46:22.253164005Z" level=info msg="CreateContainer within sandbox \"19a93cac6b587b789e18b1256c9e09d7c100cc2ba29af06183e224a48861f554\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b38d9ea5d8ceee1fc846775ead4bbeb194d31636d19de46b1ea97d1aea39324e\"" May 27 17:46:22.254020 containerd[1527]: time="2025-05-27T17:46:22.253988858Z" level=info msg="StartContainer for \"b38d9ea5d8ceee1fc846775ead4bbeb194d31636d19de46b1ea97d1aea39324e\"" May 27 17:46:22.255653 containerd[1527]: time="2025-05-27T17:46:22.255604599Z" level=info msg="connecting to shim 
b38d9ea5d8ceee1fc846775ead4bbeb194d31636d19de46b1ea97d1aea39324e" address="unix:///run/containerd/s/e234a8fdc435360a44619f686395d79c97391bfbfafa6d8aa6d1575318b0b013" protocol=ttrpc version=3 May 27 17:46:22.276925 systemd[1]: Started cri-containerd-507280e7f2bb458ca72f942534020044f6e9ff8ee89942027c0520f4308cecb2.scope - libcontainer container 507280e7f2bb458ca72f942534020044f6e9ff8ee89942027c0520f4308cecb2. May 27 17:46:22.305657 systemd[1]: Started cri-containerd-99e88a0b9fb74616fe7e9fdfc357a48076209ea5c5460641f72f1cd1a1113fda.scope - libcontainer container 99e88a0b9fb74616fe7e9fdfc357a48076209ea5c5460641f72f1cd1a1113fda. May 27 17:46:22.307689 systemd[1]: Started cri-containerd-b38d9ea5d8ceee1fc846775ead4bbeb194d31636d19de46b1ea97d1aea39324e.scope - libcontainer container b38d9ea5d8ceee1fc846775ead4bbeb194d31636d19de46b1ea97d1aea39324e. May 27 17:46:22.430794 containerd[1527]: time="2025-05-27T17:46:22.430653391Z" level=info msg="StartContainer for \"99e88a0b9fb74616fe7e9fdfc357a48076209ea5c5460641f72f1cd1a1113fda\" returns successfully" May 27 17:46:22.444655 containerd[1527]: time="2025-05-27T17:46:22.444615325Z" level=info msg="StartContainer for \"b38d9ea5d8ceee1fc846775ead4bbeb194d31636d19de46b1ea97d1aea39324e\" returns successfully" May 27 17:46:22.447297 containerd[1527]: time="2025-05-27T17:46:22.447146039Z" level=info msg="StartContainer for \"507280e7f2bb458ca72f942534020044f6e9ff8ee89942027c0520f4308cecb2\" returns successfully" May 27 17:46:22.891842 kubelet[2305]: I0527 17:46:22.891772 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:23.376014 kubelet[2305]: E0527 17:46:23.375965 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:23.380489 kubelet[2305]: E0527 17:46:23.380345 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:23.385018 kubelet[2305]: E0527 17:46:23.384984 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:24.386970 kubelet[2305]: E0527 17:46:24.386881 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:24.387611 kubelet[2305]: E0527 17:46:24.387154 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:24.387611 kubelet[2305]: E0527 17:46:24.387249 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:24.762745 kubelet[2305]: E0527 17:46:24.761479 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.0.0-f-2f5fe7c465\" not found" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:24.824634 kubelet[2305]: E0527 17:46:24.824322 2305 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4344.0.0-f-2f5fe7c465.1843736a6b848c8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.0.0-f-2f5fe7c465,UID:ci-4344.0.0-f-2f5fe7c465,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.0.0-f-2f5fe7c465,},FirstTimestamp:2025-05-27 17:46:21.257460877 +0000 UTC m=+0.572620092,LastTimestamp:2025-05-27 
17:46:21.257460877 +0000 UTC m=+0.572620092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-f-2f5fe7c465,}" May 27 17:46:24.873117 kubelet[2305]: I0527 17:46:24.873018 2305 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:24.873117 kubelet[2305]: E0527 17:46:24.873068 2305 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4344.0.0-f-2f5fe7c465\": node \"ci-4344.0.0-f-2f5fe7c465\" not found" May 27 17:46:25.253494 kubelet[2305]: I0527 17:46:25.253338 2305 apiserver.go:52] "Watching apiserver" May 27 17:46:25.279456 kubelet[2305]: I0527 17:46:25.279389 2305 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 17:46:25.400230 kubelet[2305]: E0527 17:46:25.400160 2305 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4344.0.0-f-2f5fe7c465\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:25.400673 kubelet[2305]: E0527 17:46:25.400425 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:25.401544 kubelet[2305]: E0527 17:46:25.400761 2305 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.0.0-f-2f5fe7c465\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:25.401544 kubelet[2305]: E0527 17:46:25.400887 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:26.399775 kubelet[2305]: 
W0527 17:46:26.398962 2305 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:46:26.400940 kubelet[2305]: E0527 17:46:26.400826 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:27.138550 systemd[1]: Reload requested from client PID 2578 ('systemctl') (unit session-7.scope)... May 27 17:46:27.138579 systemd[1]: Reloading... May 27 17:46:27.259446 zram_generator::config[2621]: No configuration found. May 27 17:46:27.396050 kubelet[2305]: E0527 17:46:27.394839 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:27.451986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:46:27.631629 systemd[1]: Reloading finished in 492 ms. May 27 17:46:27.679924 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:46:27.698879 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:46:27.699204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:46:27.699278 systemd[1]: kubelet.service: Consumed 1.129s CPU time, 125.4M memory peak. May 27 17:46:27.704824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:46:27.899740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:46:27.919102 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:46:27.995499 kubelet[2672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:46:27.995499 kubelet[2672]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 27 17:46:27.995499 kubelet[2672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:46:27.995499 kubelet[2672]: I0527 17:46:27.995318 2672 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:46:28.017167 kubelet[2672]: I0527 17:46:28.017105 2672 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 27 17:46:28.017167 kubelet[2672]: I0527 17:46:28.017156 2672 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:46:28.017582 kubelet[2672]: I0527 17:46:28.017549 2672 server.go:934] "Client rotation is on, will bootstrap in background" May 27 17:46:28.019159 kubelet[2672]: I0527 17:46:28.019131 2672 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 27 17:46:28.024322 kubelet[2672]: I0527 17:46:28.023502 2672 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:46:28.032009 kubelet[2672]: I0527 17:46:28.031858 2672 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:46:28.036391 kubelet[2672]: I0527 17:46:28.036169 2672 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 17:46:28.036391 kubelet[2672]: I0527 17:46:28.036343 2672 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 27 17:46:28.036621 kubelet[2672]: I0527 17:46:28.036505 2672 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:46:28.036754 kubelet[2672]: I0527 17:46:28.036542 2672 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-f-2f5fe7c465","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"
Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:46:28.036754 kubelet[2672]: I0527 17:46:28.036752 2672 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:46:28.036754 kubelet[2672]: I0527 17:46:28.036763 2672 container_manager_linux.go:300] "Creating device plugin manager" May 27 17:46:28.038121 kubelet[2672]: I0527 17:46:28.036796 2672 state_mem.go:36] "Initialized new in-memory state store" May 27 17:46:28.039217 kubelet[2672]: I0527 17:46:28.039176 2672 kubelet.go:408] "Attempting to sync node with API server" May 27 17:46:28.039217 kubelet[2672]: I0527 17:46:28.039212 2672 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:46:28.040311 kubelet[2672]: I0527 17:46:28.039253 2672 kubelet.go:314] "Adding apiserver pod source" May 27 17:46:28.040311 kubelet[2672]: I0527 17:46:28.039273 2672 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:46:28.052951 kubelet[2672]: I0527 17:46:28.052912 2672 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:46:28.055591 kubelet[2672]: I0527 17:46:28.055405 2672 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:46:28.067246 kubelet[2672]: I0527 17:46:28.067019 2672 server.go:1274] "Started kubelet" May 27 17:46:28.067589 kubelet[2672]: I0527 17:46:28.067508 2672 server.go:163] "Starting to listen" address="0.0.0.0" 
port=10250 May 27 17:46:28.071910 kubelet[2672]: I0527 17:46:28.071820 2672 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:46:28.073394 kubelet[2672]: I0527 17:46:28.073335 2672 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:46:28.076767 kubelet[2672]: I0527 17:46:28.076719 2672 server.go:449] "Adding debug handlers to kubelet server" May 27 17:46:28.082430 kubelet[2672]: I0527 17:46:28.081845 2672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:46:28.094075 kubelet[2672]: I0527 17:46:28.094021 2672 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:46:28.101499 kubelet[2672]: I0527 17:46:28.101430 2672 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 17:46:28.101911 kubelet[2672]: E0527 17:46:28.101885 2672 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.0.0-f-2f5fe7c465\" not found" May 27 17:46:28.107996 kubelet[2672]: I0527 17:46:28.107801 2672 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 17:46:28.108200 kubelet[2672]: I0527 17:46:28.108047 2672 reconciler.go:26] "Reconciler: start to sync state" May 27 17:46:28.111065 kubelet[2672]: I0527 17:46:28.111023 2672 factory.go:221] Registration of the systemd container factory successfully May 27 17:46:28.112119 kubelet[2672]: I0527 17:46:28.112038 2672 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:46:28.115793 kubelet[2672]: E0527 17:46:28.115739 2672 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:46:28.117335 kubelet[2672]: I0527 17:46:28.117298 2672 factory.go:221] Registration of the containerd container factory successfully May 27 17:46:28.163080 kubelet[2672]: I0527 17:46:28.162977 2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:46:28.169511 kubelet[2672]: I0527 17:46:28.169006 2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 17:46:28.169511 kubelet[2672]: I0527 17:46:28.169041 2672 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 17:46:28.169511 kubelet[2672]: I0527 17:46:28.169063 2672 kubelet.go:2321] "Starting kubelet main sync loop" May 27 17:46:28.169511 kubelet[2672]: E0527 17:46:28.169120 2672 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:46:28.185311 sudo[2695]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 17:46:28.186222 sudo[2695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 17:46:28.271029 kubelet[2672]: E0527 17:46:28.270843 2672 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 17:46:28.294791 kubelet[2672]: I0527 17:46:28.294746 2672 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 17:46:28.294791 kubelet[2672]: I0527 17:46:28.294772 2672 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 17:46:28.294791 kubelet[2672]: I0527 17:46:28.294807 2672 state_mem.go:36] "Initialized new in-memory state store" May 27 17:46:28.295091 kubelet[2672]: I0527 17:46:28.295058 2672 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:46:28.295091 kubelet[2672]: I0527 17:46:28.295072 2672 state_mem.go:96] "Updated CPUSet 
assignments" assignments={} May 27 17:46:28.295091 kubelet[2672]: I0527 17:46:28.295094 2672 policy_none.go:49] "None policy: Start" May 27 17:46:28.299354 kubelet[2672]: I0527 17:46:28.299301 2672 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 17:46:28.299354 kubelet[2672]: I0527 17:46:28.299350 2672 state_mem.go:35] "Initializing new in-memory state store" May 27 17:46:28.300226 kubelet[2672]: I0527 17:46:28.300194 2672 state_mem.go:75] "Updated machine memory state" May 27 17:46:28.316480 kubelet[2672]: I0527 17:46:28.316436 2672 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:46:28.316733 kubelet[2672]: I0527 17:46:28.316714 2672 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:46:28.316792 kubelet[2672]: I0527 17:46:28.316733 2672 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:46:28.320404 kubelet[2672]: I0527 17:46:28.317567 2672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:46:28.436790 kubelet[2672]: I0527 17:46:28.436723 2672 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.453404 kubelet[2672]: I0527 17:46:28.452302 2672 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.453404 kubelet[2672]: I0527 17:46:28.452660 2672 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.488683 kubelet[2672]: W0527 17:46:28.488633 2672 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:46:28.493871 kubelet[2672]: W0527 17:46:28.493824 2672 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] May 27 17:46:28.495193 kubelet[2672]: W0527 17:46:28.495128 2672 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:46:28.496419 kubelet[2672]: E0527 17:46:28.495241 2672 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.0.0-f-2f5fe7c465\" already exists" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.510196 kubelet[2672]: I0527 17:46:28.510085 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cef9443ebd388831bf5512042e63c70e-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-f-2f5fe7c465\" (UID: \"cef9443ebd388831bf5512042e63c70e\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.511086 kubelet[2672]: I0527 17:46:28.510433 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cef9443ebd388831bf5512042e63c70e-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-f-2f5fe7c465\" (UID: \"cef9443ebd388831bf5512042e63c70e\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.511086 kubelet[2672]: I0527 17:46:28.510517 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cef9443ebd388831bf5512042e63c70e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-f-2f5fe7c465\" (UID: \"cef9443ebd388831bf5512042e63c70e\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.511086 kubelet[2672]: I0527 17:46:28.510542 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/2eca70ee20233388b32956a3dbae6c1d-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-f-2f5fe7c465\" (UID: \"2eca70ee20233388b32956a3dbae6c1d\") " pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.511086 kubelet[2672]: I0527 17:46:28.510860 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2eca70ee20233388b32956a3dbae6c1d-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-f-2f5fe7c465\" (UID: \"2eca70ee20233388b32956a3dbae6c1d\") " pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.511086 kubelet[2672]: I0527 17:46:28.510892 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2eca70ee20233388b32956a3dbae6c1d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-f-2f5fe7c465\" (UID: \"2eca70ee20233388b32956a3dbae6c1d\") " pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.511423 kubelet[2672]: I0527 17:46:28.510912 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cef9443ebd388831bf5512042e63c70e-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-f-2f5fe7c465\" (UID: \"cef9443ebd388831bf5512042e63c70e\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.511423 kubelet[2672]: I0527 17:46:28.510929 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cef9443ebd388831bf5512042e63c70e-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-f-2f5fe7c465\" (UID: \"cef9443ebd388831bf5512042e63c70e\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.511423 kubelet[2672]: 
I0527 17:46:28.510947 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef7a6847049986414a539d0df1542ade-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-f-2f5fe7c465\" (UID: \"ef7a6847049986414a539d0df1542ade\") " pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:28.790421 kubelet[2672]: E0527 17:46:28.790349 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:28.794575 kubelet[2672]: E0527 17:46:28.794283 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:28.795755 kubelet[2672]: E0527 17:46:28.795722 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:29.000238 sudo[2695]: pam_unix(sudo:session): session closed for user root May 27 17:46:29.044054 kubelet[2672]: I0527 17:46:29.043793 2672 apiserver.go:52] "Watching apiserver" May 27 17:46:29.108992 kubelet[2672]: I0527 17:46:29.108936 2672 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 17:46:29.253401 kubelet[2672]: E0527 17:46:29.252977 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:29.255204 kubelet[2672]: E0527 17:46:29.254132 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:29.266391 
kubelet[2672]: W0527 17:46:29.265213 2672 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:46:29.266391 kubelet[2672]: E0527 17:46:29.265303 2672 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.0.0-f-2f5fe7c465\" already exists" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:46:29.266391 kubelet[2672]: E0527 17:46:29.266026 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:29.303981 kubelet[2672]: I0527 17:46:29.303764 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" podStartSLOduration=1.303741204 podStartE2EDuration="1.303741204s" podCreationTimestamp="2025-05-27 17:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:29.302259062 +0000 UTC m=+1.377704530" watchObservedRunningTime="2025-05-27 17:46:29.303741204 +0000 UTC m=+1.379186693" May 27 17:46:29.344064 kubelet[2672]: I0527 17:46:29.343805 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465" podStartSLOduration=1.3437666400000001 podStartE2EDuration="1.34376664s" podCreationTimestamp="2025-05-27 17:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:29.330138456 +0000 UTC m=+1.405583918" watchObservedRunningTime="2025-05-27 17:46:29.34376664 +0000 UTC m=+1.419212107" May 27 17:46:30.256762 kubelet[2672]: E0527 17:46:30.256499 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:30.615484 sudo[1757]: pam_unix(sudo:session): session closed for user root May 27 17:46:30.619152 sshd[1756]: Connection closed by 139.178.68.195 port 38464 May 27 17:46:30.620304 sshd-session[1754]: pam_unix(sshd:session): session closed for user core May 27 17:46:30.627288 systemd[1]: sshd@6-143.198.147.228:22-139.178.68.195:38464.service: Deactivated successfully. May 27 17:46:30.631006 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:46:30.631688 systemd[1]: session-7.scope: Consumed 5.606s CPU time, 222.3M memory peak. May 27 17:46:30.634018 systemd-logind[1499]: Session 7 logged out. Waiting for processes to exit. May 27 17:46:30.636673 systemd-logind[1499]: Removed session 7. May 27 17:46:31.260779 kubelet[2672]: E0527 17:46:31.260024 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:31.909521 kubelet[2672]: I0527 17:46:31.909455 2672 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:46:31.910854 containerd[1527]: time="2025-05-27T17:46:31.910168545Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 27 17:46:31.911780 kubelet[2672]: I0527 17:46:31.911477 2672 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:46:32.929160 kubelet[2672]: I0527 17:46:32.927546 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" podStartSLOduration=6.927521782 podStartE2EDuration="6.927521782s" podCreationTimestamp="2025-05-27 17:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:29.345878418 +0000 UTC m=+1.421323877" watchObservedRunningTime="2025-05-27 17:46:32.927521782 +0000 UTC m=+5.002967229" May 27 17:46:32.942153 systemd[1]: Created slice kubepods-besteffort-pod3337413e_eeaa_4976_88c1_b019353ecd2e.slice - libcontainer container kubepods-besteffort-pod3337413e_eeaa_4976_88c1_b019353ecd2e.slice. May 27 17:46:32.945632 kubelet[2672]: I0527 17:46:32.944476 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3337413e-eeaa-4976-88c1-b019353ecd2e-lib-modules\") pod \"kube-proxy-7sbp2\" (UID: \"3337413e-eeaa-4976-88c1-b019353ecd2e\") " pod="kube-system/kube-proxy-7sbp2" May 27 17:46:32.945632 kubelet[2672]: I0527 17:46:32.944521 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3337413e-eeaa-4976-88c1-b019353ecd2e-kube-proxy\") pod \"kube-proxy-7sbp2\" (UID: \"3337413e-eeaa-4976-88c1-b019353ecd2e\") " pod="kube-system/kube-proxy-7sbp2" May 27 17:46:32.945632 kubelet[2672]: I0527 17:46:32.944540 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3337413e-eeaa-4976-88c1-b019353ecd2e-xtables-lock\") pod \"kube-proxy-7sbp2\" (UID: 
\"3337413e-eeaa-4976-88c1-b019353ecd2e\") " pod="kube-system/kube-proxy-7sbp2" May 27 17:46:32.945632 kubelet[2672]: I0527 17:46:32.944567 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqwrb\" (UniqueName: \"kubernetes.io/projected/3337413e-eeaa-4976-88c1-b019353ecd2e-kube-api-access-dqwrb\") pod \"kube-proxy-7sbp2\" (UID: \"3337413e-eeaa-4976-88c1-b019353ecd2e\") " pod="kube-system/kube-proxy-7sbp2" May 27 17:46:32.980131 systemd[1]: Created slice kubepods-burstable-podf562065d_86b8_4757_9ef9_7aa958d54c7d.slice - libcontainer container kubepods-burstable-podf562065d_86b8_4757_9ef9_7aa958d54c7d.slice. May 27 17:46:33.038906 systemd[1]: Created slice kubepods-besteffort-pod8d029677_a101_412c_b857_6a30c5d7aaf0.slice - libcontainer container kubepods-besteffort-pod8d029677_a101_412c_b857_6a30c5d7aaf0.slice. May 27 17:46:33.045553 kubelet[2672]: I0527 17:46:33.045344 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-cgroup\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.046167 kubelet[2672]: I0527 17:46:33.045921 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f562065d-86b8-4757-9ef9-7aa958d54c7d-hubble-tls\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.047488 kubelet[2672]: I0527 17:46:33.047245 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-run\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 
27 17:46:33.047488 kubelet[2672]: I0527 17:46:33.047299 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-hostproc\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.047488 kubelet[2672]: I0527 17:46:33.047320 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cni-path\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.047488 kubelet[2672]: I0527 17:46:33.047339 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-etc-cni-netd\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.049395 kubelet[2672]: I0527 17:46:33.047915 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-config-path\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.049395 kubelet[2672]: I0527 17:46:33.048007 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-host-proc-sys-net\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.049395 kubelet[2672]: I0527 17:46:33.048033 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-bpf-maps\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.049395 kubelet[2672]: I0527 17:46:33.048054 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-lib-modules\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.049395 kubelet[2672]: I0527 17:46:33.048077 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d029677-a101-412c-b857-6a30c5d7aaf0-cilium-config-path\") pod \"cilium-operator-5d85765b45-f6nch\" (UID: \"8d029677-a101-412c-b857-6a30c5d7aaf0\") " pod="kube-system/cilium-operator-5d85765b45-f6nch" May 27 17:46:33.049654 kubelet[2672]: I0527 17:46:33.048120 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f562065d-86b8-4757-9ef9-7aa958d54c7d-clustermesh-secrets\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.049654 kubelet[2672]: I0527 17:46:33.048153 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-host-proc-sys-kernel\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.049654 kubelet[2672]: I0527 17:46:33.048177 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhkh6\" (UniqueName: 
\"kubernetes.io/projected/8d029677-a101-412c-b857-6a30c5d7aaf0-kube-api-access-mhkh6\") pod \"cilium-operator-5d85765b45-f6nch\" (UID: \"8d029677-a101-412c-b857-6a30c5d7aaf0\") " pod="kube-system/cilium-operator-5d85765b45-f6nch" May 27 17:46:33.049654 kubelet[2672]: I0527 17:46:33.048203 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-xtables-lock\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.049654 kubelet[2672]: I0527 17:46:33.048223 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfkm7\" (UniqueName: \"kubernetes.io/projected/f562065d-86b8-4757-9ef9-7aa958d54c7d-kube-api-access-dfkm7\") pod \"cilium-mk7hk\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " pod="kube-system/cilium-mk7hk" May 27 17:46:33.257237 kubelet[2672]: E0527 17:46:33.257044 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:33.259210 containerd[1527]: time="2025-05-27T17:46:33.259035021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sbp2,Uid:3337413e-eeaa-4976-88c1-b019353ecd2e,Namespace:kube-system,Attempt:0,}" May 27 17:46:33.282418 containerd[1527]: time="2025-05-27T17:46:33.281985180Z" level=info msg="connecting to shim 0d4f9c8a32a82a6458dfad6ee9b615676182dff933931b459fecfad25b5ffd2c" address="unix:///run/containerd/s/5c79f28b1df5aa9aeb7bf3f909a50a78a41e6050971eccfd72a8e0726c69b255" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:33.289472 kubelet[2672]: E0527 17:46:33.287357 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:33.291269 containerd[1527]: time="2025-05-27T17:46:33.291202570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mk7hk,Uid:f562065d-86b8-4757-9ef9-7aa958d54c7d,Namespace:kube-system,Attempt:0,}" May 27 17:46:33.323758 systemd[1]: Started cri-containerd-0d4f9c8a32a82a6458dfad6ee9b615676182dff933931b459fecfad25b5ffd2c.scope - libcontainer container 0d4f9c8a32a82a6458dfad6ee9b615676182dff933931b459fecfad25b5ffd2c. May 27 17:46:33.330283 containerd[1527]: time="2025-05-27T17:46:33.330232321Z" level=info msg="connecting to shim 5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708" address="unix:///run/containerd/s/5d1d8a492113540dd1a886d066e2a2caac38f56510bf6d1af8d07ab98fc51da4" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:33.346237 kubelet[2672]: E0527 17:46:33.345724 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:33.347722 containerd[1527]: time="2025-05-27T17:46:33.347682247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-f6nch,Uid:8d029677-a101-412c-b857-6a30c5d7aaf0,Namespace:kube-system,Attempt:0,}" May 27 17:46:33.375697 systemd[1]: Started cri-containerd-5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708.scope - libcontainer container 5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708. 
May 27 17:46:33.409716 containerd[1527]: time="2025-05-27T17:46:33.409617973Z" level=info msg="connecting to shim 9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e" address="unix:///run/containerd/s/3b80e86db9527f28c99e067198865ed8d7eea67749c756f7dd08a9cd20217e37" namespace=k8s.io protocol=ttrpc version=3 May 27 17:46:33.423916 containerd[1527]: time="2025-05-27T17:46:33.423005752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sbp2,Uid:3337413e-eeaa-4976-88c1-b019353ecd2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d4f9c8a32a82a6458dfad6ee9b615676182dff933931b459fecfad25b5ffd2c\"" May 27 17:46:33.425856 kubelet[2672]: E0527 17:46:33.425825 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:33.438165 containerd[1527]: time="2025-05-27T17:46:33.437660175Z" level=info msg="CreateContainer within sandbox \"0d4f9c8a32a82a6458dfad6ee9b615676182dff933931b459fecfad25b5ffd2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:46:33.482936 containerd[1527]: time="2025-05-27T17:46:33.482877946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mk7hk,Uid:f562065d-86b8-4757-9ef9-7aa958d54c7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\"" May 27 17:46:33.485221 kubelet[2672]: E0527 17:46:33.485176 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:33.489227 containerd[1527]: time="2025-05-27T17:46:33.488983929Z" level=info msg="Container 6e04e1d64ae743b94afe30b0dd96dc03aa4abbf3cd1a996a629ffdaa57bc93fb: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:33.491488 containerd[1527]: 
time="2025-05-27T17:46:33.491066829Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 17:46:33.500678 systemd-resolved[1407]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. May 27 17:46:33.508708 systemd[1]: Started cri-containerd-9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e.scope - libcontainer container 9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e. May 27 17:46:33.525308 containerd[1527]: time="2025-05-27T17:46:33.525242392Z" level=info msg="CreateContainer within sandbox \"0d4f9c8a32a82a6458dfad6ee9b615676182dff933931b459fecfad25b5ffd2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6e04e1d64ae743b94afe30b0dd96dc03aa4abbf3cd1a996a629ffdaa57bc93fb\"" May 27 17:46:33.528080 containerd[1527]: time="2025-05-27T17:46:33.528001249Z" level=info msg="StartContainer for \"6e04e1d64ae743b94afe30b0dd96dc03aa4abbf3cd1a996a629ffdaa57bc93fb\"" May 27 17:46:33.530875 containerd[1527]: time="2025-05-27T17:46:33.530820907Z" level=info msg="connecting to shim 6e04e1d64ae743b94afe30b0dd96dc03aa4abbf3cd1a996a629ffdaa57bc93fb" address="unix:///run/containerd/s/5c79f28b1df5aa9aeb7bf3f909a50a78a41e6050971eccfd72a8e0726c69b255" protocol=ttrpc version=3 May 27 17:46:33.559016 systemd[1]: Started cri-containerd-6e04e1d64ae743b94afe30b0dd96dc03aa4abbf3cd1a996a629ffdaa57bc93fb.scope - libcontainer container 6e04e1d64ae743b94afe30b0dd96dc03aa4abbf3cd1a996a629ffdaa57bc93fb. 
May 27 17:46:33.605871 containerd[1527]: time="2025-05-27T17:46:33.605710283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-f6nch,Uid:8d029677-a101-412c-b857-6a30c5d7aaf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\"" May 27 17:46:33.609106 kubelet[2672]: E0527 17:46:33.608898 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:33.646541 containerd[1527]: time="2025-05-27T17:46:33.646489319Z" level=info msg="StartContainer for \"6e04e1d64ae743b94afe30b0dd96dc03aa4abbf3cd1a996a629ffdaa57bc93fb\" returns successfully" May 27 17:46:34.272429 kubelet[2672]: E0527 17:46:34.270087 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:34.292395 kubelet[2672]: I0527 17:46:34.292216 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7sbp2" podStartSLOduration=2.292172332 podStartE2EDuration="2.292172332s" podCreationTimestamp="2025-05-27 17:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:34.291608029 +0000 UTC m=+6.367053555" watchObservedRunningTime="2025-05-27 17:46:34.292172332 +0000 UTC m=+6.367617800" May 27 17:46:34.819658 kubelet[2672]: E0527 17:46:34.819395 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:35.816720 systemd-resolved[1407]: Clock change detected. Flushing caches. 
May 27 17:46:35.817618 systemd-timesyncd[1433]: Contacted time server 45.61.187.39:123 (2.flatcar.pool.ntp.org). May 27 17:46:35.817680 systemd-timesyncd[1433]: Initial clock synchronization to Tue 2025-05-27 17:46:35.816220 UTC. May 27 17:46:35.992399 kubelet[2672]: E0527 17:46:35.992330 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:36.560846 kubelet[2672]: E0527 17:46:36.560773 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:36.997938 kubelet[2672]: E0527 17:46:36.995616 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:39.616334 kubelet[2672]: E0527 17:46:39.615571 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:40.004462 kubelet[2672]: E0527 17:46:40.004414 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:41.028693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977015869.mount: Deactivated successfully. May 27 17:46:43.500250 update_engine[1500]: I20250527 17:46:43.500120 1500 update_attempter.cc:509] Updating boot flags... 
May 27 17:46:43.776958 containerd[1527]: time="2025-05-27T17:46:43.776107458Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:43.782874 containerd[1527]: time="2025-05-27T17:46:43.780927654Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 27 17:46:43.782874 containerd[1527]: time="2025-05-27T17:46:43.782278800Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:46:43.785732 containerd[1527]: time="2025-05-27T17:46:43.785677119Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.584049157s" May 27 17:46:43.785732 containerd[1527]: time="2025-05-27T17:46:43.785731042Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 27 17:46:43.788305 containerd[1527]: time="2025-05-27T17:46:43.786953344Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 17:46:43.792145 containerd[1527]: time="2025-05-27T17:46:43.792043768Z" level=info msg="CreateContainer within sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 17:46:43.849927 containerd[1527]: time="2025-05-27T17:46:43.848188371Z" level=info msg="Container ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:43.854847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880673892.mount: Deactivated successfully. May 27 17:46:43.867704 containerd[1527]: time="2025-05-27T17:46:43.867647470Z" level=info msg="CreateContainer within sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\"" May 27 17:46:43.870137 containerd[1527]: time="2025-05-27T17:46:43.868466063Z" level=info msg="StartContainer for \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\"" May 27 17:46:43.870137 containerd[1527]: time="2025-05-27T17:46:43.869276072Z" level=info msg="connecting to shim ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd" address="unix:///run/containerd/s/5d1d8a492113540dd1a886d066e2a2caac38f56510bf6d1af8d07ab98fc51da4" protocol=ttrpc version=3 May 27 17:46:44.041125 systemd[1]: Started cri-containerd-ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd.scope - libcontainer container ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd. May 27 17:46:44.120415 containerd[1527]: time="2025-05-27T17:46:44.120370371Z" level=info msg="StartContainer for \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\" returns successfully" May 27 17:46:44.140185 systemd[1]: cri-containerd-ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd.scope: Deactivated successfully. 
May 27 17:46:44.273436 containerd[1527]: time="2025-05-27T17:46:44.273368497Z" level=info msg="received exit event container_id:\"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\" id:\"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\" pid:3105 exited_at:{seconds:1748368004 nanos:144480040}" May 27 17:46:44.275749 containerd[1527]: time="2025-05-27T17:46:44.275675373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\" id:\"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\" pid:3105 exited_at:{seconds:1748368004 nanos:144480040}" May 27 17:46:44.312755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd-rootfs.mount: Deactivated successfully. May 27 17:46:45.094960 kubelet[2672]: E0527 17:46:45.093927 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:46:45.101659 containerd[1527]: time="2025-05-27T17:46:45.101584670Z" level=info msg="CreateContainer within sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 17:46:45.111967 containerd[1527]: time="2025-05-27T17:46:45.110046082Z" level=info msg="Container 3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d: CDI devices from CRI Config.CDIDevices: []" May 27 17:46:45.121911 containerd[1527]: time="2025-05-27T17:46:45.121202994Z" level=info msg="CreateContainer within sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\"" May 27 17:46:45.123687 containerd[1527]: 
time="2025-05-27T17:46:45.123584386Z" level=info msg="StartContainer for \"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\""
May 27 17:46:45.126867 containerd[1527]: time="2025-05-27T17:46:45.126788984Z" level=info msg="connecting to shim 3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d" address="unix:///run/containerd/s/5d1d8a492113540dd1a886d066e2a2caac38f56510bf6d1af8d07ab98fc51da4" protocol=ttrpc version=3
May 27 17:46:45.168219 systemd[1]: Started cri-containerd-3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d.scope - libcontainer container 3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d.
May 27 17:46:45.214645 containerd[1527]: time="2025-05-27T17:46:45.214501382Z" level=info msg="StartContainer for \"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\" returns successfully"
May 27 17:46:45.237946 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:46:45.238720 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:46:45.239274 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 27 17:46:45.244334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:46:45.246616 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:46:45.247529 systemd[1]: cri-containerd-3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d.scope: Deactivated successfully.
May 27 17:46:45.254425 containerd[1527]: time="2025-05-27T17:46:45.251650773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\" id:\"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\" pid:3153 exited_at:{seconds:1748368005 nanos:248940706}"
May 27 17:46:45.284439 containerd[1527]: time="2025-05-27T17:46:45.272388058Z" level=info msg="received exit event container_id:\"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\" id:\"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\" pid:3153 exited_at:{seconds:1748368005 nanos:248940706}"
May 27 17:46:45.311978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:46:45.330063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d-rootfs.mount: Deactivated successfully.
May 27 17:46:46.100619 kubelet[2672]: E0527 17:46:46.100568 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:46.106415 containerd[1527]: time="2025-05-27T17:46:46.105621926Z" level=info msg="CreateContainer within sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 17:46:46.134944 containerd[1527]: time="2025-05-27T17:46:46.132840693Z" level=info msg="Container a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:46.139624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611804290.mount: Deactivated successfully.
May 27 17:46:46.181616 containerd[1527]: time="2025-05-27T17:46:46.181490324Z" level=info msg="CreateContainer within sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\""
May 27 17:46:46.182584 containerd[1527]: time="2025-05-27T17:46:46.182285676Z" level=info msg="StartContainer for \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\""
May 27 17:46:46.185486 containerd[1527]: time="2025-05-27T17:46:46.185426504Z" level=info msg="connecting to shim a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31" address="unix:///run/containerd/s/5d1d8a492113540dd1a886d066e2a2caac38f56510bf6d1af8d07ab98fc51da4" protocol=ttrpc version=3
May 27 17:46:46.218395 systemd[1]: Started cri-containerd-a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31.scope - libcontainer container a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31.
May 27 17:46:46.295872 containerd[1527]: time="2025-05-27T17:46:46.295668953Z" level=info msg="StartContainer for \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\" returns successfully"
May 27 17:46:46.297641 systemd[1]: cri-containerd-a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31.scope: Deactivated successfully.
May 27 17:46:46.307742 containerd[1527]: time="2025-05-27T17:46:46.307680227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\" id:\"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\" pid:3202 exited_at:{seconds:1748368006 nanos:306669900}"
May 27 17:46:46.308172 containerd[1527]: time="2025-05-27T17:46:46.308033200Z" level=info msg="received exit event container_id:\"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\" id:\"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\" pid:3202 exited_at:{seconds:1748368006 nanos:306669900}"
May 27 17:46:46.347396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31-rootfs.mount: Deactivated successfully.
May 27 17:46:47.111974 kubelet[2672]: E0527 17:46:47.110510 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:47.124705 containerd[1527]: time="2025-05-27T17:46:47.124594318Z" level=info msg="CreateContainer within sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 17:46:47.135769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727727413.mount: Deactivated successfully.
May 27 17:46:47.170666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943550668.mount: Deactivated successfully.
May 27 17:46:47.175703 containerd[1527]: time="2025-05-27T17:46:47.175567045Z" level=info msg="Container dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:47.185618 containerd[1527]: time="2025-05-27T17:46:47.185558025Z" level=info msg="CreateContainer within sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\""
May 27 17:46:47.188619 containerd[1527]: time="2025-05-27T17:46:47.188571655Z" level=info msg="StartContainer for \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\""
May 27 17:46:47.194012 containerd[1527]: time="2025-05-27T17:46:47.193954761Z" level=info msg="connecting to shim dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760" address="unix:///run/containerd/s/5d1d8a492113540dd1a886d066e2a2caac38f56510bf6d1af8d07ab98fc51da4" protocol=ttrpc version=3
May 27 17:46:47.239416 systemd[1]: Started cri-containerd-dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760.scope - libcontainer container dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760.
May 27 17:46:47.285608 systemd[1]: cri-containerd-dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760.scope: Deactivated successfully.
May 27 17:46:47.294289 containerd[1527]: time="2025-05-27T17:46:47.294056301Z" level=info msg="received exit event container_id:\"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\" id:\"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\" pid:3254 exited_at:{seconds:1748368007 nanos:288394002}"
May 27 17:46:47.294689 containerd[1527]: time="2025-05-27T17:46:47.294628555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\" id:\"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\" pid:3254 exited_at:{seconds:1748368007 nanos:288394002}"
May 27 17:46:47.299605 containerd[1527]: time="2025-05-27T17:46:47.299499135Z" level=info msg="StartContainer for \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\" returns successfully"
May 27 17:46:47.347091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760-rootfs.mount: Deactivated successfully.
May 27 17:46:47.738534 containerd[1527]: time="2025-05-27T17:46:47.737297114Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:47.738534 containerd[1527]: time="2025-05-27T17:46:47.738320786Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 27 17:46:47.738534 containerd[1527]: time="2025-05-27T17:46:47.738466974Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:46:47.740152 containerd[1527]: time="2025-05-27T17:46:47.740104425Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.953101538s"
May 27 17:46:47.740366 containerd[1527]: time="2025-05-27T17:46:47.740337485Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 27 17:46:47.745766 containerd[1527]: time="2025-05-27T17:46:47.745565060Z" level=info msg="CreateContainer within sandbox \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 27 17:46:47.784700 containerd[1527]: time="2025-05-27T17:46:47.784633810Z" level=info msg="Container c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:47.791110 containerd[1527]: time="2025-05-27T17:46:47.791043285Z" level=info msg="CreateContainer within sandbox \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\""
May 27 17:46:47.793582 containerd[1527]: time="2025-05-27T17:46:47.793522289Z" level=info msg="StartContainer for \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\""
May 27 17:46:47.794728 containerd[1527]: time="2025-05-27T17:46:47.794681021Z" level=info msg="connecting to shim c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed" address="unix:///run/containerd/s/3b80e86db9527f28c99e067198865ed8d7eea67749c756f7dd08a9cd20217e37" protocol=ttrpc version=3
May 27 17:46:47.820178 systemd[1]: Started cri-containerd-c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed.scope - libcontainer container c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed.
May 27 17:46:47.869676 containerd[1527]: time="2025-05-27T17:46:47.869625204Z" level=info msg="StartContainer for \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" returns successfully"
May 27 17:46:48.117300 kubelet[2672]: E0527 17:46:48.117257 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:48.136651 kubelet[2672]: E0527 17:46:48.136609 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:48.144617 containerd[1527]: time="2025-05-27T17:46:48.144564752Z" level=info msg="CreateContainer within sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 17:46:48.210970 containerd[1527]: time="2025-05-27T17:46:48.210104382Z" level=info msg="Container 30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:48.213244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount720770174.mount: Deactivated successfully.
May 27 17:46:48.227896 containerd[1527]: time="2025-05-27T17:46:48.227793500Z" level=info msg="CreateContainer within sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\""
May 27 17:46:48.230160 containerd[1527]: time="2025-05-27T17:46:48.229381296Z" level=info msg="StartContainer for \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\""
May 27 17:46:48.232082 containerd[1527]: time="2025-05-27T17:46:48.231977771Z" level=info msg="connecting to shim 30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c" address="unix:///run/containerd/s/5d1d8a492113540dd1a886d066e2a2caac38f56510bf6d1af8d07ab98fc51da4" protocol=ttrpc version=3
May 27 17:46:48.282146 systemd[1]: Started cri-containerd-30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c.scope - libcontainer container 30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c.
May 27 17:46:48.439669 kubelet[2672]: I0527 17:46:48.438870 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-f6nch" podStartSLOduration=2.019264945 podStartE2EDuration="15.438850472s" podCreationTimestamp="2025-05-27 17:46:33 +0000 UTC" firstStartedPulling="2025-05-27 17:46:33.611176389 +0000 UTC m=+5.686621835" lastFinishedPulling="2025-05-27 17:46:47.741276207 +0000 UTC m=+19.106207362" observedRunningTime="2025-05-27 17:46:48.268702818 +0000 UTC m=+19.633633979" watchObservedRunningTime="2025-05-27 17:46:48.438850472 +0000 UTC m=+19.803781632"
May 27 17:46:48.459195 containerd[1527]: time="2025-05-27T17:46:48.459068965Z" level=info msg="StartContainer for \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" returns successfully"
May 27 17:46:48.709320 containerd[1527]: time="2025-05-27T17:46:48.708975279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" id:\"80770881ca9c0aac8b429b2158d76d067d06849e8edacb85a598b7ad5d4fd3cb\" pid:3356 exited_at:{seconds:1748368008 nanos:706070676}"
May 27 17:46:48.718177 kubelet[2672]: I0527 17:46:48.718135 2672 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 27 17:46:48.828920 systemd[1]: Created slice kubepods-burstable-pod3af00153_92f1_45a9_815e_bceaf80672f0.slice - libcontainer container kubepods-burstable-pod3af00153_92f1_45a9_815e_bceaf80672f0.slice.
May 27 17:46:48.863607 kubelet[2672]: W0527 17:46:48.863552 2672 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4344.0.0-f-2f5fe7c465" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.0.0-f-2f5fe7c465' and this object
May 27 17:46:48.863815 kubelet[2672]: E0527 17:46:48.863627 2672 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4344.0.0-f-2f5fe7c465\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.0.0-f-2f5fe7c465' and this object" logger="UnhandledError"
May 27 17:46:48.874489 kubelet[2672]: I0527 17:46:48.874414 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3af00153-92f1-45a9-815e-bceaf80672f0-config-volume\") pod \"coredns-7c65d6cfc9-xkc97\" (UID: \"3af00153-92f1-45a9-815e-bceaf80672f0\") " pod="kube-system/coredns-7c65d6cfc9-xkc97"
May 27 17:46:48.874489 kubelet[2672]: I0527 17:46:48.874491 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwks\" (UniqueName: \"kubernetes.io/projected/3af00153-92f1-45a9-815e-bceaf80672f0-kube-api-access-tlwks\") pod \"coredns-7c65d6cfc9-xkc97\" (UID: \"3af00153-92f1-45a9-815e-bceaf80672f0\") " pod="kube-system/coredns-7c65d6cfc9-xkc97"
May 27 17:46:48.895667 systemd[1]: Created slice kubepods-burstable-podf948ba5a_2b83_4850_ae32_0b05d801e235.slice - libcontainer container kubepods-burstable-podf948ba5a_2b83_4850_ae32_0b05d801e235.slice.
May 27 17:46:48.976012 kubelet[2672]: I0527 17:46:48.975058 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffprn\" (UniqueName: \"kubernetes.io/projected/f948ba5a-2b83-4850-ae32-0b05d801e235-kube-api-access-ffprn\") pod \"coredns-7c65d6cfc9-54sq5\" (UID: \"f948ba5a-2b83-4850-ae32-0b05d801e235\") " pod="kube-system/coredns-7c65d6cfc9-54sq5"
May 27 17:46:48.976012 kubelet[2672]: I0527 17:46:48.975113 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f948ba5a-2b83-4850-ae32-0b05d801e235-config-volume\") pod \"coredns-7c65d6cfc9-54sq5\" (UID: \"f948ba5a-2b83-4850-ae32-0b05d801e235\") " pod="kube-system/coredns-7c65d6cfc9-54sq5"
May 27 17:46:49.176392 kubelet[2672]: E0527 17:46:49.176069 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:49.197464 kubelet[2672]: E0527 17:46:49.197420 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:49.199092 kubelet[2672]: I0527 17:46:49.199050 2672 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 17:46:49.199361 kubelet[2672]: I0527 17:46:49.199339 2672 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 17:46:49.199611 kubelet[2672]: I0527 17:46:49.199580 2672 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-xkc97","kube-system/coredns-7c65d6cfc9-54sq5","kube-system/cilium-operator-5d85765b45-f6nch","kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-proxy-7sbp2","kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465","kube-system/cilium-mk7hk"]
May 27 17:46:49.199788 kubelet[2672]: E0527 17:46:49.199763 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-xkc97"
May 27 17:46:49.199931 kubelet[2672]: E0527 17:46:49.199915 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-54sq5"
May 27 17:46:49.200053 kubelet[2672]: E0527 17:46:49.200040 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-f6nch"
May 27 17:46:49.200189 kubelet[2672]: E0527 17:46:49.200170 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465"
May 27 17:46:49.200375 kubelet[2672]: E0527 17:46:49.200278 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7sbp2"
May 27 17:46:49.200375 kubelet[2672]: E0527 17:46:49.200301 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465"
May 27 17:46:49.200375 kubelet[2672]: E0527 17:46:49.200319 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"
May 27 17:46:49.200375 kubelet[2672]: E0527 17:46:49.200337 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-mk7hk"
May 27 17:46:49.200375 kubelet[2672]: I0527 17:46:49.200356 2672 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 27 17:46:49.975967 kubelet[2672]: E0527 17:46:49.975822 2672 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
May 27 17:46:49.976267 kubelet[2672]: E0527 17:46:49.976020 2672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3af00153-92f1-45a9-815e-bceaf80672f0-config-volume podName:3af00153-92f1-45a9-815e-bceaf80672f0 nodeName:}" failed. No retries permitted until 2025-05-27 17:46:50.475983025 +0000 UTC m=+21.840914183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3af00153-92f1-45a9-815e-bceaf80672f0-config-volume") pod "coredns-7c65d6cfc9-xkc97" (UID: "3af00153-92f1-45a9-815e-bceaf80672f0") : failed to sync configmap cache: timed out waiting for the condition
May 27 17:46:50.076933 kubelet[2672]: E0527 17:46:50.076789 2672 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
May 27 17:46:50.076933 kubelet[2672]: E0527 17:46:50.076922 2672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f948ba5a-2b83-4850-ae32-0b05d801e235-config-volume podName:f948ba5a-2b83-4850-ae32-0b05d801e235 nodeName:}" failed. No retries permitted until 2025-05-27 17:46:50.576902699 +0000 UTC m=+21.941833851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f948ba5a-2b83-4850-ae32-0b05d801e235-config-volume") pod "coredns-7c65d6cfc9-54sq5" (UID: "f948ba5a-2b83-4850-ae32-0b05d801e235") : failed to sync configmap cache: timed out waiting for the condition
May 27 17:46:50.192525 kubelet[2672]: E0527 17:46:50.192470 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:50.634254 kubelet[2672]: E0527 17:46:50.634109 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:50.635417 containerd[1527]: time="2025-05-27T17:46:50.635293172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xkc97,Uid:3af00153-92f1-45a9-815e-bceaf80672f0,Namespace:kube-system,Attempt:0,}"
May 27 17:46:50.701919 kubelet[2672]: E0527 17:46:50.701571 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:50.702354 containerd[1527]: time="2025-05-27T17:46:50.702317134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-54sq5,Uid:f948ba5a-2b83-4850-ae32-0b05d801e235,Namespace:kube-system,Attempt:0,}"
May 27 17:46:51.194415 kubelet[2672]: E0527 17:46:51.194376 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:51.483339 systemd-networkd[1461]: cilium_host: Link UP
May 27 17:46:51.486130 systemd-networkd[1461]: cilium_net: Link UP
May 27 17:46:51.486488 systemd-networkd[1461]: cilium_net: Gained carrier
May 27 17:46:51.486736 systemd-networkd[1461]: cilium_host: Gained carrier
May 27 17:46:51.657789 systemd-networkd[1461]: cilium_vxlan: Link UP
May 27 17:46:51.657799 systemd-networkd[1461]: cilium_vxlan: Gained carrier
May 27 17:46:51.691708 systemd-networkd[1461]: cilium_net: Gained IPv6LL
May 27 17:46:52.135956 kernel: NET: Registered PF_ALG protocol family
May 27 17:46:52.467578 systemd-networkd[1461]: cilium_host: Gained IPv6LL
May 27 17:46:52.787094 systemd-networkd[1461]: cilium_vxlan: Gained IPv6LL
May 27 17:46:53.162934 systemd-networkd[1461]: lxc_health: Link UP
May 27 17:46:53.178971 systemd-networkd[1461]: lxc_health: Gained carrier
May 27 17:46:53.693843 systemd-networkd[1461]: lxcd1071e7ca50b: Link UP
May 27 17:46:53.695955 kernel: eth0: renamed from tmp9e39f
May 27 17:46:53.699508 systemd-networkd[1461]: lxcd1071e7ca50b: Gained carrier
May 27 17:46:53.744846 systemd-networkd[1461]: lxc6414fc7d56c5: Link UP
May 27 17:46:53.749924 kernel: eth0: renamed from tmpe51dc
May 27 17:46:53.751497 systemd-networkd[1461]: lxc6414fc7d56c5: Gained carrier
May 27 17:46:54.001542 kubelet[2672]: E0527 17:46:54.001486 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:54.031855 kubelet[2672]: I0527 17:46:54.031501 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mk7hk" podStartSLOduration=12.442591473 podStartE2EDuration="22.031483501s" podCreationTimestamp="2025-05-27 17:46:32 +0000 UTC" firstStartedPulling="2025-05-27 17:46:33.487311392 +0000 UTC m=+5.562756851" lastFinishedPulling="2025-05-27 17:46:43.78671772 +0000 UTC m=+15.151648879" observedRunningTime="2025-05-27 17:46:49.467828595 +0000 UTC m=+20.832759758" watchObservedRunningTime="2025-05-27 17:46:54.031483501 +0000 UTC m=+25.396414662"
May 27 17:46:54.515086 systemd-networkd[1461]: lxc_health: Gained IPv6LL
May 27 17:46:55.475114 systemd-networkd[1461]: lxc6414fc7d56c5: Gained IPv6LL
May 27 17:46:55.603307 systemd-networkd[1461]: lxcd1071e7ca50b: Gained IPv6LL
May 27 17:46:58.756997 containerd[1527]: time="2025-05-27T17:46:58.756634754Z" level=info msg="connecting to shim 9e39fa342bb3c808fbe887c029f9199b0bdd36f6ddfeea3240faed7387674a3c" address="unix:///run/containerd/s/e1b748b8d2fcaad83fc8832cd366b48373338b63673ef89fbd7b034b5e11a295" namespace=k8s.io protocol=ttrpc version=3
May 27 17:46:58.757630 containerd[1527]: time="2025-05-27T17:46:58.756645094Z" level=info msg="connecting to shim e51dc9b27e5d6c800e82113fc8083ee8b33c7a0de03b6f51820601c5e91b400c" address="unix:///run/containerd/s/c9eafc71876b28cfe47b9c5dd8b254873c2047f8a2e0bea9a12c76d8a5c0e6ab" namespace=k8s.io protocol=ttrpc version=3
May 27 17:46:58.824517 systemd[1]: Started cri-containerd-e51dc9b27e5d6c800e82113fc8083ee8b33c7a0de03b6f51820601c5e91b400c.scope - libcontainer container e51dc9b27e5d6c800e82113fc8083ee8b33c7a0de03b6f51820601c5e91b400c.
May 27 17:46:58.833132 systemd[1]: Started cri-containerd-9e39fa342bb3c808fbe887c029f9199b0bdd36f6ddfeea3240faed7387674a3c.scope - libcontainer container 9e39fa342bb3c808fbe887c029f9199b0bdd36f6ddfeea3240faed7387674a3c.
May 27 17:46:58.941225 containerd[1527]: time="2025-05-27T17:46:58.941024553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xkc97,Uid:3af00153-92f1-45a9-815e-bceaf80672f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e39fa342bb3c808fbe887c029f9199b0bdd36f6ddfeea3240faed7387674a3c\""
May 27 17:46:58.942176 kubelet[2672]: E0527 17:46:58.942138 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:58.946020 containerd[1527]: time="2025-05-27T17:46:58.945563164Z" level=info msg="CreateContainer within sandbox \"9e39fa342bb3c808fbe887c029f9199b0bdd36f6ddfeea3240faed7387674a3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 27 17:46:58.956096 containerd[1527]: time="2025-05-27T17:46:58.955801423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-54sq5,Uid:f948ba5a-2b83-4850-ae32-0b05d801e235,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51dc9b27e5d6c800e82113fc8083ee8b33c7a0de03b6f51820601c5e91b400c\""
May 27 17:46:58.958903 kubelet[2672]: E0527 17:46:58.958706 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:58.963150 containerd[1527]: time="2025-05-27T17:46:58.963069706Z" level=info msg="CreateContainer within sandbox \"e51dc9b27e5d6c800e82113fc8083ee8b33c7a0de03b6f51820601c5e91b400c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 27 17:46:58.973335 kubelet[2672]: I0527 17:46:58.972503 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 17:46:58.979404 kubelet[2672]: E0527 17:46:58.979329 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:58.988028 containerd[1527]: time="2025-05-27T17:46:58.987456146Z" level=info msg="Container f9c65ba7c3f5817ae9fef68868c9bed07e82458900067c2f21b1320928be427c: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:58.988831 containerd[1527]: time="2025-05-27T17:46:58.988763328Z" level=info msg="Container f58f66c95df6555e6ad80cfd00b91907791ea73d960351dfa73c08ba07e1d3db: CDI devices from CRI Config.CDIDevices: []"
May 27 17:46:58.999620 containerd[1527]: time="2025-05-27T17:46:58.999552575Z" level=info msg="CreateContainer within sandbox \"e51dc9b27e5d6c800e82113fc8083ee8b33c7a0de03b6f51820601c5e91b400c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9c65ba7c3f5817ae9fef68868c9bed07e82458900067c2f21b1320928be427c\""
May 27 17:46:59.003540 containerd[1527]: time="2025-05-27T17:46:59.002050135Z" level=info msg="StartContainer for \"f9c65ba7c3f5817ae9fef68868c9bed07e82458900067c2f21b1320928be427c\""
May 27 17:46:59.008735 containerd[1527]: time="2025-05-27T17:46:59.008094891Z" level=info msg="connecting to shim f9c65ba7c3f5817ae9fef68868c9bed07e82458900067c2f21b1320928be427c" address="unix:///run/containerd/s/c9eafc71876b28cfe47b9c5dd8b254873c2047f8a2e0bea9a12c76d8a5c0e6ab" protocol=ttrpc version=3
May 27 17:46:59.025601 containerd[1527]: time="2025-05-27T17:46:59.025530541Z" level=info msg="CreateContainer within sandbox \"9e39fa342bb3c808fbe887c029f9199b0bdd36f6ddfeea3240faed7387674a3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f58f66c95df6555e6ad80cfd00b91907791ea73d960351dfa73c08ba07e1d3db\""
May 27 17:46:59.027780 containerd[1527]: time="2025-05-27T17:46:59.027734360Z" level=info msg="StartContainer for \"f58f66c95df6555e6ad80cfd00b91907791ea73d960351dfa73c08ba07e1d3db\""
May 27 17:46:59.031869 containerd[1527]: time="2025-05-27T17:46:59.031764899Z" level=info msg="connecting to shim f58f66c95df6555e6ad80cfd00b91907791ea73d960351dfa73c08ba07e1d3db" address="unix:///run/containerd/s/e1b748b8d2fcaad83fc8832cd366b48373338b63673ef89fbd7b034b5e11a295" protocol=ttrpc version=3
May 27 17:46:59.082541 systemd[1]: Started cri-containerd-f9c65ba7c3f5817ae9fef68868c9bed07e82458900067c2f21b1320928be427c.scope - libcontainer container f9c65ba7c3f5817ae9fef68868c9bed07e82458900067c2f21b1320928be427c.
May 27 17:46:59.120337 systemd[1]: Started cri-containerd-f58f66c95df6555e6ad80cfd00b91907791ea73d960351dfa73c08ba07e1d3db.scope - libcontainer container f58f66c95df6555e6ad80cfd00b91907791ea73d960351dfa73c08ba07e1d3db.
May 27 17:46:59.177642 containerd[1527]: time="2025-05-27T17:46:59.177468676Z" level=info msg="StartContainer for \"f9c65ba7c3f5817ae9fef68868c9bed07e82458900067c2f21b1320928be427c\" returns successfully"
May 27 17:46:59.191529 containerd[1527]: time="2025-05-27T17:46:59.191457560Z" level=info msg="StartContainer for \"f58f66c95df6555e6ad80cfd00b91907791ea73d960351dfa73c08ba07e1d3db\" returns successfully"
May 27 17:46:59.230089 kubelet[2672]: E0527 17:46:59.229317 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:59.239245 kubelet[2672]: E0527 17:46:59.238310 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:59.240035 kubelet[2672]: E0527 17:46:59.239982 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:46:59.286342 kubelet[2672]: I0527 17:46:59.285674 2672 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 17:46:59.286342 kubelet[2672]: I0527 17:46:59.285729 2672 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 17:46:59.286342 kubelet[2672]: I0527 17:46:59.285948 2672 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-f6nch","kube-system/cilium-mk7hk","kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-proxy-7sbp2","kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465","kube-system/coredns-7c65d6cfc9-xkc97","kube-system/coredns-7c65d6cfc9-54sq5"]
May 27 17:46:59.286342 kubelet[2672]: E0527 17:46:59.286014 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-f6nch"
May 27 17:46:59.286342 kubelet[2672]: E0527 17:46:59.286044 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-mk7hk"
May 27 17:46:59.286342 kubelet[2672]: E0527 17:46:59.286058 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465"
May 27 17:46:59.286342 kubelet[2672]: E0527 17:46:59.286089 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7sbp2"
May 27 17:46:59.286342 kubelet[2672]: E0527 17:46:59.286104 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465"
May 27 17:46:59.286342 kubelet[2672]: E0527 17:46:59.286124 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"
May 27 17:46:59.286342 kubelet[2672]: E0527 17:46:59.286135 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-xkc97"
May 27 17:46:59.286342 kubelet[2672]: E0527 17:46:59.286146 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-54sq5"
May 27 17:46:59.286342 kubelet[2672]: I0527 17:46:59.286161 2672 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 27 17:46:59.321281 kubelet[2672]: I0527 17:46:59.321194 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-54sq5" podStartSLOduration=26.321163132 podStartE2EDuration="26.321163132s" podCreationTimestamp="2025-05-27 17:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:59.266388527 +0000 UTC m=+30.631319690" watchObservedRunningTime="2025-05-27 17:46:59.321163132 +0000 UTC m=+30.686094533"
May 27 17:46:59.719028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650917350.mount: Deactivated successfully.
May 27 17:47:00.241927 kubelet[2672]: E0527 17:47:00.241266 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:47:00.241927 kubelet[2672]: E0527 17:47:00.241732 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:47:00.266314 kubelet[2672]: I0527 17:47:00.266225 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xkc97" podStartSLOduration=27.266196709 podStartE2EDuration="27.266196709s" podCreationTimestamp="2025-05-27 17:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:46:59.36568001 +0000 UTC m=+30.730611171" watchObservedRunningTime="2025-05-27 17:47:00.266196709 +0000 UTC m=+31.631127886"
May 27 17:47:01.244519 kubelet[2672]: E0527 17:47:01.244446 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:47:01.245651 kubelet[2672]: E0527 17:47:01.244530 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:47:09.315888 kubelet[2672]: I0527 17:47:09.315815 2672 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 17:47:09.315888 kubelet[2672]: I0527 17:47:09.315897 2672 container_gc.go:88] "Attempting to delete unused containers"
May 27 17:47:09.327234 kubelet[2672]: I0527 17:47:09.327179 2672 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 17:47:09.347169 kubelet[2672]: I0527 17:47:09.347115 2672 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 17:47:09.347632 kubelet[2672]: I0527 17:47:09.347582 2672 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-f6nch","kube-system/coredns-7c65d6cfc9-xkc97","kube-system/coredns-7c65d6cfc9-54sq5","kube-system/cilium-mk7hk","kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-proxy-7sbp2","kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"]
May 27 17:47:09.347869 kubelet[2672]: E0527 17:47:09.347847 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-f6nch"
May 27 17:47:09.348152 kubelet[2672]: E0527 17:47:09.348005 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-xkc97"
May 27 17:47:09.348152
kubelet[2672]: E0527 17:47:09.348032 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-54sq5" May 27 17:47:09.348152 kubelet[2672]: E0527 17:47:09.348052 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-mk7hk" May 27 17:47:09.348152 kubelet[2672]: E0527 17:47:09.348068 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:09.348152 kubelet[2672]: E0527 17:47:09.348082 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7sbp2" May 27 17:47:09.348152 kubelet[2672]: E0527 17:47:09.348098 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:09.348152 kubelet[2672]: E0527 17:47:09.348112 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:09.348152 kubelet[2672]: I0527 17:47:09.348126 2672 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 27 17:47:14.018526 systemd[1]: Started sshd@7-143.198.147.228:22-139.178.68.195:55584.service - OpenSSH per-connection server daemon (139.178.68.195:55584). May 27 17:47:14.123486 sshd[4011]: Accepted publickey for core from 139.178.68.195 port 55584 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:14.125394 sshd-session[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:14.131858 systemd-logind[1499]: New session 8 of user core. May 27 17:47:14.136309 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 27 17:47:14.787107 sshd[4014]: Connection closed by 139.178.68.195 port 55584 May 27 17:47:14.788389 sshd-session[4011]: pam_unix(sshd:session): session closed for user core May 27 17:47:14.794300 systemd-logind[1499]: Session 8 logged out. Waiting for processes to exit. May 27 17:47:14.794345 systemd[1]: sshd@7-143.198.147.228:22-139.178.68.195:55584.service: Deactivated successfully. May 27 17:47:14.796707 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:47:14.801302 systemd-logind[1499]: Removed session 8. May 27 17:47:19.379924 kubelet[2672]: I0527 17:47:19.379742 2672 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:47:19.379924 kubelet[2672]: I0527 17:47:19.379864 2672 container_gc.go:88] "Attempting to delete unused containers" May 27 17:47:19.384214 kubelet[2672]: I0527 17:47:19.384179 2672 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:47:19.406693 kubelet[2672]: I0527 17:47:19.406612 2672 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:47:19.407908 kubelet[2672]: I0527 17:47:19.407707 2672 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-f6nch","kube-system/coredns-7c65d6cfc9-xkc97","kube-system/coredns-7c65d6cfc9-54sq5","kube-system/cilium-mk7hk","kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-proxy-7sbp2","kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"] May 27 17:47:19.408376 kubelet[2672]: E0527 17:47:19.408238 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-f6nch" May 27 17:47:19.408376 kubelet[2672]: E0527 17:47:19.408276 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-xkc97" May 27 
17:47:19.408376 kubelet[2672]: E0527 17:47:19.408310 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-54sq5" May 27 17:47:19.408376 kubelet[2672]: E0527 17:47:19.408332 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-mk7hk" May 27 17:47:19.408376 kubelet[2672]: E0527 17:47:19.408346 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:19.408868 kubelet[2672]: E0527 17:47:19.408649 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7sbp2" May 27 17:47:19.408868 kubelet[2672]: E0527 17:47:19.408698 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:19.408868 kubelet[2672]: E0527 17:47:19.408731 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:19.408868 kubelet[2672]: I0527 17:47:19.408748 2672 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 27 17:47:19.810520 systemd[1]: Started sshd@8-143.198.147.228:22-139.178.68.195:55596.service - OpenSSH per-connection server daemon (139.178.68.195:55596). May 27 17:47:19.899006 sshd[4028]: Accepted publickey for core from 139.178.68.195 port 55596 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:19.901123 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:19.909605 systemd-logind[1499]: New session 9 of user core. May 27 17:47:19.917413 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 27 17:47:20.110992 sshd[4030]: Connection closed by 139.178.68.195 port 55596 May 27 17:47:20.111632 sshd-session[4028]: pam_unix(sshd:session): session closed for user core May 27 17:47:20.118166 systemd[1]: sshd@8-143.198.147.228:22-139.178.68.195:55596.service: Deactivated successfully. May 27 17:47:20.121947 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:47:20.123494 systemd-logind[1499]: Session 9 logged out. Waiting for processes to exit. May 27 17:47:20.126519 systemd-logind[1499]: Removed session 9. May 27 17:47:25.126742 systemd[1]: Started sshd@9-143.198.147.228:22-139.178.68.195:60384.service - OpenSSH per-connection server daemon (139.178.68.195:60384). May 27 17:47:25.189349 sshd[4043]: Accepted publickey for core from 139.178.68.195 port 60384 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:25.191460 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:25.199204 systemd-logind[1499]: New session 10 of user core. May 27 17:47:25.215208 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 17:47:25.368923 sshd[4045]: Connection closed by 139.178.68.195 port 60384 May 27 17:47:25.369445 sshd-session[4043]: pam_unix(sshd:session): session closed for user core May 27 17:47:25.375105 systemd[1]: sshd@9-143.198.147.228:22-139.178.68.195:60384.service: Deactivated successfully. May 27 17:47:25.378270 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:47:25.381159 systemd-logind[1499]: Session 10 logged out. Waiting for processes to exit. May 27 17:47:25.384498 systemd-logind[1499]: Removed session 10. 
May 27 17:47:29.436693 kubelet[2672]: I0527 17:47:29.435127 2672 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:47:29.436693 kubelet[2672]: I0527 17:47:29.435406 2672 container_gc.go:88] "Attempting to delete unused containers" May 27 17:47:29.437621 kubelet[2672]: I0527 17:47:29.437592 2672 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:47:29.456492 kubelet[2672]: I0527 17:47:29.456439 2672 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:47:29.456824 kubelet[2672]: I0527 17:47:29.456792 2672 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-f6nch","kube-system/coredns-7c65d6cfc9-xkc97","kube-system/coredns-7c65d6cfc9-54sq5","kube-system/cilium-mk7hk","kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-proxy-7sbp2","kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"] May 27 17:47:29.456926 kubelet[2672]: E0527 17:47:29.456907 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-f6nch" May 27 17:47:29.456988 kubelet[2672]: E0527 17:47:29.456939 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-xkc97" May 27 17:47:29.456988 kubelet[2672]: E0527 17:47:29.456953 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-54sq5" May 27 17:47:29.457060 kubelet[2672]: E0527 17:47:29.456993 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-mk7hk" May 27 17:47:29.457060 kubelet[2672]: E0527 17:47:29.457011 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:29.457060 kubelet[2672]: E0527 17:47:29.457024 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7sbp2" May 27 17:47:29.457060 kubelet[2672]: E0527 17:47:29.457039 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:29.457180 kubelet[2672]: E0527 17:47:29.457074 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:29.457180 kubelet[2672]: I0527 17:47:29.457094 2672 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 27 17:47:30.387867 systemd[1]: Started sshd@10-143.198.147.228:22-139.178.68.195:60388.service - OpenSSH per-connection server daemon (139.178.68.195:60388). May 27 17:47:30.461386 sshd[4061]: Accepted publickey for core from 139.178.68.195 port 60388 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:30.463809 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:30.471979 systemd-logind[1499]: New session 11 of user core. May 27 17:47:30.482842 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:47:30.645032 sshd[4063]: Connection closed by 139.178.68.195 port 60388 May 27 17:47:30.644866 sshd-session[4061]: pam_unix(sshd:session): session closed for user core May 27 17:47:30.659474 systemd[1]: sshd@10-143.198.147.228:22-139.178.68.195:60388.service: Deactivated successfully. May 27 17:47:30.663638 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:47:30.666530 systemd-logind[1499]: Session 11 logged out. Waiting for processes to exit. 
May 27 17:47:30.670627 systemd[1]: Started sshd@11-143.198.147.228:22-139.178.68.195:60394.service - OpenSSH per-connection server daemon (139.178.68.195:60394). May 27 17:47:30.672212 systemd-logind[1499]: Removed session 11. May 27 17:47:30.734155 sshd[4076]: Accepted publickey for core from 139.178.68.195 port 60394 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:30.736805 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:30.744071 systemd-logind[1499]: New session 12 of user core. May 27 17:47:30.747191 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:47:30.958151 sshd[4078]: Connection closed by 139.178.68.195 port 60394 May 27 17:47:30.960065 sshd-session[4076]: pam_unix(sshd:session): session closed for user core May 27 17:47:30.973223 systemd[1]: sshd@11-143.198.147.228:22-139.178.68.195:60394.service: Deactivated successfully. May 27 17:47:30.977500 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:47:30.979271 systemd-logind[1499]: Session 12 logged out. Waiting for processes to exit. May 27 17:47:30.988414 systemd[1]: Started sshd@12-143.198.147.228:22-139.178.68.195:60408.service - OpenSSH per-connection server daemon (139.178.68.195:60408). May 27 17:47:30.991697 systemd-logind[1499]: Removed session 12. May 27 17:47:31.070868 sshd[4088]: Accepted publickey for core from 139.178.68.195 port 60408 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:31.073195 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:31.081164 systemd-logind[1499]: New session 13 of user core. May 27 17:47:31.094294 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 27 17:47:31.248673 sshd[4091]: Connection closed by 139.178.68.195 port 60408 May 27 17:47:31.250170 sshd-session[4088]: pam_unix(sshd:session): session closed for user core May 27 17:47:31.255436 systemd[1]: sshd@12-143.198.147.228:22-139.178.68.195:60408.service: Deactivated successfully. May 27 17:47:31.258123 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:47:31.260016 systemd-logind[1499]: Session 13 logged out. Waiting for processes to exit. May 27 17:47:31.262339 systemd-logind[1499]: Removed session 13. May 27 17:47:34.449601 systemd[1]: Started sshd@13-143.198.147.228:22-203.252.10.3:16350.service - OpenSSH per-connection server daemon (203.252.10.3:16350). May 27 17:47:36.274250 systemd[1]: Started sshd@14-143.198.147.228:22-139.178.68.195:39168.service - OpenSSH per-connection server daemon (139.178.68.195:39168). May 27 17:47:36.346229 sshd[4108]: Accepted publickey for core from 139.178.68.195 port 39168 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:36.350579 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:36.360942 systemd-logind[1499]: New session 14 of user core. May 27 17:47:36.367224 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:47:36.514873 sshd[4110]: Connection closed by 139.178.68.195 port 39168 May 27 17:47:36.515946 sshd-session[4108]: pam_unix(sshd:session): session closed for user core May 27 17:47:36.522017 systemd-logind[1499]: Session 14 logged out. Waiting for processes to exit. May 27 17:47:36.522948 systemd[1]: sshd@14-143.198.147.228:22-139.178.68.195:39168.service: Deactivated successfully. May 27 17:47:36.526184 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:47:36.531706 systemd-logind[1499]: Removed session 14. 
May 27 17:47:36.836463 sshd[4103]: Invalid user spam from 203.252.10.3 port 16350 May 27 17:47:37.249816 sshd-session[4120]: pam_faillock(sshd:auth): User unknown May 27 17:47:37.255125 sshd[4103]: Postponed keyboard-interactive for invalid user spam from 203.252.10.3 port 16350 ssh2 [preauth] May 27 17:47:37.767601 sshd-session[4120]: pam_unix(sshd:auth): check pass; user unknown May 27 17:47:37.767654 sshd-session[4120]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=203.252.10.3 May 27 17:47:37.768669 sshd-session[4120]: pam_faillock(sshd:auth): User unknown May 27 17:47:39.477273 kubelet[2672]: I0527 17:47:39.477217 2672 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:47:39.477273 kubelet[2672]: I0527 17:47:39.477273 2672 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:47:39.477768 kubelet[2672]: I0527 17:47:39.477626 2672 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-f6nch","kube-system/coredns-7c65d6cfc9-xkc97","kube-system/coredns-7c65d6cfc9-54sq5","kube-system/cilium-mk7hk","kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-proxy-7sbp2","kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"] May 27 17:47:39.477768 kubelet[2672]: E0527 17:47:39.477676 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-f6nch" May 27 17:47:39.477768 kubelet[2672]: E0527 17:47:39.477690 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-xkc97" May 27 17:47:39.477768 kubelet[2672]: E0527 17:47:39.477700 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-54sq5" May 27 
17:47:39.477768 kubelet[2672]: E0527 17:47:39.477710 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-mk7hk" May 27 17:47:39.477768 kubelet[2672]: E0527 17:47:39.477719 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:39.477768 kubelet[2672]: E0527 17:47:39.477729 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7sbp2" May 27 17:47:39.477768 kubelet[2672]: E0527 17:47:39.477737 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:39.477768 kubelet[2672]: E0527 17:47:39.477747 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:39.477768 kubelet[2672]: I0527 17:47:39.477758 2672 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 27 17:47:39.906731 sshd[4103]: PAM: Permission denied for illegal user spam from 203.252.10.3 May 27 17:47:39.907451 sshd[4103]: Failed keyboard-interactive/pam for invalid user spam from 203.252.10.3 port 16350 ssh2 May 27 17:47:40.334954 sshd[4103]: Connection closed by invalid user spam 203.252.10.3 port 16350 [preauth] May 27 17:47:40.336076 systemd[1]: sshd@13-143.198.147.228:22-203.252.10.3:16350.service: Deactivated successfully. May 27 17:47:41.537613 systemd[1]: Started sshd@15-143.198.147.228:22-139.178.68.195:39172.service - OpenSSH per-connection server daemon (139.178.68.195:39172). 
May 27 17:47:41.618801 sshd[4124]: Accepted publickey for core from 139.178.68.195 port 39172 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:41.621419 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:41.629991 systemd-logind[1499]: New session 15 of user core. May 27 17:47:41.635244 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 17:47:41.777780 sshd[4126]: Connection closed by 139.178.68.195 port 39172 May 27 17:47:41.778389 sshd-session[4124]: pam_unix(sshd:session): session closed for user core May 27 17:47:41.790515 systemd[1]: sshd@15-143.198.147.228:22-139.178.68.195:39172.service: Deactivated successfully. May 27 17:47:41.793723 systemd[1]: session-15.scope: Deactivated successfully. May 27 17:47:41.794987 systemd-logind[1499]: Session 15 logged out. Waiting for processes to exit. May 27 17:47:41.800730 systemd[1]: Started sshd@16-143.198.147.228:22-139.178.68.195:39174.service - OpenSSH per-connection server daemon (139.178.68.195:39174). May 27 17:47:41.802270 systemd-logind[1499]: Removed session 15. May 27 17:47:41.867638 sshd[4137]: Accepted publickey for core from 139.178.68.195 port 39174 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:41.869948 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:41.878644 systemd-logind[1499]: New session 16 of user core. May 27 17:47:41.886271 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 17:47:42.233738 sshd[4139]: Connection closed by 139.178.68.195 port 39174 May 27 17:47:42.234458 sshd-session[4137]: pam_unix(sshd:session): session closed for user core May 27 17:47:42.252402 systemd[1]: sshd@16-143.198.147.228:22-139.178.68.195:39174.service: Deactivated successfully. May 27 17:47:42.255430 systemd[1]: session-16.scope: Deactivated successfully. 
May 27 17:47:42.258780 systemd-logind[1499]: Session 16 logged out. Waiting for processes to exit. May 27 17:47:42.262733 systemd[1]: Started sshd@17-143.198.147.228:22-139.178.68.195:39176.service - OpenSSH per-connection server daemon (139.178.68.195:39176). May 27 17:47:42.264019 systemd-logind[1499]: Removed session 16. May 27 17:47:42.362123 sshd[4149]: Accepted publickey for core from 139.178.68.195 port 39176 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:42.364988 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:42.374132 systemd-logind[1499]: New session 17 of user core. May 27 17:47:42.383232 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 17:47:44.316122 sshd[4151]: Connection closed by 139.178.68.195 port 39176 May 27 17:47:44.317402 sshd-session[4149]: pam_unix(sshd:session): session closed for user core May 27 17:47:44.340429 systemd[1]: sshd@17-143.198.147.228:22-139.178.68.195:39176.service: Deactivated successfully. May 27 17:47:44.344681 systemd[1]: session-17.scope: Deactivated successfully. May 27 17:47:44.347641 systemd-logind[1499]: Session 17 logged out. Waiting for processes to exit. May 27 17:47:44.358235 systemd[1]: Started sshd@18-143.198.147.228:22-139.178.68.195:45844.service - OpenSSH per-connection server daemon (139.178.68.195:45844). May 27 17:47:44.361903 systemd-logind[1499]: Removed session 17. May 27 17:47:44.467662 sshd[4167]: Accepted publickey for core from 139.178.68.195 port 45844 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:44.468586 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:44.476902 systemd-logind[1499]: New session 18 of user core. May 27 17:47:44.486288 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 27 17:47:44.832434 sshd[4170]: Connection closed by 139.178.68.195 port 45844 May 27 17:47:44.834029 sshd-session[4167]: pam_unix(sshd:session): session closed for user core May 27 17:47:44.848091 systemd[1]: sshd@18-143.198.147.228:22-139.178.68.195:45844.service: Deactivated successfully. May 27 17:47:44.851662 systemd[1]: session-18.scope: Deactivated successfully. May 27 17:47:44.854135 systemd-logind[1499]: Session 18 logged out. Waiting for processes to exit. May 27 17:47:44.862660 systemd[1]: Started sshd@19-143.198.147.228:22-139.178.68.195:45848.service - OpenSSH per-connection server daemon (139.178.68.195:45848). May 27 17:47:44.867792 systemd-logind[1499]: Removed session 18. May 27 17:47:44.939139 sshd[4179]: Accepted publickey for core from 139.178.68.195 port 45848 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:44.941091 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:44.948835 systemd-logind[1499]: New session 19 of user core. May 27 17:47:44.955301 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 17:47:45.108345 sshd[4181]: Connection closed by 139.178.68.195 port 45848 May 27 17:47:45.109559 sshd-session[4179]: pam_unix(sshd:session): session closed for user core May 27 17:47:45.116022 systemd[1]: sshd@19-143.198.147.228:22-139.178.68.195:45848.service: Deactivated successfully. May 27 17:47:45.119602 systemd[1]: session-19.scope: Deactivated successfully. May 27 17:47:45.121134 systemd-logind[1499]: Session 19 logged out. Waiting for processes to exit. May 27 17:47:45.124427 systemd-logind[1499]: Removed session 19. 
May 27 17:47:49.500540 kubelet[2672]: I0527 17:47:49.500477 2672 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:47:49.502184 kubelet[2672]: I0527 17:47:49.501536 2672 container_gc.go:88] "Attempting to delete unused containers" May 27 17:47:49.506086 kubelet[2672]: I0527 17:47:49.506050 2672 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:47:49.528910 kubelet[2672]: I0527 17:47:49.528844 2672 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:47:49.529338 kubelet[2672]: I0527 17:47:49.529301 2672 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-f6nch","kube-system/coredns-7c65d6cfc9-xkc97","kube-system/coredns-7c65d6cfc9-54sq5","kube-system/cilium-mk7hk","kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-proxy-7sbp2","kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"] May 27 17:47:49.529512 kubelet[2672]: E0527 17:47:49.529494 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-f6nch" May 27 17:47:49.529657 kubelet[2672]: E0527 17:47:49.529641 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-xkc97" May 27 17:47:49.529918 kubelet[2672]: E0527 17:47:49.529772 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-54sq5" May 27 17:47:49.529918 kubelet[2672]: E0527 17:47:49.529796 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-mk7hk" May 27 17:47:49.529918 kubelet[2672]: E0527 17:47:49.529816 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:49.529918 kubelet[2672]: E0527 17:47:49.529834 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7sbp2" May 27 17:47:49.529918 kubelet[2672]: E0527 17:47:49.529849 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:49.529918 kubelet[2672]: E0527 17:47:49.529863 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:49.529918 kubelet[2672]: I0527 17:47:49.529897 2672 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 27 17:47:50.127278 systemd[1]: Started sshd@20-143.198.147.228:22-139.178.68.195:45854.service - OpenSSH per-connection server daemon (139.178.68.195:45854). May 27 17:47:50.192607 sshd[4196]: Accepted publickey for core from 139.178.68.195 port 45854 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:50.194957 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:50.203440 systemd-logind[1499]: New session 20 of user core. May 27 17:47:50.210225 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 17:47:50.361453 sshd[4198]: Connection closed by 139.178.68.195 port 45854 May 27 17:47:50.362717 sshd-session[4196]: pam_unix(sshd:session): session closed for user core May 27 17:47:50.368496 systemd[1]: sshd@20-143.198.147.228:22-139.178.68.195:45854.service: Deactivated successfully. May 27 17:47:50.372537 systemd[1]: session-20.scope: Deactivated successfully. May 27 17:47:50.374590 systemd-logind[1499]: Session 20 logged out. Waiting for processes to exit. May 27 17:47:50.378384 systemd-logind[1499]: Removed session 20. 
May 27 17:47:50.881916 kubelet[2672]: E0527 17:47:50.880748 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:47:50.881916 kubelet[2672]: E0527 17:47:50.881748 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:47:50.882790 kubelet[2672]: E0527 17:47:50.882761 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:47:55.379858 systemd[1]: Started sshd@21-143.198.147.228:22-139.178.68.195:40708.service - OpenSSH per-connection server daemon (139.178.68.195:40708). May 27 17:47:55.441771 sshd[4210]: Accepted publickey for core from 139.178.68.195 port 40708 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:47:55.443540 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:47:55.451520 systemd-logind[1499]: New session 21 of user core. May 27 17:47:55.456196 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 17:47:55.596394 sshd[4212]: Connection closed by 139.178.68.195 port 40708 May 27 17:47:55.597668 sshd-session[4210]: pam_unix(sshd:session): session closed for user core May 27 17:47:55.605511 systemd[1]: sshd@21-143.198.147.228:22-139.178.68.195:40708.service: Deactivated successfully. May 27 17:47:55.610346 systemd[1]: session-21.scope: Deactivated successfully. May 27 17:47:55.611984 systemd-logind[1499]: Session 21 logged out. Waiting for processes to exit. May 27 17:47:55.615742 systemd-logind[1499]: Removed session 21. 
May 27 17:47:59.560932 kubelet[2672]: I0527 17:47:59.560869 2672 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 17:47:59.562560 kubelet[2672]: I0527 17:47:59.560953 2672 container_gc.go:88] "Attempting to delete unused containers" May 27 17:47:59.566064 kubelet[2672]: I0527 17:47:59.566008 2672 image_gc_manager.go:431] "Attempting to delete unused images" May 27 17:47:59.586234 kubelet[2672]: I0527 17:47:59.586180 2672 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 17:47:59.586532 kubelet[2672]: I0527 17:47:59.586482 2672 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-f6nch","kube-system/coredns-7c65d6cfc9-54sq5","kube-system/coredns-7c65d6cfc9-xkc97","kube-system/cilium-mk7hk","kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-proxy-7sbp2","kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"] May 27 17:47:59.586612 kubelet[2672]: E0527 17:47:59.586554 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-f6nch" May 27 17:47:59.586612 kubelet[2672]: E0527 17:47:59.586567 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-54sq5" May 27 17:47:59.586612 kubelet[2672]: E0527 17:47:59.586576 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-xkc97" May 27 17:47:59.586612 kubelet[2672]: E0527 17:47:59.586586 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-mk7hk" May 27 17:47:59.586612 kubelet[2672]: E0527 17:47:59.586606 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:59.586759 kubelet[2672]: E0527 17:47:59.586618 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7sbp2" May 27 17:47:59.586759 kubelet[2672]: E0527 17:47:59.586628 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:59.586759 kubelet[2672]: E0527 17:47:59.586636 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465" May 27 17:47:59.586759 kubelet[2672]: I0527 17:47:59.586646 2672 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 27 17:47:59.881526 kubelet[2672]: E0527 17:47:59.881287 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:48:00.621710 systemd[1]: Started sshd@22-143.198.147.228:22-139.178.68.195:40716.service - OpenSSH per-connection server daemon (139.178.68.195:40716). May 27 17:48:00.690262 sshd[4224]: Accepted publickey for core from 139.178.68.195 port 40716 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:48:00.692266 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:00.699960 systemd-logind[1499]: New session 22 of user core. May 27 17:48:00.709348 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 17:48:00.859393 sshd[4226]: Connection closed by 139.178.68.195 port 40716 May 27 17:48:00.859215 sshd-session[4224]: pam_unix(sshd:session): session closed for user core May 27 17:48:00.873905 systemd[1]: sshd@22-143.198.147.228:22-139.178.68.195:40716.service: Deactivated successfully. 
May 27 17:48:00.878699 systemd[1]: session-22.scope: Deactivated successfully. May 27 17:48:00.881575 systemd-logind[1499]: Session 22 logged out. Waiting for processes to exit. May 27 17:48:00.888969 systemd[1]: Started sshd@23-143.198.147.228:22-139.178.68.195:40718.service - OpenSSH per-connection server daemon (139.178.68.195:40718). May 27 17:48:00.890760 systemd-logind[1499]: Removed session 22. May 27 17:48:00.969397 sshd[4238]: Accepted publickey for core from 139.178.68.195 port 40718 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:48:00.971563 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:00.978990 systemd-logind[1499]: New session 23 of user core. May 27 17:48:00.989557 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 17:48:01.882778 kubelet[2672]: E0527 17:48:01.881847 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:48:02.474271 containerd[1527]: time="2025-05-27T17:48:02.474198758Z" level=info msg="StopContainer for \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" with timeout 30 (s)" May 27 17:48:02.486254 containerd[1527]: time="2025-05-27T17:48:02.486118882Z" level=info msg="Stop container \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" with signal terminated" May 27 17:48:02.510595 containerd[1527]: time="2025-05-27T17:48:02.510282329Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:48:02.516596 systemd[1]: cri-containerd-c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed.scope: Deactivated successfully. 
May 27 17:48:02.528318 containerd[1527]: time="2025-05-27T17:48:02.528149304Z" level=info msg="received exit event container_id:\"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" id:\"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" pid:3295 exited_at:{seconds:1748368082 nanos:526868036}" May 27 17:48:02.528745 containerd[1527]: time="2025-05-27T17:48:02.528688811Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" id:\"d1caf9e47ce5e433dc31e5fa8996ddc1306523b82428a5a7787d9d41b4f2eb21\" pid:4259 exited_at:{seconds:1748368082 nanos:527340291}" May 27 17:48:02.528956 containerd[1527]: time="2025-05-27T17:48:02.528770190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" id:\"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" pid:3295 exited_at:{seconds:1748368082 nanos:526868036}" May 27 17:48:02.533583 containerd[1527]: time="2025-05-27T17:48:02.533446539Z" level=info msg="StopContainer for \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" with timeout 2 (s)" May 27 17:48:02.536719 containerd[1527]: time="2025-05-27T17:48:02.536660864Z" level=info msg="Stop container \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" with signal terminated" May 27 17:48:02.570227 systemd-networkd[1461]: lxc_health: Link DOWN May 27 17:48:02.570707 systemd-networkd[1461]: lxc_health: Lost carrier May 27 17:48:02.599326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed-rootfs.mount: Deactivated successfully. May 27 17:48:02.606355 systemd[1]: cri-containerd-30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c.scope: Deactivated successfully. 
May 27 17:48:02.606840 systemd[1]: cri-containerd-30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c.scope: Consumed 9.082s CPU time, 187.5M memory peak, 69.1M read from disk, 13.3M written to disk. May 27 17:48:02.610052 containerd[1527]: time="2025-05-27T17:48:02.609842614Z" level=info msg="received exit event container_id:\"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" id:\"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" pid:3327 exited_at:{seconds:1748368082 nanos:609452082}" May 27 17:48:02.610933 containerd[1527]: time="2025-05-27T17:48:02.610803717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" id:\"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" pid:3327 exited_at:{seconds:1748368082 nanos:609452082}" May 27 17:48:02.624516 containerd[1527]: time="2025-05-27T17:48:02.624437881Z" level=info msg="StopContainer for \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" returns successfully" May 27 17:48:02.627078 containerd[1527]: time="2025-05-27T17:48:02.626878035Z" level=info msg="StopPodSandbox for \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\"" May 27 17:48:02.627078 containerd[1527]: time="2025-05-27T17:48:02.627023555Z" level=info msg="Container to stop \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:48:02.654700 systemd[1]: cri-containerd-9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e.scope: Deactivated successfully. 
May 27 17:48:02.658471 containerd[1527]: time="2025-05-27T17:48:02.658222989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" id:\"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" pid:2871 exit_status:137 exited_at:{seconds:1748368082 nanos:651163709}" May 27 17:48:02.684693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c-rootfs.mount: Deactivated successfully. May 27 17:48:02.720971 containerd[1527]: time="2025-05-27T17:48:02.720282647Z" level=info msg="StopContainer for \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" returns successfully" May 27 17:48:02.725478 containerd[1527]: time="2025-05-27T17:48:02.725236820Z" level=info msg="StopPodSandbox for \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\"" May 27 17:48:02.726483 containerd[1527]: time="2025-05-27T17:48:02.726406235Z" level=info msg="Container to stop \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:48:02.727105 containerd[1527]: time="2025-05-27T17:48:02.727013352Z" level=info msg="Container to stop \"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:48:02.727286 containerd[1527]: time="2025-05-27T17:48:02.727046158Z" level=info msg="Container to stop \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:48:02.727286 containerd[1527]: time="2025-05-27T17:48:02.727240563Z" level=info msg="Container to stop \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:48:02.727286 containerd[1527]: 
time="2025-05-27T17:48:02.727259633Z" level=info msg="Container to stop \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:48:02.765285 systemd[1]: cri-containerd-5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708.scope: Deactivated successfully. May 27 17:48:02.787400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e-rootfs.mount: Deactivated successfully. May 27 17:48:02.805369 containerd[1527]: time="2025-05-27T17:48:02.805270359Z" level=info msg="shim disconnected" id=9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e namespace=k8s.io May 27 17:48:02.805369 containerd[1527]: time="2025-05-27T17:48:02.805337446Z" level=warning msg="cleaning up after shim disconnected" id=9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e namespace=k8s.io May 27 17:48:02.819926 containerd[1527]: time="2025-05-27T17:48:02.805348908Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:48:02.877268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708-rootfs.mount: Deactivated successfully. 
May 27 17:48:02.891919 containerd[1527]: time="2025-05-27T17:48:02.891555643Z" level=info msg="shim disconnected" id=5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708 namespace=k8s.io May 27 17:48:02.891919 containerd[1527]: time="2025-05-27T17:48:02.891606700Z" level=warning msg="cleaning up after shim disconnected" id=5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708 namespace=k8s.io May 27 17:48:02.891919 containerd[1527]: time="2025-05-27T17:48:02.891618862Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:48:02.935913 containerd[1527]: time="2025-05-27T17:48:02.934623834Z" level=info msg="received exit event sandbox_id:\"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" exit_status:137 exited_at:{seconds:1748368082 nanos:774716721}" May 27 17:48:02.937955 containerd[1527]: time="2025-05-27T17:48:02.937613103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" id:\"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" pid:2823 exit_status:137 exited_at:{seconds:1748368082 nanos:774716721}" May 27 17:48:02.943117 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e-shm.mount: Deactivated successfully. 
May 27 17:48:02.963484 containerd[1527]: time="2025-05-27T17:48:02.962841317Z" level=info msg="received exit event sandbox_id:\"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" exit_status:137 exited_at:{seconds:1748368082 nanos:651163709}" May 27 17:48:02.964607 containerd[1527]: time="2025-05-27T17:48:02.963569082Z" level=info msg="TearDown network for sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" successfully" May 27 17:48:02.964607 containerd[1527]: time="2025-05-27T17:48:02.963610469Z" level=info msg="StopPodSandbox for \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" returns successfully" May 27 17:48:02.964607 containerd[1527]: time="2025-05-27T17:48:02.963955315Z" level=info msg="TearDown network for sandbox \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" successfully" May 27 17:48:02.964607 containerd[1527]: time="2025-05-27T17:48:02.963983518Z" level=info msg="StopPodSandbox for \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" returns successfully" May 27 17:48:03.036292 kubelet[2672]: I0527 17:48:03.036221 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-hostproc\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037443 kubelet[2672]: I0527 17:48:03.036439 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f562065d-86b8-4757-9ef9-7aa958d54c7d-hubble-tls\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037443 kubelet[2672]: I0527 17:48:03.036478 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-etc-cni-netd\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037443 kubelet[2672]: I0527 17:48:03.036503 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfkm7\" (UniqueName: \"kubernetes.io/projected/f562065d-86b8-4757-9ef9-7aa958d54c7d-kube-api-access-dfkm7\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037443 kubelet[2672]: I0527 17:48:03.036521 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-lib-modules\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037443 kubelet[2672]: I0527 17:48:03.036536 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-xtables-lock\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037443 kubelet[2672]: I0527 17:48:03.036551 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-cgroup\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037706 kubelet[2672]: I0527 17:48:03.036583 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhkh6\" (UniqueName: \"kubernetes.io/projected/8d029677-a101-412c-b857-6a30c5d7aaf0-kube-api-access-mhkh6\") pod \"8d029677-a101-412c-b857-6a30c5d7aaf0\" (UID: \"8d029677-a101-412c-b857-6a30c5d7aaf0\") " May 27 17:48:03.037706 kubelet[2672]: I0527 
17:48:03.036598 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cni-path\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037706 kubelet[2672]: I0527 17:48:03.036616 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-config-path\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037706 kubelet[2672]: I0527 17:48:03.036631 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-host-proc-sys-net\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037706 kubelet[2672]: I0527 17:48:03.036649 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-host-proc-sys-kernel\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037706 kubelet[2672]: I0527 17:48:03.036666 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-bpf-maps\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037862 kubelet[2672]: I0527 17:48:03.036680 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-run\") pod 
\"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.037862 kubelet[2672]: I0527 17:48:03.036698 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d029677-a101-412c-b857-6a30c5d7aaf0-cilium-config-path\") pod \"8d029677-a101-412c-b857-6a30c5d7aaf0\" (UID: \"8d029677-a101-412c-b857-6a30c5d7aaf0\") " May 27 17:48:03.037862 kubelet[2672]: I0527 17:48:03.036714 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f562065d-86b8-4757-9ef9-7aa958d54c7d-clustermesh-secrets\") pod \"f562065d-86b8-4757-9ef9-7aa958d54c7d\" (UID: \"f562065d-86b8-4757-9ef9-7aa958d54c7d\") " May 27 17:48:03.041259 kubelet[2672]: I0527 17:48:03.040958 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-hostproc" (OuterVolumeSpecName: "hostproc") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:48:03.041954 kubelet[2672]: I0527 17:48:03.041379 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:48:03.041954 kubelet[2672]: I0527 17:48:03.041719 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f562065d-86b8-4757-9ef9-7aa958d54c7d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). 
InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 17:48:03.042304 kubelet[2672]: I0527 17:48:03.042271 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:48:03.042388 kubelet[2672]: I0527 17:48:03.042317 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:48:03.042388 kubelet[2672]: I0527 17:48:03.042349 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:48:03.042388 kubelet[2672]: I0527 17:48:03.042369 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:48:03.044029 kubelet[2672]: I0527 17:48:03.042867 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:48:03.044233 kubelet[2672]: I0527 17:48:03.044213 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:48:03.044345 kubelet[2672]: I0527 17:48:03.044329 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:48:03.044417 kubelet[2672]: I0527 17:48:03.044349 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cni-path" (OuterVolumeSpecName: "cni-path") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 17:48:03.047354 kubelet[2672]: I0527 17:48:03.047255 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d029677-a101-412c-b857-6a30c5d7aaf0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8d029677-a101-412c-b857-6a30c5d7aaf0" (UID: "8d029677-a101-412c-b857-6a30c5d7aaf0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 17:48:03.047492 kubelet[2672]: I0527 17:48:03.047451 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f562065d-86b8-4757-9ef9-7aa958d54c7d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 27 17:48:03.049263 kubelet[2672]: I0527 17:48:03.049165 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 17:48:03.051371 kubelet[2672]: I0527 17:48:03.051308 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f562065d-86b8-4757-9ef9-7aa958d54c7d-kube-api-access-dfkm7" (OuterVolumeSpecName: "kube-api-access-dfkm7") pod "f562065d-86b8-4757-9ef9-7aa958d54c7d" (UID: "f562065d-86b8-4757-9ef9-7aa958d54c7d"). InnerVolumeSpecName "kube-api-access-dfkm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 17:48:03.052303 kubelet[2672]: I0527 17:48:03.052236 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d029677-a101-412c-b857-6a30c5d7aaf0-kube-api-access-mhkh6" (OuterVolumeSpecName: "kube-api-access-mhkh6") pod "8d029677-a101-412c-b857-6a30c5d7aaf0" (UID: "8d029677-a101-412c-b857-6a30c5d7aaf0"). InnerVolumeSpecName "kube-api-access-mhkh6". PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 17:48:03.137781 kubelet[2672]: I0527 17:48:03.137699 2672 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-etc-cni-netd\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.137781 kubelet[2672]: I0527 17:48:03.137751 2672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfkm7\" (UniqueName: \"kubernetes.io/projected/f562065d-86b8-4757-9ef9-7aa958d54c7d-kube-api-access-dfkm7\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.137781 kubelet[2672]: I0527 17:48:03.137770 2672 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f562065d-86b8-4757-9ef9-7aa958d54c7d-hubble-tls\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.137781 kubelet[2672]: I0527 17:48:03.137785 2672 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-cgroup\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.137781 kubelet[2672]: I0527 17:48:03.137800 2672 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-lib-modules\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.140578 kubelet[2672]: I0527 17:48:03.137814 
2672 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-xtables-lock\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.140578 kubelet[2672]: I0527 17:48:03.137828 2672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhkh6\" (UniqueName: \"kubernetes.io/projected/8d029677-a101-412c-b857-6a30c5d7aaf0-kube-api-access-mhkh6\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.140578 kubelet[2672]: I0527 17:48:03.137840 2672 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-config-path\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.140578 kubelet[2672]: I0527 17:48:03.137851 2672 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cni-path\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.140578 kubelet[2672]: I0527 17:48:03.137864 2672 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-host-proc-sys-net\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.140578 kubelet[2672]: I0527 17:48:03.137910 2672 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-host-proc-sys-kernel\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.140578 kubelet[2672]: I0527 17:48:03.137927 2672 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-bpf-maps\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.140578 kubelet[2672]: I0527 
17:48:03.137941 2672 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-cilium-run\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.141010 kubelet[2672]: I0527 17:48:03.137956 2672 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d029677-a101-412c-b857-6a30c5d7aaf0-cilium-config-path\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.141010 kubelet[2672]: I0527 17:48:03.137971 2672 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f562065d-86b8-4757-9ef9-7aa958d54c7d-clustermesh-secrets\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.141010 kubelet[2672]: I0527 17:48:03.137984 2672 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f562065d-86b8-4757-9ef9-7aa958d54c7d-hostproc\") on node \"ci-4344.0.0-f-2f5fe7c465\" DevicePath \"\"" May 27 17:48:03.449327 kubelet[2672]: I0527 17:48:03.447510 2672 scope.go:117] "RemoveContainer" containerID="30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c" May 27 17:48:03.455505 containerd[1527]: time="2025-05-27T17:48:03.455446265Z" level=info msg="RemoveContainer for \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\"" May 27 17:48:03.470366 containerd[1527]: time="2025-05-27T17:48:03.469531180Z" level=info msg="RemoveContainer for \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" returns successfully" May 27 17:48:03.470535 kubelet[2672]: I0527 17:48:03.470066 2672 scope.go:117] "RemoveContainer" containerID="dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760" May 27 17:48:03.473669 systemd[1]: Removed slice kubepods-burstable-podf562065d_86b8_4757_9ef9_7aa958d54c7d.slice - libcontainer container 
kubepods-burstable-podf562065d_86b8_4757_9ef9_7aa958d54c7d.slice. May 27 17:48:03.474116 systemd[1]: kubepods-burstable-podf562065d_86b8_4757_9ef9_7aa958d54c7d.slice: Consumed 9.216s CPU time, 187.8M memory peak, 69.1M read from disk, 13.3M written to disk. May 27 17:48:03.483246 systemd[1]: Removed slice kubepods-besteffort-pod8d029677_a101_412c_b857_6a30c5d7aaf0.slice - libcontainer container kubepods-besteffort-pod8d029677_a101_412c_b857_6a30c5d7aaf0.slice. May 27 17:48:03.487225 containerd[1527]: time="2025-05-27T17:48:03.486592810Z" level=info msg="RemoveContainer for \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\"" May 27 17:48:03.497565 containerd[1527]: time="2025-05-27T17:48:03.497375675Z" level=info msg="RemoveContainer for \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\" returns successfully" May 27 17:48:03.498051 kubelet[2672]: I0527 17:48:03.498013 2672 scope.go:117] "RemoveContainer" containerID="a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31" May 27 17:48:03.506149 containerd[1527]: time="2025-05-27T17:48:03.506102403Z" level=info msg="RemoveContainer for \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\"" May 27 17:48:03.514545 containerd[1527]: time="2025-05-27T17:48:03.514199144Z" level=info msg="RemoveContainer for \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\" returns successfully" May 27 17:48:03.515350 kubelet[2672]: I0527 17:48:03.515314 2672 scope.go:117] "RemoveContainer" containerID="3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d" May 27 17:48:03.521131 containerd[1527]: time="2025-05-27T17:48:03.521066896Z" level=info msg="RemoveContainer for \"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\"" May 27 17:48:03.527272 containerd[1527]: time="2025-05-27T17:48:03.527021329Z" level=info msg="RemoveContainer for \"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\" returns successfully" May 27 
17:48:03.527844 kubelet[2672]: I0527 17:48:03.527809 2672 scope.go:117] "RemoveContainer" containerID="ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd" May 27 17:48:03.531711 containerd[1527]: time="2025-05-27T17:48:03.531621328Z" level=info msg="RemoveContainer for \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\"" May 27 17:48:03.537016 containerd[1527]: time="2025-05-27T17:48:03.536966664Z" level=info msg="RemoveContainer for \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\" returns successfully" May 27 17:48:03.537909 kubelet[2672]: I0527 17:48:03.537786 2672 scope.go:117] "RemoveContainer" containerID="30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c" May 27 17:48:03.543847 containerd[1527]: time="2025-05-27T17:48:03.539399901Z" level=error msg="ContainerStatus for \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\": not found" May 27 17:48:03.544255 kubelet[2672]: E0527 17:48:03.544121 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\": not found" containerID="30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c" May 27 17:48:03.545640 kubelet[2672]: I0527 17:48:03.545457 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c"} err="failed to get container status \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"30e2c5481fa5a1c09b4517c0387cca3fb169a1ee89505546e1143fc5c489af3c\": not found" May 27 17:48:03.545981 kubelet[2672]: 
I0527 17:48:03.545650 2672 scope.go:117] "RemoveContainer" containerID="dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760" May 27 17:48:03.546714 containerd[1527]: time="2025-05-27T17:48:03.546206766Z" level=error msg="ContainerStatus for \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\": not found" May 27 17:48:03.547046 kubelet[2672]: E0527 17:48:03.547013 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\": not found" containerID="dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760" May 27 17:48:03.547111 kubelet[2672]: I0527 17:48:03.547063 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760"} err="failed to get container status \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc16a819f627f5d975f4cce7eb7d03b2e1152d78e7f97cc0028ad5ed16d35760\": not found" May 27 17:48:03.547111 kubelet[2672]: I0527 17:48:03.547100 2672 scope.go:117] "RemoveContainer" containerID="a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31" May 27 17:48:03.547488 containerd[1527]: time="2025-05-27T17:48:03.547425338Z" level=error msg="ContainerStatus for \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\": not found" May 27 17:48:03.547773 kubelet[2672]: E0527 17:48:03.547738 2672 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\": not found" containerID="a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31" May 27 17:48:03.548039 kubelet[2672]: I0527 17:48:03.547930 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31"} err="failed to get container status \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\": rpc error: code = NotFound desc = an error occurred when try to find container \"a044bfcf3a56aad35b5f3680fee1d2dada4b86c9f21fe64f8aec25ffe147ca31\": not found" May 27 17:48:03.548039 kubelet[2672]: I0527 17:48:03.547986 2672 scope.go:117] "RemoveContainer" containerID="3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d" May 27 17:48:03.548504 containerd[1527]: time="2025-05-27T17:48:03.548443119Z" level=error msg="ContainerStatus for \"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\": not found" May 27 17:48:03.548757 kubelet[2672]: E0527 17:48:03.548738 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\": not found" containerID="3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d" May 27 17:48:03.548873 kubelet[2672]: I0527 17:48:03.548852 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d"} err="failed to get container status 
\"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f39ee7c735f0afe68eb11d228a835d42d99a71f556473131a1731c137fb596d\": not found" May 27 17:48:03.548969 kubelet[2672]: I0527 17:48:03.548960 2672 scope.go:117] "RemoveContainer" containerID="ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd" May 27 17:48:03.549532 containerd[1527]: time="2025-05-27T17:48:03.549445087Z" level=error msg="ContainerStatus for \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\": not found" May 27 17:48:03.549623 kubelet[2672]: E0527 17:48:03.549602 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\": not found" containerID="ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd" May 27 17:48:03.549663 kubelet[2672]: I0527 17:48:03.549630 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd"} err="failed to get container status \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae7bea47b95258e478b68533e9e071bba0a1ed829250f2cad721c9da313adddd\": not found" May 27 17:48:03.549701 kubelet[2672]: I0527 17:48:03.549672 2672 scope.go:117] "RemoveContainer" containerID="c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed" May 27 17:48:03.552030 containerd[1527]: time="2025-05-27T17:48:03.551976292Z" level=info msg="RemoveContainer for \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\"" May 27 
17:48:03.556621 containerd[1527]: time="2025-05-27T17:48:03.556541042Z" level=info msg="RemoveContainer for \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" returns successfully" May 27 17:48:03.557194 kubelet[2672]: I0527 17:48:03.557017 2672 scope.go:117] "RemoveContainer" containerID="c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed" May 27 17:48:03.557662 containerd[1527]: time="2025-05-27T17:48:03.557611673Z" level=error msg="ContainerStatus for \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\": not found" May 27 17:48:03.558168 kubelet[2672]: E0527 17:48:03.558085 2672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\": not found" containerID="c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed" May 27 17:48:03.558168 kubelet[2672]: I0527 17:48:03.558131 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed"} err="failed to get container status \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1e5480e439ce8aa40e7f15df5ef92d750f1459ad4077e077c41e82bdedb57ed\": not found" May 27 17:48:03.597509 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708-shm.mount: Deactivated successfully. May 27 17:48:03.597685 systemd[1]: var-lib-kubelet-pods-f562065d\x2d86b8\x2d4757\x2d9ef9\x2d7aa958d54c7d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddfkm7.mount: Deactivated successfully. 
May 27 17:48:03.597779 systemd[1]: var-lib-kubelet-pods-8d029677\x2da101\x2d412c\x2db857\x2d6a30c5d7aaf0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmhkh6.mount: Deactivated successfully. May 27 17:48:03.598537 systemd[1]: var-lib-kubelet-pods-f562065d\x2d86b8\x2d4757\x2d9ef9\x2d7aa958d54c7d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 17:48:03.598712 systemd[1]: var-lib-kubelet-pods-f562065d\x2d86b8\x2d4757\x2d9ef9\x2d7aa958d54c7d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 17:48:04.076690 kubelet[2672]: E0527 17:48:04.076605 2672 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 17:48:04.344121 sshd[4240]: Connection closed by 139.178.68.195 port 40718 May 27 17:48:04.344914 sshd-session[4238]: pam_unix(sshd:session): session closed for user core May 27 17:48:04.355822 systemd[1]: sshd@23-143.198.147.228:22-139.178.68.195:40718.service: Deactivated successfully. May 27 17:48:04.358839 systemd[1]: session-23.scope: Deactivated successfully. May 27 17:48:04.362275 systemd-logind[1499]: Session 23 logged out. Waiting for processes to exit. May 27 17:48:04.367680 systemd[1]: Started sshd@24-143.198.147.228:22-139.178.68.195:54016.service - OpenSSH per-connection server daemon (139.178.68.195:54016). May 27 17:48:04.370274 systemd-logind[1499]: Removed session 23. May 27 17:48:04.435557 sshd[4394]: Accepted publickey for core from 139.178.68.195 port 54016 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:48:04.437979 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:04.444103 systemd-logind[1499]: New session 24 of user core. May 27 17:48:04.453263 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 27 17:48:04.884191 kubelet[2672]: I0527 17:48:04.884137 2672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d029677-a101-412c-b857-6a30c5d7aaf0" path="/var/lib/kubelet/pods/8d029677-a101-412c-b857-6a30c5d7aaf0/volumes" May 27 17:48:04.884583 kubelet[2672]: I0527 17:48:04.884566 2672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f562065d-86b8-4757-9ef9-7aa958d54c7d" path="/var/lib/kubelet/pods/f562065d-86b8-4757-9ef9-7aa958d54c7d/volumes" May 27 17:48:05.141199 sshd[4396]: Connection closed by 139.178.68.195 port 54016 May 27 17:48:05.144446 sshd-session[4394]: pam_unix(sshd:session): session closed for user core May 27 17:48:05.157206 systemd[1]: sshd@24-143.198.147.228:22-139.178.68.195:54016.service: Deactivated successfully. May 27 17:48:05.162233 systemd[1]: session-24.scope: Deactivated successfully. May 27 17:48:05.166240 systemd-logind[1499]: Session 24 logged out. Waiting for processes to exit. May 27 17:48:05.174389 systemd[1]: Started sshd@25-143.198.147.228:22-139.178.68.195:54020.service - OpenSSH per-connection server daemon (139.178.68.195:54020). May 27 17:48:05.178411 systemd-logind[1499]: Removed session 24. 
May 27 17:48:05.244109 kubelet[2672]: E0527 17:48:05.244055 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d029677-a101-412c-b857-6a30c5d7aaf0" containerName="cilium-operator" May 27 17:48:05.244109 kubelet[2672]: E0527 17:48:05.244094 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f562065d-86b8-4757-9ef9-7aa958d54c7d" containerName="cilium-agent" May 27 17:48:05.244109 kubelet[2672]: E0527 17:48:05.244104 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f562065d-86b8-4757-9ef9-7aa958d54c7d" containerName="mount-cgroup" May 27 17:48:05.244109 kubelet[2672]: E0527 17:48:05.244111 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f562065d-86b8-4757-9ef9-7aa958d54c7d" containerName="apply-sysctl-overwrites" May 27 17:48:05.244109 kubelet[2672]: E0527 17:48:05.244117 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f562065d-86b8-4757-9ef9-7aa958d54c7d" containerName="mount-bpf-fs" May 27 17:48:05.244109 kubelet[2672]: E0527 17:48:05.244127 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f562065d-86b8-4757-9ef9-7aa958d54c7d" containerName="clean-cilium-state" May 27 17:48:05.246122 kubelet[2672]: I0527 17:48:05.244166 2672 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d029677-a101-412c-b857-6a30c5d7aaf0" containerName="cilium-operator" May 27 17:48:05.246122 kubelet[2672]: I0527 17:48:05.244173 2672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f562065d-86b8-4757-9ef9-7aa958d54c7d" containerName="cilium-agent" May 27 17:48:05.260627 systemd[1]: Created slice kubepods-burstable-pod28093a23_477e_43fa_b206_c39785284417.slice - libcontainer container kubepods-burstable-pod28093a23_477e_43fa_b206_c39785284417.slice. 
May 27 17:48:05.274255 sshd[4408]: Accepted publickey for core from 139.178.68.195 port 54020 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:48:05.276814 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:05.289089 systemd-logind[1499]: New session 25 of user core. May 27 17:48:05.292530 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 17:48:05.354116 sshd[4410]: Connection closed by 139.178.68.195 port 54020 May 27 17:48:05.354856 sshd-session[4408]: pam_unix(sshd:session): session closed for user core May 27 17:48:05.356676 kubelet[2672]: I0527 17:48:05.355641 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28093a23-477e-43fa-b206-c39785284417-cni-path\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.356676 kubelet[2672]: I0527 17:48:05.355700 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28093a23-477e-43fa-b206-c39785284417-lib-modules\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.356676 kubelet[2672]: I0527 17:48:05.355732 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28093a23-477e-43fa-b206-c39785284417-clustermesh-secrets\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.356676 kubelet[2672]: I0527 17:48:05.355760 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/28093a23-477e-43fa-b206-c39785284417-cilium-ipsec-secrets\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.356676 kubelet[2672]: I0527 17:48:05.355787 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28093a23-477e-43fa-b206-c39785284417-host-proc-sys-kernel\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.356676 kubelet[2672]: I0527 17:48:05.355804 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28093a23-477e-43fa-b206-c39785284417-hubble-tls\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.357152 kubelet[2672]: I0527 17:48:05.355824 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28093a23-477e-43fa-b206-c39785284417-cilium-cgroup\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.357152 kubelet[2672]: I0527 17:48:05.355840 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28093a23-477e-43fa-b206-c39785284417-bpf-maps\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.357152 kubelet[2672]: I0527 17:48:05.355867 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28093a23-477e-43fa-b206-c39785284417-xtables-lock\") pod \"cilium-rvlxx\" (UID: 
\"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.357152 kubelet[2672]: I0527 17:48:05.355950 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28093a23-477e-43fa-b206-c39785284417-cilium-config-path\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.357152 kubelet[2672]: I0527 17:48:05.355978 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28093a23-477e-43fa-b206-c39785284417-host-proc-sys-net\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.357152 kubelet[2672]: I0527 17:48:05.356005 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28093a23-477e-43fa-b206-c39785284417-cilium-run\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.357423 kubelet[2672]: I0527 17:48:05.356032 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28093a23-477e-43fa-b206-c39785284417-hostproc\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.357423 kubelet[2672]: I0527 17:48:05.356065 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxhtv\" (UniqueName: \"kubernetes.io/projected/28093a23-477e-43fa-b206-c39785284417-kube-api-access-nxhtv\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.357423 kubelet[2672]: 
I0527 17:48:05.356093 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28093a23-477e-43fa-b206-c39785284417-etc-cni-netd\") pod \"cilium-rvlxx\" (UID: \"28093a23-477e-43fa-b206-c39785284417\") " pod="kube-system/cilium-rvlxx" May 27 17:48:05.368954 systemd[1]: sshd@25-143.198.147.228:22-139.178.68.195:54020.service: Deactivated successfully. May 27 17:48:05.372322 systemd[1]: session-25.scope: Deactivated successfully. May 27 17:48:05.373947 systemd-logind[1499]: Session 25 logged out. Waiting for processes to exit. May 27 17:48:05.378845 systemd[1]: Started sshd@26-143.198.147.228:22-139.178.68.195:54034.service - OpenSSH per-connection server daemon (139.178.68.195:54034). May 27 17:48:05.380161 systemd-logind[1499]: Removed session 25. May 27 17:48:05.443616 sshd[4417]: Accepted publickey for core from 139.178.68.195 port 54034 ssh2: RSA SHA256:iFW6VpwcfJb/83J++GzH3zYULQdnSj2fh5dwSJ45DF8 May 27 17:48:05.445422 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:05.454048 systemd-logind[1499]: New session 26 of user core. May 27 17:48:05.463366 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 27 17:48:05.569601 kubelet[2672]: E0527 17:48:05.569534 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:48:05.572320 containerd[1527]: time="2025-05-27T17:48:05.572260510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rvlxx,Uid:28093a23-477e-43fa-b206-c39785284417,Namespace:kube-system,Attempt:0,}" May 27 17:48:05.594354 containerd[1527]: time="2025-05-27T17:48:05.594281656Z" level=info msg="connecting to shim df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f" address="unix:///run/containerd/s/400fd6e258abbc402451974926bfb07b254177690a350d01bbe304b984275432" namespace=k8s.io protocol=ttrpc version=3 May 27 17:48:05.630177 systemd[1]: Started cri-containerd-df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f.scope - libcontainer container df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f. 
May 27 17:48:05.751246 containerd[1527]: time="2025-05-27T17:48:05.749709122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rvlxx,Uid:28093a23-477e-43fa-b206-c39785284417,Namespace:kube-system,Attempt:0,} returns sandbox id \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\"" May 27 17:48:05.751661 kubelet[2672]: E0527 17:48:05.751632 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:48:05.759455 containerd[1527]: time="2025-05-27T17:48:05.757189453Z" level=info msg="CreateContainer within sandbox \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 17:48:05.789396 containerd[1527]: time="2025-05-27T17:48:05.789333617Z" level=info msg="Container 05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:05.811468 containerd[1527]: time="2025-05-27T17:48:05.811239341Z" level=info msg="CreateContainer within sandbox \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f\"" May 27 17:48:05.815731 containerd[1527]: time="2025-05-27T17:48:05.815668567Z" level=info msg="StartContainer for \"05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f\"" May 27 17:48:05.820729 containerd[1527]: time="2025-05-27T17:48:05.820559228Z" level=info msg="connecting to shim 05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f" address="unix:///run/containerd/s/400fd6e258abbc402451974926bfb07b254177690a350d01bbe304b984275432" protocol=ttrpc version=3 May 27 17:48:05.844179 systemd[1]: Started cri-containerd-05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f.scope - 
libcontainer container 05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f. May 27 17:48:05.888147 containerd[1527]: time="2025-05-27T17:48:05.887472177Z" level=info msg="StartContainer for \"05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f\" returns successfully" May 27 17:48:05.905942 systemd[1]: cri-containerd-05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f.scope: Deactivated successfully. May 27 17:48:05.906683 systemd[1]: cri-containerd-05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f.scope: Consumed 30ms CPU time, 9M memory peak, 2.7M read from disk. May 27 17:48:05.911089 containerd[1527]: time="2025-05-27T17:48:05.911013511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f\" id:\"05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f\" pid:4488 exited_at:{seconds:1748368085 nanos:909987984}" May 27 17:48:05.911523 containerd[1527]: time="2025-05-27T17:48:05.911340106Z" level=info msg="received exit event container_id:\"05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f\" id:\"05b619889a5a108359d0d938153eb3f4c1f20e27e9193f25aa7ea06d3385401f\" pid:4488 exited_at:{seconds:1748368085 nanos:909987984}" May 27 17:48:06.489212 kubelet[2672]: E0527 17:48:06.489173 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:48:06.498234 containerd[1527]: time="2025-05-27T17:48:06.498168373Z" level=info msg="CreateContainer within sandbox \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 17:48:06.515927 containerd[1527]: time="2025-05-27T17:48:06.515208132Z" level=info msg="Container 4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97: CDI devices 
from CRI Config.CDIDevices: []" May 27 17:48:06.521818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697118885.mount: Deactivated successfully. May 27 17:48:06.529969 containerd[1527]: time="2025-05-27T17:48:06.529477549Z" level=info msg="CreateContainer within sandbox \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97\"" May 27 17:48:06.531479 containerd[1527]: time="2025-05-27T17:48:06.530705023Z" level=info msg="StartContainer for \"4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97\"" May 27 17:48:06.532691 containerd[1527]: time="2025-05-27T17:48:06.532634100Z" level=info msg="connecting to shim 4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97" address="unix:///run/containerd/s/400fd6e258abbc402451974926bfb07b254177690a350d01bbe304b984275432" protocol=ttrpc version=3 May 27 17:48:06.565421 systemd[1]: Started cri-containerd-4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97.scope - libcontainer container 4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97. May 27 17:48:06.616176 containerd[1527]: time="2025-05-27T17:48:06.616121095Z" level=info msg="StartContainer for \"4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97\" returns successfully" May 27 17:48:06.629584 systemd[1]: cri-containerd-4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97.scope: Deactivated successfully. May 27 17:48:06.630203 systemd[1]: cri-containerd-4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97.scope: Consumed 31ms CPU time, 7.3M memory peak, 2.1M read from disk. 
May 27 17:48:06.630992 containerd[1527]: time="2025-05-27T17:48:06.630743514Z" level=info msg="received exit event container_id:\"4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97\" id:\"4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97\" pid:4531 exited_at:{seconds:1748368086 nanos:629566862}" May 27 17:48:06.631352 containerd[1527]: time="2025-05-27T17:48:06.631109908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97\" id:\"4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97\" pid:4531 exited_at:{seconds:1748368086 nanos:629566862}" May 27 17:48:06.660108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c3bb6d33d0488a330c8aabb6647a99c5fa9d59cec5d1f5c41d71b652dec4b97-rootfs.mount: Deactivated successfully. May 27 17:48:07.496444 kubelet[2672]: E0527 17:48:07.496314 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 17:48:07.502926 containerd[1527]: time="2025-05-27T17:48:07.502396617Z" level=info msg="CreateContainer within sandbox \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 17:48:07.518923 containerd[1527]: time="2025-05-27T17:48:07.518190660Z" level=info msg="Container 1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf: CDI devices from CRI Config.CDIDevices: []" May 27 17:48:07.526022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022968518.mount: Deactivated successfully. 
May 27 17:48:07.531309 containerd[1527]: time="2025-05-27T17:48:07.531250909Z" level=info msg="CreateContainer within sandbox \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf\""
May 27 17:48:07.533417 containerd[1527]: time="2025-05-27T17:48:07.532904533Z" level=info msg="StartContainer for \"1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf\""
May 27 17:48:07.535585 containerd[1527]: time="2025-05-27T17:48:07.535150187Z" level=info msg="connecting to shim 1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf" address="unix:///run/containerd/s/400fd6e258abbc402451974926bfb07b254177690a350d01bbe304b984275432" protocol=ttrpc version=3
May 27 17:48:07.574199 systemd[1]: Started cri-containerd-1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf.scope - libcontainer container 1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf.
May 27 17:48:07.637809 containerd[1527]: time="2025-05-27T17:48:07.637762538Z" level=info msg="StartContainer for \"1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf\" returns successfully"
May 27 17:48:07.644422 systemd[1]: cri-containerd-1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf.scope: Deactivated successfully.
May 27 17:48:07.648216 containerd[1527]: time="2025-05-27T17:48:07.648151415Z" level=info msg="received exit event container_id:\"1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf\" id:\"1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf\" pid:4577 exited_at:{seconds:1748368087 nanos:647552669}"
May 27 17:48:07.649942 containerd[1527]: time="2025-05-27T17:48:07.649830146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf\" id:\"1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf\" pid:4577 exited_at:{seconds:1748368087 nanos:647552669}"
May 27 17:48:07.681906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6f7cd0a58013e372a41710b22561fd5f6667669d1b2ef69357b9d2b53964bf-rootfs.mount: Deactivated successfully.
May 27 17:48:08.503927 kubelet[2672]: E0527 17:48:08.503863 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:48:08.510042 containerd[1527]: time="2025-05-27T17:48:08.509996503Z" level=info msg="CreateContainer within sandbox \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 17:48:08.524652 containerd[1527]: time="2025-05-27T17:48:08.522296437Z" level=info msg="Container 22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7: CDI devices from CRI Config.CDIDevices: []"
May 27 17:48:08.535417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201878880.mount: Deactivated successfully.
May 27 17:48:08.539152 containerd[1527]: time="2025-05-27T17:48:08.539078850Z" level=info msg="CreateContainer within sandbox \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7\""
May 27 17:48:08.541574 containerd[1527]: time="2025-05-27T17:48:08.540481146Z" level=info msg="StartContainer for \"22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7\""
May 27 17:48:08.541574 containerd[1527]: time="2025-05-27T17:48:08.541488301Z" level=info msg="connecting to shim 22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7" address="unix:///run/containerd/s/400fd6e258abbc402451974926bfb07b254177690a350d01bbe304b984275432" protocol=ttrpc version=3
May 27 17:48:08.573222 systemd[1]: Started cri-containerd-22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7.scope - libcontainer container 22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7.
May 27 17:48:08.630500 systemd[1]: cri-containerd-22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7.scope: Deactivated successfully.
May 27 17:48:08.636569 containerd[1527]: time="2025-05-27T17:48:08.636489151Z" level=info msg="StartContainer for \"22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7\" returns successfully"
May 27 17:48:08.641054 containerd[1527]: time="2025-05-27T17:48:08.640985653Z" level=info msg="received exit event container_id:\"22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7\" id:\"22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7\" pid:4616 exited_at:{seconds:1748368088 nanos:637525476}"
May 27 17:48:08.642135 containerd[1527]: time="2025-05-27T17:48:08.641808965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7\" id:\"22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7\" pid:4616 exited_at:{seconds:1748368088 nanos:637525476}"
May 27 17:48:08.672388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22e7f5bd8f326719d09448f931f910c78660dfb41eb1880b3a26a5d74d1f93b7-rootfs.mount: Deactivated successfully.
May 27 17:48:09.078084 kubelet[2672]: E0527 17:48:09.078017 2672 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 17:48:09.519055 kubelet[2672]: E0527 17:48:09.517217 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:48:09.522441 containerd[1527]: time="2025-05-27T17:48:09.522385690Z" level=info msg="CreateContainer within sandbox \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 17:48:09.544064 containerd[1527]: time="2025-05-27T17:48:09.544003966Z" level=info msg="Container 32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0: CDI devices from CRI Config.CDIDevices: []"
May 27 17:48:09.550569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1311950094.mount: Deactivated successfully.
May 27 17:48:09.564528 containerd[1527]: time="2025-05-27T17:48:09.564469998Z" level=info msg="CreateContainer within sandbox \"df34103ba3516e2d350ad76ff4595cf3c6484c76584c0c21d8a19d4e283fe56f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0\""
May 27 17:48:09.566820 containerd[1527]: time="2025-05-27T17:48:09.565384499Z" level=info msg="StartContainer for \"32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0\""
May 27 17:48:09.566820 containerd[1527]: time="2025-05-27T17:48:09.566323567Z" level=info msg="connecting to shim 32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0" address="unix:///run/containerd/s/400fd6e258abbc402451974926bfb07b254177690a350d01bbe304b984275432" protocol=ttrpc version=3
May 27 17:48:09.613404 systemd[1]: Started cri-containerd-32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0.scope - libcontainer container 32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0.
May 27 17:48:09.669626 kubelet[2672]: I0527 17:48:09.669586 2672 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 17:48:09.671054 kubelet[2672]: I0527 17:48:09.670803 2672 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 17:48:09.671865 kubelet[2672]: I0527 17:48:09.671459 2672 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-rvlxx","kube-system/coredns-7c65d6cfc9-xkc97","kube-system/coredns-7c65d6cfc9-54sq5","kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-proxy-7sbp2","kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"]
May 27 17:48:09.671865 kubelet[2672]: E0527 17:48:09.671512 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rvlxx"
May 27 17:48:09.671865 kubelet[2672]: E0527 17:48:09.671533 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-xkc97"
May 27 17:48:09.671865 kubelet[2672]: E0527 17:48:09.671543 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-54sq5"
May 27 17:48:09.671865 kubelet[2672]: E0527 17:48:09.671554 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465"
May 27 17:48:09.671865 kubelet[2672]: E0527 17:48:09.671566 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7sbp2"
May 27 17:48:09.671865 kubelet[2672]: E0527 17:48:09.671576 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465"
May 27 17:48:09.671865 kubelet[2672]: E0527 17:48:09.671585 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"
May 27 17:48:09.671865 kubelet[2672]: I0527 17:48:09.671596 2672 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 27 17:48:09.675112 containerd[1527]: time="2025-05-27T17:48:09.672744369Z" level=info msg="StartContainer for \"32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0\" returns successfully"
May 27 17:48:09.818720 containerd[1527]: time="2025-05-27T17:48:09.816559558Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0\" id:\"f6e50de2efd205b88f939bfd4f5e3d4badb1574f77cc381fc8e7be6d94caeb3b\" pid:4682 exited_at:{seconds:1748368089 nanos:816089679}"
May 27 17:48:10.259468 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 27 17:48:10.544441 kubelet[2672]: E0527 17:48:10.544316 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:48:10.571129 kubelet[2672]: I0527 17:48:10.569529 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rvlxx" podStartSLOduration=5.569510942 podStartE2EDuration="5.569510942s" podCreationTimestamp="2025-05-27 17:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:48:10.566962174 +0000 UTC m=+101.931893334" watchObservedRunningTime="2025-05-27 17:48:10.569510942 +0000 UTC m=+101.934442162"
May 27 17:48:10.884395 kubelet[2672]: I0527 17:48:10.883215 2672 setters.go:600] "Node became not ready" node="ci-4344.0.0-f-2f5fe7c465" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T17:48:10Z","lastTransitionTime":"2025-05-27T17:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 27 17:48:11.572954 kubelet[2672]: E0527 17:48:11.571989 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:48:12.271123 containerd[1527]: time="2025-05-27T17:48:12.271059814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0\" id:\"165764ca5444cb3750bb598f3405370babb4849813d66766dc5ecfe1b517ad06\" pid:4825 exit_status:1 exited_at:{seconds:1748368092 nanos:269310322}"
May 27 17:48:14.025384 systemd-networkd[1461]: lxc_health: Link UP
May 27 17:48:14.039842 systemd-networkd[1461]: lxc_health: Gained carrier
May 27 17:48:14.715270 containerd[1527]: time="2025-05-27T17:48:14.715201500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0\" id:\"064442317e29f189aa733d51994f9a9755845707de1ce8e30745236ddc6bf788\" pid:5205 exited_at:{seconds:1748368094 nanos:713643457}"
May 27 17:48:15.219130 systemd-networkd[1461]: lxc_health: Gained IPv6LL
May 27 17:48:15.573915 kubelet[2672]: E0527 17:48:15.572464 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:48:16.563104 kubelet[2672]: E0527 17:48:16.562838 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:48:16.908064 containerd[1527]: time="2025-05-27T17:48:16.907852169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0\" id:\"d04edf9cf06630c08dc9cb96b53aaadc66419f27ee29507a304f4556ecb75b18\" pid:5238 exited_at:{seconds:1748368096 nanos:907499098}"
May 27 17:48:17.567022 kubelet[2672]: E0527 17:48:17.566622 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 17:48:19.105289 containerd[1527]: time="2025-05-27T17:48:19.105234001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32c5b6ea77cc0334b2e870eccdada160462b935f508f44db5389a2415d14dbd0\" id:\"4f94560115dce77968d109ed8ddfa06e0b7860e893e311281cd0beebfe408703\" pid:5265 exited_at:{seconds:1748368099 nanos:104079936}"
May 27 17:48:19.121529 sshd[4423]: Connection closed by 139.178.68.195 port 54034
May 27 17:48:19.122684 sshd-session[4417]: pam_unix(sshd:session): session closed for user core
May 27 17:48:19.130588 systemd[1]: sshd@26-143.198.147.228:22-139.178.68.195:54034.service: Deactivated successfully.
May 27 17:48:19.134645 systemd[1]: session-26.scope: Deactivated successfully.
May 27 17:48:19.138132 systemd-logind[1499]: Session 26 logged out. Waiting for processes to exit.
May 27 17:48:19.143991 systemd-logind[1499]: Removed session 26.
May 27 17:48:19.694512 kubelet[2672]: I0527 17:48:19.694419 2672 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 17:48:19.696552 kubelet[2672]: I0527 17:48:19.694792 2672 container_gc.go:88] "Attempting to delete unused containers"
May 27 17:48:19.701629 containerd[1527]: time="2025-05-27T17:48:19.701545230Z" level=info msg="StopPodSandbox for \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\""
May 27 17:48:19.702278 containerd[1527]: time="2025-05-27T17:48:19.702218825Z" level=info msg="TearDown network for sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" successfully"
May 27 17:48:19.702650 containerd[1527]: time="2025-05-27T17:48:19.702363608Z" level=info msg="StopPodSandbox for \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" returns successfully"
May 27 17:48:19.706925 containerd[1527]: time="2025-05-27T17:48:19.705042068Z" level=info msg="RemovePodSandbox for \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\""
May 27 17:48:19.706925 containerd[1527]: time="2025-05-27T17:48:19.705101546Z" level=info msg="Forcibly stopping sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\""
May 27 17:48:19.706925 containerd[1527]: time="2025-05-27T17:48:19.705257762Z" level=info msg="TearDown network for sandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" successfully"
May 27 17:48:19.707571 containerd[1527]: time="2025-05-27T17:48:19.707507962Z" level=info msg="Ensure that sandbox 5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708 in task-service has been cleanup successfully"
May 27 17:48:19.712482 containerd[1527]: time="2025-05-27T17:48:19.712425572Z" level=info msg="RemovePodSandbox \"5e34de3531c957fe040ab802526074288d6a458e4db477d9213105dd57a7e708\" returns successfully"
May 27 17:48:19.713533 containerd[1527]: time="2025-05-27T17:48:19.713492011Z" level=info msg="StopPodSandbox for \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\""
May 27 17:48:19.714044 containerd[1527]: time="2025-05-27T17:48:19.714011824Z" level=info msg="TearDown network for sandbox \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" successfully"
May 27 17:48:19.714310 containerd[1527]: time="2025-05-27T17:48:19.714287787Z" level=info msg="StopPodSandbox for \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" returns successfully"
May 27 17:48:19.715349 containerd[1527]: time="2025-05-27T17:48:19.715315221Z" level=info msg="RemovePodSandbox for \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\""
May 27 17:48:19.715569 containerd[1527]: time="2025-05-27T17:48:19.715545864Z" level=info msg="Forcibly stopping sandbox \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\""
May 27 17:48:19.715858 containerd[1527]: time="2025-05-27T17:48:19.715834579Z" level=info msg="TearDown network for sandbox \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" successfully"
May 27 17:48:19.718172 containerd[1527]: time="2025-05-27T17:48:19.718129314Z" level=info msg="Ensure that sandbox 9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e in task-service has been cleanup successfully"
May 27 17:48:19.720838 containerd[1527]: time="2025-05-27T17:48:19.720738008Z" level=info msg="RemovePodSandbox \"9e357a29610f5ac00eeb1901a06c854ba1f91cb00822bb39861233a65cbae65e\" returns successfully"
May 27 17:48:19.722859 kubelet[2672]: I0527 17:48:19.722811 2672 image_gc_manager.go:431] "Attempting to delete unused images"
May 27 17:48:19.743961 kubelet[2672]: I0527 17:48:19.743702 2672 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 17:48:19.744295 kubelet[2672]: I0527 17:48:19.744256 2672 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-xkc97","kube-system/coredns-7c65d6cfc9-54sq5","kube-system/cilium-rvlxx","kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-proxy-7sbp2","kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465","kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"]
May 27 17:48:19.744465 kubelet[2672]: E0527 17:48:19.744445 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-xkc97"
May 27 17:48:19.744668 kubelet[2672]: E0527 17:48:19.744546 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-54sq5"
May 27 17:48:19.744668 kubelet[2672]: E0527 17:48:19.744569 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rvlxx"
May 27 17:48:19.744668 kubelet[2672]: E0527 17:48:19.744584 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-f-2f5fe7c465"
May 27 17:48:19.744668 kubelet[2672]: E0527 17:48:19.744601 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-7sbp2"
May 27 17:48:19.744668 kubelet[2672]: E0527 17:48:19.744617 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-f-2f5fe7c465"
May 27 17:48:19.744668 kubelet[2672]: E0527 17:48:19.744632 2672 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-f-2f5fe7c465"
May 27 17:48:19.744668 kubelet[2672]: I0527 17:48:19.744649 2672 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"