May 27 18:14:28.974563 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 18:14:28.974604 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 18:14:28.974619 kernel: BIOS-provided physical RAM map:
May 27 18:14:28.974629 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 27 18:14:28.974639 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 27 18:14:28.974649 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 18:14:28.974659 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 27 18:14:28.974674 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 27 18:14:28.974688 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 18:14:28.974697 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 18:14:28.974707 kernel: NX (Execute Disable) protection: active
May 27 18:14:28.974717 kernel: APIC: Static calls initialized
May 27 18:14:28.974728 kernel: SMBIOS 2.8 present.
May 27 18:14:28.974739 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 27 18:14:28.974755 kernel: DMI: Memory slots populated: 1/1
May 27 18:14:28.974767 kernel: Hypervisor detected: KVM
May 27 18:14:28.974781 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 18:14:28.974792 kernel: kvm-clock: using sched offset of 4580659790 cycles
May 27 18:14:28.974805 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 18:14:28.974816 kernel: tsc: Detected 2494.138 MHz processor
May 27 18:14:28.974829 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 18:14:28.974841 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 18:14:28.974853 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 27 18:14:28.974870 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 18:14:28.974882 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 18:14:28.974893 kernel: ACPI: Early table checksum verification disabled
May 27 18:14:28.974905 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 27 18:14:28.974917 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:14:28.974928 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:14:28.974939 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:14:28.974950 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 27 18:14:28.974962 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:14:28.974979 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:14:28.974992 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:14:28.975005 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:14:28.975016 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 27 18:14:28.975028 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 27 18:14:28.975040 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 27 18:14:28.975052 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 27 18:14:28.975064 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 27 18:14:28.975087 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 27 18:14:28.975099 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 27 18:14:28.975112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 27 18:14:28.975127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 27 18:14:28.975140 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
May 27 18:14:28.975158 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
May 27 18:14:28.975190 kernel: Zone ranges:
May 27 18:14:28.975203 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 18:14:28.975217 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 27 18:14:28.975229 kernel: Normal empty
May 27 18:14:28.975241 kernel: Device empty
May 27 18:14:28.975256 kernel: Movable zone start for each node
May 27 18:14:28.975271 kernel: Early memory node ranges
May 27 18:14:28.975283 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 27 18:14:28.975295 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 27 18:14:28.975314 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 27 18:14:28.975326 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 18:14:28.975339 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 18:14:28.975348 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 27 18:14:28.975357 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 18:14:28.975365 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 18:14:28.975382 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 18:14:28.975395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 18:14:28.975410 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 18:14:28.975429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 18:14:28.975442 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 18:14:28.975455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 18:14:28.975468 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 18:14:28.975479 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 18:14:28.975492 kernel: TSC deadline timer available
May 27 18:14:28.975505 kernel: CPU topo: Max. logical packages: 1
May 27 18:14:28.975518 kernel: CPU topo: Max. logical dies: 1
May 27 18:14:28.975530 kernel: CPU topo: Max. dies per package: 1
May 27 18:14:28.975542 kernel: CPU topo: Max. threads per core: 1
May 27 18:14:28.975561 kernel: CPU topo: Num. cores per package: 2
May 27 18:14:28.975576 kernel: CPU topo: Num. threads per package: 2
May 27 18:14:28.975590 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 18:14:28.975602 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 18:14:28.975615 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 27 18:14:28.975627 kernel: Booting paravirtualized kernel on KVM
May 27 18:14:28.975640 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 18:14:28.975655 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 18:14:28.975668 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 18:14:28.975687 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 18:14:28.975700 kernel: pcpu-alloc: [0] 0 1
May 27 18:14:28.975713 kernel: kvm-guest: PV spinlocks disabled, no host support
May 27 18:14:28.975729 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 18:14:28.975743 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 18:14:28.975755 kernel: random: crng init done
May 27 18:14:28.975767 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 18:14:28.975780 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 27 18:14:28.975798 kernel: Fallback order for Node 0: 0
May 27 18:14:28.975811 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
May 27 18:14:28.975825 kernel: Policy zone: DMA32
May 27 18:14:28.975837 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 18:14:28.975851 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 18:14:28.975864 kernel: Kernel/User page tables isolation: enabled
May 27 18:14:28.975877 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 18:14:28.975890 kernel: ftrace: allocated 157 pages with 5 groups
May 27 18:14:28.975904 kernel: Dynamic Preempt: voluntary
May 27 18:14:28.975941 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 18:14:28.975959 kernel: rcu: RCU event tracing is enabled.
May 27 18:14:28.975968 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 18:14:28.975976 kernel: Trampoline variant of Tasks RCU enabled.
May 27 18:14:28.975985 kernel: Rude variant of Tasks RCU enabled.
May 27 18:14:28.975994 kernel: Tracing variant of Tasks RCU enabled.
May 27 18:14:28.976003 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 18:14:28.976011 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 18:14:28.976026 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 18:14:28.976052 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 18:14:28.976064 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 18:14:28.976080 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 18:14:28.976093 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 18:14:28.976107 kernel: Console: colour VGA+ 80x25
May 27 18:14:28.976118 kernel: printk: legacy console [tty0] enabled
May 27 18:14:28.976129 kernel: printk: legacy console [ttyS0] enabled
May 27 18:14:28.976143 kernel: ACPI: Core revision 20240827
May 27 18:14:28.976159 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 18:14:28.977264 kernel: APIC: Switch to symmetric I/O mode setup
May 27 18:14:28.977281 kernel: x2apic enabled
May 27 18:14:28.977291 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 18:14:28.977304 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 18:14:28.977317 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
May 27 18:14:28.977327 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
May 27 18:14:28.977339 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 27 18:14:28.977353 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 27 18:14:28.977367 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 18:14:28.977384 kernel: Spectre V2 : Mitigation: Retpolines
May 27 18:14:28.977398 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 18:14:28.977412 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 27 18:14:28.977426 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 18:14:28.977440 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 18:14:28.977454 kernel: MDS: Mitigation: Clear CPU buffers
May 27 18:14:28.977470 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 27 18:14:28.977484 kernel: ITS: Mitigation: Aligned branch/return thunks
May 27 18:14:28.977494 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 18:14:28.977503 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 18:14:28.977513 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 18:14:28.977522 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 18:14:28.977534 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 27 18:14:28.977543 kernel: Freeing SMP alternatives memory: 32K
May 27 18:14:28.977553 kernel: pid_max: default: 32768 minimum: 301
May 27 18:14:28.977562 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 18:14:28.977574 kernel: landlock: Up and running.
May 27 18:14:28.977583 kernel: SELinux: Initializing.
May 27 18:14:28.977593 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 27 18:14:28.977602 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 27 18:14:28.977617 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 27 18:14:28.977632 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 27 18:14:28.977642 kernel: signal: max sigframe size: 1776
May 27 18:14:28.977651 kernel: rcu: Hierarchical SRCU implementation.
May 27 18:14:28.977666 kernel: rcu: Max phase no-delay instances is 400.
May 27 18:14:28.977684 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 18:14:28.977700 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 27 18:14:28.977709 kernel: smp: Bringing up secondary CPUs ...
May 27 18:14:28.977719 kernel: smpboot: x86: Booting SMP configuration:
May 27 18:14:28.977732 kernel: .... node #0, CPUs: #1
May 27 18:14:28.977742 kernel: smp: Brought up 1 node, 2 CPUs
May 27 18:14:28.977751 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
May 27 18:14:28.977765 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 125140K reserved, 0K cma-reserved)
May 27 18:14:28.977780 kernel: devtmpfs: initialized
May 27 18:14:28.977797 kernel: x86/mm: Memory block size: 128MB
May 27 18:14:28.977811 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 18:14:28.977827 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 18:14:28.977836 kernel: pinctrl core: initialized pinctrl subsystem
May 27 18:14:28.977846 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 18:14:28.977855 kernel: audit: initializing netlink subsys (disabled)
May 27 18:14:28.977864 kernel: audit: type=2000 audit(1748369665.335:1): state=initialized audit_enabled=0 res=1
May 27 18:14:28.977874 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 18:14:28.977883 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 18:14:28.977895 kernel: cpuidle: using governor menu
May 27 18:14:28.977905 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 18:14:28.977914 kernel: dca service started, version 1.12.1
May 27 18:14:28.977923 kernel: PCI: Using configuration type 1 for base access
May 27 18:14:28.977933 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 18:14:28.977942 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 18:14:28.977951 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 18:14:28.977960 kernel: ACPI: Added _OSI(Module Device)
May 27 18:14:28.977971 kernel: ACPI: Added _OSI(Processor Device)
May 27 18:14:28.977988 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 18:14:28.978003 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 18:14:28.978031 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 18:14:28.978048 kernel: ACPI: Interpreter enabled
May 27 18:14:28.978062 kernel: ACPI: PM: (supports S0 S5)
May 27 18:14:28.978074 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 18:14:28.978088 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 18:14:28.978101 kernel: PCI: Using E820 reservations for host bridge windows
May 27 18:14:28.978113 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 27 18:14:28.978139 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 18:14:28.978426 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 27 18:14:28.978542 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 27 18:14:28.978649 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 27 18:14:28.978666 kernel: acpiphp: Slot [3] registered
May 27 18:14:28.978680 kernel: acpiphp: Slot [4] registered
May 27 18:14:28.978695 kernel: acpiphp: Slot [5] registered
May 27 18:14:28.978715 kernel: acpiphp: Slot [6] registered
May 27 18:14:28.978725 kernel: acpiphp: Slot [7] registered
May 27 18:14:28.978734 kernel: acpiphp: Slot [8] registered
May 27 18:14:28.978743 kernel: acpiphp: Slot [9] registered
May 27 18:14:28.978752 kernel: acpiphp: Slot [10] registered
May 27 18:14:28.978762 kernel: acpiphp: Slot [11] registered
May 27 18:14:28.978773 kernel: acpiphp: Slot [12] registered
May 27 18:14:28.978788 kernel: acpiphp: Slot [13] registered
May 27 18:14:28.978801 kernel: acpiphp: Slot [14] registered
May 27 18:14:28.978811 kernel: acpiphp: Slot [15] registered
May 27 18:14:28.978826 kernel: acpiphp: Slot [16] registered
May 27 18:14:28.978836 kernel: acpiphp: Slot [17] registered
May 27 18:14:28.978845 kernel: acpiphp: Slot [18] registered
May 27 18:14:28.978854 kernel: acpiphp: Slot [19] registered
May 27 18:14:28.978863 kernel: acpiphp: Slot [20] registered
May 27 18:14:28.978872 kernel: acpiphp: Slot [21] registered
May 27 18:14:28.978881 kernel: acpiphp: Slot [22] registered
May 27 18:14:28.978891 kernel: acpiphp: Slot [23] registered
May 27 18:14:28.978900 kernel: acpiphp: Slot [24] registered
May 27 18:14:28.978912 kernel: acpiphp: Slot [25] registered
May 27 18:14:28.978921 kernel: acpiphp: Slot [26] registered
May 27 18:14:28.978931 kernel: acpiphp: Slot [27] registered
May 27 18:14:28.978940 kernel: acpiphp: Slot [28] registered
May 27 18:14:28.978949 kernel: acpiphp: Slot [29] registered
May 27 18:14:28.978958 kernel: acpiphp: Slot [30] registered
May 27 18:14:28.978967 kernel: acpiphp: Slot [31] registered
May 27 18:14:28.978977 kernel: PCI host bridge to bus 0000:00
May 27 18:14:28.979150 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 18:14:28.980403 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 18:14:28.980563 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 18:14:28.980705 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 27 18:14:28.980858 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 27 18:14:28.980993 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 18:14:28.981252 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 27 18:14:28.981457 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 27 18:14:28.981664 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
May 27 18:14:28.981828 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
May 27 18:14:28.981988 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
May 27 18:14:28.982141 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
May 27 18:14:28.982351 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
May 27 18:14:28.982509 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
May 27 18:14:28.982704 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
May 27 18:14:28.982867 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
May 27 18:14:28.983037 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 27 18:14:28.984317 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 27 18:14:28.984516 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 27 18:14:28.984733 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
May 27 18:14:28.984932 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
May 27 18:14:28.985092 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
May 27 18:14:28.985281 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
May 27 18:14:28.985429 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
May 27 18:14:28.985588 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 18:14:28.985728 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 18:14:28.985844 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
May 27 18:14:28.985957 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
May 27 18:14:28.986078 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
May 27 18:14:28.986305 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 18:14:28.986433 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
May 27 18:14:28.986565 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
May 27 18:14:28.986672 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
May 27 18:14:28.986824 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 27 18:14:28.986944 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
May 27 18:14:28.987063 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
May 27 18:14:28.987622 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 27 18:14:28.987816 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 18:14:28.987968 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
May 27 18:14:28.988114 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
May 27 18:14:28.988293 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
May 27 18:14:28.988489 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 18:14:28.988658 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
May 27 18:14:28.988831 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
May 27 18:14:28.988969 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
May 27 18:14:28.989146 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
May 27 18:14:28.990268 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
May 27 18:14:28.990537 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
May 27 18:14:28.990560 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 18:14:28.990575 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 18:14:28.990589 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 18:14:28.990604 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 18:14:28.990617 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 27 18:14:28.990632 kernel: iommu: Default domain type: Translated
May 27 18:14:28.990647 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 18:14:28.990666 kernel: PCI: Using ACPI for IRQ routing
May 27 18:14:28.990680 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 18:14:28.990695 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 27 18:14:28.990709 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 27 18:14:28.990874 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 27 18:14:28.991028 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 27 18:14:28.991193 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 18:14:28.991214 kernel: vgaarb: loaded
May 27 18:14:28.991229 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 18:14:28.991251 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 18:14:28.991266 kernel: clocksource: Switched to clocksource kvm-clock
May 27 18:14:28.991279 kernel: VFS: Disk quotas dquot_6.6.0
May 27 18:14:28.991294 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 18:14:28.991309 kernel: pnp: PnP ACPI init
May 27 18:14:28.991323 kernel: pnp: PnP ACPI: found 4 devices
May 27 18:14:28.991336 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 18:14:28.991350 kernel: NET: Registered PF_INET protocol family
May 27 18:14:28.991362 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 18:14:28.991381 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 27 18:14:28.991397 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 18:14:28.991410 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 18:14:28.991423 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 27 18:14:28.991437 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 27 18:14:28.991451 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 27 18:14:28.991466 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 27 18:14:28.991476 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 18:14:28.991502 kernel: NET: Registered PF_XDP protocol family
May 27 18:14:28.991638 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 18:14:28.991765 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 18:14:28.991889 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 18:14:28.992000 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 27 18:14:28.992122 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 27 18:14:28.993193 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 27 18:14:28.993361 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 27 18:14:28.993390 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 27 18:14:28.993509 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28998 usecs
May 27 18:14:28.993527 kernel: PCI: CLS 0 bytes, default 64
May 27 18:14:28.993540 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 27 18:14:28.993554 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
May 27 18:14:28.993567 kernel: Initialise system trusted keyrings
May 27 18:14:28.993593 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 27 18:14:28.993606 kernel: Key type asymmetric registered
May 27 18:14:28.993619 kernel: Asymmetric key parser 'x509' registered
May 27 18:14:28.993640 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 18:14:28.993654 kernel: io scheduler mq-deadline registered
May 27 18:14:28.993667 kernel: io scheduler kyber registered
May 27 18:14:28.993680 kernel: io scheduler bfq registered
May 27 18:14:28.993693 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 18:14:28.993707 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 27 18:14:28.993722 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 27 18:14:28.993734 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 27 18:14:28.993748 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 18:14:28.993763 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 18:14:28.993783 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 18:14:28.993795 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 18:14:28.993809 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 18:14:28.994040 kernel: rtc_cmos 00:03: RTC can wake from S4
May 27 18:14:28.995278 kernel: rtc_cmos 00:03: registered as rtc0
May 27 18:14:28.995427 kernel: rtc_cmos 00:03: setting system clock to 2025-05-27T18:14:28 UTC (1748369668)
May 27 18:14:28.995525 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 27 18:14:28.995545 kernel: intel_pstate: CPU model not supported
May 27 18:14:28.995555 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 18:14:28.995565 kernel: NET: Registered PF_INET6 protocol family
May 27 18:14:28.995574 kernel: Segment Routing with IPv6
May 27 18:14:28.995584 kernel: In-situ OAM (IOAM) with IPv6
May 27 18:14:28.995593 kernel: NET: Registered PF_PACKET protocol family
May 27 18:14:28.995602 kernel: Key type dns_resolver registered
May 27 18:14:28.995612 kernel: IPI shorthand broadcast: enabled
May 27 18:14:28.995621 kernel: sched_clock: Marking stable (3901008538, 117674693)->(4041703537, -23020306)
May 27 18:14:28.995634 kernel: registered taskstats version 1
May 27 18:14:28.995643 kernel: Loading compiled-in X.509 certificates
May 27 18:14:28.995653 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 18:14:28.995662 kernel: Demotion targets for Node 0: null
May 27 18:14:28.995671 kernel: Key type .fscrypt registered
May 27 18:14:28.995680 kernel: Key type fscrypt-provisioning registered
May 27 18:14:28.995708 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 18:14:28.995721 kernel: ima: Allocated hash algorithm: sha1
May 27 18:14:28.995730 kernel: ima: No architecture policies found
May 27 18:14:28.995742 kernel: clk: Disabling unused clocks
May 27 18:14:28.995753 kernel: Warning: unable to open an initial console.
May 27 18:14:28.995767 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 18:14:28.995779 kernel: Write protecting the kernel read-only data: 24576k
May 27 18:14:28.995788 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 18:14:28.995798 kernel: Run /init as init process
May 27 18:14:28.995807 kernel: with arguments:
May 27 18:14:28.995817 kernel: /init
May 27 18:14:28.995826 kernel: with environment:
May 27 18:14:28.995839 kernel: HOME=/
May 27 18:14:28.995849 kernel: TERM=linux
May 27 18:14:28.995859 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 18:14:28.995875 systemd[1]: Successfully made /usr/ read-only.
May 27 18:14:28.995896 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 18:14:28.995911 systemd[1]: Detected virtualization kvm.
May 27 18:14:28.995921 systemd[1]: Detected architecture x86-64.
May 27 18:14:28.995934 systemd[1]: Running in initrd.
May 27 18:14:28.995943 systemd[1]: No hostname configured, using default hostname.
May 27 18:14:28.995953 systemd[1]: Hostname set to .
May 27 18:14:28.995963 systemd[1]: Initializing machine ID from VM UUID.
May 27 18:14:28.995973 systemd[1]: Queued start job for default target initrd.target.
May 27 18:14:28.995982 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 18:14:28.995993 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 18:14:28.996004 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 18:14:28.996016 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 18:14:28.996027 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 18:14:28.996045 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 18:14:28.996062 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 18:14:28.996084 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 18:14:28.996101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 18:14:28.996114 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 18:14:28.996127 systemd[1]: Reached target paths.target - Path Units.
May 27 18:14:28.996139 systemd[1]: Reached target slices.target - Slice Units.
May 27 18:14:28.996153 systemd[1]: Reached target swap.target - Swaps.
May 27 18:14:28.996168 systemd[1]: Reached target timers.target - Timer Units.
May 27 18:14:28.997150 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 18:14:28.997163 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 18:14:28.997202 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 18:14:28.997212 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 18:14:28.997223 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 18:14:28.997233 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 18:14:28.997247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 18:14:28.997262 systemd[1]: Reached target sockets.target - Socket Units. May 27 18:14:28.997277 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 18:14:28.997292 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 18:14:28.997309 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 18:14:28.997319 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 18:14:28.997329 systemd[1]: Starting systemd-fsck-usr.service... May 27 18:14:28.997340 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 18:14:28.997350 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 18:14:28.997359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:14:28.997370 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 18:14:28.997384 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 18:14:28.997394 systemd[1]: Finished systemd-fsck-usr.service. 
May 27 18:14:28.997404 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 18:14:28.997463 systemd-journald[211]: Collecting audit messages is disabled. May 27 18:14:28.997493 systemd-journald[211]: Journal started May 27 18:14:28.997520 systemd-journald[211]: Runtime Journal (/run/log/journal/33e325c614044b67ae6b32b5885a2940) is 4.9M, max 39.5M, 34.6M free. May 27 18:14:28.999208 systemd[1]: Started systemd-journald.service - Journal Service. May 27 18:14:28.974545 systemd-modules-load[213]: Inserted module 'overlay' May 27 18:14:29.008163 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 18:14:29.051235 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 18:14:29.051281 kernel: Bridge firewalling registered May 27 18:14:29.033096 systemd-modules-load[213]: Inserted module 'br_netfilter' May 27 18:14:29.051813 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 18:14:29.057054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:14:29.058061 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 18:14:29.065338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 18:14:29.068372 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 18:14:29.069331 systemd-tmpfiles[223]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 18:14:29.075403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 18:14:29.080288 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 27 18:14:29.102859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 18:14:29.103711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 18:14:29.112370 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 18:14:29.120293 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 18:14:29.123392 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 18:14:29.165295 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101 May 27 18:14:29.197482 systemd-resolved[247]: Positive Trust Anchors: May 27 18:14:29.197505 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 18:14:29.197561 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 18:14:29.208395 systemd-resolved[247]: Defaulting to hostname 'linux'. May 27 18:14:29.211313 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 27 18:14:29.212877 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 18:14:29.343286 kernel: SCSI subsystem initialized May 27 18:14:29.356260 kernel: Loading iSCSI transport class v2.0-870. May 27 18:14:29.372288 kernel: iscsi: registered transport (tcp) May 27 18:14:29.403306 kernel: iscsi: registered transport (qla4xxx) May 27 18:14:29.403436 kernel: QLogic iSCSI HBA Driver May 27 18:14:29.443128 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 18:14:29.474308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 18:14:29.478802 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 18:14:29.582780 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 18:14:29.586753 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 18:14:29.661254 kernel: raid6: avx2x4 gen() 15761 MB/s May 27 18:14:29.678242 kernel: raid6: avx2x2 gen() 15443 MB/s May 27 18:14:29.695469 kernel: raid6: avx2x1 gen() 11576 MB/s May 27 18:14:29.695576 kernel: raid6: using algorithm avx2x4 gen() 15761 MB/s May 27 18:14:29.713267 kernel: raid6: .... xor() 6283 MB/s, rmw enabled May 27 18:14:29.713355 kernel: raid6: using avx2x2 recovery algorithm May 27 18:14:29.739241 kernel: xor: automatically using best checksumming function avx May 27 18:14:29.974271 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 18:14:29.985685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 18:14:29.990088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 18:14:30.040675 systemd-udevd[459]: Using default interface naming scheme 'v255'. May 27 18:14:30.049427 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 27 18:14:30.053781 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 18:14:30.096581 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation May 27 18:14:30.142528 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 18:14:30.145841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 18:14:30.236021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 18:14:30.241401 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 18:14:30.329218 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues May 27 18:14:30.329637 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) May 27 18:14:30.342315 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 18:14:30.342407 kernel: GPT:9289727 != 125829119 May 27 18:14:30.342422 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 18:14:30.342435 kernel: GPT:9289727 != 125829119 May 27 18:14:30.342447 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 18:14:30.342460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 18:14:30.375633 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues May 27 18:14:30.376050 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) May 27 18:14:30.380220 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues May 27 18:14:30.382209 kernel: scsi host0: Virtio SCSI HBA May 27 18:14:30.411895 kernel: cryptd: max_cpu_qlen set to 1000 May 27 18:14:30.452784 kernel: libata version 3.00 loaded. May 27 18:14:30.466803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 18:14:30.468358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:14:30.470543 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 27 18:14:30.482215 kernel: ata_piix 0000:00:01.1: version 2.13 May 27 18:14:30.479677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:14:30.481874 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 18:14:30.492224 kernel: scsi host1: ata_piix May 27 18:14:30.494909 kernel: scsi host2: ata_piix May 27 18:14:30.495352 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 May 27 18:14:30.495381 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 May 27 18:14:30.501238 kernel: AES CTR mode by8 optimization enabled May 27 18:14:30.507234 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 27 18:14:30.522590 kernel: ACPI: bus type USB registered May 27 18:14:30.525221 kernel: usbcore: registered new interface driver usbfs May 27 18:14:30.541486 kernel: usbcore: registered new interface driver hub May 27 18:14:30.541571 kernel: usbcore: registered new device driver usb May 27 18:14:30.614346 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 18:14:30.630945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:14:30.645269 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 18:14:30.653591 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 18:14:30.654200 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 18:14:30.679563 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 18:14:30.683450 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
May 27 18:14:30.696466 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller May 27 18:14:30.697035 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 May 27 18:14:30.697306 kernel: uhci_hcd 0000:00:01.2: detected 2 ports May 27 18:14:30.698236 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 May 27 18:14:30.700248 kernel: hub 1-0:1.0: USB hub found May 27 18:14:30.703483 kernel: hub 1-0:1.0: 2 ports detected May 27 18:14:30.708431 disk-uuid[611]: Primary Header is updated. May 27 18:14:30.708431 disk-uuid[611]: Secondary Entries is updated. May 27 18:14:30.708431 disk-uuid[611]: Secondary Header is updated. May 27 18:14:30.725221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 18:14:30.897387 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 18:14:30.899753 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 18:14:30.900504 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 18:14:30.901669 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 18:14:30.904596 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 18:14:30.944028 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 18:14:31.740133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 18:14:31.741039 disk-uuid[612]: The operation has completed successfully. May 27 18:14:31.798512 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 18:14:31.799371 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 18:14:31.862499 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 18:14:31.896428 sh[636]: Success May 27 18:14:31.923277 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 27 18:14:31.923401 kernel: device-mapper: uevent: version 1.0.3 May 27 18:14:31.925169 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 18:14:31.941222 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" May 27 18:14:32.020334 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 18:14:32.022060 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 18:14:32.034981 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 18:14:32.051229 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 18:14:32.054219 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (648) May 27 18:14:32.056574 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd May 27 18:14:32.056689 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 18:14:32.058207 kernel: BTRFS info (device dm-0): using free-space-tree May 27 18:14:32.068444 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 18:14:32.070709 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 18:14:32.071482 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 18:14:32.072690 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 18:14:32.075454 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 27 18:14:32.117533 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (679) May 27 18:14:32.117652 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:14:32.119673 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 18:14:32.119758 kernel: BTRFS info (device vda6): using free-space-tree May 27 18:14:32.133252 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:14:32.135737 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 18:14:32.141425 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 18:14:32.254783 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 18:14:32.273453 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 18:14:32.350709 systemd-networkd[817]: lo: Link UP May 27 18:14:32.350722 systemd-networkd[817]: lo: Gained carrier May 27 18:14:32.354392 systemd-networkd[817]: Enumeration completed May 27 18:14:32.354971 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. May 27 18:14:32.354978 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. May 27 18:14:32.356138 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 18:14:32.356145 systemd-networkd[817]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 27 18:14:32.357544 systemd-networkd[817]: eth0: Link UP May 27 18:14:32.357549 systemd-networkd[817]: eth0: Gained carrier May 27 18:14:32.357567 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. May 27 18:14:32.358146 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 18:14:32.358867 systemd[1]: Reached target network.target - Network. May 27 18:14:32.361750 systemd-networkd[817]: eth1: Link UP May 27 18:14:32.361757 systemd-networkd[817]: eth1: Gained carrier May 27 18:14:32.361782 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 18:14:32.375369 systemd-networkd[817]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253 May 27 18:14:32.379360 systemd-networkd[817]: eth0: DHCPv4 address 143.110.225.216/20, gateway 143.110.224.1 acquired from 169.254.169.253 May 27 18:14:32.396756 ignition[726]: Ignition 2.21.0 May 27 18:14:32.397846 ignition[726]: Stage: fetch-offline May 27 18:14:32.397926 ignition[726]: no configs at "/usr/lib/ignition/base.d" May 27 18:14:32.397943 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:14:32.398167 ignition[726]: parsed url from cmdline: "" May 27 18:14:32.398189 ignition[726]: no config URL provided May 27 18:14:32.398199 ignition[726]: reading system config file "/usr/lib/ignition/user.ign" May 27 18:14:32.398214 ignition[726]: no config at "/usr/lib/ignition/user.ign" May 27 18:14:32.398224 ignition[726]: failed to fetch config: resource requires networking May 27 18:14:32.399634 ignition[726]: Ignition finished successfully May 27 18:14:32.403166 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 18:14:32.405756 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 27 18:14:32.449645 ignition[826]: Ignition 2.21.0 May 27 18:14:32.449673 ignition[826]: Stage: fetch May 27 18:14:32.449989 ignition[826]: no configs at "/usr/lib/ignition/base.d" May 27 18:14:32.450075 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:14:32.451302 ignition[826]: parsed url from cmdline: "" May 27 18:14:32.451311 ignition[826]: no config URL provided May 27 18:14:32.451326 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" May 27 18:14:32.451344 ignition[826]: no config at "/usr/lib/ignition/user.ign" May 27 18:14:32.451401 ignition[826]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 May 27 18:14:32.475756 ignition[826]: GET result: OK May 27 18:14:32.476037 ignition[826]: parsing config with SHA512: 3d4f6277d84f9ad63d91a71c6827da82db27b2d1d07d834e3d0d28b16d6179ff0d8c33e1df5db6965d998f80e496726acc910b148657d308c36412522688e809 May 27 18:14:32.483541 unknown[826]: fetched base config from "system" May 27 18:14:32.483559 unknown[826]: fetched base config from "system" May 27 18:14:32.483569 unknown[826]: fetched user config from "digitalocean" May 27 18:14:32.484456 ignition[826]: fetch: fetch complete May 27 18:14:32.484467 ignition[826]: fetch: fetch passed May 27 18:14:32.484583 ignition[826]: Ignition finished successfully May 27 18:14:32.488031 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 27 18:14:32.491442 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 27 18:14:32.535440 ignition[833]: Ignition 2.21.0 May 27 18:14:32.535466 ignition[833]: Stage: kargs May 27 18:14:32.535738 ignition[833]: no configs at "/usr/lib/ignition/base.d" May 27 18:14:32.535755 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:14:32.540393 ignition[833]: kargs: kargs passed May 27 18:14:32.541167 ignition[833]: Ignition finished successfully May 27 18:14:32.543463 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 18:14:32.546347 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 18:14:32.584607 ignition[840]: Ignition 2.21.0 May 27 18:14:32.584626 ignition[840]: Stage: disks May 27 18:14:32.584918 ignition[840]: no configs at "/usr/lib/ignition/base.d" May 27 18:14:32.584931 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:14:32.588331 ignition[840]: disks: disks passed May 27 18:14:32.588409 ignition[840]: Ignition finished successfully May 27 18:14:32.590810 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 18:14:32.591904 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 18:14:32.592506 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 18:14:32.593362 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 18:14:32.594072 systemd[1]: Reached target sysinit.target - System Initialization. May 27 18:14:32.594732 systemd[1]: Reached target basic.target - Basic System. May 27 18:14:32.597107 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 18:14:32.644051 systemd-fsck[849]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 18:14:32.649326 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 18:14:32.651893 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 27 18:14:32.786219 kernel: EXT4-fs (vda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none. May 27 18:14:32.787594 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 18:14:32.788667 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 18:14:32.791164 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 18:14:32.794300 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 18:14:32.796373 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... May 27 18:14:32.808408 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 27 18:14:32.810329 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 18:14:32.811535 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 18:14:32.824319 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (857) May 27 18:14:32.824414 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:14:32.824429 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 18:14:32.824442 kernel: BTRFS info (device vda6): using free-space-tree May 27 18:14:32.827988 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 18:14:32.831679 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 18:14:32.868629 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 18:14:32.886520 coreos-metadata[859]: May 27 18:14:32.886 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 27 18:14:32.897496 coreos-metadata[859]: May 27 18:14:32.897 INFO Fetch successful May 27 18:14:32.907122 coreos-metadata[860]: May 27 18:14:32.907 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 27 18:14:32.912546 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. May 27 18:14:32.913583 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. May 27 18:14:32.916142 initrd-setup-root[888]: cut: /sysroot/etc/passwd: No such file or directory May 27 18:14:32.919998 coreos-metadata[860]: May 27 18:14:32.919 INFO Fetch successful May 27 18:14:32.924955 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory May 27 18:14:32.930346 coreos-metadata[860]: May 27 18:14:32.930 INFO wrote hostname ci-4344.0.0-0-76b74bdce7 to /sysroot/etc/hostname May 27 18:14:32.934757 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory May 27 18:14:32.935303 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 27 18:14:32.941528 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory May 27 18:14:33.075846 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 18:14:33.078828 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 18:14:33.080085 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 18:14:33.107975 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 18:14:33.110280 kernel: BTRFS info (device vda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:14:33.128572 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 27 18:14:33.144247 ignition[978]: INFO : Ignition 2.21.0 May 27 18:14:33.144247 ignition[978]: INFO : Stage: mount May 27 18:14:33.145982 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 18:14:33.145982 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:14:33.148624 ignition[978]: INFO : mount: mount passed May 27 18:14:33.148624 ignition[978]: INFO : Ignition finished successfully May 27 18:14:33.149232 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 18:14:33.152081 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 18:14:33.179036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 18:14:33.213339 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (990) May 27 18:14:33.218570 kernel: BTRFS info (device vda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:14:33.218677 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 18:14:33.218696 kernel: BTRFS info (device vda6): using free-space-tree May 27 18:14:33.225261 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 18:14:33.260642 ignition[1006]: INFO : Ignition 2.21.0 May 27 18:14:33.260642 ignition[1006]: INFO : Stage: files May 27 18:14:33.263236 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 18:14:33.263236 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 27 18:14:33.263236 ignition[1006]: DEBUG : files: compiled without relabeling support, skipping May 27 18:14:33.265675 ignition[1006]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 18:14:33.265675 ignition[1006]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 18:14:33.269485 ignition[1006]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 18:14:33.270236 ignition[1006]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 18:14:33.271391 unknown[1006]: wrote ssh authorized keys file for user: core May 27 18:14:33.272260 ignition[1006]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 18:14:33.275414 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 18:14:33.275414 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 27 18:14:33.332403 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 18:14:33.703468 systemd-networkd[817]: eth0: Gained IPv6LL May 27 18:14:34.343491 systemd-networkd[817]: eth1: Gained IPv6LL May 27 18:14:34.718896 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 18:14:34.718896 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 
18:14:34.721112 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 27 18:14:35.179455 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 27 18:14:35.237170 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 18:14:35.237170 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 27 18:14:35.239903 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 27 18:14:35.239903 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 18:14:35.239903 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 18:14:35.239903 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 18:14:35.239903 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 18:14:35.239903 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 18:14:35.239903 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 18:14:35.244520 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 18:14:35.244520 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf"
May 27 18:14:35.244520 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 18:14:35.244520 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 18:14:35.244520 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 18:14:35.244520 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 27 18:14:35.876270 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 27 18:14:36.231268 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 27 18:14:36.232405 ignition[1006]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 27 18:14:36.233678 ignition[1006]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 18:14:36.235215 ignition[1006]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 18:14:36.235215 ignition[1006]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 27 18:14:36.235215 ignition[1006]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 27 18:14:36.237217 ignition[1006]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 27 18:14:36.237217 ignition[1006]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 18:14:36.237217 ignition[1006]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 18:14:36.237217 ignition[1006]: INFO : files: files passed
May 27 18:14:36.237217 ignition[1006]: INFO : Ignition finished successfully
May 27 18:14:36.237654 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 18:14:36.241350 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 18:14:36.244266 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 18:14:36.259572 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 18:14:36.259699 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 18:14:36.270751 initrd-setup-root-after-ignition[1037]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 18:14:36.270751 initrd-setup-root-after-ignition[1037]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 18:14:36.273208 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 18:14:36.274861 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 18:14:36.276095 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 18:14:36.278088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 18:14:36.339539 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 18:14:36.339721 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 18:14:36.341561 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 18:14:36.342211 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 18:14:36.343095 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 18:14:36.344668 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 18:14:36.378283 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 18:14:36.381289 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 18:14:36.409239 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 18:14:36.410868 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 18:14:36.411613 systemd[1]: Stopped target timers.target - Timer Units.
May 27 18:14:36.412641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 18:14:36.412889 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 18:14:36.413961 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 18:14:36.414847 systemd[1]: Stopped target basic.target - Basic System.
May 27 18:14:36.415725 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 18:14:36.416444 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 18:14:36.417373 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 18:14:36.418155 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 18:14:36.418976 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 18:14:36.419721 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 18:14:36.420687 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 18:14:36.421529 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 18:14:36.422390 systemd[1]: Stopped target swap.target - Swaps.
May 27 18:14:36.423008 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 18:14:36.423261 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 18:14:36.424568 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 18:14:36.425779 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 18:14:36.426499 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 18:14:36.426712 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 18:14:36.427355 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 18:14:36.427568 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 18:14:36.428609 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 18:14:36.428826 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 18:14:36.429787 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 18:14:36.429956 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 18:14:36.430740 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 27 18:14:36.430925 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 18:14:36.433490 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 18:14:36.439572 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 18:14:36.440309 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 18:14:36.440576 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 18:14:36.442548 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 18:14:36.442754 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 18:14:36.452308 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 18:14:36.453032 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 18:14:36.473093 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 18:14:36.479973 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 18:14:36.480970 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 18:14:36.482009 ignition[1061]: INFO : Ignition 2.21.0
May 27 18:14:36.482009 ignition[1061]: INFO : Stage: umount
May 27 18:14:36.482009 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 18:14:36.482009 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 27 18:14:36.487290 ignition[1061]: INFO : umount: umount passed
May 27 18:14:36.487739 ignition[1061]: INFO : Ignition finished successfully
May 27 18:14:36.489474 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 18:14:36.489664 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 18:14:36.490977 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 18:14:36.491048 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 18:14:36.492056 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 18:14:36.492151 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 18:14:36.492629 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 18:14:36.492682 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 18:14:36.493695 systemd[1]: Stopped target network.target - Network.
May 27 18:14:36.494239 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 18:14:36.494326 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 18:14:36.494981 systemd[1]: Stopped target paths.target - Path Units.
May 27 18:14:36.495574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 18:14:36.499332 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 18:14:36.499890 systemd[1]: Stopped target slices.target - Slice Units.
May 27 18:14:36.501005 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 18:14:36.501738 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 18:14:36.501800 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 18:14:36.502326 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 18:14:36.502378 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 18:14:36.503077 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 18:14:36.503198 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 18:14:36.503830 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 18:14:36.503880 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 18:14:36.504462 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 18:14:36.504517 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 18:14:36.505525 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 18:14:36.506361 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 18:14:36.511470 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 18:14:36.511665 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 18:14:36.518752 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 18:14:36.519234 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 18:14:36.519310 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 18:14:36.521296 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 18:14:36.521654 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 18:14:36.521802 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 18:14:36.523714 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 18:14:36.524583 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 18:14:36.525129 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 18:14:36.525203 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 18:14:36.527151 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 18:14:36.527577 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 18:14:36.527652 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 18:14:36.529915 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 18:14:36.530014 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 18:14:36.530620 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 18:14:36.530679 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 18:14:36.531269 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 18:14:36.534111 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 18:14:36.549879 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 18:14:36.550235 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 18:14:36.551699 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 18:14:36.551772 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 18:14:36.552280 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 18:14:36.552315 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 18:14:36.552721 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 18:14:36.552807 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 18:14:36.557010 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 18:14:36.557104 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 18:14:36.557653 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 18:14:36.557718 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 18:14:36.562407 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 18:14:36.563602 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 18:14:36.563729 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 18:14:36.566995 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 18:14:36.567085 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 18:14:36.570839 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 18:14:36.570931 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 18:14:36.572290 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 18:14:36.572348 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 18:14:36.573019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 18:14:36.573092 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:14:36.574626 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 18:14:36.576314 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 18:14:36.586844 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 18:14:36.587015 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 18:14:36.588854 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 18:14:36.591453 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 18:14:36.627450 systemd[1]: Switching root.
May 27 18:14:36.659225 systemd-journald[211]: Journal stopped
May 27 18:14:38.041253 systemd-journald[211]: Received SIGTERM from PID 1 (systemd).
May 27 18:14:38.041348 kernel: SELinux: policy capability network_peer_controls=1
May 27 18:14:38.041365 kernel: SELinux: policy capability open_perms=1
May 27 18:14:38.041377 kernel: SELinux: policy capability extended_socket_class=1
May 27 18:14:38.041391 kernel: SELinux: policy capability always_check_network=0
May 27 18:14:38.041413 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 18:14:38.041447 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 18:14:38.041463 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 18:14:38.041479 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 18:14:38.041490 kernel: SELinux: policy capability userspace_initial_context=0
May 27 18:14:38.041502 kernel: audit: type=1403 audit(1748369676.843:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 18:14:38.041516 systemd[1]: Successfully loaded SELinux policy in 50.692ms.
May 27 18:14:38.041533 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.458ms.
May 27 18:14:38.041553 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 18:14:38.041567 systemd[1]: Detected virtualization kvm.
May 27 18:14:38.041580 systemd[1]: Detected architecture x86-64.
May 27 18:14:38.041597 systemd[1]: Detected first boot.
May 27 18:14:38.041610 systemd[1]: Hostname set to .
May 27 18:14:38.041624 systemd[1]: Initializing machine ID from VM UUID.
May 27 18:14:38.041637 zram_generator::config[1104]: No configuration found.
May 27 18:14:38.041651 kernel: Guest personality initialized and is inactive
May 27 18:14:38.041665 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 18:14:38.041679 kernel: Initialized host personality
May 27 18:14:38.041690 kernel: NET: Registered PF_VSOCK protocol family
May 27 18:14:38.041703 systemd[1]: Populated /etc with preset unit settings.
May 27 18:14:38.041722 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 18:14:38.041734 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 18:14:38.041747 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 18:14:38.041760 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 18:14:38.041775 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 18:14:38.041794 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 18:14:38.041812 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 18:14:38.041830 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 18:14:38.041853 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 18:14:38.041873 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 18:14:38.041893 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 18:14:38.041906 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 18:14:38.041919 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 18:14:38.041935 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 18:14:38.041948 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 18:14:38.041966 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 18:14:38.041980 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 18:14:38.041996 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 18:14:38.042014 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 18:14:38.042028 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 18:14:38.042042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 18:14:38.042056 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 18:14:38.042069 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 18:14:38.042107 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 18:14:38.042120 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 18:14:38.042133 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 18:14:38.042146 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 18:14:38.042158 systemd[1]: Reached target slices.target - Slice Units.
May 27 18:14:38.042171 systemd[1]: Reached target swap.target - Swaps.
May 27 18:14:38.042201 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 18:14:38.042219 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 18:14:38.042237 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 18:14:38.042261 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 18:14:38.042280 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 18:14:38.042296 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 18:14:38.042310 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 18:14:38.042323 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 18:14:38.042335 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 18:14:38.042349 systemd[1]: Mounting media.mount - External Media Directory...
May 27 18:14:38.042362 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:14:38.042375 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 18:14:38.042390 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 18:14:38.042403 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 18:14:38.042418 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 18:14:38.042439 systemd[1]: Reached target machines.target - Containers.
May 27 18:14:38.042452 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 18:14:38.042464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 18:14:38.042483 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 18:14:38.042496 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 18:14:38.042509 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 18:14:38.042524 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 18:14:38.042537 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 18:14:38.042549 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 18:14:38.042562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 18:14:38.042577 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 18:14:38.042589 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 18:14:38.042601 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 18:14:38.042615 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 18:14:38.042632 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 18:14:38.042645 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 18:14:38.042658 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 18:14:38.042671 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 18:14:38.042687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 18:14:38.042703 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 18:14:38.042716 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 18:14:38.042735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 18:14:38.042748 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 18:14:38.042762 systemd[1]: Stopped verity-setup.service.
May 27 18:14:38.042781 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:14:38.042794 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 18:14:38.042807 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 18:14:38.042820 systemd[1]: Mounted media.mount - External Media Directory.
May 27 18:14:38.042833 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 18:14:38.042846 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 18:14:38.042858 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 18:14:38.042870 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 18:14:38.042883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 18:14:38.042900 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 18:14:38.042913 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 18:14:38.042925 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 18:14:38.042939 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 18:14:38.042951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 18:14:38.042964 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 18:14:38.042977 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 18:14:38.042989 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 18:14:38.043001 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 18:14:38.043022 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 18:14:38.043035 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 18:14:38.043047 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 18:14:38.043060 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 18:14:38.043073 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 18:14:38.043087 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 18:14:38.043099 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 18:14:38.043117 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 18:14:38.043130 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 18:14:38.043146 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 18:14:38.043159 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 18:14:38.043171 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 18:14:38.057916 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 18:14:38.058032 systemd-journald[1181]: Collecting audit messages is disabled.
May 27 18:14:38.058075 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 18:14:38.058090 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 18:14:38.058114 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 18:14:38.058129 systemd-journald[1181]: Journal started
May 27 18:14:38.058155 systemd-journald[1181]: Runtime Journal (/run/log/journal/33e325c614044b67ae6b32b5885a2940) is 4.9M, max 39.5M, 34.6M free.
May 27 18:14:37.561055 systemd[1]: Queued start job for default target multi-user.target.
May 27 18:14:38.059250 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 18:14:37.585673 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 18:14:37.586419 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 18:14:38.062163 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 18:14:38.106216 kernel: loop0: detected capacity change from 0 to 113872
May 27 18:14:38.116978 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 18:14:38.121281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 18:14:38.132346 kernel: loop: module loaded
May 27 18:14:38.132104 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 18:14:38.151030 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 18:14:38.155108 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 18:14:38.155373 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 18:14:38.156058 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 18:14:38.169864 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 18:14:38.171507 kernel: fuse: init (API version 7.41)
May 27 18:14:38.173266 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 18:14:38.173686 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 18:14:38.181924 kernel: ACPI: bus type drm_connector registered
May 27 18:14:38.179642 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 18:14:38.183288 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 27 18:14:38.183319 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 27 18:14:38.186303 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 18:14:38.199220 kernel: loop1: detected capacity change from 0 to 8
May 27 18:14:38.206338 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 18:14:38.211727 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 18:14:38.214788 systemd-journald[1181]: Time spent on flushing to /var/log/journal/33e325c614044b67ae6b32b5885a2940 is 71.958ms for 1021 entries.
May 27 18:14:38.214788 systemd-journald[1181]: System Journal (/var/log/journal/33e325c614044b67ae6b32b5885a2940) is 8M, max 195.6M, 187.6M free.
May 27 18:14:38.316985 systemd-journald[1181]: Received client request to flush runtime journal.
May 27 18:14:38.317124 kernel: loop2: detected capacity change from 0 to 229808
May 27 18:14:38.317163 kernel: loop3: detected capacity change from 0 to 146240
May 27 18:14:38.329676 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 18:14:38.354230 kernel: loop4: detected capacity change from 0 to 113872
May 27 18:14:38.363871 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 18:14:38.369421 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 18:14:38.373277 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 18:14:38.390214 kernel: loop5: detected capacity change from 0 to 8
May 27 18:14:38.396880 kernel: loop6: detected capacity change from 0 to 229808
May 27 18:14:38.415261 kernel: loop7: detected capacity change from 0 to 146240
May 27 18:14:38.422781 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
May 27 18:14:38.423835 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
May 27 18:14:38.435288 (sd-merge)[1250]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
May 27 18:14:38.435997 (sd-merge)[1250]: Merged extensions into '/usr'.
May 27 18:14:38.442283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 18:14:38.453143 systemd[1]: Reload requested from client PID 1207 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 18:14:38.453170 systemd[1]: Reloading...
May 27 18:14:38.653220 zram_generator::config[1284]: No configuration found.
May 27 18:14:38.834970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 18:14:38.894216 ldconfig[1200]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 18:14:38.941832 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 18:14:38.942812 systemd[1]: Reloading finished in 488 ms.
May 27 18:14:38.957461 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 18:14:38.959512 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 18:14:38.967584 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 18:14:38.979530 systemd[1]: Starting ensure-sysext.service... May 27 18:14:38.991022 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 18:14:39.008142 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 18:14:39.044416 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)... May 27 18:14:39.044454 systemd[1]: Reloading... May 27 18:14:39.064709 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 18:14:39.064787 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 18:14:39.065134 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 18:14:39.065607 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 18:14:39.066815 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 18:14:39.067297 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. May 27 18:14:39.067360 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. May 27 18:14:39.073824 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. May 27 18:14:39.073840 systemd-tmpfiles[1326]: Skipping /boot May 27 18:14:39.093348 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. May 27 18:14:39.093370 systemd-tmpfiles[1326]: Skipping /boot May 27 18:14:39.218219 zram_generator::config[1354]: No configuration found. 
May 27 18:14:39.357989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:14:39.473275 systemd[1]: Reloading finished in 428 ms. May 27 18:14:39.487544 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 18:14:39.512430 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 18:14:39.515573 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 18:14:39.521734 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 18:14:39.529575 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 18:14:39.536707 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 18:14:39.545243 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:14:39.545536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:14:39.550687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 18:14:39.556580 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 18:14:39.565770 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 18:14:39.567494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:14:39.567686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 27 18:14:39.567815 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:14:39.573009 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:14:39.574057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:14:39.574339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:14:39.574469 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:14:39.574581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:14:39.583435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:14:39.583853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:14:39.598401 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 18:14:39.599086 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:14:39.599258 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:14:39.602470 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 27 18:14:39.602936 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:14:39.612685 systemd[1]: Finished ensure-sysext.service. May 27 18:14:39.624861 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 18:14:39.626504 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 18:14:39.653235 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 18:14:39.660166 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 18:14:39.661071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 18:14:39.664899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 18:14:39.666643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 18:14:39.668556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 18:14:39.669166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 18:14:39.672331 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 18:14:39.672501 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 18:14:39.693755 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 18:14:39.694303 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 18:14:39.718841 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 18:14:39.720637 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 27 18:14:39.742576 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 18:14:39.748784 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 18:14:39.756488 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 18:14:39.758563 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 18:14:39.766462 augenrules[1443]: No rules May 27 18:14:39.771432 systemd[1]: audit-rules.service: Deactivated successfully. May 27 18:14:39.776333 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 18:14:39.828126 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 18:14:39.857684 systemd-udevd[1437]: Using default interface naming scheme 'v255'. May 27 18:14:39.892163 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 18:14:39.893490 systemd[1]: Reached target time-set.target - System Time Set. May 27 18:14:39.928440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 18:14:39.965611 kernel: hrtimer: interrupt took 9040076 ns May 27 18:14:39.966989 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 18:14:40.077326 systemd-resolved[1401]: Positive Trust Anchors: May 27 18:14:40.077363 systemd-resolved[1401]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 18:14:40.077539 systemd-resolved[1401]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 18:14:40.132842 systemd-resolved[1401]: Using system hostname 'ci-4344.0.0-0-76b74bdce7'. May 27 18:14:40.166515 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 18:14:40.168912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 18:14:40.169582 systemd[1]: Reached target sysinit.target - System Initialization. May 27 18:14:40.171671 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 18:14:40.173603 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 18:14:40.174472 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 18:14:40.178950 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 18:14:40.181564 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 18:14:40.183950 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 18:14:40.184514 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 18:14:40.214120 systemd[1]: Reached target paths.target - Path Units. 
May 27 18:14:40.215874 systemd[1]: Reached target timers.target - Timer Units. May 27 18:14:40.226063 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 18:14:40.249289 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 18:14:40.263050 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 18:14:40.265145 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 18:14:40.266254 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 18:14:40.277690 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 18:14:40.281253 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 18:14:40.282852 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 18:14:40.290419 systemd[1]: Reached target sockets.target - Socket Units. May 27 18:14:40.291071 systemd[1]: Reached target basic.target - Basic System. May 27 18:14:40.291732 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 18:14:40.291781 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 18:14:40.298563 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 18:14:40.306533 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 18:14:40.313724 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 18:14:40.322231 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 18:14:40.329580 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 27 18:14:40.331320 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 18:14:40.341537 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 18:14:40.356524 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 18:14:40.365543 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 18:14:40.376817 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 18:14:40.386999 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 18:14:40.402622 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 18:14:40.405323 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 18:14:40.418198 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 18:14:40.428535 systemd[1]: Starting update-engine.service - Update Engine... May 27 18:14:40.430987 jq[1484]: false May 27 18:14:40.436516 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 18:14:40.446570 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 18:14:40.447618 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 18:14:40.447869 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 18:14:40.457376 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Refreshing passwd entry cache May 27 18:14:40.455559 oslogin_cache_refresh[1486]: Refreshing passwd entry cache May 27 18:14:40.475151 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 27 18:14:40.475455 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 18:14:40.485896 systemd-networkd[1457]: lo: Link UP May 27 18:14:40.486545 systemd-networkd[1457]: lo: Gained carrier May 27 18:14:40.490406 systemd-networkd[1457]: Enumeration completed May 27 18:14:40.490610 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 18:14:40.492478 systemd[1]: Reached target network.target - Network. May 27 18:14:40.502613 systemd[1]: Starting containerd.service - containerd container runtime... May 27 18:14:40.503783 oslogin_cache_refresh[1486]: Failure getting users, quitting May 27 18:14:40.505033 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Failure getting users, quitting May 27 18:14:40.505033 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 18:14:40.505033 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Refreshing group entry cache May 27 18:14:40.505033 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Failure getting groups, quitting May 27 18:14:40.505033 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 18:14:40.503808 oslogin_cache_refresh[1486]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 18:14:40.503885 oslogin_cache_refresh[1486]: Refreshing group entry cache May 27 18:14:40.504708 oslogin_cache_refresh[1486]: Failure getting groups, quitting May 27 18:14:40.504721 oslogin_cache_refresh[1486]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 18:14:40.514945 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 18:14:40.521642 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 27 18:14:40.532283 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 18:14:40.533302 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 18:14:40.539210 jq[1496]: true May 27 18:14:40.552874 update_engine[1494]: I20250527 18:14:40.552714 1494 main.cc:92] Flatcar Update Engine starting May 27 18:14:40.594629 tar[1503]: linux-amd64/LICENSE May 27 18:14:40.594629 tar[1503]: linux-amd64/helm May 27 18:14:40.608274 extend-filesystems[1485]: Found loop4 May 27 18:14:40.608274 extend-filesystems[1485]: Found loop5 May 27 18:14:40.608274 extend-filesystems[1485]: Found loop6 May 27 18:14:40.608274 extend-filesystems[1485]: Found loop7 May 27 18:14:40.608274 extend-filesystems[1485]: Found vda May 27 18:14:40.608274 extend-filesystems[1485]: Found vda1 May 27 18:14:40.608274 extend-filesystems[1485]: Found vda2 May 27 18:14:40.608274 extend-filesystems[1485]: Found vda3 May 27 18:14:40.608274 extend-filesystems[1485]: Found usr May 27 18:14:40.608274 extend-filesystems[1485]: Found vda4 May 27 18:14:40.608274 extend-filesystems[1485]: Found vda6 May 27 18:14:40.608274 extend-filesystems[1485]: Found vda7 May 27 18:14:40.608274 extend-filesystems[1485]: Found vda9 May 27 18:14:40.608274 extend-filesystems[1485]: Found vdb May 27 18:14:40.646479 dbus-daemon[1481]: [system] SELinux support is enabled May 27 18:14:40.680104 update_engine[1494]: I20250527 18:14:40.670011 1494 update_check_scheduler.cc:74] Next update check in 10m17s May 27 18:14:40.682648 coreos-metadata[1478]: May 27 18:14:40.646 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 27 18:14:40.682648 coreos-metadata[1478]: May 27 18:14:40.646 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) May 27 18:14:40.612137 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 18:14:40.613302 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
May 27 18:14:40.646723 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 18:14:40.688909 jq[1520]: true May 27 18:14:40.651472 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 18:14:40.651530 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 18:14:40.652255 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 18:14:40.652289 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 18:14:40.654412 systemd[1]: motdgen.service: Deactivated successfully. May 27 18:14:40.654838 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 18:14:40.657429 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 18:14:40.687398 systemd[1]: Started update-engine.service - Update Engine. May 27 18:14:40.720450 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 18:14:40.725736 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 18:14:40.853205 bash[1543]: Updated "/home/core/.ssh/authorized_keys" May 27 18:14:40.858353 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 18:14:40.867613 systemd[1]: Starting sshkeys.service... May 27 18:14:40.961456 systemd-logind[1493]: New seat seat0. May 27 18:14:40.973672 systemd[1]: Started systemd-logind.service - User Login Management. 
May 27 18:14:40.990454 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 18:14:40.994825 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 18:14:41.182707 coreos-metadata[1550]: May 27 18:14:41.182 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 27 18:14:41.187965 coreos-metadata[1550]: May 27 18:14:41.187 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) May 27 18:14:41.188098 containerd[1523]: time="2025-05-27T18:14:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 18:14:41.190898 containerd[1523]: time="2025-05-27T18:14:41.190756868Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 18:14:41.225562 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 18:14:41.259973 systemd-networkd[1457]: eth0: Configuring with /run/systemd/network/10-72:fb:a8:0d:4f:7d.network. May 27 18:14:41.263806 systemd-networkd[1457]: eth0: Link UP May 27 18:14:41.264055 systemd-networkd[1457]: eth0: Gained carrier May 27 18:14:41.271325 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. May 27 18:14:41.274566 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
May 27 18:14:41.279039 containerd[1523]: time="2025-05-27T18:14:41.278976623Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.307µs" May 27 18:14:41.279039 containerd[1523]: time="2025-05-27T18:14:41.279049191Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 18:14:41.279214 containerd[1523]: time="2025-05-27T18:14:41.279073246Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 18:14:41.281492 containerd[1523]: time="2025-05-27T18:14:41.281436461Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 18:14:41.281492 containerd[1523]: time="2025-05-27T18:14:41.281484186Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 18:14:41.281621 containerd[1523]: time="2025-05-27T18:14:41.281514577Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 18:14:41.281621 containerd[1523]: time="2025-05-27T18:14:41.281581504Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 18:14:41.281621 containerd[1523]: time="2025-05-27T18:14:41.281596979Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 18:14:41.281991 containerd[1523]: time="2025-05-27T18:14:41.281957429Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 18:14:41.281991 containerd[1523]: time="2025-05-27T18:14:41.281982925Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 18:14:41.282081 containerd[1523]: time="2025-05-27T18:14:41.281996909Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 18:14:41.282081 containerd[1523]: time="2025-05-27T18:14:41.282006821Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 18:14:41.282156 containerd[1523]: time="2025-05-27T18:14:41.282114700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 18:14:41.285936 containerd[1523]: time="2025-05-27T18:14:41.285517871Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 18:14:41.285936 containerd[1523]: time="2025-05-27T18:14:41.285590135Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 18:14:41.285936 containerd[1523]: time="2025-05-27T18:14:41.285605147Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 18:14:41.286194 kernel: mousedev: PS/2 mouse device common for all mice May 27 18:14:41.286987 containerd[1523]: time="2025-05-27T18:14:41.286950348Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 18:14:41.290039 containerd[1523]: time="2025-05-27T18:14:41.289452178Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 18:14:41.290039 containerd[1523]: time="2025-05-27T18:14:41.289600228Z" level=info msg="metadata content store policy set" policy=shared May 27 18:14:41.295155 containerd[1523]: time="2025-05-27T18:14:41.295102131Z" 
level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 18:14:41.295155 containerd[1523]: time="2025-05-27T18:14:41.295192228Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 18:14:41.295371 containerd[1523]: time="2025-05-27T18:14:41.295217132Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 18:14:41.295371 containerd[1523]: time="2025-05-27T18:14:41.295233359Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 18:14:41.295371 containerd[1523]: time="2025-05-27T18:14:41.295288234Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 18:14:41.295371 containerd[1523]: time="2025-05-27T18:14:41.295300630Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 18:14:41.295371 containerd[1523]: time="2025-05-27T18:14:41.295316097Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 18:14:41.295371 containerd[1523]: time="2025-05-27T18:14:41.295328126Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 18:14:41.295371 containerd[1523]: time="2025-05-27T18:14:41.295339772Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 18:14:41.295371 containerd[1523]: time="2025-05-27T18:14:41.295351480Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 18:14:41.295371 containerd[1523]: time="2025-05-27T18:14:41.295368243Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 18:14:41.295555 containerd[1523]: time="2025-05-27T18:14:41.295417341Z" 
level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295620565Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295672491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295702822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295719705Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295736136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295751157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295767130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295781410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295798203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295815394Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295831734Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images 
type=io.containerd.cri.v1 May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295930744Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.295952920Z" level=info msg="Start snapshots syncer" May 27 18:14:41.296629 containerd[1523]: time="2025-05-27T18:14:41.296033629Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 18:14:41.297068 containerd[1523]: time="2025-05-27T18:14:41.296368724Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,
\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 18:14:41.297068 containerd[1523]: time="2025-05-27T18:14:41.296446542Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 18:14:41.298411 containerd[1523]: time="2025-05-27T18:14:41.297253686Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302134737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302210555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302229594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302248006Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302262163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302273306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302285595Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 18:14:41.302563
containerd[1523]: time="2025-05-27T18:14:41.302328654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302374171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302396930Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302444206Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302466457Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 18:14:41.302563 containerd[1523]: time="2025-05-27T18:14:41.302476305Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 18:14:41.302972 containerd[1523]: time="2025-05-27T18:14:41.302485076Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 18:14:41.302972 containerd[1523]: time="2025-05-27T18:14:41.302493084Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 18:14:41.302972 containerd[1523]: time="2025-05-27T18:14:41.302501673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 18:14:41.302972 containerd[1523]: time="2025-05-27T18:14:41.302511684Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 18:14:41.302972 containerd[1523]: time="2025-05-27T18:14:41.302531396Z" level=info msg="runtime 
interface created" May 27 18:14:41.302972 containerd[1523]: time="2025-05-27T18:14:41.302537015Z" level=info msg="created NRI interface" May 27 18:14:41.302972 containerd[1523]: time="2025-05-27T18:14:41.302545591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 18:14:41.302972 containerd[1523]: time="2025-05-27T18:14:41.302583026Z" level=info msg="Connect containerd service" May 27 18:14:41.302972 containerd[1523]: time="2025-05-27T18:14:41.302634170Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 18:14:41.308004 containerd[1523]: time="2025-05-27T18:14:41.307864052Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 18:14:41.362438 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. May 27 18:14:41.373581 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... May 27 18:14:41.374550 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 18:14:41.485735 kernel: ISO 9660 Extensions: RRIP_1991A May 27 18:14:41.493833 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. May 27 18:14:41.512544 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). May 27 18:14:41.569589 systemd-networkd[1457]: eth1: Configuring with /run/systemd/network/10-62:94:a3:bd:9b:1e.network. May 27 18:14:41.573106 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. 
May 27 18:14:41.575405 systemd-networkd[1457]: eth1: Link UP
May 27 18:14:41.575973 systemd-networkd[1457]: eth1: Gained carrier
May 27 18:14:41.576489 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 27 18:14:41.579529 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 27 18:14:41.582080 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 27 18:14:41.593154 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 18:14:41.628033 coreos-metadata[1478]: May 27 18:14:41.627 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
May 27 18:14:41.641295 coreos-metadata[1478]: May 27 18:14:41.641 INFO Fetch successful
May 27 18:14:41.652926 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 18:14:41.660233 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 18:14:41.693608 containerd[1523]: time="2025-05-27T18:14:41.693308887Z" level=info msg="Start subscribing containerd event"
May 27 18:14:41.693608 containerd[1523]: time="2025-05-27T18:14:41.693372805Z" level=info msg="Start recovering state"
May 27 18:14:41.693608 containerd[1523]: time="2025-05-27T18:14:41.693493065Z" level=info msg="Start event monitor"
May 27 18:14:41.693608 containerd[1523]: time="2025-05-27T18:14:41.693507364Z" level=info msg="Start cni network conf syncer for default"
May 27 18:14:41.693608 containerd[1523]: time="2025-05-27T18:14:41.693514625Z" level=info msg="Start streaming server"
May 27 18:14:41.693608 containerd[1523]: time="2025-05-27T18:14:41.693522613Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 18:14:41.693608 containerd[1523]: time="2025-05-27T18:14:41.693529784Z" level=info msg="runtime interface starting up..."
May 27 18:14:41.693608 containerd[1523]: time="2025-05-27T18:14:41.693535629Z" level=info msg="starting plugins..."
May 27 18:14:41.693608 containerd[1523]: time="2025-05-27T18:14:41.693548958Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 18:14:41.696218 containerd[1523]: time="2025-05-27T18:14:41.694088221Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 18:14:41.696218 containerd[1523]: time="2025-05-27T18:14:41.694137660Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 18:14:41.694353 systemd[1]: Started containerd.service - containerd container runtime.
May 27 18:14:41.698696 containerd[1523]: time="2025-05-27T18:14:41.698644736Z" level=info msg="containerd successfully booted in 0.511757s"
May 27 18:14:41.718585 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 18:14:41.726286 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 18:14:41.727748 systemd[1]: issuegen.service: Deactivated successfully.
May 27 18:14:41.728243 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 18:14:41.730918 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 18:14:41.734267 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 18:14:41.738143 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 18:14:41.745794 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 27 18:14:41.746168 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 27 18:14:41.748082 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 27 18:14:41.756029 kernel: ACPI: button: Power Button [PWRF]
May 27 18:14:41.785935 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 18:14:41.791135 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 18:14:41.794521 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 18:14:41.796370 systemd[1]: Reached target getty.target - Login Prompts.
May 27 18:14:41.815283 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 18:14:41.960205 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 27 18:14:41.962210 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 27 18:14:41.969908 kernel: Console: switching to colour dummy device 80x25
May 27 18:14:41.971545 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 27 18:14:41.971619 kernel: [drm] features: -context_init
May 27 18:14:41.974319 kernel: [drm] number of scanouts: 1
May 27 18:14:41.974415 kernel: [drm] number of cap sets: 0
May 27 18:14:41.976246 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
May 27 18:14:42.015793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 18:14:42.046545 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 18:14:42.066896 systemd-logind[1493]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 18:14:42.115783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 18:14:42.116152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:14:42.120785 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 18:14:42.128695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 18:14:42.187716 coreos-metadata[1550]: May 27 18:14:42.187 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2
May 27 18:14:42.225391 coreos-metadata[1550]: May 27 18:14:42.224 INFO Fetch successful
May 27 18:14:42.243433 unknown[1550]: wrote ssh authorized keys file for user: core
May 27 18:14:42.290005 update-ssh-keys[1632]: Updated "/home/core/.ssh/authorized_keys"
May 27 18:14:42.293234 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 27 18:14:42.298048 systemd[1]: Finished sshkeys.service.
May 27 18:14:42.309776 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:14:42.333211 kernel: EDAC MC: Ver: 3.0.0
May 27 18:14:42.388452 tar[1503]: linux-amd64/README.md
May 27 18:14:42.414604 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 18:14:42.663454 systemd-networkd[1457]: eth0: Gained IPv6LL
May 27 18:14:42.665073 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 27 18:14:42.669031 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 18:14:42.669764 systemd[1]: Reached target network-online.target - Network is Online.
May 27 18:14:42.672263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 18:14:42.674077 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 18:14:42.712826 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 18:14:43.007599 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 18:14:43.010840 systemd[1]: Started sshd@0-143.110.225.216:22-139.178.68.195:41790.service - OpenSSH per-connection server daemon (139.178.68.195:41790).
May 27 18:14:43.047332 systemd-networkd[1457]: eth1: Gained IPv6LL
May 27 18:14:43.047877 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 27 18:14:43.154008 sshd[1655]: Accepted publickey for core from 139.178.68.195 port 41790 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:14:43.156476 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:14:43.176498 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 18:14:43.180323 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 18:14:43.189061 systemd-logind[1493]: New session 1 of user core.
May 27 18:14:43.218685 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 18:14:43.226206 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 18:14:43.246359 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 18:14:43.251722 systemd-logind[1493]: New session c1 of user core.
May 27 18:14:43.442977 systemd[1659]: Queued start job for default target default.target.
May 27 18:14:43.451693 systemd[1659]: Created slice app.slice - User Application Slice.
May 27 18:14:43.451741 systemd[1659]: Reached target paths.target - Paths.
May 27 18:14:43.451793 systemd[1659]: Reached target timers.target - Timers.
May 27 18:14:43.456379 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 18:14:43.476662 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 18:14:43.478646 systemd[1659]: Reached target sockets.target - Sockets.
May 27 18:14:43.478726 systemd[1659]: Reached target basic.target - Basic System.
May 27 18:14:43.478769 systemd[1659]: Reached target default.target - Main User Target.
May 27 18:14:43.478812 systemd[1659]: Startup finished in 213ms.
May 27 18:14:43.479360 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 18:14:43.486472 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 18:14:43.556536 systemd[1]: Started sshd@1-143.110.225.216:22-139.178.68.195:41800.service - OpenSSH per-connection server daemon (139.178.68.195:41800).
May 27 18:14:43.635587 sshd[1670]: Accepted publickey for core from 139.178.68.195 port 41800 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:14:43.638371 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:14:43.646198 systemd-logind[1493]: New session 2 of user core.
May 27 18:14:43.652467 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 18:14:43.722794 sshd[1672]: Connection closed by 139.178.68.195 port 41800
May 27 18:14:43.723794 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
May 27 18:14:43.737378 systemd[1]: sshd@1-143.110.225.216:22-139.178.68.195:41800.service: Deactivated successfully.
May 27 18:14:43.742131 systemd[1]: session-2.scope: Deactivated successfully.
May 27 18:14:43.744281 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit.
May 27 18:14:43.749503 systemd-logind[1493]: Removed session 2.
May 27 18:14:43.752543 systemd[1]: Started sshd@2-143.110.225.216:22-139.178.68.195:41802.service - OpenSSH per-connection server daemon (139.178.68.195:41802).
May 27 18:14:43.822237 sshd[1678]: Accepted publickey for core from 139.178.68.195 port 41802 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:14:43.825675 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:14:43.836071 systemd-logind[1493]: New session 3 of user core.
May 27 18:14:43.843520 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 18:14:43.914919 sshd[1680]: Connection closed by 139.178.68.195 port 41802
May 27 18:14:43.917429 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
May 27 18:14:43.922974 systemd[1]: sshd@2-143.110.225.216:22-139.178.68.195:41802.service: Deactivated successfully.
May 27 18:14:43.927013 systemd[1]: session-3.scope: Deactivated successfully.
May 27 18:14:43.933100 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit.
May 27 18:14:43.935024 systemd-logind[1493]: Removed session 3.
May 27 18:14:44.023639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 18:14:44.024870 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 18:14:44.026164 systemd[1]: Startup finished in 4.027s (kernel) + 8.154s (initrd) + 7.232s (userspace) = 19.414s.
May 27 18:14:44.032089 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 18:14:44.799473 kubelet[1690]: E0527 18:14:44.799364 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 18:14:44.803256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 18:14:44.803789 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 18:14:44.804800 systemd[1]: kubelet.service: Consumed 1.442s CPU time, 267.6M memory peak.
May 27 18:14:53.934691 systemd[1]: Started sshd@3-143.110.225.216:22-139.178.68.195:34026.service - OpenSSH per-connection server daemon (139.178.68.195:34026).
May 27 18:14:54.020354 sshd[1702]: Accepted publickey for core from 139.178.68.195 port 34026 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:14:54.022973 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:14:54.031131 systemd-logind[1493]: New session 4 of user core.
May 27 18:14:54.039603 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 18:14:54.107483 sshd[1704]: Connection closed by 139.178.68.195 port 34026
May 27 18:14:54.111818 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
May 27 18:14:54.125888 systemd[1]: sshd@3-143.110.225.216:22-139.178.68.195:34026.service: Deactivated successfully.
May 27 18:14:54.129666 systemd[1]: session-4.scope: Deactivated successfully.
May 27 18:14:54.132625 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit.
May 27 18:14:54.136809 systemd[1]: Started sshd@4-143.110.225.216:22-139.178.68.195:34028.service - OpenSSH per-connection server daemon (139.178.68.195:34028).
May 27 18:14:54.139153 systemd-logind[1493]: Removed session 4.
May 27 18:14:54.210167 sshd[1710]: Accepted publickey for core from 139.178.68.195 port 34028 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:14:54.213445 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:14:54.224199 systemd-logind[1493]: New session 5 of user core.
May 27 18:14:54.231561 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 18:14:54.293991 sshd[1712]: Connection closed by 139.178.68.195 port 34028
May 27 18:14:54.295353 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
May 27 18:14:54.314306 systemd[1]: sshd@4-143.110.225.216:22-139.178.68.195:34028.service: Deactivated successfully.
May 27 18:14:54.318530 systemd[1]: session-5.scope: Deactivated successfully.
May 27 18:14:54.321595 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit.
May 27 18:14:54.325928 systemd[1]: Started sshd@5-143.110.225.216:22-139.178.68.195:34034.service - OpenSSH per-connection server daemon (139.178.68.195:34034).
May 27 18:14:54.327417 systemd-logind[1493]: Removed session 5.
May 27 18:14:54.392043 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 34034 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:14:54.394596 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:14:54.403727 systemd-logind[1493]: New session 6 of user core.
May 27 18:14:54.413566 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 18:14:54.478306 sshd[1720]: Connection closed by 139.178.68.195 port 34034
May 27 18:14:54.479338 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
May 27 18:14:54.490829 systemd[1]: sshd@5-143.110.225.216:22-139.178.68.195:34034.service: Deactivated successfully.
May 27 18:14:54.493922 systemd[1]: session-6.scope: Deactivated successfully.
May 27 18:14:54.495275 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit.
May 27 18:14:54.502366 systemd[1]: Started sshd@6-143.110.225.216:22-139.178.68.195:34044.service - OpenSSH per-connection server daemon (139.178.68.195:34044).
May 27 18:14:54.504490 systemd-logind[1493]: Removed session 6.
May 27 18:14:54.576372 sshd[1726]: Accepted publickey for core from 139.178.68.195 port 34044 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:14:54.578950 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:14:54.588231 systemd-logind[1493]: New session 7 of user core.
May 27 18:14:54.593581 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 18:14:54.668498 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 18:14:54.669124 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 18:14:54.686939 sudo[1729]: pam_unix(sudo:session): session closed for user root
May 27 18:14:54.691210 sshd[1728]: Connection closed by 139.178.68.195 port 34044
May 27 18:14:54.691418 sshd-session[1726]: pam_unix(sshd:session): session closed for user core
May 27 18:14:54.702987 systemd[1]: sshd@6-143.110.225.216:22-139.178.68.195:34044.service: Deactivated successfully.
May 27 18:14:54.706331 systemd[1]: session-7.scope: Deactivated successfully.
May 27 18:14:54.708870 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit.
May 27 18:14:54.710800 systemd-logind[1493]: Removed session 7.
May 27 18:14:54.713741 systemd[1]: Started sshd@7-143.110.225.216:22-139.178.68.195:34056.service - OpenSSH per-connection server daemon (139.178.68.195:34056).
May 27 18:14:54.798058 sshd[1735]: Accepted publickey for core from 139.178.68.195 port 34056 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:14:54.800151 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:14:54.804096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 18:14:54.808053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 18:14:54.811581 systemd-logind[1493]: New session 8 of user core.
May 27 18:14:54.821836 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 18:14:54.892010 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 18:14:54.893232 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 18:14:54.902495 sudo[1742]: pam_unix(sudo:session): session closed for user root
May 27 18:14:54.913121 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 18:14:54.914393 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 18:14:54.938821 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 18:14:55.016303 augenrules[1766]: No rules
May 27 18:14:55.019628 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 18:14:55.022309 sudo[1741]: pam_unix(sudo:session): session closed for user root
May 27 18:14:55.019932 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 18:14:55.027141 sshd[1740]: Connection closed by 139.178.68.195 port 34056
May 27 18:14:55.026267 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
May 27 18:14:55.040881 systemd[1]: sshd@7-143.110.225.216:22-139.178.68.195:34056.service: Deactivated successfully.
May 27 18:14:55.045910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 18:14:55.047101 systemd[1]: session-8.scope: Deactivated successfully.
May 27 18:14:55.050495 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit.
May 27 18:14:55.057847 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 18:14:55.060611 systemd[1]: Started sshd@8-143.110.225.216:22-139.178.68.195:34062.service - OpenSSH per-connection server daemon (139.178.68.195:34062).
May 27 18:14:55.062784 systemd-logind[1493]: Removed session 8.
May 27 18:14:55.118678 kubelet[1775]: E0527 18:14:55.118635 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 18:14:55.124273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 18:14:55.124521 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 18:14:55.125389 systemd[1]: kubelet.service: Consumed 224ms CPU time, 108.6M memory peak.
May 27 18:14:55.133855 sshd[1779]: Accepted publickey for core from 139.178.68.195 port 34062 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:14:55.137447 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:14:55.146584 systemd-logind[1493]: New session 9 of user core.
May 27 18:14:55.153553 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 18:14:55.216219 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 18:14:55.216701 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 18:14:55.744562 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 18:14:55.761427 (dockerd)[1807]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 18:14:56.167818 dockerd[1807]: time="2025-05-27T18:14:56.167332843Z" level=info msg="Starting up"
May 27 18:14:56.169557 dockerd[1807]: time="2025-05-27T18:14:56.169501771Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 18:14:56.220128 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport743699852-merged.mount: Deactivated successfully.
May 27 18:14:56.257659 dockerd[1807]: time="2025-05-27T18:14:56.257573783Z" level=info msg="Loading containers: start."
May 27 18:14:56.270228 kernel: Initializing XFRM netlink socket
May 27 18:14:56.564490 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 27 18:14:56.628267 systemd-networkd[1457]: docker0: Link UP
May 27 18:14:56.632560 dockerd[1807]: time="2025-05-27T18:14:56.632466944Z" level=info msg="Loading containers: done."
May 27 18:14:56.655038 dockerd[1807]: time="2025-05-27T18:14:56.654942302Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 18:14:56.655340 dockerd[1807]: time="2025-05-27T18:14:56.655063444Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 18:14:56.655340 dockerd[1807]: time="2025-05-27T18:14:56.655258901Z" level=info msg="Initializing buildkit"
May 27 18:14:56.685129 dockerd[1807]: time="2025-05-27T18:14:56.685051092Z" level=info msg="Completed buildkit initialization"
May 27 18:14:56.695351 dockerd[1807]: time="2025-05-27T18:14:56.695251594Z" level=info msg="Daemon has completed initialization"
May 27 18:14:56.696229 dockerd[1807]: time="2025-05-27T18:14:56.695627108Z" level=info msg="API listen on /run/docker.sock"
May 27 18:14:56.696747 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 18:14:57.683743 systemd-resolved[1401]: Clock change detected. Flushing caches.
May 27 18:14:57.684808 systemd-timesyncd[1412]: Contacted time server 204.2.134.172:123 (2.flatcar.pool.ntp.org).
May 27 18:14:57.686915 systemd-timesyncd[1412]: Initial clock synchronization to Tue 2025-05-27 18:14:57.683624 UTC.
May 27 18:14:58.258561 containerd[1523]: time="2025-05-27T18:14:58.258078166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 27 18:14:58.781081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1085469737.mount: Deactivated successfully.
May 27 18:15:00.046553 containerd[1523]: time="2025-05-27T18:15:00.046476034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:00.049388 containerd[1523]: time="2025-05-27T18:15:00.049215245Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403"
May 27 18:15:00.050223 containerd[1523]: time="2025-05-27T18:15:00.050130722Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:00.055884 containerd[1523]: time="2025-05-27T18:15:00.055799472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:00.057800 containerd[1523]: time="2025-05-27T18:15:00.057695212Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.799552902s"
May 27 18:15:00.058239 containerd[1523]: time="2025-05-27T18:15:00.057773217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\""
May 27 18:15:00.059134 containerd[1523]: time="2025-05-27T18:15:00.059071384Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 27 18:15:01.963264 containerd[1523]: time="2025-05-27T18:15:01.962517701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:01.966475 containerd[1523]: time="2025-05-27T18:15:01.965775607Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390"
May 27 18:15:01.968661 containerd[1523]: time="2025-05-27T18:15:01.968218775Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:01.975759 containerd[1523]: time="2025-05-27T18:15:01.975117841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:01.978898 containerd[1523]: time="2025-05-27T18:15:01.978829528Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.919509144s"
May 27 18:15:01.979175 containerd[1523]: time="2025-05-27T18:15:01.979133405Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\""
May 27 18:15:01.980535 containerd[1523]: time="2025-05-27T18:15:01.979945541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 27 18:15:04.283759 containerd[1523]: time="2025-05-27T18:15:04.282212671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:04.284356 containerd[1523]: time="2025-05-27T18:15:04.283912322Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960"
May 27 18:15:04.284702 containerd[1523]: time="2025-05-27T18:15:04.284651221Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:04.288939 containerd[1523]: time="2025-05-27T18:15:04.288873989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:04.291111 containerd[1523]: time="2025-05-27T18:15:04.291034141Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 2.311040542s"
May 27 18:15:04.291439 containerd[1523]: time="2025-05-27T18:15:04.291397522Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\""
May 27 18:15:04.292220 containerd[1523]: time="2025-05-27T18:15:04.292160464Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 27 18:15:04.294212 systemd-resolved[1401]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
May 27 18:15:05.554636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114787582.mount: Deactivated successfully.
May 27 18:15:06.125672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 18:15:06.130139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 18:15:06.410179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 18:15:06.430581 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 18:15:06.478032 containerd[1523]: time="2025-05-27T18:15:06.477968536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:06.480488 containerd[1523]: time="2025-05-27T18:15:06.480110472Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075"
May 27 18:15:06.481301 containerd[1523]: time="2025-05-27T18:15:06.481262117Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:06.485056 containerd[1523]: time="2025-05-27T18:15:06.485001084Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 2.192793707s"
May 27 18:15:06.485453 containerd[1523]: time="2025-05-27T18:15:06.485322703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\""
May 27 18:15:06.485453 containerd[1523]: time="2025-05-27T18:15:06.485277800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:06.487069 containerd[1523]: time="2025-05-27T18:15:06.487026580Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 27 18:15:06.528192 kubelet[2093]: E0527 18:15:06.528069 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 18:15:06.534006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 18:15:06.534238 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 18:15:06.535231 systemd[1]: kubelet.service: Consumed 278ms CPU time, 108.1M memory peak.
May 27 18:15:06.998311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2163436936.mount: Deactivated successfully.
May 27 18:15:07.350987 systemd-resolved[1401]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
May 27 18:15:08.154171 containerd[1523]: time="2025-05-27T18:15:08.154100583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:08.156594 containerd[1523]: time="2025-05-27T18:15:08.156527504Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
May 27 18:15:08.158007 containerd[1523]: time="2025-05-27T18:15:08.157953944Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:08.161190 containerd[1523]: time="2025-05-27T18:15:08.161137020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:08.163364 containerd[1523]: time="2025-05-27T18:15:08.163090668Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.675736385s"
May 27 18:15:08.163958 containerd[1523]: time="2025-05-27T18:15:08.163604370Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
May 27 18:15:08.164812 containerd[1523]: time="2025-05-27T18:15:08.164773324Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 18:15:08.639931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814279030.mount: Deactivated successfully.
May 27 18:15:08.645707 containerd[1523]: time="2025-05-27T18:15:08.645625080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 18:15:08.648246 containerd[1523]: time="2025-05-27T18:15:08.648172565Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 27 18:15:08.648915 containerd[1523]: time="2025-05-27T18:15:08.648803692Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 18:15:08.654759 containerd[1523]: time="2025-05-27T18:15:08.654661662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 18:15:08.657304 containerd[1523]: time="2025-05-27T18:15:08.657225672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 492.247662ms"
May 27 18:15:08.657304 containerd[1523]: time="2025-05-27T18:15:08.657302801Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 27 18:15:08.658190 containerd[1523]: time="2025-05-27T18:15:08.658117676Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 27 18:15:12.077750 containerd[1523]: time="2025-05-27T18:15:12.077635212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:12.079746 containerd[1523]: time="2025-05-27T18:15:12.079327217Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739"
May 27 18:15:12.080709 containerd[1523]: time="2025-05-27T18:15:12.080639774Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:12.084777 containerd[1523]: time="2025-05-27T18:15:12.084702925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:12.086988 containerd[1523]: time="2025-05-27T18:15:12.086914179Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.428517852s"
May 27 18:15:12.086988 containerd[1523]: time="2025-05-27T18:15:12.086983470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
May 27 18:15:16.428118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 18:15:16.429038 systemd[1]: kubelet.service: Consumed 278ms CPU time, 108.1M memory peak.
May 27 18:15:16.434247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 18:15:16.480973 systemd[1]: Reload requested from client PID 2195 ('systemctl') (unit session-9.scope)...
May 27 18:15:16.480993 systemd[1]: Reloading...
May 27 18:15:16.681761 zram_generator::config[2247]: No configuration found.
May 27 18:15:16.798634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 18:15:16.966826 systemd[1]: Reloading finished in 485 ms.
May 27 18:15:17.044405 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 18:15:17.044698 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 18:15:17.045318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 18:15:17.045407 systemd[1]: kubelet.service: Consumed 174ms CPU time, 98.3M memory peak.
May 27 18:15:17.050002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 18:15:17.248494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 18:15:17.265775 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 18:15:17.343778 kubelet[2293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 18:15:17.344823 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 18:15:17.344823 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 18:15:17.344823 kubelet[2293]: I0527 18:15:17.344379 2293 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 18:15:18.092364 kubelet[2293]: I0527 18:15:18.092245 2293 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 18:15:18.092364 kubelet[2293]: I0527 18:15:18.092318 2293 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 18:15:18.093016 kubelet[2293]: I0527 18:15:18.092869 2293 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 18:15:18.178779 kubelet[2293]: I0527 18:15:18.178135 2293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 18:15:18.181234 kubelet[2293]: E0527 18:15:18.181123 2293 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.110.225.216:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.110.225.216:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 27 18:15:18.206866 kubelet[2293]: I0527 18:15:18.206675 2293 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 18:15:18.218592 kubelet[2293]: I0527 18:15:18.218387 2293 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 18:15:18.225297 kubelet[2293]: I0527 18:15:18.224875 2293 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 18:15:18.230130 kubelet[2293]: I0527 18:15:18.224961 2293 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-0-76b74bdce7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 18:15:18.230750 kubelet[2293]: I0527 18:15:18.230491 2293 topology_manager.go:138] "Creating topology manager with none policy"
May 27 18:15:18.230750 kubelet[2293]: I0527 18:15:18.230529 2293 container_manager_linux.go:303] "Creating device plugin manager"
May 27 18:15:18.231029 kubelet[2293]: I0527 18:15:18.231008 2293 state_mem.go:36] "Initialized new in-memory state store"
May 27 18:15:18.234923 kubelet[2293]: I0527 18:15:18.234624 2293 kubelet.go:480] "Attempting to sync node with API server"
May 27 18:15:18.234923 kubelet[2293]: I0527 18:15:18.234809 2293 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 18:15:18.234923 kubelet[2293]: I0527 18:15:18.234865 2293 kubelet.go:386] "Adding apiserver pod source"
May 27 18:15:18.234923 kubelet[2293]: I0527 18:15:18.234891 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 18:15:18.248230 kubelet[2293]: E0527 18:15:18.248173 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.110.225.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-0-76b74bdce7&limit=500&resourceVersion=0\": dial tcp 143.110.225.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 18:15:18.248711 kubelet[2293]: I0527 18:15:18.248575 2293 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 18:15:18.251754 kubelet[2293]: I0527 18:15:18.249486 2293 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 18:15:18.251974 kubelet[2293]: W0527 18:15:18.251938 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 18:15:18.259284 kubelet[2293]: I0527 18:15:18.258813 2293 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 18:15:18.259284 kubelet[2293]: I0527 18:15:18.258895 2293 server.go:1289] "Started kubelet"
May 27 18:15:18.265385 kubelet[2293]: E0527 18:15:18.265315 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.110.225.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.110.225.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 18:15:18.266606 kubelet[2293]: I0527 18:15:18.265919 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 18:15:18.272787 kubelet[2293]: I0527 18:15:18.271666 2293 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 18:15:18.276745 kubelet[2293]: I0527 18:15:18.276685 2293 server.go:317] "Adding debug handlers to kubelet server"
May 27 18:15:18.277750 kubelet[2293]: I0527 18:15:18.277700 2293 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 18:15:18.309022 kubelet[2293]: I0527 18:15:18.308950 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 18:15:18.312451 kubelet[2293]: E0527 18:15:18.296677 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-0-76b74bdce7\" not found"
May 27 18:15:18.312451 kubelet[2293]: I0527 18:15:18.303322 2293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 18:15:18.312451 kubelet[2293]: I0527 18:15:18.311827 2293 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 18:15:18.312451 kubelet[2293]: I0527 18:15:18.311997 2293 reconciler.go:26] "Reconciler: start to sync state"
May 27 18:15:18.317195 kubelet[2293]: I0527 18:15:18.317150 2293 factory.go:223] Registration of the systemd container factory successfully
May 27 18:15:18.317611 kubelet[2293]: I0527 18:15:18.317572 2293 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 18:15:18.318199 kubelet[2293]: I0527 18:15:18.302764 2293 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 18:15:18.319825 kubelet[2293]: E0527 18:15:18.316445 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.110.225.216:6443/api/v1/namespaces/default/events\": dial tcp 143.110.225.216:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.0.0-0-76b74bdce7.184374fed8de4efd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.0.0-0-76b74bdce7,UID:ci-4344.0.0-0-76b74bdce7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.0.0-0-76b74bdce7,},FirstTimestamp:2025-05-27 18:15:18.258847485 +0000 UTC m=+0.985080469,LastTimestamp:2025-05-27 18:15:18.258847485 +0000 UTC m=+0.985080469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-0-76b74bdce7,}"
May 27 18:15:18.320744 kubelet[2293]: E0527 18:15:18.320015 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.110.225.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.110.225.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 27 18:15:18.320744 kubelet[2293]: E0527 18:15:18.320139 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.225.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-0-76b74bdce7?timeout=10s\": dial tcp 143.110.225.216:6443: connect: connection refused" interval="200ms"
May 27 18:15:18.322832 kubelet[2293]: I0527 18:15:18.322797 2293 factory.go:223] Registration of the containerd container factory successfully
May 27 18:15:18.352532 kubelet[2293]: I0527 18:15:18.350450 2293 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 18:15:18.355050 kubelet[2293]: I0527 18:15:18.354977 2293 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 18:15:18.355685 kubelet[2293]: I0527 18:15:18.355031 2293 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 18:15:18.355685 kubelet[2293]: I0527 18:15:18.355444 2293 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 18:15:18.355685 kubelet[2293]: I0527 18:15:18.355466 2293 kubelet.go:2436] "Starting kubelet main sync loop"
May 27 18:15:18.357222 kubelet[2293]: E0527 18:15:18.357125 2293 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 18:15:18.363855 kubelet[2293]: E0527 18:15:18.363810 2293 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 18:15:18.369160 kubelet[2293]: E0527 18:15:18.369092 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.110.225.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.110.225.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 27 18:15:18.379743 kubelet[2293]: I0527 18:15:18.379671 2293 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 18:15:18.379743 kubelet[2293]: I0527 18:15:18.379701 2293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 18:15:18.379997 kubelet[2293]: I0527 18:15:18.379787 2293 state_mem.go:36] "Initialized new in-memory state store"
May 27 18:15:18.381766 kubelet[2293]: I0527 18:15:18.381593 2293 policy_none.go:49] "None policy: Start"
May 27 18:15:18.381766 kubelet[2293]: I0527 18:15:18.381639 2293 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 18:15:18.381766 kubelet[2293]: I0527 18:15:18.381663 2293 state_mem.go:35] "Initializing new in-memory state store"
May 27 18:15:18.394359 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 18:15:18.412058 kubelet[2293]: E0527 18:15:18.411998 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-0-76b74bdce7\" not found"
May 27 18:15:18.418324 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 18:15:18.426805 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 18:15:18.444403 kubelet[2293]: E0527 18:15:18.444351 2293 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 27 18:15:18.445522 kubelet[2293]: I0527 18:15:18.445488 2293 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 18:15:18.445675 kubelet[2293]: I0527 18:15:18.445519 2293 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 18:15:18.447349 kubelet[2293]: I0527 18:15:18.446147 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 18:15:18.450078 kubelet[2293]: E0527 18:15:18.450033 2293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 18:15:18.450078 kubelet[2293]: E0527 18:15:18.450107 2293 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.0.0-0-76b74bdce7\" not found"
May 27 18:15:18.481713 systemd[1]: Created slice kubepods-burstable-podcdc72edd9dc47b2d5abb4b69e3e47e3b.slice - libcontainer container kubepods-burstable-podcdc72edd9dc47b2d5abb4b69e3e47e3b.slice.
May 27 18:15:18.510632 kubelet[2293]: E0527 18:15:18.510572 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-0-76b74bdce7\" not found" node="ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.515470 systemd[1]: Created slice kubepods-burstable-pod78c9191f3bca4d1e3e357772826fc7bb.slice - libcontainer container kubepods-burstable-pod78c9191f3bca4d1e3e357772826fc7bb.slice.
May 27 18:15:18.521530 kubelet[2293]: E0527 18:15:18.521441 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.225.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-0-76b74bdce7?timeout=10s\": dial tcp 143.110.225.216:6443: connect: connection refused" interval="400ms"
May 27 18:15:18.526419 kubelet[2293]: E0527 18:15:18.526340 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-0-76b74bdce7\" not found" node="ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.535908 systemd[1]: Created slice kubepods-burstable-pod5cc47f18f3a266b228df123ed3143bdd.slice - libcontainer container kubepods-burstable-pod5cc47f18f3a266b228df123ed3143bdd.slice.
May 27 18:15:18.543475 kubelet[2293]: E0527 18:15:18.543424 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-0-76b74bdce7\" not found" node="ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.547817 kubelet[2293]: I0527 18:15:18.547775 2293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.549042 kubelet[2293]: E0527 18:15:18.548972 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.225.216:6443/api/v1/nodes\": dial tcp 143.110.225.216:6443: connect: connection refused" node="ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.613959 kubelet[2293]: I0527 18:15:18.612932 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78c9191f3bca4d1e3e357772826fc7bb-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" (UID: \"78c9191f3bca4d1e3e357772826fc7bb\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.613959 kubelet[2293]: I0527 18:15:18.613011 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5cc47f18f3a266b228df123ed3143bdd-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-0-76b74bdce7\" (UID: \"5cc47f18f3a266b228df123ed3143bdd\") " pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.613959 kubelet[2293]: I0527 18:15:18.613042 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdc72edd9dc47b2d5abb4b69e3e47e3b-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-0-76b74bdce7\" (UID: \"cdc72edd9dc47b2d5abb4b69e3e47e3b\") " pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.613959 kubelet[2293]: I0527 18:15:18.613071 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78c9191f3bca4d1e3e357772826fc7bb-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" (UID: \"78c9191f3bca4d1e3e357772826fc7bb\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.613959 kubelet[2293]: I0527 18:15:18.613118 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78c9191f3bca4d1e3e357772826fc7bb-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" (UID: \"78c9191f3bca4d1e3e357772826fc7bb\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.614331 kubelet[2293]: I0527 18:15:18.613147 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78c9191f3bca4d1e3e357772826fc7bb-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" (UID: \"78c9191f3bca4d1e3e357772826fc7bb\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.614331 kubelet[2293]: I0527 18:15:18.613199 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78c9191f3bca4d1e3e357772826fc7bb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" (UID: \"78c9191f3bca4d1e3e357772826fc7bb\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.614331 kubelet[2293]: I0527 18:15:18.613276 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdc72edd9dc47b2d5abb4b69e3e47e3b-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-0-76b74bdce7\" (UID: \"cdc72edd9dc47b2d5abb4b69e3e47e3b\") " pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.614331 kubelet[2293]: I0527 18:15:18.613307 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdc72edd9dc47b2d5abb4b69e3e47e3b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-0-76b74bdce7\" (UID: \"cdc72edd9dc47b2d5abb4b69e3e47e3b\") " pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.751832 kubelet[2293]: I0527 18:15:18.751772 2293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.752780 kubelet[2293]: E0527 18:15:18.752707 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.225.216:6443/api/v1/nodes\": dial tcp 143.110.225.216:6443: connect: connection refused" node="ci-4344.0.0-0-76b74bdce7"
May 27 18:15:18.812507 kubelet[2293]: E0527 18:15:18.812417 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted,
the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:18.817125 containerd[1523]: time="2025-05-27T18:15:18.816788746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-0-76b74bdce7,Uid:cdc72edd9dc47b2d5abb4b69e3e47e3b,Namespace:kube-system,Attempt:0,}" May 27 18:15:18.827935 kubelet[2293]: E0527 18:15:18.827811 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:18.828584 containerd[1523]: time="2025-05-27T18:15:18.828505463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-0-76b74bdce7,Uid:78c9191f3bca4d1e3e357772826fc7bb,Namespace:kube-system,Attempt:0,}" May 27 18:15:18.847742 kubelet[2293]: E0527 18:15:18.847641 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:18.849141 containerd[1523]: time="2025-05-27T18:15:18.848934989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-0-76b74bdce7,Uid:5cc47f18f3a266b228df123ed3143bdd,Namespace:kube-system,Attempt:0,}" May 27 18:15:18.929784 kubelet[2293]: E0527 18:15:18.929393 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.225.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-0-76b74bdce7?timeout=10s\": dial tcp 143.110.225.216:6443: connect: connection refused" interval="800ms" May 27 18:15:19.065814 containerd[1523]: time="2025-05-27T18:15:19.065113723Z" level=info msg="connecting to shim e3b77edf8ad92a8c66f7dd8b15a264a7338debe19df2d1bd271f837b0f9fbdf3" address="unix:///run/containerd/s/0937936471f0038d041d86889e9b8c1f648e4b347310a4832cc7bcaedb694c04" namespace=k8s.io protocol=ttrpc version=3 May 
27 18:15:19.070818 containerd[1523]: time="2025-05-27T18:15:19.070702708Z" level=info msg="connecting to shim 4ef99ba6d6eb5cc73bdcf6b8f046bffe22694e314ea104813da7cb6165b868d2" address="unix:///run/containerd/s/b00f5ed806f18742068cc14f9b7f6b0d31545d06093fbfc28b940b37a709c0da" namespace=k8s.io protocol=ttrpc version=3 May 27 18:15:19.077108 containerd[1523]: time="2025-05-27T18:15:19.077038215Z" level=info msg="connecting to shim 1bc7f980d54f32217342e0e94733426af8241e0d8f57db982d09caa807077392" address="unix:///run/containerd/s/244ffae3e303de91909f682330ed359e45dbf31f525b71cfc73292af8f122d1f" namespace=k8s.io protocol=ttrpc version=3 May 27 18:15:19.132768 kubelet[2293]: E0527 18:15:19.132618 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.110.225.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.110.225.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 18:15:19.160712 kubelet[2293]: I0527 18:15:19.160483 2293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:19.161676 kubelet[2293]: E0527 18:15:19.161568 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.225.216:6443/api/v1/nodes\": dial tcp 143.110.225.216:6443: connect: connection refused" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:19.231675 kubelet[2293]: E0527 18:15:19.230630 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.110.225.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.110.225.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 18:15:19.247567 systemd[1]: Started 
cri-containerd-1bc7f980d54f32217342e0e94733426af8241e0d8f57db982d09caa807077392.scope - libcontainer container 1bc7f980d54f32217342e0e94733426af8241e0d8f57db982d09caa807077392. May 27 18:15:19.262308 systemd[1]: Started cri-containerd-4ef99ba6d6eb5cc73bdcf6b8f046bffe22694e314ea104813da7cb6165b868d2.scope - libcontainer container 4ef99ba6d6eb5cc73bdcf6b8f046bffe22694e314ea104813da7cb6165b868d2. May 27 18:15:19.268332 systemd[1]: Started cri-containerd-e3b77edf8ad92a8c66f7dd8b15a264a7338debe19df2d1bd271f837b0f9fbdf3.scope - libcontainer container e3b77edf8ad92a8c66f7dd8b15a264a7338debe19df2d1bd271f837b0f9fbdf3. May 27 18:15:19.457174 containerd[1523]: time="2025-05-27T18:15:19.456437500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-0-76b74bdce7,Uid:78c9191f3bca4d1e3e357772826fc7bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ef99ba6d6eb5cc73bdcf6b8f046bffe22694e314ea104813da7cb6165b868d2\"" May 27 18:15:19.464484 kubelet[2293]: E0527 18:15:19.464435 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:19.466403 containerd[1523]: time="2025-05-27T18:15:19.465867399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-0-76b74bdce7,Uid:cdc72edd9dc47b2d5abb4b69e3e47e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3b77edf8ad92a8c66f7dd8b15a264a7338debe19df2d1bd271f837b0f9fbdf3\"" May 27 18:15:19.469754 kubelet[2293]: E0527 18:15:19.469690 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:19.476432 containerd[1523]: time="2025-05-27T18:15:19.475055676Z" level=info msg="CreateContainer within sandbox \"4ef99ba6d6eb5cc73bdcf6b8f046bffe22694e314ea104813da7cb6165b868d2\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 18:15:19.478286 containerd[1523]: time="2025-05-27T18:15:19.478189053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-0-76b74bdce7,Uid:5cc47f18f3a266b228df123ed3143bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bc7f980d54f32217342e0e94733426af8241e0d8f57db982d09caa807077392\"" May 27 18:15:19.479987 containerd[1523]: time="2025-05-27T18:15:19.478834041Z" level=info msg="CreateContainer within sandbox \"e3b77edf8ad92a8c66f7dd8b15a264a7338debe19df2d1bd271f837b0f9fbdf3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 18:15:19.485145 kubelet[2293]: E0527 18:15:19.483814 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:19.492335 containerd[1523]: time="2025-05-27T18:15:19.492265119Z" level=info msg="CreateContainer within sandbox \"1bc7f980d54f32217342e0e94733426af8241e0d8f57db982d09caa807077392\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 18:15:19.497014 containerd[1523]: time="2025-05-27T18:15:19.496604282Z" level=info msg="Container b973308055465e40c68d00c86c05d198dc59d96de6acb55917c0b9223d9af632: CDI devices from CRI Config.CDIDevices: []" May 27 18:15:19.503297 containerd[1523]: time="2025-05-27T18:15:19.503222876Z" level=info msg="Container 4048c89957c9cefa7989113d8824f352c2d22e30b58ed00842a49e3b25a20d37: CDI devices from CRI Config.CDIDevices: []" May 27 18:15:19.517016 containerd[1523]: time="2025-05-27T18:15:19.516917099Z" level=info msg="Container 5c9214fac7c2d95ac5b5d57493f07fe7811d96429d3becab8845410fed387bcc: CDI devices from CRI Config.CDIDevices: []" May 27 18:15:19.525761 containerd[1523]: time="2025-05-27T18:15:19.525634966Z" level=info msg="CreateContainer within sandbox 
\"e3b77edf8ad92a8c66f7dd8b15a264a7338debe19df2d1bd271f837b0f9fbdf3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4048c89957c9cefa7989113d8824f352c2d22e30b58ed00842a49e3b25a20d37\"" May 27 18:15:19.527687 containerd[1523]: time="2025-05-27T18:15:19.527624814Z" level=info msg="StartContainer for \"4048c89957c9cefa7989113d8824f352c2d22e30b58ed00842a49e3b25a20d37\"" May 27 18:15:19.530745 containerd[1523]: time="2025-05-27T18:15:19.530344438Z" level=info msg="connecting to shim 4048c89957c9cefa7989113d8824f352c2d22e30b58ed00842a49e3b25a20d37" address="unix:///run/containerd/s/0937936471f0038d041d86889e9b8c1f648e4b347310a4832cc7bcaedb694c04" protocol=ttrpc version=3 May 27 18:15:19.539333 containerd[1523]: time="2025-05-27T18:15:19.539134211Z" level=info msg="CreateContainer within sandbox \"1bc7f980d54f32217342e0e94733426af8241e0d8f57db982d09caa807077392\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c9214fac7c2d95ac5b5d57493f07fe7811d96429d3becab8845410fed387bcc\"" May 27 18:15:19.541763 containerd[1523]: time="2025-05-27T18:15:19.541663348Z" level=info msg="CreateContainer within sandbox \"4ef99ba6d6eb5cc73bdcf6b8f046bffe22694e314ea104813da7cb6165b868d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b973308055465e40c68d00c86c05d198dc59d96de6acb55917c0b9223d9af632\"" May 27 18:15:19.542116 containerd[1523]: time="2025-05-27T18:15:19.542077131Z" level=info msg="StartContainer for \"5c9214fac7c2d95ac5b5d57493f07fe7811d96429d3becab8845410fed387bcc\"" May 27 18:15:19.555386 containerd[1523]: time="2025-05-27T18:15:19.555093938Z" level=info msg="connecting to shim 5c9214fac7c2d95ac5b5d57493f07fe7811d96429d3becab8845410fed387bcc" address="unix:///run/containerd/s/244ffae3e303de91909f682330ed359e45dbf31f525b71cfc73292af8f122d1f" protocol=ttrpc version=3 May 27 18:15:19.556873 containerd[1523]: time="2025-05-27T18:15:19.543279946Z" level=info msg="StartContainer for 
\"b973308055465e40c68d00c86c05d198dc59d96de6acb55917c0b9223d9af632\"" May 27 18:15:19.560567 containerd[1523]: time="2025-05-27T18:15:19.560497711Z" level=info msg="connecting to shim b973308055465e40c68d00c86c05d198dc59d96de6acb55917c0b9223d9af632" address="unix:///run/containerd/s/b00f5ed806f18742068cc14f9b7f6b0d31545d06093fbfc28b940b37a709c0da" protocol=ttrpc version=3 May 27 18:15:19.597100 systemd[1]: Started cri-containerd-4048c89957c9cefa7989113d8824f352c2d22e30b58ed00842a49e3b25a20d37.scope - libcontainer container 4048c89957c9cefa7989113d8824f352c2d22e30b58ed00842a49e3b25a20d37. May 27 18:15:19.626036 kubelet[2293]: E0527 18:15:19.624680 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.110.225.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-0-76b74bdce7&limit=500&resourceVersion=0\": dial tcp 143.110.225.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 18:15:19.630130 systemd[1]: Started cri-containerd-b973308055465e40c68d00c86c05d198dc59d96de6acb55917c0b9223d9af632.scope - libcontainer container b973308055465e40c68d00c86c05d198dc59d96de6acb55917c0b9223d9af632. May 27 18:15:19.641268 systemd[1]: Started cri-containerd-5c9214fac7c2d95ac5b5d57493f07fe7811d96429d3becab8845410fed387bcc.scope - libcontainer container 5c9214fac7c2d95ac5b5d57493f07fe7811d96429d3becab8845410fed387bcc. 
May 27 18:15:19.731231 kubelet[2293]: E0527 18:15:19.731169 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.225.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-0-76b74bdce7?timeout=10s\": dial tcp 143.110.225.216:6443: connect: connection refused" interval="1.6s" May 27 18:15:19.744325 containerd[1523]: time="2025-05-27T18:15:19.744183761Z" level=info msg="StartContainer for \"4048c89957c9cefa7989113d8824f352c2d22e30b58ed00842a49e3b25a20d37\" returns successfully" May 27 18:15:19.807226 containerd[1523]: time="2025-05-27T18:15:19.807170704Z" level=info msg="StartContainer for \"5c9214fac7c2d95ac5b5d57493f07fe7811d96429d3becab8845410fed387bcc\" returns successfully" May 27 18:15:19.809152 containerd[1523]: time="2025-05-27T18:15:19.809076942Z" level=info msg="StartContainer for \"b973308055465e40c68d00c86c05d198dc59d96de6acb55917c0b9223d9af632\" returns successfully" May 27 18:15:19.810868 kubelet[2293]: E0527 18:15:19.810668 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.110.225.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.110.225.216:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 18:15:19.966301 kubelet[2293]: I0527 18:15:19.966245 2293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:19.966874 kubelet[2293]: E0527 18:15:19.966834 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.225.216:6443/api/v1/nodes\": dial tcp 143.110.225.216:6443: connect: connection refused" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:20.389514 kubelet[2293]: E0527 18:15:20.389458 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4344.0.0-0-76b74bdce7\" not found" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:20.389706 kubelet[2293]: E0527 18:15:20.389664 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:20.396070 kubelet[2293]: E0527 18:15:20.395457 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-0-76b74bdce7\" not found" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:20.396070 kubelet[2293]: E0527 18:15:20.395684 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:20.400400 kubelet[2293]: E0527 18:15:20.400361 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-0-76b74bdce7\" not found" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:20.400598 kubelet[2293]: E0527 18:15:20.400513 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:21.402667 kubelet[2293]: E0527 18:15:21.402618 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-0-76b74bdce7\" not found" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:21.404304 kubelet[2293]: E0527 18:15:21.404262 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:21.404866 kubelet[2293]: E0527 18:15:21.404810 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4344.0.0-0-76b74bdce7\" not found" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:21.405069 kubelet[2293]: E0527 18:15:21.405002 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:21.571185 kubelet[2293]: I0527 18:15:21.570997 2293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:22.406825 kubelet[2293]: E0527 18:15:22.405162 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-0-76b74bdce7\" not found" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:22.408138 kubelet[2293]: E0527 18:15:22.408028 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:22.741972 kubelet[2293]: I0527 18:15:22.741826 2293 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:22.741972 kubelet[2293]: E0527 18:15:22.741872 2293 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.0.0-0-76b74bdce7\": node \"ci-4344.0.0-0-76b74bdce7\" not found" May 27 18:15:22.793240 kubelet[2293]: E0527 18:15:22.793178 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-0-76b74bdce7\" not found" May 27 18:15:22.802841 kubelet[2293]: I0527 18:15:22.802679 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7" May 27 18:15:22.828670 kubelet[2293]: E0527 18:15:22.828599 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" May 27 18:15:22.835313 kubelet[2293]: E0527 18:15:22.835261 2293 kubelet.go:3311] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-0-76b74bdce7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7" May 27 18:15:22.835313 kubelet[2293]: I0527 18:15:22.835299 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:15:22.838136 kubelet[2293]: E0527 18:15:22.838086 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:15:22.838136 kubelet[2293]: I0527 18:15:22.838129 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7" May 27 18:15:22.844753 kubelet[2293]: E0527 18:15:22.843925 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-0-76b74bdce7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7" May 27 18:15:23.264895 kubelet[2293]: I0527 18:15:23.264826 2293 apiserver.go:52] "Watching apiserver" May 27 18:15:23.319199 kubelet[2293]: I0527 18:15:23.318976 2293 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 18:15:23.380060 kubelet[2293]: I0527 18:15:23.380012 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7" May 27 18:15:23.382515 kubelet[2293]: E0527 18:15:23.382417 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-0-76b74bdce7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7" May 27 18:15:23.382771 kubelet[2293]: E0527 18:15:23.382743 
2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:25.234555 systemd[1]: Reload requested from client PID 2576 ('systemctl') (unit session-9.scope)... May 27 18:15:25.234582 systemd[1]: Reloading... May 27 18:15:25.401888 zram_generator::config[2622]: No configuration found. May 27 18:15:25.545599 kubelet[2293]: I0527 18:15:25.545155 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:15:25.556144 kubelet[2293]: I0527 18:15:25.556064 2293 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 18:15:25.557081 kubelet[2293]: E0527 18:15:25.556708 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:25.578954 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:15:25.756262 systemd[1]: Reloading finished in 521 ms. May 27 18:15:25.790342 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:15:25.809378 systemd[1]: kubelet.service: Deactivated successfully. May 27 18:15:25.810081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:15:25.810169 systemd[1]: kubelet.service: Consumed 1.585s CPU time, 125.3M memory peak. May 27 18:15:25.813945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:15:26.086326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 18:15:26.100303 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 18:15:26.194497 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:15:26.194497 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 18:15:26.194497 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:15:26.196783 kubelet[2670]: I0527 18:15:26.196196 2670 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 18:15:26.208806 kubelet[2670]: I0527 18:15:26.208633 2670 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 18:15:26.208806 kubelet[2670]: I0527 18:15:26.208674 2670 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 18:15:26.209224 kubelet[2670]: I0527 18:15:26.209189 2670 server.go:956] "Client rotation is on, will bootstrap in background" May 27 18:15:26.213904 kubelet[2670]: I0527 18:15:26.213859 2670 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 18:15:26.231170 kubelet[2670]: I0527 18:15:26.231102 2670 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 18:15:26.246743 kubelet[2670]: I0527 18:15:26.245580 2670 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" May 27 18:15:26.257278 kubelet[2670]: I0527 18:15:26.257234 2670 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 18:15:26.258052 kubelet[2670]: I0527 18:15:26.258000 2670 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 18:15:26.258545 kubelet[2670]: I0527 18:15:26.258170 2670 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-0-76b74bdce7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":n
ull,"CgroupVersion":2} May 27 18:15:26.258744 kubelet[2670]: I0527 18:15:26.258731 2670 topology_manager.go:138] "Creating topology manager with none policy" May 27 18:15:26.258800 kubelet[2670]: I0527 18:15:26.258794 2670 container_manager_linux.go:303] "Creating device plugin manager" May 27 18:15:26.258940 kubelet[2670]: I0527 18:15:26.258928 2670 state_mem.go:36] "Initialized new in-memory state store" May 27 18:15:26.259349 kubelet[2670]: I0527 18:15:26.259328 2670 kubelet.go:480] "Attempting to sync node with API server" May 27 18:15:26.259439 kubelet[2670]: I0527 18:15:26.259430 2670 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 18:15:26.259613 kubelet[2670]: I0527 18:15:26.259601 2670 kubelet.go:386] "Adding apiserver pod source" May 27 18:15:26.259696 kubelet[2670]: I0527 18:15:26.259680 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 18:15:26.264490 kubelet[2670]: I0527 18:15:26.264450 2670 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 18:15:26.265995 kubelet[2670]: I0527 18:15:26.265411 2670 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 18:15:26.271192 sudo[2684]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 18:15:26.271698 sudo[2684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 18:15:26.272975 kubelet[2670]: I0527 18:15:26.272946 2670 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 18:15:26.274239 kubelet[2670]: I0527 18:15:26.273957 2670 server.go:1289] "Started kubelet" May 27 18:15:26.281069 kubelet[2670]: I0527 18:15:26.281005 2670 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 18:15:26.283063 kubelet[2670]: I0527 18:15:26.282490 2670 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 18:15:26.284744 kubelet[2670]: I0527 18:15:26.284303 2670 server.go:317] "Adding debug handlers to kubelet server" May 27 18:15:26.285354 kubelet[2670]: I0527 18:15:26.285238 2670 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 18:15:26.310566 kubelet[2670]: I0527 18:15:26.310519 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 18:15:26.317086 kubelet[2670]: I0527 18:15:26.317014 2670 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 18:15:26.323700 kubelet[2670]: I0527 18:15:26.323656 2670 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 18:15:26.325783 kubelet[2670]: E0527 18:15:26.325205 2670 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-0-76b74bdce7\" not found" May 27 18:15:26.327248 kubelet[2670]: I0527 18:15:26.327209 2670 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 18:15:26.328099 kubelet[2670]: I0527 18:15:26.328044 2670 reconciler.go:26] "Reconciler: start to sync state" May 27 18:15:26.349086 kubelet[2670]: I0527 18:15:26.347423 2670 factory.go:223] Registration of the systemd container factory successfully May 27 18:15:26.351278 kubelet[2670]: I0527 18:15:26.351208 2670 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 18:15:26.362682 update_engine[1494]: I20250527 18:15:26.361805 1494 update_attempter.cc:509] Updating boot flags... May 27 18:15:26.368854 kubelet[2670]: E0527 18:15:26.364954 2670 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 18:15:26.368854 kubelet[2670]: I0527 18:15:26.368192 2670 factory.go:223] Registration of the containerd container factory successfully May 27 18:15:26.521259 kubelet[2670]: I0527 18:15:26.521198 2670 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 18:15:26.536142 kubelet[2670]: I0527 18:15:26.536001 2670 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 18:15:26.536595 kubelet[2670]: I0527 18:15:26.536281 2670 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 18:15:26.536991 kubelet[2670]: I0527 18:15:26.536834 2670 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 18:15:26.536991 kubelet[2670]: I0527 18:15:26.536860 2670 kubelet.go:2436] "Starting kubelet main sync loop" May 27 18:15:26.546367 kubelet[2670]: E0527 18:15:26.545377 2670 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 18:15:26.624399 kubelet[2670]: I0527 18:15:26.624058 2670 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 18:15:26.626168 kubelet[2670]: I0527 18:15:26.624953 2670 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 18:15:26.626168 kubelet[2670]: I0527 18:15:26.625016 2670 state_mem.go:36] "Initialized new in-memory state store" May 27 18:15:26.626168 kubelet[2670]: I0527 18:15:26.625964 2670 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 18:15:26.626168 kubelet[2670]: I0527 18:15:26.625983 2670 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 18:15:26.626168 kubelet[2670]: I0527 18:15:26.626027 2670 policy_none.go:49] "None policy: Start" May 27 18:15:26.626168 kubelet[2670]: I0527 18:15:26.626044 2670 memory_manager.go:186] "Starting 
memorymanager" policy="None" May 27 18:15:26.626168 kubelet[2670]: I0527 18:15:26.626064 2670 state_mem.go:35] "Initializing new in-memory state store" May 27 18:15:26.626870 kubelet[2670]: I0527 18:15:26.626504 2670 state_mem.go:75] "Updated machine memory state" May 27 18:15:26.648148 kubelet[2670]: E0527 18:15:26.648095 2670 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 18:15:26.787885 kubelet[2670]: E0527 18:15:26.780080 2670 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 18:15:26.787885 kubelet[2670]: I0527 18:15:26.780312 2670 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 18:15:26.787885 kubelet[2670]: I0527 18:15:26.780323 2670 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 18:15:26.789507 kubelet[2670]: I0527 18:15:26.788664 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 18:15:26.804586 kubelet[2670]: E0527 18:15:26.804545 2670 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 18:15:26.856807 kubelet[2670]: I0527 18:15:26.855004 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.872811 kubelet[2670]: I0527 18:15:26.860230 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.872811 kubelet[2670]: I0527 18:15:26.860429 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.913182 kubelet[2670]: I0527 18:15:26.912953 2670 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 18:15:26.922098 kubelet[2670]: I0527 18:15:26.922060 2670 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 18:15:26.922241 kubelet[2670]: E0527 18:15:26.922121 2670 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" already exists" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.927073 kubelet[2670]: I0527 18:15:26.927033 2670 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 27 18:15:26.934787 kubelet[2670]: I0527 18:15:26.933903 2670 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.960514 kubelet[2670]: I0527 18:15:26.959850 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78c9191f3bca4d1e3e357772826fc7bb-ca-certs\") pod 
\"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" (UID: \"78c9191f3bca4d1e3e357772826fc7bb\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.960514 kubelet[2670]: I0527 18:15:26.959887 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78c9191f3bca4d1e3e357772826fc7bb-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" (UID: \"78c9191f3bca4d1e3e357772826fc7bb\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.960514 kubelet[2670]: I0527 18:15:26.959907 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78c9191f3bca4d1e3e357772826fc7bb-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" (UID: \"78c9191f3bca4d1e3e357772826fc7bb\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.960514 kubelet[2670]: I0527 18:15:26.959928 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78c9191f3bca4d1e3e357772826fc7bb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" (UID: \"78c9191f3bca4d1e3e357772826fc7bb\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.960514 kubelet[2670]: I0527 18:15:26.959949 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5cc47f18f3a266b228df123ed3143bdd-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-0-76b74bdce7\" (UID: \"5cc47f18f3a266b228df123ed3143bdd\") " pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.961567 kubelet[2670]: I0527 18:15:26.959966 2670 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78c9191f3bca4d1e3e357772826fc7bb-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-0-76b74bdce7\" (UID: \"78c9191f3bca4d1e3e357772826fc7bb\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.961567 kubelet[2670]: I0527 18:15:26.960090 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdc72edd9dc47b2d5abb4b69e3e47e3b-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-0-76b74bdce7\" (UID: \"cdc72edd9dc47b2d5abb4b69e3e47e3b\") " pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.961567 kubelet[2670]: I0527 18:15:26.960197 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdc72edd9dc47b2d5abb4b69e3e47e3b-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-0-76b74bdce7\" (UID: \"cdc72edd9dc47b2d5abb4b69e3e47e3b\") " pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.961567 kubelet[2670]: I0527 18:15:26.960245 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdc72edd9dc47b2d5abb4b69e3e47e3b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-0-76b74bdce7\" (UID: \"cdc72edd9dc47b2d5abb4b69e3e47e3b\") " pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.961567 kubelet[2670]: I0527 18:15:26.961091 2670 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:26.964561 kubelet[2670]: I0527 18:15:26.961990 2670 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-0-76b74bdce7" May 27 18:15:27.218202 kubelet[2670]: E0527 18:15:27.218032 2670 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:27.224463 kubelet[2670]: E0527 18:15:27.224061 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:27.229901 kubelet[2670]: E0527 18:15:27.229353 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:27.235324 sudo[2684]: pam_unix(sudo:session): session closed for user root May 27 18:15:27.281751 kubelet[2670]: I0527 18:15:27.280912 2670 apiserver.go:52] "Watching apiserver" May 27 18:15:27.328701 kubelet[2670]: I0527 18:15:27.328638 2670 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 18:15:27.614351 kubelet[2670]: E0527 18:15:27.614302 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:27.616260 kubelet[2670]: E0527 18:15:27.614970 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:27.616544 kubelet[2670]: E0527 18:15:27.616503 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:27.711957 kubelet[2670]: I0527 18:15:27.711886 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" 
podStartSLOduration=2.711859372 podStartE2EDuration="2.711859372s" podCreationTimestamp="2025-05-27 18:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:15:27.682523089 +0000 UTC m=+1.574278264" watchObservedRunningTime="2025-05-27 18:15:27.711859372 +0000 UTC m=+1.603614529" May 27 18:15:27.712209 kubelet[2670]: I0527 18:15:27.712025 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7" podStartSLOduration=1.712018112 podStartE2EDuration="1.712018112s" podCreationTimestamp="2025-05-27 18:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:15:27.708157876 +0000 UTC m=+1.599913039" watchObservedRunningTime="2025-05-27 18:15:27.712018112 +0000 UTC m=+1.603773274" May 27 18:15:28.616748 kubelet[2670]: E0527 18:15:28.616273 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:28.616748 kubelet[2670]: E0527 18:15:28.616663 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:28.619576 kubelet[2670]: E0527 18:15:28.619515 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:28.816145 sudo[1788]: pam_unix(sudo:session): session closed for user root May 27 18:15:28.820009 sshd[1787]: Connection closed by 139.178.68.195 port 34062 May 27 18:15:28.821348 sshd-session[1779]: pam_unix(sshd:session): session closed for user core May 27 18:15:28.828961 
systemd[1]: sshd@8-143.110.225.216:22-139.178.68.195:34062.service: Deactivated successfully. May 27 18:15:28.834023 systemd[1]: session-9.scope: Deactivated successfully. May 27 18:15:28.834526 systemd[1]: session-9.scope: Consumed 6.820s CPU time, 220.5M memory peak. May 27 18:15:28.838230 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit. May 27 18:15:28.841904 systemd-logind[1493]: Removed session 9. May 27 18:15:29.490553 kubelet[2670]: I0527 18:15:29.490346 2670 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 18:15:29.491709 containerd[1523]: time="2025-05-27T18:15:29.491667720Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 18:15:29.492940 kubelet[2670]: I0527 18:15:29.492009 2670 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 18:15:29.872016 kubelet[2670]: E0527 18:15:29.871463 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:29.896031 kubelet[2670]: I0527 18:15:29.895767 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7" podStartSLOduration=3.895737063 podStartE2EDuration="3.895737063s" podCreationTimestamp="2025-05-27 18:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:15:27.749059215 +0000 UTC m=+1.640814397" watchObservedRunningTime="2025-05-27 18:15:29.895737063 +0000 UTC m=+3.787492224" May 27 18:15:29.984588 kubelet[2670]: E0527 18:15:29.984436 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" May 27 18:15:30.632340 kubelet[2670]: E0527 18:15:30.632176 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:30.635443 kubelet[2670]: E0527 18:15:30.635399 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:30.651686 systemd[1]: Created slice kubepods-besteffort-pod748846fd_4ac1_4da6_9f57_b9e284a1b276.slice - libcontainer container kubepods-besteffort-pod748846fd_4ac1_4da6_9f57_b9e284a1b276.slice. May 27 18:15:30.679488 systemd[1]: Created slice kubepods-burstable-pod5654a76c_c9e9_412c_bcc4_e3b1ba255fa3.slice - libcontainer container kubepods-burstable-pod5654a76c_c9e9_412c_bcc4_e3b1ba255fa3.slice. May 27 18:15:30.684747 kubelet[2670]: I0527 18:15:30.683833 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/748846fd-4ac1-4da6-9f57-b9e284a1b276-xtables-lock\") pod \"kube-proxy-mjrfk\" (UID: \"748846fd-4ac1-4da6-9f57-b9e284a1b276\") " pod="kube-system/kube-proxy-mjrfk" May 27 18:15:30.684747 kubelet[2670]: I0527 18:15:30.683897 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/748846fd-4ac1-4da6-9f57-b9e284a1b276-kube-proxy\") pod \"kube-proxy-mjrfk\" (UID: \"748846fd-4ac1-4da6-9f57-b9e284a1b276\") " pod="kube-system/kube-proxy-mjrfk" May 27 18:15:30.684747 kubelet[2670]: I0527 18:15:30.683917 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/748846fd-4ac1-4da6-9f57-b9e284a1b276-lib-modules\") pod \"kube-proxy-mjrfk\" (UID: 
\"748846fd-4ac1-4da6-9f57-b9e284a1b276\") " pod="kube-system/kube-proxy-mjrfk" May 27 18:15:30.684747 kubelet[2670]: I0527 18:15:30.683945 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpmzh\" (UniqueName: \"kubernetes.io/projected/748846fd-4ac1-4da6-9f57-b9e284a1b276-kube-api-access-wpmzh\") pod \"kube-proxy-mjrfk\" (UID: \"748846fd-4ac1-4da6-9f57-b9e284a1b276\") " pod="kube-system/kube-proxy-mjrfk" May 27 18:15:30.767505 systemd[1]: Created slice kubepods-besteffort-pod9d78b455_d61d_44ef_8a35_d03c1f99de0a.slice - libcontainer container kubepods-besteffort-pod9d78b455_d61d_44ef_8a35_d03c1f99de0a.slice. May 27 18:15:30.784438 kubelet[2670]: I0527 18:15:30.784290 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-bpf-maps\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.784920 kubelet[2670]: I0527 18:15:30.784806 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-hostproc\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785177 kubelet[2670]: I0527 18:15:30.785096 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-cgroup\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785177 kubelet[2670]: I0527 18:15:30.785138 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-lib-modules\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785177 kubelet[2670]: I0527 18:15:30.785161 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-clustermesh-secrets\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785292 kubelet[2670]: I0527 18:15:30.785190 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-etc-cni-netd\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785292 kubelet[2670]: I0527 18:15:30.785220 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-host-proc-sys-kernel\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785292 kubelet[2670]: I0527 18:15:30.785243 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-hubble-tls\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785292 kubelet[2670]: I0527 18:15:30.785284 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-run\") pod \"cilium-6qb7g\" (UID: 
\"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785407 kubelet[2670]: I0527 18:15:30.785300 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-xtables-lock\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785407 kubelet[2670]: I0527 18:15:30.785331 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-config-path\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785407 kubelet[2670]: I0527 18:15:30.785360 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cni-path\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785407 kubelet[2670]: I0527 18:15:30.785401 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-host-proc-sys-net\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.785544 kubelet[2670]: I0527 18:15:30.785419 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grrzl\" (UniqueName: \"kubernetes.io/projected/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-kube-api-access-grrzl\") pod \"cilium-6qb7g\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " pod="kube-system/cilium-6qb7g" May 27 18:15:30.887161 kubelet[2670]: 
I0527 18:15:30.886865 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d78b455-d61d-44ef-8a35-d03c1f99de0a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-twqh9\" (UID: \"9d78b455-d61d-44ef-8a35-d03c1f99de0a\") " pod="kube-system/cilium-operator-6c4d7847fc-twqh9" May 27 18:15:30.887161 kubelet[2670]: I0527 18:15:30.886912 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll6dd\" (UniqueName: \"kubernetes.io/projected/9d78b455-d61d-44ef-8a35-d03c1f99de0a-kube-api-access-ll6dd\") pod \"cilium-operator-6c4d7847fc-twqh9\" (UID: \"9d78b455-d61d-44ef-8a35-d03c1f99de0a\") " pod="kube-system/cilium-operator-6c4d7847fc-twqh9" May 27 18:15:30.964582 kubelet[2670]: E0527 18:15:30.964523 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:30.965860 containerd[1523]: time="2025-05-27T18:15:30.965786935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mjrfk,Uid:748846fd-4ac1-4da6-9f57-b9e284a1b276,Namespace:kube-system,Attempt:0,}" May 27 18:15:30.985347 kubelet[2670]: E0527 18:15:30.985313 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:30.989053 containerd[1523]: time="2025-05-27T18:15:30.988897864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6qb7g,Uid:5654a76c-c9e9-412c-bcc4-e3b1ba255fa3,Namespace:kube-system,Attempt:0,}" May 27 18:15:30.999984 containerd[1523]: time="2025-05-27T18:15:30.999886896Z" level=info msg="connecting to shim 7d399988f046877c0b658f7e7d0b1d83a94d8c52c01af40d3324d75cc82172b5" 
address="unix:///run/containerd/s/0de8b5280e32abfba0f1b1266f6a4f9c0edddffc42f7466722ff31ab4da8a1a8" namespace=k8s.io protocol=ttrpc version=3 May 27 18:15:31.028786 containerd[1523]: time="2025-05-27T18:15:31.026703398Z" level=info msg="connecting to shim 624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578" address="unix:///run/containerd/s/21feebfaabcf1ec479382973d445cc00916dfbedfec52413d9eef46cda7a0283" namespace=k8s.io protocol=ttrpc version=3 May 27 18:15:31.054218 systemd[1]: Started cri-containerd-7d399988f046877c0b658f7e7d0b1d83a94d8c52c01af40d3324d75cc82172b5.scope - libcontainer container 7d399988f046877c0b658f7e7d0b1d83a94d8c52c01af40d3324d75cc82172b5. May 27 18:15:31.074755 kubelet[2670]: E0527 18:15:31.074086 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:31.078028 containerd[1523]: time="2025-05-27T18:15:31.076412369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-twqh9,Uid:9d78b455-d61d-44ef-8a35-d03c1f99de0a,Namespace:kube-system,Attempt:0,}" May 27 18:15:31.087086 systemd[1]: Started cri-containerd-624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578.scope - libcontainer container 624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578. 
May 27 18:15:31.147979 containerd[1523]: time="2025-05-27T18:15:31.147465288Z" level=info msg="connecting to shim bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c" address="unix:///run/containerd/s/cc41357bf92a9ed4e9944ee12460b3f3a8acd995038e060fa4c9e1d05a70c2fd" namespace=k8s.io protocol=ttrpc version=3 May 27 18:15:31.148620 containerd[1523]: time="2025-05-27T18:15:31.147601221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mjrfk,Uid:748846fd-4ac1-4da6-9f57-b9e284a1b276,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d399988f046877c0b658f7e7d0b1d83a94d8c52c01af40d3324d75cc82172b5\"" May 27 18:15:31.151407 kubelet[2670]: E0527 18:15:31.151326 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:31.162488 containerd[1523]: time="2025-05-27T18:15:31.162412378Z" level=info msg="CreateContainer within sandbox \"7d399988f046877c0b658f7e7d0b1d83a94d8c52c01af40d3324d75cc82172b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 18:15:31.173850 containerd[1523]: time="2025-05-27T18:15:31.173785232Z" level=info msg="Container f609d48ccb343c6e12365f88d6c46042b80db257c5d309620a8e86ba5c0eecd3: CDI devices from CRI Config.CDIDevices: []" May 27 18:15:31.187997 containerd[1523]: time="2025-05-27T18:15:31.187891014Z" level=info msg="CreateContainer within sandbox \"7d399988f046877c0b658f7e7d0b1d83a94d8c52c01af40d3324d75cc82172b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f609d48ccb343c6e12365f88d6c46042b80db257c5d309620a8e86ba5c0eecd3\"" May 27 18:15:31.190290 containerd[1523]: time="2025-05-27T18:15:31.190180926Z" level=info msg="StartContainer for \"f609d48ccb343c6e12365f88d6c46042b80db257c5d309620a8e86ba5c0eecd3\"" May 27 18:15:31.197881 containerd[1523]: time="2025-05-27T18:15:31.197830839Z" level=info msg="connecting to shim 
f609d48ccb343c6e12365f88d6c46042b80db257c5d309620a8e86ba5c0eecd3" address="unix:///run/containerd/s/0de8b5280e32abfba0f1b1266f6a4f9c0edddffc42f7466722ff31ab4da8a1a8" protocol=ttrpc version=3 May 27 18:15:31.201011 containerd[1523]: time="2025-05-27T18:15:31.200796753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6qb7g,Uid:5654a76c-c9e9-412c-bcc4-e3b1ba255fa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\"" May 27 18:15:31.207492 kubelet[2670]: E0527 18:15:31.206860 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:31.211120 containerd[1523]: time="2025-05-27T18:15:31.211057377Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 18:15:31.216713 systemd-resolved[1401]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. May 27 18:15:31.244139 systemd[1]: Started cri-containerd-bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c.scope - libcontainer container bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c. May 27 18:15:31.270134 systemd[1]: Started cri-containerd-f609d48ccb343c6e12365f88d6c46042b80db257c5d309620a8e86ba5c0eecd3.scope - libcontainer container f609d48ccb343c6e12365f88d6c46042b80db257c5d309620a8e86ba5c0eecd3. 
May 27 18:15:31.361824 containerd[1523]: time="2025-05-27T18:15:31.361701211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-twqh9,Uid:9d78b455-d61d-44ef-8a35-d03c1f99de0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\"" May 27 18:15:31.364125 kubelet[2670]: E0527 18:15:31.364080 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:31.382193 containerd[1523]: time="2025-05-27T18:15:31.382015364Z" level=info msg="StartContainer for \"f609d48ccb343c6e12365f88d6c46042b80db257c5d309620a8e86ba5c0eecd3\" returns successfully" May 27 18:15:31.653874 kubelet[2670]: E0527 18:15:31.653483 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:31.653874 kubelet[2670]: E0527 18:15:31.653710 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:31.654288 kubelet[2670]: E0527 18:15:31.654007 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:36.838464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965079803.mount: Deactivated successfully. 
May 27 18:15:38.076813 kubelet[2670]: E0527 18:15:38.071413 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:15:38.096011 kubelet[2670]: I0527 18:15:38.095894 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mjrfk" podStartSLOduration=8.095870846 podStartE2EDuration="8.095870846s" podCreationTimestamp="2025-05-27 18:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:15:31.683114326 +0000 UTC m=+5.574869508" watchObservedRunningTime="2025-05-27 18:15:38.095870846 +0000 UTC m=+11.987626010" May 27 18:15:39.782525 containerd[1523]: time="2025-05-27T18:15:39.780321811Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 27 18:15:39.786470 containerd[1523]: time="2025-05-27T18:15:39.784527880Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.573411621s" May 27 18:15:39.786470 containerd[1523]: time="2025-05-27T18:15:39.784589775Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 27 18:15:39.794807 containerd[1523]: time="2025-05-27T18:15:39.792681100Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 18:15:39.800779 containerd[1523]: time="2025-05-27T18:15:39.800711611Z" level=info msg="CreateContainer within sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 18:15:39.805908 containerd[1523]: time="2025-05-27T18:15:39.804840519Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:15:39.805908 containerd[1523]: time="2025-05-27T18:15:39.805786781Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:15:39.832954 containerd[1523]: time="2025-05-27T18:15:39.832881335Z" level=info msg="Container 9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518: CDI devices from CRI Config.CDIDevices: []" May 27 18:15:39.839377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1960599611.mount: Deactivated successfully. 
May 27 18:15:39.844419 containerd[1523]: time="2025-05-27T18:15:39.844346506Z" level=info msg="CreateContainer within sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\""
May 27 18:15:39.846904 containerd[1523]: time="2025-05-27T18:15:39.846858799Z" level=info msg="StartContainer for \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\""
May 27 18:15:39.849522 containerd[1523]: time="2025-05-27T18:15:39.849473325Z" level=info msg="connecting to shim 9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518" address="unix:///run/containerd/s/21feebfaabcf1ec479382973d445cc00916dfbedfec52413d9eef46cda7a0283" protocol=ttrpc version=3
May 27 18:15:39.886375 systemd[1]: Started cri-containerd-9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518.scope - libcontainer container 9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518.
May 27 18:15:39.933997 containerd[1523]: time="2025-05-27T18:15:39.933930806Z" level=info msg="StartContainer for \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\" returns successfully"
May 27 18:15:39.959077 systemd[1]: cri-containerd-9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518.scope: Deactivated successfully.
May 27 18:15:39.991944 containerd[1523]: time="2025-05-27T18:15:39.991855993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\" id:\"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\" pid:3107 exited_at:{seconds:1748369739 nanos:960237179}"
May 27 18:15:40.010582 containerd[1523]: time="2025-05-27T18:15:40.010498760Z" level=info msg="received exit event container_id:\"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\" id:\"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\" pid:3107 exited_at:{seconds:1748369739 nanos:960237179}"
May 27 18:15:40.058703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518-rootfs.mount: Deactivated successfully.
May 27 18:15:40.731690 kubelet[2670]: E0527 18:15:40.731556 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:40.760265 containerd[1523]: time="2025-05-27T18:15:40.760134743Z" level=info msg="CreateContainer within sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 18:15:40.789214 containerd[1523]: time="2025-05-27T18:15:40.788750099Z" level=info msg="Container 8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2: CDI devices from CRI Config.CDIDevices: []"
May 27 18:15:40.796251 containerd[1523]: time="2025-05-27T18:15:40.796063475Z" level=info msg="CreateContainer within sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\""
May 27 18:15:40.813760 containerd[1523]: time="2025-05-27T18:15:40.813390506Z" level=info msg="StartContainer for \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\""
May 27 18:15:40.815447 containerd[1523]: time="2025-05-27T18:15:40.815387232Z" level=info msg="connecting to shim 8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2" address="unix:///run/containerd/s/21feebfaabcf1ec479382973d445cc00916dfbedfec52413d9eef46cda7a0283" protocol=ttrpc version=3
May 27 18:15:40.853082 systemd[1]: Started cri-containerd-8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2.scope - libcontainer container 8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2.
May 27 18:15:40.908483 containerd[1523]: time="2025-05-27T18:15:40.908202046Z" level=info msg="StartContainer for \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\" returns successfully"
May 27 18:15:40.927749 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 18:15:40.928095 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 18:15:40.929037 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 27 18:15:40.933254 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 18:15:40.939611 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 18:15:40.941386 systemd[1]: cri-containerd-8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2.scope: Deactivated successfully.
May 27 18:15:40.942825 containerd[1523]: time="2025-05-27T18:15:40.942568950Z" level=info msg="received exit event container_id:\"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\" id:\"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\" pid:3153 exited_at:{seconds:1748369740 nanos:941344083}"
May 27 18:15:40.964366 containerd[1523]: time="2025-05-27T18:15:40.964289646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\" id:\"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\" pid:3153 exited_at:{seconds:1748369740 nanos:941344083}"
May 27 18:15:40.992586 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 18:15:41.000735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2-rootfs.mount: Deactivated successfully.
May 27 18:15:41.746963 kubelet[2670]: E0527 18:15:41.746842 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:41.769616 containerd[1523]: time="2025-05-27T18:15:41.768673586Z" level=info msg="CreateContainer within sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 18:15:41.793674 containerd[1523]: time="2025-05-27T18:15:41.793228731Z" level=info msg="Container 5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667: CDI devices from CRI Config.CDIDevices: []"
May 27 18:15:41.814307 containerd[1523]: time="2025-05-27T18:15:41.814230400Z" level=info msg="CreateContainer within sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\""
May 27 18:15:41.816861 containerd[1523]: time="2025-05-27T18:15:41.816805509Z" level=info msg="StartContainer for \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\""
May 27 18:15:41.821363 containerd[1523]: time="2025-05-27T18:15:41.821236156Z" level=info msg="connecting to shim 5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667" address="unix:///run/containerd/s/21feebfaabcf1ec479382973d445cc00916dfbedfec52413d9eef46cda7a0283" protocol=ttrpc version=3
May 27 18:15:41.834327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425826619.mount: Deactivated successfully.
May 27 18:15:41.887654 systemd[1]: Started cri-containerd-5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667.scope - libcontainer container 5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667.
May 27 18:15:42.000205 systemd[1]: cri-containerd-5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667.scope: Deactivated successfully.
May 27 18:15:42.008195 containerd[1523]: time="2025-05-27T18:15:42.008114927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\" id:\"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\" pid:3208 exited_at:{seconds:1748369742 nanos:7432988}"
May 27 18:15:42.008565 containerd[1523]: time="2025-05-27T18:15:42.008441826Z" level=info msg="StartContainer for \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\" returns successfully"
May 27 18:15:42.008693 containerd[1523]: time="2025-05-27T18:15:42.008630309Z" level=info msg="received exit event container_id:\"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\" id:\"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\" pid:3208 exited_at:{seconds:1748369742 nanos:7432988}"
May 27 18:15:42.059177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667-rootfs.mount: Deactivated successfully.
May 27 18:15:42.767905 kubelet[2670]: E0527 18:15:42.766636 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:42.788383 containerd[1523]: time="2025-05-27T18:15:42.788130474Z" level=info msg="CreateContainer within sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 18:15:42.830806 containerd[1523]: time="2025-05-27T18:15:42.827849242Z" level=info msg="Container 0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9: CDI devices from CRI Config.CDIDevices: []"
May 27 18:15:42.840686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3433568033.mount: Deactivated successfully.
May 27 18:15:42.856314 containerd[1523]: time="2025-05-27T18:15:42.856135111Z" level=info msg="CreateContainer within sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\""
May 27 18:15:42.860138 containerd[1523]: time="2025-05-27T18:15:42.860027180Z" level=info msg="StartContainer for \"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\""
May 27 18:15:42.867088 containerd[1523]: time="2025-05-27T18:15:42.866764801Z" level=info msg="connecting to shim 0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9" address="unix:///run/containerd/s/21feebfaabcf1ec479382973d445cc00916dfbedfec52413d9eef46cda7a0283" protocol=ttrpc version=3
May 27 18:15:42.921126 systemd[1]: Started cri-containerd-0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9.scope - libcontainer container 0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9.
May 27 18:15:43.017570 systemd[1]: cri-containerd-0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9.scope: Deactivated successfully.
May 27 18:15:43.027068 containerd[1523]: time="2025-05-27T18:15:43.026751779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\" id:\"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\" pid:3253 exited_at:{seconds:1748369743 nanos:20984781}"
May 27 18:15:43.030097 containerd[1523]: time="2025-05-27T18:15:43.029843700Z" level=info msg="received exit event container_id:\"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\" id:\"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\" pid:3253 exited_at:{seconds:1748369743 nanos:20984781}"
May 27 18:15:43.032830 containerd[1523]: time="2025-05-27T18:15:43.032768322Z" level=info msg="StartContainer for \"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\" returns successfully"
May 27 18:15:43.106128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9-rootfs.mount: Deactivated successfully.
May 27 18:15:43.291834 containerd[1523]: time="2025-05-27T18:15:43.291611364Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:43.294573 containerd[1523]: time="2025-05-27T18:15:43.294496096Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 27 18:15:43.295554 containerd[1523]: time="2025-05-27T18:15:43.295492223Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:15:43.297574 containerd[1523]: time="2025-05-27T18:15:43.297405429Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.504645793s"
May 27 18:15:43.297574 containerd[1523]: time="2025-05-27T18:15:43.297458124Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 27 18:15:43.304419 containerd[1523]: time="2025-05-27T18:15:43.304257947Z" level=info msg="CreateContainer within sandbox \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 27 18:15:43.318763 containerd[1523]: time="2025-05-27T18:15:43.316469502Z" level=info msg="Container 7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805: CDI devices from CRI Config.CDIDevices: []"
May 27 18:15:43.332219 containerd[1523]: time="2025-05-27T18:15:43.332115621Z" level=info msg="CreateContainer within sandbox \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\""
May 27 18:15:43.334440 containerd[1523]: time="2025-05-27T18:15:43.334220123Z" level=info msg="StartContainer for \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\""
May 27 18:15:43.337378 containerd[1523]: time="2025-05-27T18:15:43.337323016Z" level=info msg="connecting to shim 7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805" address="unix:///run/containerd/s/cc41357bf92a9ed4e9944ee12460b3f3a8acd995038e060fa4c9e1d05a70c2fd" protocol=ttrpc version=3
May 27 18:15:43.372318 systemd[1]: Started cri-containerd-7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805.scope - libcontainer container 7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805.
May 27 18:15:43.437668 containerd[1523]: time="2025-05-27T18:15:43.437583928Z" level=info msg="StartContainer for \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" returns successfully"
May 27 18:15:43.801643 kubelet[2670]: E0527 18:15:43.792632 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:43.824827 kubelet[2670]: E0527 18:15:43.823314 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:43.834315 containerd[1523]: time="2025-05-27T18:15:43.834249920Z" level=info msg="CreateContainer within sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 18:15:43.865770 containerd[1523]: time="2025-05-27T18:15:43.863515608Z" level=info msg="Container d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5: CDI devices from CRI Config.CDIDevices: []"
May 27 18:15:43.885348 containerd[1523]: time="2025-05-27T18:15:43.885267704Z" level=info msg="CreateContainer within sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\""
May 27 18:15:43.886671 containerd[1523]: time="2025-05-27T18:15:43.886608697Z" level=info msg="StartContainer for \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\""
May 27 18:15:43.890713 containerd[1523]: time="2025-05-27T18:15:43.890564269Z" level=info msg="connecting to shim d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5" address="unix:///run/containerd/s/21feebfaabcf1ec479382973d445cc00916dfbedfec52413d9eef46cda7a0283" protocol=ttrpc version=3
May 27 18:15:43.902782 kubelet[2670]: I0527 18:15:43.899299 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-twqh9" podStartSLOduration=1.966990503 podStartE2EDuration="13.899275184s" podCreationTimestamp="2025-05-27 18:15:30 +0000 UTC" firstStartedPulling="2025-05-27 18:15:31.367048967 +0000 UTC m=+5.258813248" lastFinishedPulling="2025-05-27 18:15:43.299342771 +0000 UTC m=+17.191097929" observedRunningTime="2025-05-27 18:15:43.899033026 +0000 UTC m=+17.790788191" watchObservedRunningTime="2025-05-27 18:15:43.899275184 +0000 UTC m=+17.791030351"
May 27 18:15:43.972086 systemd[1]: Started cri-containerd-d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5.scope - libcontainer container d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5.
May 27 18:15:44.151790 containerd[1523]: time="2025-05-27T18:15:44.149279203Z" level=info msg="StartContainer for \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" returns successfully"
May 27 18:15:44.566759 kubelet[2670]: I0527 18:15:44.565745 2670 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 27 18:15:44.567303 containerd[1523]: time="2025-05-27T18:15:44.567073592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" id:\"6f78b363489c3fda7c3f1641f88080815ea112ccc061fa83350acba033837b0f\" pid:3358 exited_at:{seconds:1748369744 nanos:564916954}"
May 27 18:15:44.842858 kubelet[2670]: E0527 18:15:44.842679 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:44.844429 kubelet[2670]: E0527 18:15:44.844381 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:44.914752 systemd[1]: Created slice kubepods-burstable-pod384e3133_6637_4b5e_bb12_1a3655ecad79.slice - libcontainer container kubepods-burstable-pod384e3133_6637_4b5e_bb12_1a3655ecad79.slice.
May 27 18:15:44.940073 systemd[1]: Created slice kubepods-burstable-pod867f581c_f4d1_4563_871f_bd0e26929937.slice - libcontainer container kubepods-burstable-pod867f581c_f4d1_4563_871f_bd0e26929937.slice.
May 27 18:15:45.019552 kubelet[2670]: I0527 18:15:45.019464 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swz2c\" (UniqueName: \"kubernetes.io/projected/384e3133-6637-4b5e-bb12-1a3655ecad79-kube-api-access-swz2c\") pod \"coredns-674b8bbfcf-nd5gg\" (UID: \"384e3133-6637-4b5e-bb12-1a3655ecad79\") " pod="kube-system/coredns-674b8bbfcf-nd5gg"
May 27 18:15:45.019552 kubelet[2670]: I0527 18:15:45.019536 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/384e3133-6637-4b5e-bb12-1a3655ecad79-config-volume\") pod \"coredns-674b8bbfcf-nd5gg\" (UID: \"384e3133-6637-4b5e-bb12-1a3655ecad79\") " pod="kube-system/coredns-674b8bbfcf-nd5gg"
May 27 18:15:45.019892 kubelet[2670]: I0527 18:15:45.019573 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55vhp\" (UniqueName: \"kubernetes.io/projected/867f581c-f4d1-4563-871f-bd0e26929937-kube-api-access-55vhp\") pod \"coredns-674b8bbfcf-zv4dh\" (UID: \"867f581c-f4d1-4563-871f-bd0e26929937\") " pod="kube-system/coredns-674b8bbfcf-zv4dh"
May 27 18:15:45.019892 kubelet[2670]: I0527 18:15:45.019600 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/867f581c-f4d1-4563-871f-bd0e26929937-config-volume\") pod \"coredns-674b8bbfcf-zv4dh\" (UID: \"867f581c-f4d1-4563-871f-bd0e26929937\") " pod="kube-system/coredns-674b8bbfcf-zv4dh"
May 27 18:15:45.136777 kubelet[2670]: I0527 18:15:45.136574 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6qb7g" podStartSLOduration=6.55583769 podStartE2EDuration="15.136523873s" podCreationTimestamp="2025-05-27 18:15:30 +0000 UTC" firstStartedPulling="2025-05-27 18:15:31.209730385 +0000 UTC m=+5.101485527" lastFinishedPulling="2025-05-27 18:15:39.790416552 +0000 UTC m=+13.682171710" observedRunningTime="2025-05-27 18:15:45.073272476 +0000 UTC m=+18.965027633" watchObservedRunningTime="2025-05-27 18:15:45.136523873 +0000 UTC m=+19.028279100"
May 27 18:15:45.233911 kubelet[2670]: E0527 18:15:45.230663 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:45.237462 containerd[1523]: time="2025-05-27T18:15:45.237395646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nd5gg,Uid:384e3133-6637-4b5e-bb12-1a3655ecad79,Namespace:kube-system,Attempt:0,}"
May 27 18:15:45.548593 kubelet[2670]: E0527 18:15:45.548194 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:45.549478 containerd[1523]: time="2025-05-27T18:15:45.549057434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zv4dh,Uid:867f581c-f4d1-4563-871f-bd0e26929937,Namespace:kube-system,Attempt:0,}"
May 27 18:15:45.848910 kubelet[2670]: E0527 18:15:45.847999 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:46.850636 kubelet[2670]: E0527 18:15:46.850524 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:47.042224 kubelet[2670]: I0527 18:15:47.042145 2670 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:15:47.042224 kubelet[2670]: I0527 18:15:47.042206 2670 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:15:47.051696 kubelet[2670]: I0527 18:15:47.051623 2670 image_gc_manager.go:447] "Attempting to delete unused images"
May 27 18:15:47.082282 kubelet[2670]: I0527 18:15:47.082240 2670 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:15:47.082659 kubelet[2670]: I0527 18:15:47.082630 2670 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-nd5gg","kube-system/coredns-674b8bbfcf-zv4dh","kube-system/cilium-operator-6c4d7847fc-twqh9","kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7","kube-system/kube-proxy-mjrfk","kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7","kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7","kube-system/cilium-6qb7g"]
May 27 18:15:47.082877 kubelet[2670]: E0527 18:15:47.082859 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-nd5gg"
May 27 18:15:47.083309 kubelet[2670]: E0527 18:15:47.083137 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-zv4dh"
May 27 18:15:47.083309 kubelet[2670]: E0527 18:15:47.083169 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-twqh9"
May 27 18:15:47.083309 kubelet[2670]: E0527 18:15:47.083188 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:47.083309 kubelet[2670]: E0527 18:15:47.083217 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjrfk"
May 27 18:15:47.083309 kubelet[2670]: E0527 18:15:47.083230 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:47.083309 kubelet[2670]: E0527 18:15:47.083245 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:47.083309 kubelet[2670]: E0527 18:15:47.083270 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-6qb7g"
May 27 18:15:47.083309 kubelet[2670]: I0527 18:15:47.083288 2670 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node"
May 27 18:15:47.580335 systemd-networkd[1457]: cilium_host: Link UP
May 27 18:15:47.581866 systemd-networkd[1457]: cilium_net: Link UP
May 27 18:15:47.583526 systemd-networkd[1457]: cilium_host: Gained carrier
May 27 18:15:47.583699 systemd-networkd[1457]: cilium_net: Gained carrier
May 27 18:15:47.759630 systemd-networkd[1457]: cilium_vxlan: Link UP
May 27 18:15:47.759663 systemd-networkd[1457]: cilium_vxlan: Gained carrier
May 27 18:15:48.182846 systemd-networkd[1457]: cilium_net: Gained IPv6LL
May 27 18:15:48.184232 systemd-networkd[1457]: cilium_host: Gained IPv6LL
May 27 18:15:48.277885 kernel: NET: Registered PF_ALG protocol family
May 27 18:15:49.427807 systemd-networkd[1457]: lxc_health: Link UP
May 27 18:15:49.453043 systemd-networkd[1457]: lxc_health: Gained carrier
May 27 18:15:49.717970 systemd-networkd[1457]: cilium_vxlan: Gained IPv6LL
May 27 18:15:49.853925 kernel: eth0: renamed from tmp8df03
May 27 18:15:49.856126 systemd-networkd[1457]: lxc4a848eccb6d2: Link UP
May 27 18:15:49.859922 systemd-networkd[1457]: lxc4a848eccb6d2: Gained carrier
May 27 18:15:50.114817 kernel: eth0: renamed from tmp71939
May 27 18:15:50.116694 systemd-networkd[1457]: lxcb6325f9fa953: Link UP
May 27 18:15:50.117298 systemd-networkd[1457]: lxcb6325f9fa953: Gained carrier
May 27 18:15:50.613909 systemd-networkd[1457]: lxc_health: Gained IPv6LL
May 27 18:15:50.989910 kubelet[2670]: E0527 18:15:50.989366 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:51.509972 systemd-networkd[1457]: lxc4a848eccb6d2: Gained IPv6LL
May 27 18:15:51.510810 systemd-networkd[1457]: lxcb6325f9fa953: Gained IPv6LL
May 27 18:15:55.091084 kubelet[2670]: I0527 18:15:55.090922 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 18:15:55.100761 kubelet[2670]: E0527 18:15:55.100153 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:55.128650 containerd[1523]: time="2025-05-27T18:15:55.128275547Z" level=info msg="connecting to shim 8df038f7153fce4229e4fa08e288dd2bc5c42f413be7b7267880f2a389b32153" address="unix:///run/containerd/s/499e77e8ed82396ab4a68c1fcf09002ee3c0e5e68341e22e9bbedc88ce2ccb25" namespace=k8s.io protocol=ttrpc version=3
May 27 18:15:55.198625 systemd[1]: Started cri-containerd-8df038f7153fce4229e4fa08e288dd2bc5c42f413be7b7267880f2a389b32153.scope - libcontainer container 8df038f7153fce4229e4fa08e288dd2bc5c42f413be7b7267880f2a389b32153.
May 27 18:15:55.295780 containerd[1523]: time="2025-05-27T18:15:55.295705357Z" level=info msg="connecting to shim 719397c97e0e82d5797c4d7e342637e2371b270342cbe1b32ef80494b0dab7bd" address="unix:///run/containerd/s/9c8d7aa78db3e62bad11646f1fe6636178c71d0f51284ac94657b90793b78707" namespace=k8s.io protocol=ttrpc version=3
May 27 18:15:55.364986 containerd[1523]: time="2025-05-27T18:15:55.363936639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nd5gg,Uid:384e3133-6637-4b5e-bb12-1a3655ecad79,Namespace:kube-system,Attempt:0,} returns sandbox id \"8df038f7153fce4229e4fa08e288dd2bc5c42f413be7b7267880f2a389b32153\""
May 27 18:15:55.366034 systemd[1]: Started cri-containerd-719397c97e0e82d5797c4d7e342637e2371b270342cbe1b32ef80494b0dab7bd.scope - libcontainer container 719397c97e0e82d5797c4d7e342637e2371b270342cbe1b32ef80494b0dab7bd.
May 27 18:15:55.371477 kubelet[2670]: E0527 18:15:55.371432 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:55.396415 containerd[1523]: time="2025-05-27T18:15:55.396289656Z" level=info msg="CreateContainer within sandbox \"8df038f7153fce4229e4fa08e288dd2bc5c42f413be7b7267880f2a389b32153\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 27 18:15:55.437074 containerd[1523]: time="2025-05-27T18:15:55.436218241Z" level=info msg="Container a3c1ccbbf7e49eaa75bf13abd91bf6204a7c12c3e3b7915c2432f4d29e072549: CDI devices from CRI Config.CDIDevices: []"
May 27 18:15:55.454951 containerd[1523]: time="2025-05-27T18:15:55.453419932Z" level=info msg="CreateContainer within sandbox \"8df038f7153fce4229e4fa08e288dd2bc5c42f413be7b7267880f2a389b32153\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3c1ccbbf7e49eaa75bf13abd91bf6204a7c12c3e3b7915c2432f4d29e072549\""
May 27 18:15:55.456125 containerd[1523]: time="2025-05-27T18:15:55.456071498Z" level=info msg="StartContainer for \"a3c1ccbbf7e49eaa75bf13abd91bf6204a7c12c3e3b7915c2432f4d29e072549\""
May 27 18:15:55.458612 containerd[1523]: time="2025-05-27T18:15:55.458504641Z" level=info msg="connecting to shim a3c1ccbbf7e49eaa75bf13abd91bf6204a7c12c3e3b7915c2432f4d29e072549" address="unix:///run/containerd/s/499e77e8ed82396ab4a68c1fcf09002ee3c0e5e68341e22e9bbedc88ce2ccb25" protocol=ttrpc version=3
May 27 18:15:55.500273 systemd[1]: Started cri-containerd-a3c1ccbbf7e49eaa75bf13abd91bf6204a7c12c3e3b7915c2432f4d29e072549.scope - libcontainer container a3c1ccbbf7e49eaa75bf13abd91bf6204a7c12c3e3b7915c2432f4d29e072549.
May 27 18:15:55.534821 containerd[1523]: time="2025-05-27T18:15:55.534700944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zv4dh,Uid:867f581c-f4d1-4563-871f-bd0e26929937,Namespace:kube-system,Attempt:0,} returns sandbox id \"719397c97e0e82d5797c4d7e342637e2371b270342cbe1b32ef80494b0dab7bd\""
May 27 18:15:55.537074 kubelet[2670]: E0527 18:15:55.537015 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:55.547360 containerd[1523]: time="2025-05-27T18:15:55.547289923Z" level=info msg="CreateContainer within sandbox \"719397c97e0e82d5797c4d7e342637e2371b270342cbe1b32ef80494b0dab7bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 27 18:15:55.563622 containerd[1523]: time="2025-05-27T18:15:55.563565565Z" level=info msg="Container 2415eb642f1d6f216d66ddb8a9682b9ad7454259b175fe61b7bcf25e2cdd67b9: CDI devices from CRI Config.CDIDevices: []"
May 27 18:15:55.574451 containerd[1523]: time="2025-05-27T18:15:55.574359470Z" level=info msg="CreateContainer within sandbox \"719397c97e0e82d5797c4d7e342637e2371b270342cbe1b32ef80494b0dab7bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2415eb642f1d6f216d66ddb8a9682b9ad7454259b175fe61b7bcf25e2cdd67b9\""
May 27 18:15:55.577254 containerd[1523]: time="2025-05-27T18:15:55.576239025Z" level=info msg="StartContainer for \"2415eb642f1d6f216d66ddb8a9682b9ad7454259b175fe61b7bcf25e2cdd67b9\""
May 27 18:15:55.579568 containerd[1523]: time="2025-05-27T18:15:55.579515278Z" level=info msg="connecting to shim 2415eb642f1d6f216d66ddb8a9682b9ad7454259b175fe61b7bcf25e2cdd67b9" address="unix:///run/containerd/s/9c8d7aa78db3e62bad11646f1fe6636178c71d0f51284ac94657b90793b78707" protocol=ttrpc version=3
May 27 18:15:55.607807 containerd[1523]: time="2025-05-27T18:15:55.606743253Z" level=info msg="StartContainer for \"a3c1ccbbf7e49eaa75bf13abd91bf6204a7c12c3e3b7915c2432f4d29e072549\" returns successfully"
May 27 18:15:55.630042 systemd[1]: Started cri-containerd-2415eb642f1d6f216d66ddb8a9682b9ad7454259b175fe61b7bcf25e2cdd67b9.scope - libcontainer container 2415eb642f1d6f216d66ddb8a9682b9ad7454259b175fe61b7bcf25e2cdd67b9.
May 27 18:15:55.698270 containerd[1523]: time="2025-05-27T18:15:55.697841748Z" level=info msg="StartContainer for \"2415eb642f1d6f216d66ddb8a9682b9ad7454259b175fe61b7bcf25e2cdd67b9\" returns successfully"
May 27 18:15:55.888604 kubelet[2670]: E0527 18:15:55.888016 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:55.893412 kubelet[2670]: E0527 18:15:55.892703 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:55.893921 kubelet[2670]: E0527 18:15:55.893890 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:55.909004 kubelet[2670]: I0527 18:15:55.908478 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zv4dh" podStartSLOduration=25.908440842 podStartE2EDuration="25.908440842s" podCreationTimestamp="2025-05-27 18:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:15:55.907825071 +0000 UTC m=+29.799580246" watchObservedRunningTime="2025-05-27 18:15:55.908440842 +0000 UTC m=+29.800196007"
May 27 18:15:55.937765 kubelet[2670]: I0527 18:15:55.937659 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nd5gg" podStartSLOduration=25.937637535 podStartE2EDuration="25.937637535s" podCreationTimestamp="2025-05-27 18:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:15:55.927808738 +0000 UTC m=+29.819563891" watchObservedRunningTime="2025-05-27 18:15:55.937637535 +0000 UTC m=+29.829392700"
May 27 18:15:56.092608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244559891.mount: Deactivated successfully.
May 27 18:15:56.895796 kubelet[2670]: E0527 18:15:56.894284 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:56.897093 kubelet[2670]: E0527 18:15:56.897049 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:57.109212 kubelet[2670]: I0527 18:15:57.109153 2670 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:15:57.109212 kubelet[2670]: I0527 18:15:57.109215 2670 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:15:57.116289 kubelet[2670]: I0527 18:15:57.116098 2670 image_gc_manager.go:447] "Attempting to delete unused images"
May 27 18:15:57.152462 kubelet[2670]: I0527 18:15:57.152321 2670 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:15:57.153111 kubelet[2670]: I0527 18:15:57.153078 2670 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-twqh9","kube-system/coredns-674b8bbfcf-nd5gg","kube-system/coredns-674b8bbfcf-zv4dh","kube-system/cilium-6qb7g","kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7","kube-system/kube-proxy-mjrfk","kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7","kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"]
May 27 18:15:57.153282 kubelet[2670]: E0527 18:15:57.153265 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-twqh9"
May 27 18:15:57.153359 kubelet[2670]: E0527 18:15:57.153350 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-nd5gg"
May 27 18:15:57.153485 kubelet[2670]: E0527 18:15:57.153402 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-zv4dh"
May 27 18:15:57.153485 kubelet[2670]: E0527 18:15:57.153416 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-6qb7g"
May 27 18:15:57.153485 kubelet[2670]: E0527 18:15:57.153427 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:57.153485 kubelet[2670]: E0527 18:15:57.153435 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjrfk"
May 27 18:15:57.153485 kubelet[2670]: E0527 18:15:57.153444 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:57.153485 kubelet[2670]: E0527 18:15:57.153455 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"
May 27 18:15:57.153485 kubelet[2670]: I0527 18:15:57.153470 2670 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node"
May 27 18:15:57.897529 kubelet[2670]: E0527 18:15:57.897083 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:15:57.899338 kubelet[2670]: E0527 18:15:57.897477 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:16:07.181614 kubelet[2670]: I0527 18:16:07.181545 2670 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:16:07.183073 kubelet[2670]: I0527 18:16:07.182326 2670 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:16:07.188831 kubelet[2670]: I0527 18:16:07.188639 2670 image_gc_manager.go:447] "Attempting to delete unused images"
May 27 18:16:07.213038 kubelet[2670]: I0527 18:16:07.212976 2670 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:16:07.213285 kubelet[2670]: I0527 18:16:07.213244 2670 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-twqh9","kube-system/coredns-674b8bbfcf-nd5gg","kube-system/coredns-674b8bbfcf-zv4dh","kube-system/cilium-6qb7g","kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7","kube-system/kube-proxy-mjrfk","kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7","kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"]
May 27 18:16:07.213379 kubelet[2670]: E0527 18:16:07.213306 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-twqh9"
May 27 18:16:07.213379 kubelet[2670]: E0527 18:16:07.213328 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-nd5gg"
May 27 18:16:07.213379 kubelet[2670]: E0527 18:16:07.213338 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-zv4dh"
May 27 18:16:07.213379 kubelet[2670]: E0527 18:16:07.213354 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-6qb7g"
May 27 18:16:07.213379 kubelet[2670]: E0527 18:16:07.213371 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:07.213654 kubelet[2670]: E0527 18:16:07.213386 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjrfk"
May 27 18:16:07.213654 kubelet[2670]: E0527 18:16:07.213396 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:07.213654 kubelet[2670]: E0527 18:16:07.213404 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:07.213654 kubelet[2670]: I0527 18:16:07.213416 2670 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node"
May 27 18:16:11.626041 systemd[1]: Started sshd@9-143.110.225.216:22-139.178.68.195:48420.service - OpenSSH per-connection server daemon (139.178.68.195:48420).
May 27 18:16:11.738451 sshd[4008]: Accepted publickey for core from 139.178.68.195 port 48420 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:11.741061 sshd-session[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:11.748065 systemd-logind[1493]: New session 10 of user core.
May 27 18:16:11.758142 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 18:16:12.445586 sshd[4010]: Connection closed by 139.178.68.195 port 48420
May 27 18:16:12.446534 sshd-session[4008]: pam_unix(sshd:session): session closed for user core
May 27 18:16:12.451999 systemd[1]: sshd@9-143.110.225.216:22-139.178.68.195:48420.service: Deactivated successfully.
May 27 18:16:12.455017 systemd[1]: session-10.scope: Deactivated successfully.
May 27 18:16:12.458107 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit.
May 27 18:16:12.460903 systemd-logind[1493]: Removed session 10.
May 27 18:16:17.239484 kubelet[2670]: I0527 18:16:17.239402 2670 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:16:17.239484 kubelet[2670]: I0527 18:16:17.239468 2670 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:16:17.244749 kubelet[2670]: I0527 18:16:17.244521 2670 image_gc_manager.go:447] "Attempting to delete unused images"
May 27 18:16:17.263741 kubelet[2670]: I0527 18:16:17.263513 2670 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:16:17.263741 kubelet[2670]: I0527 18:16:17.263690 2670 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-twqh9","kube-system/coredns-674b8bbfcf-nd5gg","kube-system/coredns-674b8bbfcf-zv4dh","kube-system/cilium-6qb7g","kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7","kube-system/kube-proxy-mjrfk","kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7","kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"]
May 27 18:16:17.263971 kubelet[2670]: E0527 18:16:17.263956 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-twqh9"
May 27 18:16:17.264112 kubelet[2670]: E0527 18:16:17.264019 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-nd5gg"
May 27 18:16:17.264112 kubelet[2670]: E0527 18:16:17.264037 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-zv4dh"
May 27 18:16:17.264112 kubelet[2670]: E0527 18:16:17.264050 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-6qb7g"
May 27 18:16:17.264112 kubelet[2670]: E0527 18:16:17.264059 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:17.264112 kubelet[2670]: E0527 18:16:17.264069 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjrfk"
May 27 18:16:17.264112 kubelet[2670]: E0527 18:16:17.264077 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:17.264112 kubelet[2670]: E0527 18:16:17.264087 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:17.264112 kubelet[2670]: I0527 18:16:17.264100 2670 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node"
May 27 18:16:17.463554 systemd[1]: Started sshd@10-143.110.225.216:22-139.178.68.195:42014.service - OpenSSH per-connection server daemon (139.178.68.195:42014).
May 27 18:16:17.558863 sshd[4023]: Accepted publickey for core from 139.178.68.195 port 42014 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:17.560850 sshd-session[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:17.568787 systemd-logind[1493]: New session 11 of user core.
May 27 18:16:17.577032 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 18:16:17.735209 sshd[4025]: Connection closed by 139.178.68.195 port 42014
May 27 18:16:17.735071 sshd-session[4023]: pam_unix(sshd:session): session closed for user core
May 27 18:16:17.740982 systemd[1]: sshd@10-143.110.225.216:22-139.178.68.195:42014.service: Deactivated successfully.
May 27 18:16:17.744305 systemd[1]: session-11.scope: Deactivated successfully.
May 27 18:16:17.748196 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit.
May 27 18:16:17.750064 systemd-logind[1493]: Removed session 11.
May 27 18:16:22.755881 systemd[1]: Started sshd@11-143.110.225.216:22-139.178.68.195:42018.service - OpenSSH per-connection server daemon (139.178.68.195:42018).
May 27 18:16:22.820784 sshd[4038]: Accepted publickey for core from 139.178.68.195 port 42018 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:22.822593 sshd-session[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:22.830900 systemd-logind[1493]: New session 12 of user core.
May 27 18:16:22.836027 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 18:16:22.997625 sshd[4040]: Connection closed by 139.178.68.195 port 42018
May 27 18:16:22.998270 sshd-session[4038]: pam_unix(sshd:session): session closed for user core
May 27 18:16:23.005283 systemd[1]: sshd@11-143.110.225.216:22-139.178.68.195:42018.service: Deactivated successfully.
May 27 18:16:23.009159 systemd[1]: session-12.scope: Deactivated successfully.
May 27 18:16:23.011330 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit.
May 27 18:16:23.013979 systemd-logind[1493]: Removed session 12.
May 27 18:16:27.285274 kubelet[2670]: I0527 18:16:27.285235 2670 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:16:27.285274 kubelet[2670]: I0527 18:16:27.285288 2670 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:16:27.289644 kubelet[2670]: I0527 18:16:27.289588 2670 image_gc_manager.go:447] "Attempting to delete unused images"
May 27 18:16:27.314099 kubelet[2670]: I0527 18:16:27.314040 2670 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:16:27.319533 kubelet[2670]: I0527 18:16:27.319468 2670 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-twqh9","kube-system/coredns-674b8bbfcf-nd5gg","kube-system/coredns-674b8bbfcf-zv4dh","kube-system/cilium-6qb7g","kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7","kube-system/kube-proxy-mjrfk","kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7","kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"]
May 27 18:16:27.320092 kubelet[2670]: E0527 18:16:27.319935 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-twqh9"
May 27 18:16:27.320092 kubelet[2670]: E0527 18:16:27.319979 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-nd5gg"
May 27 18:16:27.320092 kubelet[2670]: E0527 18:16:27.319999 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-zv4dh"
May 27 18:16:27.320092 kubelet[2670]: E0527 18:16:27.320017 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-6qb7g"
May 27 18:16:27.320092 kubelet[2670]: E0527 18:16:27.320031 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:27.320092 kubelet[2670]: E0527 18:16:27.320043 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjrfk"
May 27 18:16:27.320092 kubelet[2670]: E0527 18:16:27.320053 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:27.320092 kubelet[2670]: E0527 18:16:27.320062 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:27.320092 kubelet[2670]: I0527 18:16:27.320073 2670 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node"
May 27 18:16:28.022087 systemd[1]: Started sshd@12-143.110.225.216:22-139.178.68.195:55704.service - OpenSSH per-connection server daemon (139.178.68.195:55704).
May 27 18:16:28.098266 sshd[4055]: Accepted publickey for core from 139.178.68.195 port 55704 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:28.100330 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:28.109054 systemd-logind[1493]: New session 13 of user core.
May 27 18:16:28.117057 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 18:16:28.303071 sshd[4057]: Connection closed by 139.178.68.195 port 55704
May 27 18:16:28.303869 sshd-session[4055]: pam_unix(sshd:session): session closed for user core
May 27 18:16:28.316921 systemd[1]: sshd@12-143.110.225.216:22-139.178.68.195:55704.service: Deactivated successfully.
May 27 18:16:28.320214 systemd[1]: session-13.scope: Deactivated successfully.
May 27 18:16:28.321829 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit.
May 27 18:16:28.327524 systemd[1]: Started sshd@13-143.110.225.216:22-139.178.68.195:55706.service - OpenSSH per-connection server daemon (139.178.68.195:55706).
May 27 18:16:28.329094 systemd-logind[1493]: Removed session 13.
May 27 18:16:28.397563 sshd[4069]: Accepted publickey for core from 139.178.68.195 port 55706 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:28.400281 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:28.409895 systemd-logind[1493]: New session 14 of user core.
May 27 18:16:28.418523 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 18:16:28.643091 sshd[4071]: Connection closed by 139.178.68.195 port 55706
May 27 18:16:28.644457 sshd-session[4069]: pam_unix(sshd:session): session closed for user core
May 27 18:16:28.658370 systemd[1]: sshd@13-143.110.225.216:22-139.178.68.195:55706.service: Deactivated successfully.
May 27 18:16:28.664818 systemd[1]: session-14.scope: Deactivated successfully.
May 27 18:16:28.666290 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit.
May 27 18:16:28.670951 systemd-logind[1493]: Removed session 14.
May 27 18:16:28.674018 systemd[1]: Started sshd@14-143.110.225.216:22-139.178.68.195:55720.service - OpenSSH per-connection server daemon (139.178.68.195:55720).
May 27 18:16:28.748981 sshd[4081]: Accepted publickey for core from 139.178.68.195 port 55720 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:28.751205 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:28.759874 systemd-logind[1493]: New session 15 of user core.
May 27 18:16:28.765535 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 18:16:28.920530 sshd[4083]: Connection closed by 139.178.68.195 port 55720
May 27 18:16:28.921387 sshd-session[4081]: pam_unix(sshd:session): session closed for user core
May 27 18:16:28.928463 systemd[1]: sshd@14-143.110.225.216:22-139.178.68.195:55720.service: Deactivated successfully.
May 27 18:16:28.932007 systemd[1]: session-15.scope: Deactivated successfully.
May 27 18:16:28.934906 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit.
May 27 18:16:28.938651 systemd-logind[1493]: Removed session 15.
May 27 18:16:33.946425 systemd[1]: Started sshd@15-143.110.225.216:22-139.178.68.195:38302.service - OpenSSH per-connection server daemon (139.178.68.195:38302).
May 27 18:16:34.026827 sshd[4098]: Accepted publickey for core from 139.178.68.195 port 38302 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:34.030034 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:34.039991 systemd-logind[1493]: New session 16 of user core.
May 27 18:16:34.060350 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 18:16:34.235667 sshd[4100]: Connection closed by 139.178.68.195 port 38302
May 27 18:16:34.237030 sshd-session[4098]: pam_unix(sshd:session): session closed for user core
May 27 18:16:34.244752 systemd[1]: sshd@15-143.110.225.216:22-139.178.68.195:38302.service: Deactivated successfully.
May 27 18:16:34.249661 systemd[1]: session-16.scope: Deactivated successfully.
May 27 18:16:34.251990 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit.
May 27 18:16:34.254420 systemd-logind[1493]: Removed session 16.
May 27 18:16:37.347783 kubelet[2670]: I0527 18:16:37.347712 2670 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:16:37.348615 kubelet[2670]: I0527 18:16:37.348218 2670 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:16:37.353957 kubelet[2670]: I0527 18:16:37.353214 2670 image_gc_manager.go:447] "Attempting to delete unused images"
May 27 18:16:37.374029 kubelet[2670]: I0527 18:16:37.373988 2670 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:16:37.374563 kubelet[2670]: I0527 18:16:37.374524 2670 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-twqh9","kube-system/coredns-674b8bbfcf-nd5gg","kube-system/coredns-674b8bbfcf-zv4dh","kube-system/cilium-6qb7g","kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7","kube-system/kube-proxy-mjrfk","kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7","kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"]
May 27 18:16:37.374897 kubelet[2670]: E0527 18:16:37.374871 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-twqh9"
May 27 18:16:37.375037 kubelet[2670]: E0527 18:16:37.375024 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-nd5gg"
May 27 18:16:37.375114 kubelet[2670]: E0527 18:16:37.375103 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-zv4dh"
May 27 18:16:37.375296 kubelet[2670]: E0527 18:16:37.375194 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-6qb7g"
May 27 18:16:37.375296 kubelet[2670]: E0527 18:16:37.375213 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:37.375296 kubelet[2670]: E0527 18:16:37.375227 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjrfk"
May 27 18:16:37.375296 kubelet[2670]: E0527 18:16:37.375245 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:37.375296 kubelet[2670]: E0527 18:16:37.375261 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"
May 27 18:16:37.375296 kubelet[2670]: I0527 18:16:37.375277 2670 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node"
May 27 18:16:38.539321 kubelet[2670]: E0527 18:16:38.537850 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:16:39.252913 systemd[1]: Started sshd@16-143.110.225.216:22-139.178.68.195:38316.service - OpenSSH per-connection server daemon (139.178.68.195:38316).
May 27 18:16:39.334524 sshd[4112]: Accepted publickey for core from 139.178.68.195 port 38316 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:39.337111 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:39.346828 systemd-logind[1493]: New session 17 of user core.
May 27 18:16:39.351028 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 18:16:39.515341 sshd[4114]: Connection closed by 139.178.68.195 port 38316
May 27 18:16:39.516296 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
May 27 18:16:39.521794 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit.
May 27 18:16:39.522161 systemd[1]: sshd@16-143.110.225.216:22-139.178.68.195:38316.service: Deactivated successfully.
May 27 18:16:39.525513 systemd[1]: session-17.scope: Deactivated successfully.
May 27 18:16:39.531054 systemd-logind[1493]: Removed session 17.
May 27 18:16:44.534363 systemd[1]: Started sshd@17-143.110.225.216:22-139.178.68.195:59830.service - OpenSSH per-connection server daemon (139.178.68.195:59830).
May 27 18:16:44.627115 sshd[4126]: Accepted publickey for core from 139.178.68.195 port 59830 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:44.630661 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:44.648015 systemd-logind[1493]: New session 18 of user core.
May 27 18:16:44.658079 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 18:16:44.815962 sshd[4128]: Connection closed by 139.178.68.195 port 59830
May 27 18:16:44.814545 sshd-session[4126]: pam_unix(sshd:session): session closed for user core
May 27 18:16:44.829472 systemd[1]: sshd@17-143.110.225.216:22-139.178.68.195:59830.service: Deactivated successfully.
May 27 18:16:44.833226 systemd[1]: session-18.scope: Deactivated successfully.
May 27 18:16:44.835381 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit.
May 27 18:16:44.840660 systemd[1]: Started sshd@18-143.110.225.216:22-139.178.68.195:59844.service - OpenSSH per-connection server daemon (139.178.68.195:59844).
May 27 18:16:44.843089 systemd-logind[1493]: Removed session 18.
May 27 18:16:44.922011 sshd[4139]: Accepted publickey for core from 139.178.68.195 port 59844 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:44.924596 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:44.933995 systemd-logind[1493]: New session 19 of user core.
May 27 18:16:44.939209 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 18:16:45.413794 sshd[4141]: Connection closed by 139.178.68.195 port 59844
May 27 18:16:45.415635 sshd-session[4139]: pam_unix(sshd:session): session closed for user core
May 27 18:16:45.435688 systemd[1]: sshd@18-143.110.225.216:22-139.178.68.195:59844.service: Deactivated successfully.
May 27 18:16:45.441830 systemd[1]: session-19.scope: Deactivated successfully.
May 27 18:16:45.447982 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit.
May 27 18:16:45.451966 systemd[1]: Started sshd@19-143.110.225.216:22-139.178.68.195:59848.service - OpenSSH per-connection server daemon (139.178.68.195:59848).
May 27 18:16:45.455575 systemd-logind[1493]: Removed session 19.
May 27 18:16:45.584868 sshd[4151]: Accepted publickey for core from 139.178.68.195 port 59848 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:45.586916 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:45.595848 systemd-logind[1493]: New session 20 of user core.
May 27 18:16:45.608486 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 18:16:46.726889 sshd[4153]: Connection closed by 139.178.68.195 port 59848
May 27 18:16:46.728211 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
May 27 18:16:46.753226 systemd[1]: sshd@19-143.110.225.216:22-139.178.68.195:59848.service: Deactivated successfully.
May 27 18:16:46.760420 systemd[1]: session-20.scope: Deactivated successfully.
May 27 18:16:46.767286 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit.
May 27 18:16:46.770610 systemd-logind[1493]: Removed session 20.
May 27 18:16:46.774911 systemd[1]: Started sshd@20-143.110.225.216:22-139.178.68.195:59860.service - OpenSSH per-connection server daemon (139.178.68.195:59860).
May 27 18:16:46.864982 sshd[4168]: Accepted publickey for core from 139.178.68.195 port 59860 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:46.867412 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:46.873955 systemd-logind[1493]: New session 21 of user core.
May 27 18:16:46.883437 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 18:16:47.263880 sshd[4172]: Connection closed by 139.178.68.195 port 59860
May 27 18:16:47.268803 sshd-session[4168]: pam_unix(sshd:session): session closed for user core
May 27 18:16:47.281471 systemd[1]: sshd@20-143.110.225.216:22-139.178.68.195:59860.service: Deactivated successfully.
May 27 18:16:47.286459 systemd[1]: session-21.scope: Deactivated successfully.
May 27 18:16:47.291065 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit.
May 27 18:16:47.298278 systemd[1]: Started sshd@21-143.110.225.216:22-139.178.68.195:59872.service - OpenSSH per-connection server daemon (139.178.68.195:59872).
May 27 18:16:47.302214 systemd-logind[1493]: Removed session 21.
May 27 18:16:47.382753 sshd[4182]: Accepted publickey for core from 139.178.68.195 port 59872 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:16:47.388760 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:16:47.399056 systemd-logind[1493]: New session 22 of user core.
May 27 18:16:47.407104 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 18:16:47.415112 kubelet[2670]: I0527 18:16:47.414626 2670 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 18:16:47.417878 kubelet[2670]: I0527 18:16:47.417836 2670 container_gc.go:86] "Attempting to delete unused containers" May 27 18:16:47.421214 kubelet[2670]: I0527 18:16:47.420886 2670 image_gc_manager.go:447] "Attempting to delete unused images" May 27 18:16:47.446525 kubelet[2670]: I0527 18:16:47.446047 2670 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 18:16:47.446525 kubelet[2670]: I0527 18:16:47.446324 2670 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-twqh9","kube-system/coredns-674b8bbfcf-nd5gg","kube-system/coredns-674b8bbfcf-zv4dh","kube-system/cilium-6qb7g","kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7","kube-system/kube-proxy-mjrfk","kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7","kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"] May 27 18:16:47.446525 kubelet[2670]: E0527 18:16:47.446378 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-twqh9" May 27 18:16:47.446525 kubelet[2670]: E0527 18:16:47.446399 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-nd5gg" May 27 18:16:47.446525 kubelet[2670]: E0527 18:16:47.446413 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-zv4dh" May 27 18:16:47.446525 kubelet[2670]: E0527 18:16:47.446429 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-6qb7g" May 27 18:16:47.446525 kubelet[2670]: E0527 18:16:47.446442 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:16:47.446525 kubelet[2670]: E0527 18:16:47.446456 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjrfk" May 27 18:16:47.446525 kubelet[2670]: E0527 18:16:47.446467 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7" May 27 18:16:47.446525 kubelet[2670]: E0527 18:16:47.446480 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7" May 27 18:16:47.446525 kubelet[2670]: I0527 18:16:47.446496 2670 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 18:16:47.562469 sshd[4184]: Connection closed by 139.178.68.195 port 59872 May 27 18:16:47.563207 sshd-session[4182]: pam_unix(sshd:session): session closed for user core May 27 18:16:47.570388 systemd[1]: sshd@21-143.110.225.216:22-139.178.68.195:59872.service: Deactivated successfully. May 27 18:16:47.573836 systemd[1]: session-22.scope: Deactivated successfully. May 27 18:16:47.575253 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit. May 27 18:16:47.578412 systemd-logind[1493]: Removed session 22. May 27 18:16:51.539735 kubelet[2670]: E0527 18:16:51.539671 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:16:52.586205 systemd[1]: Started sshd@22-143.110.225.216:22-139.178.68.195:59882.service - OpenSSH per-connection server daemon (139.178.68.195:59882). 
May 27 18:16:52.678121 sshd[4198]: Accepted publickey for core from 139.178.68.195 port 59882 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY May 27 18:16:52.680367 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:16:52.687858 systemd-logind[1493]: New session 23 of user core. May 27 18:16:52.692043 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 18:16:52.840870 sshd[4200]: Connection closed by 139.178.68.195 port 59882 May 27 18:16:52.841910 sshd-session[4198]: pam_unix(sshd:session): session closed for user core May 27 18:16:52.849262 systemd[1]: sshd@22-143.110.225.216:22-139.178.68.195:59882.service: Deactivated successfully. May 27 18:16:52.853393 systemd[1]: session-23.scope: Deactivated successfully. May 27 18:16:52.855850 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit. May 27 18:16:52.858407 systemd-logind[1493]: Removed session 23. May 27 18:16:53.538328 kubelet[2670]: E0527 18:16:53.538113 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:16:55.538929 kubelet[2670]: E0527 18:16:55.538414 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:16:57.496430 kubelet[2670]: I0527 18:16:57.495447 2670 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 18:16:57.496430 kubelet[2670]: I0527 18:16:57.495512 2670 container_gc.go:86] "Attempting to delete unused containers" May 27 18:16:57.513307 kubelet[2670]: I0527 18:16:57.503363 2670 image_gc_manager.go:447] "Attempting to delete unused images" May 27 18:16:57.532447 kubelet[2670]: I0527 18:16:57.532394 2670 eviction_manager.go:387] "Eviction manager: 
must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 18:16:57.533053 kubelet[2670]: I0527 18:16:57.533028 2670 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-twqh9","kube-system/coredns-674b8bbfcf-nd5gg","kube-system/coredns-674b8bbfcf-zv4dh","kube-system/cilium-6qb7g","kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7","kube-system/kube-proxy-mjrfk","kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7","kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"] May 27 18:16:57.533433 kubelet[2670]: E0527 18:16:57.533319 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-twqh9" May 27 18:16:57.533433 kubelet[2670]: E0527 18:16:57.533390 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-nd5gg" May 27 18:16:57.533433 kubelet[2670]: E0527 18:16:57.533406 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-zv4dh" May 27 18:16:57.533433 kubelet[2670]: E0527 18:16:57.533420 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-6qb7g" May 27 18:16:57.533876 kubelet[2670]: E0527 18:16:57.533657 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:16:57.533876 kubelet[2670]: E0527 18:16:57.533682 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjrfk" May 27 18:16:57.533876 kubelet[2670]: E0527 18:16:57.533691 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7" May 27 18:16:57.533876 kubelet[2670]: E0527 18:16:57.533714 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7" May 27 18:16:57.533876 kubelet[2670]: I0527 18:16:57.533752 2670 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 18:16:57.864248 systemd[1]: Started sshd@23-143.110.225.216:22-139.178.68.195:50850.service - OpenSSH per-connection server daemon (139.178.68.195:50850). May 27 18:16:57.936781 sshd[4212]: Accepted publickey for core from 139.178.68.195 port 50850 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY May 27 18:16:57.942231 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:16:57.949074 systemd-logind[1493]: New session 24 of user core. May 27 18:16:57.958206 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 18:16:58.148777 sshd[4214]: Connection closed by 139.178.68.195 port 50850 May 27 18:16:58.147615 sshd-session[4212]: pam_unix(sshd:session): session closed for user core May 27 18:16:58.156600 systemd[1]: sshd@23-143.110.225.216:22-139.178.68.195:50850.service: Deactivated successfully. May 27 18:16:58.160932 systemd[1]: session-24.scope: Deactivated successfully. May 27 18:16:58.163646 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit. May 27 18:16:58.167417 systemd-logind[1493]: Removed session 24. May 27 18:16:59.542077 kubelet[2670]: E0527 18:16:59.542025 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 27 18:17:03.169478 systemd[1]: Started sshd@24-143.110.225.216:22-139.178.68.195:50854.service - OpenSSH per-connection server daemon (139.178.68.195:50854). 
May 27 18:17:03.269757 sshd[4230]: Accepted publickey for core from 139.178.68.195 port 50854 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY May 27 18:17:03.273123 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:17:03.285207 systemd-logind[1493]: New session 25 of user core. May 27 18:17:03.298697 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 18:17:03.480816 sshd[4232]: Connection closed by 139.178.68.195 port 50854 May 27 18:17:03.482108 sshd-session[4230]: pam_unix(sshd:session): session closed for user core May 27 18:17:03.500133 systemd[1]: sshd@24-143.110.225.216:22-139.178.68.195:50854.service: Deactivated successfully. May 27 18:17:03.506028 systemd[1]: session-25.scope: Deactivated successfully. May 27 18:17:03.510104 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit. May 27 18:17:03.519020 systemd[1]: Started sshd@25-143.110.225.216:22-139.178.68.195:50868.service - OpenSSH per-connection server daemon (139.178.68.195:50868). May 27 18:17:03.521891 systemd-logind[1493]: Removed session 25. May 27 18:17:03.603767 sshd[4244]: Accepted publickey for core from 139.178.68.195 port 50868 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY May 27 18:17:03.606339 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:17:03.617835 systemd-logind[1493]: New session 26 of user core. May 27 18:17:03.623293 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 27 18:17:05.110360 containerd[1523]: time="2025-05-27T18:17:05.110258571Z" level=info msg="StopContainer for \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" with timeout 30 (s)" May 27 18:17:05.112932 containerd[1523]: time="2025-05-27T18:17:05.112893839Z" level=info msg="Stop container \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" with signal terminated" May 27 18:17:05.139437 systemd[1]: cri-containerd-7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805.scope: Deactivated successfully. May 27 18:17:05.142921 containerd[1523]: time="2025-05-27T18:17:05.142841225Z" level=info msg="received exit event container_id:\"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" id:\"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" pid:3292 exited_at:{seconds:1748369825 nanos:142244308}" May 27 18:17:05.144670 containerd[1523]: time="2025-05-27T18:17:05.144625457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" id:\"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" pid:3292 exited_at:{seconds:1748369825 nanos:142244308}" May 27 18:17:05.166972 containerd[1523]: time="2025-05-27T18:17:05.166696331Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 18:17:05.175711 containerd[1523]: time="2025-05-27T18:17:05.175615116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" id:\"df27e07f543a78a80c58fe92f72fd4db72c7007e09a9670d72651aeb1b1c61a9\" pid:4272 exited_at:{seconds:1748369825 nanos:175222627}" May 27 18:17:05.188579 containerd[1523]: time="2025-05-27T18:17:05.188448290Z" level=info 
msg="StopContainer for \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" with timeout 2 (s)" May 27 18:17:05.190220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805-rootfs.mount: Deactivated successfully. May 27 18:17:05.191878 containerd[1523]: time="2025-05-27T18:17:05.189593446Z" level=info msg="Stop container \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" with signal terminated" May 27 18:17:05.199710 containerd[1523]: time="2025-05-27T18:17:05.199574254Z" level=info msg="StopContainer for \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" returns successfully" May 27 18:17:05.201047 containerd[1523]: time="2025-05-27T18:17:05.200558946Z" level=info msg="StopPodSandbox for \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\"" May 27 18:17:05.201047 containerd[1523]: time="2025-05-27T18:17:05.200661828Z" level=info msg="Container to stop \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:17:05.217939 systemd-networkd[1457]: lxc_health: Link DOWN May 27 18:17:05.217952 systemd-networkd[1457]: lxc_health: Lost carrier May 27 18:17:05.224259 systemd[1]: cri-containerd-bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c.scope: Deactivated successfully. May 27 18:17:05.239006 containerd[1523]: time="2025-05-27T18:17:05.236176068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" id:\"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" pid:2899 exit_status:137 exited_at:{seconds:1748369825 nanos:226592166}" May 27 18:17:05.259912 systemd[1]: cri-containerd-d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5.scope: Deactivated successfully. 
May 27 18:17:05.260335 systemd[1]: cri-containerd-d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5.scope: Consumed 9.726s CPU time, 192.5M memory peak, 70.3M read from disk, 13.3M written to disk. May 27 18:17:05.267418 containerd[1523]: time="2025-05-27T18:17:05.267218949Z" level=info msg="received exit event container_id:\"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" id:\"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" pid:3330 exited_at:{seconds:1748369825 nanos:266461997}" May 27 18:17:05.314574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c-rootfs.mount: Deactivated successfully. May 27 18:17:05.323735 containerd[1523]: time="2025-05-27T18:17:05.321375163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" id:\"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" pid:3330 exited_at:{seconds:1748369825 nanos:266461997}" May 27 18:17:05.323735 containerd[1523]: time="2025-05-27T18:17:05.323047140Z" level=info msg="received exit event sandbox_id:\"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" exit_status:137 exited_at:{seconds:1748369825 nanos:226592166}" May 27 18:17:05.322608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5-rootfs.mount: Deactivated successfully. May 27 18:17:05.329437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c-shm.mount: Deactivated successfully. 
May 27 18:17:05.331909 containerd[1523]: time="2025-05-27T18:17:05.330714584Z" level=info msg="shim disconnected" id=bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c namespace=k8s.io May 27 18:17:05.331909 containerd[1523]: time="2025-05-27T18:17:05.330917985Z" level=warning msg="cleaning up after shim disconnected" id=bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c namespace=k8s.io May 27 18:17:05.331909 containerd[1523]: time="2025-05-27T18:17:05.330945238Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 18:17:05.343761 containerd[1523]: time="2025-05-27T18:17:05.341991591Z" level=info msg="TearDown network for sandbox \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" successfully" May 27 18:17:05.343761 containerd[1523]: time="2025-05-27T18:17:05.343675281Z" level=info msg="StopPodSandbox for \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" returns successfully" May 27 18:17:05.347762 containerd[1523]: time="2025-05-27T18:17:05.347577294Z" level=info msg="StopContainer for \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" returns successfully" May 27 18:17:05.348485 containerd[1523]: time="2025-05-27T18:17:05.348453014Z" level=info msg="StopPodSandbox for \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\"" May 27 18:17:05.349243 containerd[1523]: time="2025-05-27T18:17:05.349205884Z" level=info msg="Container to stop \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:17:05.349623 containerd[1523]: time="2025-05-27T18:17:05.349601945Z" level=info msg="Container to stop \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:17:05.349841 containerd[1523]: time="2025-05-27T18:17:05.349821860Z" level=info msg="Container to stop 
\"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:17:05.350686 containerd[1523]: time="2025-05-27T18:17:05.350226224Z" level=info msg="Container to stop \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:17:05.350686 containerd[1523]: time="2025-05-27T18:17:05.350248063Z" level=info msg="Container to stop \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 18:17:05.381132 systemd[1]: cri-containerd-624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578.scope: Deactivated successfully. May 27 18:17:05.387578 containerd[1523]: time="2025-05-27T18:17:05.387439253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" id:\"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" pid:2838 exit_status:137 exited_at:{seconds:1748369825 nanos:386668675}" May 27 18:17:05.431610 kubelet[2670]: I0527 18:17:05.428575 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d78b455-d61d-44ef-8a35-d03c1f99de0a-cilium-config-path\") pod \"9d78b455-d61d-44ef-8a35-d03c1f99de0a\" (UID: \"9d78b455-d61d-44ef-8a35-d03c1f99de0a\") " May 27 18:17:05.432633 kubelet[2670]: I0527 18:17:05.432595 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll6dd\" (UniqueName: \"kubernetes.io/projected/9d78b455-d61d-44ef-8a35-d03c1f99de0a-kube-api-access-ll6dd\") pod \"9d78b455-d61d-44ef-8a35-d03c1f99de0a\" (UID: \"9d78b455-d61d-44ef-8a35-d03c1f99de0a\") " May 27 18:17:05.436355 kubelet[2670]: I0527 18:17:05.436176 2670 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/9d78b455-d61d-44ef-8a35-d03c1f99de0a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d78b455-d61d-44ef-8a35-d03c1f99de0a" (UID: "9d78b455-d61d-44ef-8a35-d03c1f99de0a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 18:17:05.463498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578-rootfs.mount: Deactivated successfully. May 27 18:17:05.464037 containerd[1523]: time="2025-05-27T18:17:05.463566128Z" level=info msg="shim disconnected" id=624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578 namespace=k8s.io May 27 18:17:05.464037 containerd[1523]: time="2025-05-27T18:17:05.463626441Z" level=warning msg="cleaning up after shim disconnected" id=624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578 namespace=k8s.io May 27 18:17:05.464037 containerd[1523]: time="2025-05-27T18:17:05.463641348Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 18:17:05.465951 kubelet[2670]: I0527 18:17:05.465861 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d78b455-d61d-44ef-8a35-d03c1f99de0a-kube-api-access-ll6dd" (OuterVolumeSpecName: "kube-api-access-ll6dd") pod "9d78b455-d61d-44ef-8a35-d03c1f99de0a" (UID: "9d78b455-d61d-44ef-8a35-d03c1f99de0a"). InnerVolumeSpecName "kube-api-access-ll6dd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 18:17:05.488751 containerd[1523]: time="2025-05-27T18:17:05.488529171Z" level=info msg="received exit event sandbox_id:\"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" exit_status:137 exited_at:{seconds:1748369825 nanos:386668675}" May 27 18:17:05.489786 containerd[1523]: time="2025-05-27T18:17:05.489045134Z" level=info msg="TearDown network for sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" successfully" May 27 18:17:05.489786 containerd[1523]: time="2025-05-27T18:17:05.489162119Z" level=info msg="StopPodSandbox for \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" returns successfully" May 27 18:17:05.533665 kubelet[2670]: I0527 18:17:05.533601 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:17:05.533915 kubelet[2670]: I0527 18:17:05.533690 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-etc-cni-netd\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.533915 kubelet[2670]: I0527 18:17:05.533768 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-bpf-maps\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.533915 kubelet[2670]: I0527 18:17:05.533792 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-host-proc-sys-kernel\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.533915 kubelet[2670]: I0527 18:17:05.533829 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grrzl\" (UniqueName: \"kubernetes.io/projected/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-kube-api-access-grrzl\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.533915 kubelet[2670]: I0527 18:17:05.533849 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-run\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.533915 kubelet[2670]: I0527 18:17:05.533908 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-xtables-lock\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.534237 kubelet[2670]: I0527 18:17:05.533935 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-cgroup\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.534237 kubelet[2670]: I0527 18:17:05.533957 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-lib-modules\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.534237 kubelet[2670]: I0527 18:17:05.533980 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-hubble-tls\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.534237 kubelet[2670]: I0527 18:17:05.534005 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-config-path\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.534237 kubelet[2670]: I0527 18:17:05.534025 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-host-proc-sys-net\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.534237 kubelet[2670]: I0527 18:17:05.534046 2670 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cni-path\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.534476 kubelet[2670]: I0527 18:17:05.534066 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-hostproc\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.534476 kubelet[2670]: I0527 18:17:05.534094 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-clustermesh-secrets\") pod \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\" (UID: \"5654a76c-c9e9-412c-bcc4-e3b1ba255fa3\") " May 27 18:17:05.534476 kubelet[2670]: I0527 18:17:05.534151 2670 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ll6dd\" (UniqueName: \"kubernetes.io/projected/9d78b455-d61d-44ef-8a35-d03c1f99de0a-kube-api-access-ll6dd\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.534476 kubelet[2670]: I0527 18:17:05.534166 2670 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-etc-cni-netd\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.534476 kubelet[2670]: I0527 18:17:05.534181 2670 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d78b455-d61d-44ef-8a35-d03c1f99de0a-cilium-config-path\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.534981 kubelet[2670]: I0527 18:17:05.534901 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:17:05.534981 kubelet[2670]: I0527 18:17:05.534987 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:17:05.535148 kubelet[2670]: I0527 18:17:05.535008 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:17:05.537748 kubelet[2670]: I0527 18:17:05.536957 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:17:05.537748 kubelet[2670]: I0527 18:17:05.537029 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:17:05.538046 kubelet[2670]: I0527 18:17:05.538017 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:17:05.538164 kubelet[2670]: I0527 18:17:05.538146 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cni-path" (OuterVolumeSpecName: "cni-path") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:17:05.538259 kubelet[2670]: I0527 18:17:05.538243 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-hostproc" (OuterVolumeSpecName: "hostproc") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:17:05.538857 kubelet[2670]: I0527 18:17:05.538826 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 18:17:05.541360 kubelet[2670]: I0527 18:17:05.541314 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 18:17:05.544119 kubelet[2670]: I0527 18:17:05.544060 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 18:17:05.544459 kubelet[2670]: I0527 18:17:05.544396 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 18:17:05.544839 kubelet[2670]: I0527 18:17:05.544794 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-kube-api-access-grrzl" (OuterVolumeSpecName: "kube-api-access-grrzl") pod "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" (UID: "5654a76c-c9e9-412c-bcc4-e3b1ba255fa3"). InnerVolumeSpecName "kube-api-access-grrzl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 18:17:05.637284 kubelet[2670]: I0527 18:17:05.634904 2670 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-clustermesh-secrets\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637284 kubelet[2670]: I0527 18:17:05.636987 2670 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-bpf-maps\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637284 kubelet[2670]: I0527 18:17:05.637050 2670 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-host-proc-sys-kernel\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637284 kubelet[2670]: I0527 18:17:05.637063 2670 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grrzl\" (UniqueName: \"kubernetes.io/projected/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-kube-api-access-grrzl\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637284 kubelet[2670]: I0527 18:17:05.637073 2670 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-run\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637284 kubelet[2670]: I0527 18:17:05.637087 2670 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-xtables-lock\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637284 kubelet[2670]: I0527 18:17:05.637096 2670 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-cgroup\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637284 kubelet[2670]: I0527 18:17:05.637104 2670 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-lib-modules\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637619 kubelet[2670]: I0527 18:17:05.637112 2670 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-hubble-tls\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637619 kubelet[2670]: I0527 18:17:05.637120 2670 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cilium-config-path\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637619 kubelet[2670]: I0527 18:17:05.637130 2670 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-host-proc-sys-net\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637619 kubelet[2670]: I0527 18:17:05.637143 2670 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-cni-path\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:05.637619 kubelet[2670]: I0527 18:17:05.637157 2670 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3-hostproc\") on node \"ci-4344.0.0-0-76b74bdce7\" DevicePath \"\"" May 27 18:17:06.139934 kubelet[2670]: I0527 18:17:06.139693 2670 scope.go:117] "RemoveContainer" 
containerID="d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5" May 27 18:17:06.147320 containerd[1523]: time="2025-05-27T18:17:06.146619162Z" level=info msg="RemoveContainer for \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\"" May 27 18:17:06.148198 systemd[1]: Removed slice kubepods-burstable-pod5654a76c_c9e9_412c_bcc4_e3b1ba255fa3.slice - libcontainer container kubepods-burstable-pod5654a76c_c9e9_412c_bcc4_e3b1ba255fa3.slice. May 27 18:17:06.148348 systemd[1]: kubepods-burstable-pod5654a76c_c9e9_412c_bcc4_e3b1ba255fa3.slice: Consumed 9.881s CPU time, 192.8M memory peak, 70.4M read from disk, 13.3M written to disk. May 27 18:17:06.159277 containerd[1523]: time="2025-05-27T18:17:06.159206776Z" level=info msg="RemoveContainer for \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" returns successfully" May 27 18:17:06.163581 kubelet[2670]: I0527 18:17:06.163545 2670 scope.go:117] "RemoveContainer" containerID="0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9" May 27 18:17:06.164765 systemd[1]: Removed slice kubepods-besteffort-pod9d78b455_d61d_44ef_8a35_d03c1f99de0a.slice - libcontainer container kubepods-besteffort-pod9d78b455_d61d_44ef_8a35_d03c1f99de0a.slice. May 27 18:17:06.176550 containerd[1523]: time="2025-05-27T18:17:06.175026371Z" level=info msg="RemoveContainer for \"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\"" May 27 18:17:06.186688 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578-shm.mount: Deactivated successfully. 
May 27 18:17:06.187810 containerd[1523]: time="2025-05-27T18:17:06.187499677Z" level=info msg="RemoveContainer for \"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\" returns successfully" May 27 18:17:06.188316 systemd[1]: var-lib-kubelet-pods-9d78b455\x2dd61d\x2d44ef\x2d8a35\x2dd03c1f99de0a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dll6dd.mount: Deactivated successfully. May 27 18:17:06.188644 systemd[1]: var-lib-kubelet-pods-5654a76c\x2dc9e9\x2d412c\x2dbcc4\x2de3b1ba255fa3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgrrzl.mount: Deactivated successfully. May 27 18:17:06.189568 kubelet[2670]: I0527 18:17:06.188644 2670 scope.go:117] "RemoveContainer" containerID="5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667" May 27 18:17:06.188766 systemd[1]: var-lib-kubelet-pods-5654a76c\x2dc9e9\x2d412c\x2dbcc4\x2de3b1ba255fa3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 18:17:06.188860 systemd[1]: var-lib-kubelet-pods-5654a76c\x2dc9e9\x2d412c\x2dbcc4\x2de3b1ba255fa3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 27 18:17:06.196832 containerd[1523]: time="2025-05-27T18:17:06.196593270Z" level=info msg="RemoveContainer for \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\"" May 27 18:17:06.201544 containerd[1523]: time="2025-05-27T18:17:06.201497801Z" level=info msg="RemoveContainer for \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\" returns successfully" May 27 18:17:06.202186 kubelet[2670]: I0527 18:17:06.201970 2670 scope.go:117] "RemoveContainer" containerID="8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2" May 27 18:17:06.206980 containerd[1523]: time="2025-05-27T18:17:06.206929048Z" level=info msg="RemoveContainer for \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\"" May 27 18:17:06.213349 containerd[1523]: time="2025-05-27T18:17:06.213299778Z" level=info msg="RemoveContainer for \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\" returns successfully" May 27 18:17:06.213674 kubelet[2670]: I0527 18:17:06.213592 2670 scope.go:117] "RemoveContainer" containerID="9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518" May 27 18:17:06.215923 containerd[1523]: time="2025-05-27T18:17:06.215684604Z" level=info msg="RemoveContainer for \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\"" May 27 18:17:06.222432 containerd[1523]: time="2025-05-27T18:17:06.222346314Z" level=info msg="RemoveContainer for \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\" returns successfully" May 27 18:17:06.223089 kubelet[2670]: I0527 18:17:06.222978 2670 scope.go:117] "RemoveContainer" containerID="d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5" May 27 18:17:06.223660 containerd[1523]: time="2025-05-27T18:17:06.223575747Z" level=error msg="ContainerStatus for \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\": not found" May 27 18:17:06.224034 kubelet[2670]: E0527 18:17:06.223871 2670 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\": not found" containerID="d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5" May 27 18:17:06.224034 kubelet[2670]: I0527 18:17:06.223911 2670 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5"} err="failed to get container status \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d728d9cf258192827f489eb0cececb5037ca82b6aad51e2b984d6d992d98f1e5\": not found" May 27 18:17:06.224034 kubelet[2670]: I0527 18:17:06.223953 2670 scope.go:117] "RemoveContainer" containerID="0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9" May 27 18:17:06.224340 containerd[1523]: time="2025-05-27T18:17:06.224298302Z" level=error msg="ContainerStatus for \"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\": not found" May 27 18:17:06.224592 kubelet[2670]: E0527 18:17:06.224445 2670 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\": not found" containerID="0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9" May 27 18:17:06.224592 kubelet[2670]: I0527 18:17:06.224481 2670 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9"} err="failed to get container status \"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c0526a1cbacbd2331250dc8ce3fe9895b58403d790852a743a6f149e5cbfeb9\": not found" May 27 18:17:06.224592 kubelet[2670]: I0527 18:17:06.224501 2670 scope.go:117] "RemoveContainer" containerID="5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667" May 27 18:17:06.224882 containerd[1523]: time="2025-05-27T18:17:06.224855109Z" level=error msg="ContainerStatus for \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\": not found" May 27 18:17:06.225072 kubelet[2670]: E0527 18:17:06.225045 2670 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\": not found" containerID="5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667" May 27 18:17:06.225130 kubelet[2670]: I0527 18:17:06.225084 2670 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667"} err="failed to get container status \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f3a7951115bb61044eb45db9ff762561826d1d40e90ef64f51cfdfe37e6d667\": not found" May 27 18:17:06.225130 kubelet[2670]: I0527 18:17:06.225107 2670 scope.go:117] "RemoveContainer" containerID="8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2" May 27 18:17:06.225347 containerd[1523]: 
time="2025-05-27T18:17:06.225311718Z" level=error msg="ContainerStatus for \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\": not found" May 27 18:17:06.225477 kubelet[2670]: E0527 18:17:06.225458 2670 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\": not found" containerID="8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2" May 27 18:17:06.225855 kubelet[2670]: I0527 18:17:06.225696 2670 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2"} err="failed to get container status \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c1091f95927227f253ac17def19b30d99b578c5950a774faabe535ca80b74d2\": not found" May 27 18:17:06.225855 kubelet[2670]: I0527 18:17:06.225758 2670 scope.go:117] "RemoveContainer" containerID="9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518" May 27 18:17:06.226152 containerd[1523]: time="2025-05-27T18:17:06.226120913Z" level=error msg="ContainerStatus for \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\": not found" May 27 18:17:06.226465 kubelet[2670]: E0527 18:17:06.226440 2670 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\": not 
found" containerID="9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518" May 27 18:17:06.226896 kubelet[2670]: I0527 18:17:06.226778 2670 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518"} err="failed to get container status \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c43d21963f8cbdd2d48fe3cd119861fed77f2c848657b73cbad7e625fb45518\": not found" May 27 18:17:06.226896 kubelet[2670]: I0527 18:17:06.226810 2670 scope.go:117] "RemoveContainer" containerID="7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805" May 27 18:17:06.228921 containerd[1523]: time="2025-05-27T18:17:06.228865623Z" level=info msg="RemoveContainer for \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\"" May 27 18:17:06.232593 containerd[1523]: time="2025-05-27T18:17:06.232544109Z" level=info msg="RemoveContainer for \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" returns successfully" May 27 18:17:06.233020 kubelet[2670]: I0527 18:17:06.232984 2670 scope.go:117] "RemoveContainer" containerID="7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805" May 27 18:17:06.233440 containerd[1523]: time="2025-05-27T18:17:06.233401121Z" level=error msg="ContainerStatus for \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\": not found" May 27 18:17:06.233699 kubelet[2670]: E0527 18:17:06.233624 2670 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\": not found" 
containerID="7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805" May 27 18:17:06.233699 kubelet[2670]: I0527 18:17:06.233673 2670 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805"} err="failed to get container status \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d5a7d47535ba19020cd2c7862037a9aa88b6a5f533a686ebc823a2de267f805\": not found" May 27 18:17:06.541846 kubelet[2670]: I0527 18:17:06.541170 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5654a76c-c9e9-412c-bcc4-e3b1ba255fa3" path="/var/lib/kubelet/pods/5654a76c-c9e9-412c-bcc4-e3b1ba255fa3/volumes" May 27 18:17:06.542296 kubelet[2670]: I0527 18:17:06.542160 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d78b455-d61d-44ef-8a35-d03c1f99de0a" path="/var/lib/kubelet/pods/9d78b455-d61d-44ef-8a35-d03c1f99de0a/volumes" May 27 18:17:06.833382 kubelet[2670]: E0527 18:17:06.833241 2670 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 18:17:07.016825 sshd[4246]: Connection closed by 139.178.68.195 port 50868 May 27 18:17:07.019070 sshd-session[4244]: pam_unix(sshd:session): session closed for user core May 27 18:17:07.035167 systemd[1]: sshd@25-143.110.225.216:22-139.178.68.195:50868.service: Deactivated successfully. May 27 18:17:07.039303 systemd[1]: session-26.scope: Deactivated successfully. May 27 18:17:07.043993 systemd-logind[1493]: Session 26 logged out. Waiting for processes to exit. May 27 18:17:07.048162 systemd-logind[1493]: Removed session 26. 
May 27 18:17:07.052430 systemd[1]: Started sshd@26-143.110.225.216:22-139.178.68.195:55026.service - OpenSSH per-connection server daemon (139.178.68.195:55026). May 27 18:17:07.159350 sshd[4403]: Accepted publickey for core from 139.178.68.195 port 55026 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY May 27 18:17:07.161217 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:17:07.169703 systemd-logind[1493]: New session 27 of user core. May 27 18:17:07.179294 systemd[1]: Started session-27.scope - Session 27 of User core. May 27 18:17:07.567751 kubelet[2670]: I0527 18:17:07.567249 2670 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 18:17:07.567751 kubelet[2670]: I0527 18:17:07.567307 2670 container_gc.go:86] "Attempting to delete unused containers" May 27 18:17:07.572546 containerd[1523]: time="2025-05-27T18:17:07.572497556Z" level=info msg="StopPodSandbox for \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\"" May 27 18:17:07.574413 containerd[1523]: time="2025-05-27T18:17:07.573003983Z" level=info msg="TearDown network for sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" successfully" May 27 18:17:07.574413 containerd[1523]: time="2025-05-27T18:17:07.573027997Z" level=info msg="StopPodSandbox for \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" returns successfully" May 27 18:17:07.574413 containerd[1523]: time="2025-05-27T18:17:07.573414929Z" level=info msg="RemovePodSandbox for \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\"" May 27 18:17:07.574413 containerd[1523]: time="2025-05-27T18:17:07.573439959Z" level=info msg="Forcibly stopping sandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\"" May 27 18:17:07.574413 containerd[1523]: time="2025-05-27T18:17:07.573536442Z" level=info msg="TearDown network for sandbox 
\"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" successfully" May 27 18:17:07.577130 containerd[1523]: time="2025-05-27T18:17:07.576630788Z" level=info msg="Ensure that sandbox 624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578 in task-service has been cleanup successfully" May 27 18:17:07.579352 containerd[1523]: time="2025-05-27T18:17:07.579267264Z" level=info msg="RemovePodSandbox \"624238e9fd0be724413311c1cee899676f581cd959b7b62057d2230471c8c578\" returns successfully" May 27 18:17:07.580070 containerd[1523]: time="2025-05-27T18:17:07.580031289Z" level=info msg="StopPodSandbox for \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\"" May 27 18:17:07.580225 containerd[1523]: time="2025-05-27T18:17:07.580202656Z" level=info msg="TearDown network for sandbox \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" successfully" May 27 18:17:07.580225 containerd[1523]: time="2025-05-27T18:17:07.580221549Z" level=info msg="StopPodSandbox for \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" returns successfully" May 27 18:17:07.580778 containerd[1523]: time="2025-05-27T18:17:07.580753717Z" level=info msg="RemovePodSandbox for \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\"" May 27 18:17:07.580778 containerd[1523]: time="2025-05-27T18:17:07.580782295Z" level=info msg="Forcibly stopping sandbox \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\"" May 27 18:17:07.581042 containerd[1523]: time="2025-05-27T18:17:07.580860070Z" level=info msg="TearDown network for sandbox \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" successfully" May 27 18:17:07.582290 containerd[1523]: time="2025-05-27T18:17:07.582257325Z" level=info msg="Ensure that sandbox bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c in task-service has been cleanup successfully" May 27 18:17:07.584486 containerd[1523]: time="2025-05-27T18:17:07.584439792Z" 
level=info msg="RemovePodSandbox \"bd623c03cba278656b002429589a65f20db4904b6c6f6d85cbaf92ebaecbb59c\" returns successfully" May 27 18:17:07.585377 kubelet[2670]: I0527 18:17:07.585335 2670 image_gc_manager.go:447] "Attempting to delete unused images" May 27 18:17:07.613108 kubelet[2670]: I0527 18:17:07.613067 2670 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 18:17:07.613321 kubelet[2670]: I0527 18:17:07.613291 2670 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-nd5gg","kube-system/coredns-674b8bbfcf-zv4dh","kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7","kube-system/kube-proxy-mjrfk","kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7","kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"] May 27 18:17:07.613387 kubelet[2670]: E0527 18:17:07.613336 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-nd5gg" May 27 18:17:07.613387 kubelet[2670]: E0527 18:17:07.613374 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-zv4dh" May 27 18:17:07.613387 kubelet[2670]: E0527 18:17:07.613385 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7" May 27 18:17:07.613534 kubelet[2670]: E0527 18:17:07.613394 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjrfk" May 27 18:17:07.613534 kubelet[2670]: E0527 18:17:07.613403 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7" May 27 18:17:07.613534 kubelet[2670]: E0527 18:17:07.613414 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7" May 27 18:17:07.613534 
kubelet[2670]: I0527 18:17:07.613429 2670 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 18:17:07.960793 sshd[4405]: Connection closed by 139.178.68.195 port 55026 May 27 18:17:07.965055 sshd-session[4403]: pam_unix(sshd:session): session closed for user core May 27 18:17:07.978970 systemd[1]: sshd@26-143.110.225.216:22-139.178.68.195:55026.service: Deactivated successfully. May 27 18:17:07.985956 systemd[1]: session-27.scope: Deactivated successfully. May 27 18:17:07.988637 systemd-logind[1493]: Session 27 logged out. Waiting for processes to exit. May 27 18:17:07.999063 systemd[1]: Started sshd@27-143.110.225.216:22-139.178.68.195:55030.service - OpenSSH per-connection server daemon (139.178.68.195:55030). May 27 18:17:08.002325 systemd-logind[1493]: Removed session 27. May 27 18:17:08.046255 systemd[1]: Created slice kubepods-burstable-pod502d08a6_ab53_43fb_8d49_b8fd6e571eb9.slice - libcontainer container kubepods-burstable-pod502d08a6_ab53_43fb_8d49_b8fd6e571eb9.slice. 
May 27 18:17:08.056771 kubelet[2670]: I0527 18:17:08.056665 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-hostproc\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc" May 27 18:17:08.056771 kubelet[2670]: I0527 18:17:08.056707 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-cilium-cgroup\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc" May 27 18:17:08.060926 kubelet[2670]: I0527 18:17:08.058783 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-etc-cni-netd\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc" May 27 18:17:08.060926 kubelet[2670]: I0527 18:17:08.058887 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-lib-modules\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc" May 27 18:17:08.060926 kubelet[2670]: I0527 18:17:08.058933 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-xtables-lock\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc" May 27 18:17:08.060926 kubelet[2670]: I0527 18:17:08.058956 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-cilium-run\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc"
May 27 18:17:08.060926 kubelet[2670]: I0527 18:17:08.058982 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-bpf-maps\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc"
May 27 18:17:08.060926 kubelet[2670]: I0527 18:17:08.059023 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-clustermesh-secrets\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc"
May 27 18:17:08.061589 kubelet[2670]: I0527 18:17:08.059045 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-cilium-config-path\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc"
May 27 18:17:08.061589 kubelet[2670]: I0527 18:17:08.059099 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-cilium-ipsec-secrets\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc"
May 27 18:17:08.061589 kubelet[2670]: I0527 18:17:08.059127 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-host-proc-sys-net\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc"
May 27 18:17:08.061589 kubelet[2670]: I0527 18:17:08.059818 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5mx\" (UniqueName: \"kubernetes.io/projected/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-kube-api-access-4s5mx\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc"
May 27 18:17:08.061589 kubelet[2670]: I0527 18:17:08.059875 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-hubble-tls\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc"
May 27 18:17:08.062228 kubelet[2670]: I0527 18:17:08.059929 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-cni-path\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc"
May 27 18:17:08.062228 kubelet[2670]: I0527 18:17:08.059959 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/502d08a6-ab53-43fb-8d49-b8fd6e571eb9-host-proc-sys-kernel\") pod \"cilium-j7brc\" (UID: \"502d08a6-ab53-43fb-8d49-b8fd6e571eb9\") " pod="kube-system/cilium-j7brc"
May 27 18:17:08.121440 sshd[4415]: Accepted publickey for core from 139.178.68.195 port 55030 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:17:08.125189 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:17:08.137917 systemd-logind[1493]: New session 28 of user core.
May 27 18:17:08.144064 systemd[1]: Started session-28.scope - Session 28 of User core.
May 27 18:17:08.227887 sshd[4417]: Connection closed by 139.178.68.195 port 55030
May 27 18:17:08.225722 sshd-session[4415]: pam_unix(sshd:session): session closed for user core
May 27 18:17:08.253233 systemd[1]: sshd@27-143.110.225.216:22-139.178.68.195:55030.service: Deactivated successfully.
May 27 18:17:08.256927 systemd[1]: session-28.scope: Deactivated successfully.
May 27 18:17:08.259080 systemd-logind[1493]: Session 28 logged out. Waiting for processes to exit.
May 27 18:17:08.265294 systemd[1]: Started sshd@28-143.110.225.216:22-139.178.68.195:55032.service - OpenSSH per-connection server daemon (139.178.68.195:55032).
May 27 18:17:08.267221 systemd-logind[1493]: Removed session 28.
May 27 18:17:08.341803 sshd[4428]: Accepted publickey for core from 139.178.68.195 port 55032 ssh2: RSA SHA256:4XUDqK0eZl9/JoHWa9cgZT5JQIr/TJd1ha4IPbi4WlY
May 27 18:17:08.344662 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:17:08.355207 systemd-logind[1493]: New session 29 of user core.
May 27 18:17:08.360073 systemd[1]: Started session-29.scope - Session 29 of User core.
May 27 18:17:08.361909 kubelet[2670]: E0527 18:17:08.360456 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:08.363798 containerd[1523]: time="2025-05-27T18:17:08.362703954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7brc,Uid:502d08a6-ab53-43fb-8d49-b8fd6e571eb9,Namespace:kube-system,Attempt:0,}"
May 27 18:17:08.401654 containerd[1523]: time="2025-05-27T18:17:08.401579506Z" level=info msg="connecting to shim fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9" address="unix:///run/containerd/s/066e562923cda4fa20a20b3571d3ffea7839c97bbc3b79e67dea260ce6cf47ad" namespace=k8s.io protocol=ttrpc version=3
May 27 18:17:08.446386 systemd[1]: Started cri-containerd-fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9.scope - libcontainer container fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9.
May 27 18:17:08.529654 containerd[1523]: time="2025-05-27T18:17:08.529509868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7brc,Uid:502d08a6-ab53-43fb-8d49-b8fd6e571eb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\""
May 27 18:17:08.532497 kubelet[2670]: E0527 18:17:08.532425 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:08.543588 containerd[1523]: time="2025-05-27T18:17:08.543515734Z" level=info msg="CreateContainer within sandbox \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 18:17:08.575192 containerd[1523]: time="2025-05-27T18:17:08.575032752Z" level=info msg="Container d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd: CDI devices from CRI Config.CDIDevices: []"
May 27 18:17:08.595768 containerd[1523]: time="2025-05-27T18:17:08.595669499Z" level=info msg="CreateContainer within sandbox \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd\""
May 27 18:17:08.600083 containerd[1523]: time="2025-05-27T18:17:08.599970089Z" level=info msg="StartContainer for \"d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd\""
May 27 18:17:08.602041 containerd[1523]: time="2025-05-27T18:17:08.601667618Z" level=info msg="connecting to shim d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd" address="unix:///run/containerd/s/066e562923cda4fa20a20b3571d3ffea7839c97bbc3b79e67dea260ce6cf47ad" protocol=ttrpc version=3
May 27 18:17:08.633526 systemd[1]: Started cri-containerd-d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd.scope - libcontainer container d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd.
May 27 18:17:08.699772 containerd[1523]: time="2025-05-27T18:17:08.699677829Z" level=info msg="StartContainer for \"d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd\" returns successfully"
May 27 18:17:08.716761 systemd[1]: cri-containerd-d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd.scope: Deactivated successfully.
May 27 18:17:08.717094 systemd[1]: cri-containerd-d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd.scope: Consumed 31ms CPU time, 9.2M memory peak, 2.8M read from disk.
May 27 18:17:08.720162 containerd[1523]: time="2025-05-27T18:17:08.720005226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd\" id:\"d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd\" pid:4498 exited_at:{seconds:1748369828 nanos:719451257}"
May 27 18:17:08.720729 containerd[1523]: time="2025-05-27T18:17:08.720608978Z" level=info msg="received exit event container_id:\"d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd\" id:\"d236fe284e3d1f7ab8ef2d267bd50f3a8ae9d7c36405b191438614846a1dcecd\" pid:4498 exited_at:{seconds:1748369828 nanos:719451257}"
May 27 18:17:09.185626 kubelet[2670]: E0527 18:17:09.184939 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:09.200893 containerd[1523]: time="2025-05-27T18:17:09.199279564Z" level=info msg="CreateContainer within sandbox \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 18:17:09.212879 containerd[1523]: time="2025-05-27T18:17:09.212818433Z" level=info msg="Container ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80: CDI devices from CRI Config.CDIDevices: []"
May 27 18:17:09.227675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018877048.mount: Deactivated successfully.
May 27 18:17:09.233923 containerd[1523]: time="2025-05-27T18:17:09.233769032Z" level=info msg="CreateContainer within sandbox \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80\""
May 27 18:17:09.235963 containerd[1523]: time="2025-05-27T18:17:09.235819505Z" level=info msg="StartContainer for \"ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80\""
May 27 18:17:09.240207 containerd[1523]: time="2025-05-27T18:17:09.240145257Z" level=info msg="connecting to shim ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80" address="unix:///run/containerd/s/066e562923cda4fa20a20b3571d3ffea7839c97bbc3b79e67dea260ce6cf47ad" protocol=ttrpc version=3
May 27 18:17:09.272759 kubelet[2670]: I0527 18:17:09.271336 2670 setters.go:618] "Node became not ready" node="ci-4344.0.0-0-76b74bdce7" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T18:17:09Z","lastTransitionTime":"2025-05-27T18:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 27 18:17:09.311711 systemd[1]: Started cri-containerd-ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80.scope - libcontainer container ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80.
May 27 18:17:09.372145 containerd[1523]: time="2025-05-27T18:17:09.372089219Z" level=info msg="StartContainer for \"ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80\" returns successfully"
May 27 18:17:09.388133 systemd[1]: cri-containerd-ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80.scope: Deactivated successfully.
May 27 18:17:09.389106 systemd[1]: cri-containerd-ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80.scope: Consumed 34ms CPU time, 6.8M memory peak, 1.7M read from disk.
May 27 18:17:09.392031 containerd[1523]: time="2025-05-27T18:17:09.391976778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80\" id:\"ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80\" pid:4543 exited_at:{seconds:1748369829 nanos:391113944}"
May 27 18:17:09.392257 containerd[1523]: time="2025-05-27T18:17:09.392119069Z" level=info msg="received exit event container_id:\"ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80\" id:\"ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80\" pid:4543 exited_at:{seconds:1748369829 nanos:391113944}"
May 27 18:17:09.430813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce398eb846fddd9bf07717187a731569b8c8a160dddb1a57efa9ba2629782d80-rootfs.mount: Deactivated successfully.
May 27 18:17:10.191371 kubelet[2670]: E0527 18:17:10.191328 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:10.197680 containerd[1523]: time="2025-05-27T18:17:10.197476966Z" level=info msg="CreateContainer within sandbox \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 18:17:10.218203 containerd[1523]: time="2025-05-27T18:17:10.218150490Z" level=info msg="Container 7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de: CDI devices from CRI Config.CDIDevices: []"
May 27 18:17:10.229296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678742103.mount: Deactivated successfully.
May 27 18:17:10.241633 containerd[1523]: time="2025-05-27T18:17:10.241514063Z" level=info msg="CreateContainer within sandbox \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de\""
May 27 18:17:10.243811 containerd[1523]: time="2025-05-27T18:17:10.243168239Z" level=info msg="StartContainer for \"7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de\""
May 27 18:17:10.243811 containerd[1523]: time="2025-05-27T18:17:10.245193600Z" level=info msg="connecting to shim 7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de" address="unix:///run/containerd/s/066e562923cda4fa20a20b3571d3ffea7839c97bbc3b79e67dea260ce6cf47ad" protocol=ttrpc version=3
May 27 18:17:10.298021 systemd[1]: Started cri-containerd-7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de.scope - libcontainer container 7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de.
May 27 18:17:10.371032 containerd[1523]: time="2025-05-27T18:17:10.370909538Z" level=info msg="StartContainer for \"7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de\" returns successfully"
May 27 18:17:10.381346 systemd[1]: cri-containerd-7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de.scope: Deactivated successfully.
May 27 18:17:10.385227 containerd[1523]: time="2025-05-27T18:17:10.385145512Z" level=info msg="received exit event container_id:\"7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de\" id:\"7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de\" pid:4588 exited_at:{seconds:1748369830 nanos:384857907}"
May 27 18:17:10.386329 containerd[1523]: time="2025-05-27T18:17:10.386280171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de\" id:\"7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de\" pid:4588 exited_at:{seconds:1748369830 nanos:384857907}"
May 27 18:17:10.421155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c6de9ee14089acaea80c8e5cf2947d9bd0504120ade122668dbc82a93aa84de-rootfs.mount: Deactivated successfully.
May 27 18:17:11.199483 kubelet[2670]: E0527 18:17:11.199431 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:11.209667 containerd[1523]: time="2025-05-27T18:17:11.209609801Z" level=info msg="CreateContainer within sandbox \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 18:17:11.224833 containerd[1523]: time="2025-05-27T18:17:11.224762357Z" level=info msg="Container 2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c: CDI devices from CRI Config.CDIDevices: []"
May 27 18:17:11.250486 containerd[1523]: time="2025-05-27T18:17:11.249482270Z" level=info msg="CreateContainer within sandbox \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c\""
May 27 18:17:11.251365 containerd[1523]: time="2025-05-27T18:17:11.251046084Z" level=info msg="StartContainer for \"2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c\""
May 27 18:17:11.257145 containerd[1523]: time="2025-05-27T18:17:11.257024756Z" level=info msg="connecting to shim 2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c" address="unix:///run/containerd/s/066e562923cda4fa20a20b3571d3ffea7839c97bbc3b79e67dea260ce6cf47ad" protocol=ttrpc version=3
May 27 18:17:11.306179 systemd[1]: Started cri-containerd-2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c.scope - libcontainer container 2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c.
May 27 18:17:11.362891 systemd[1]: cri-containerd-2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c.scope: Deactivated successfully.
May 27 18:17:11.365449 containerd[1523]: time="2025-05-27T18:17:11.365339064Z" level=info msg="received exit event container_id:\"2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c\" id:\"2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c\" pid:4628 exited_at:{seconds:1748369831 nanos:363960510}"
May 27 18:17:11.367198 containerd[1523]: time="2025-05-27T18:17:11.366929098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c\" id:\"2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c\" pid:4628 exited_at:{seconds:1748369831 nanos:363960510}"
May 27 18:17:11.369524 containerd[1523]: time="2025-05-27T18:17:11.369220833Z" level=info msg="StartContainer for \"2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c\" returns successfully"
May 27 18:17:11.408226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f06a4bc1b810db95758a1cbaaf43e86699464187dfd96511df62036992dc26c-rootfs.mount: Deactivated successfully.
May 27 18:17:11.834884 kubelet[2670]: E0527 18:17:11.834677 2670 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 18:17:12.207860 kubelet[2670]: E0527 18:17:12.207488 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:12.217910 containerd[1523]: time="2025-05-27T18:17:12.217801372Z" level=info msg="CreateContainer within sandbox \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 18:17:12.236766 containerd[1523]: time="2025-05-27T18:17:12.234015691Z" level=info msg="Container 52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3: CDI devices from CRI Config.CDIDevices: []"
May 27 18:17:12.243199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount602002352.mount: Deactivated successfully.
May 27 18:17:12.248631 containerd[1523]: time="2025-05-27T18:17:12.248566001Z" level=info msg="CreateContainer within sandbox \"fea26c2eeabe2b04d7394043166c86562d7f524ba8796ce094e1705b7000c2f9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3\""
May 27 18:17:12.251061 containerd[1523]: time="2025-05-27T18:17:12.249534944Z" level=info msg="StartContainer for \"52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3\""
May 27 18:17:12.255498 containerd[1523]: time="2025-05-27T18:17:12.255415338Z" level=info msg="connecting to shim 52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3" address="unix:///run/containerd/s/066e562923cda4fa20a20b3571d3ffea7839c97bbc3b79e67dea260ce6cf47ad" protocol=ttrpc version=3
May 27 18:17:12.322149 systemd[1]: Started cri-containerd-52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3.scope - libcontainer container 52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3.
May 27 18:17:12.395766 containerd[1523]: time="2025-05-27T18:17:12.395678651Z" level=info msg="StartContainer for \"52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3\" returns successfully"
May 27 18:17:12.520386 containerd[1523]: time="2025-05-27T18:17:12.520172962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3\" id:\"bd1ed91364b74befffa48e1919c65b7243d4c1197ca55ac4954b830f535f31f8\" pid:4697 exited_at:{seconds:1748369832 nanos:519420286}"
May 27 18:17:12.981849 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 27 18:17:13.219275 kubelet[2670]: E0527 18:17:13.218171 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:13.252271 kubelet[2670]: I0527 18:17:13.250028 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j7brc" podStartSLOduration=6.250010245 podStartE2EDuration="6.250010245s" podCreationTimestamp="2025-05-27 18:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:17:13.248417531 +0000 UTC m=+107.140172694" watchObservedRunningTime="2025-05-27 18:17:13.250010245 +0000 UTC m=+107.141765407"
May 27 18:17:14.363967 kubelet[2670]: E0527 18:17:14.363904 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:14.941650 containerd[1523]: time="2025-05-27T18:17:14.941538623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3\" id:\"84d0b358327c2b4bf7b205f58ce66a5be53f35ca39411fe2a126cce291569246\" pid:4841 exit_status:1 exited_at:{seconds:1748369834 nanos:940965051}"
May 27 18:17:16.539088 kubelet[2670]: E0527 18:17:16.539016 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-nd5gg" podUID="384e3133-6637-4b5e-bb12-1a3655ecad79"
May 27 18:17:16.608695 systemd-networkd[1457]: lxc_health: Link UP
May 27 18:17:16.620358 systemd-networkd[1457]: lxc_health: Gained carrier
May 27 18:17:17.366752 containerd[1523]: time="2025-05-27T18:17:17.366636818Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3\" id:\"5d154418c51af6defd57d347c58dace07aadf43a3fcd27945746eea529746825\" pid:5219 exited_at:{seconds:1748369837 nanos:366079559}"
May 27 18:17:17.654892 kubelet[2670]: I0527 18:17:17.654691 2670 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 18:17:17.655997 kubelet[2670]: I0527 18:17:17.655844 2670 container_gc.go:86] "Attempting to delete unused containers"
May 27 18:17:17.660995 kubelet[2670]: I0527 18:17:17.660947 2670 image_gc_manager.go:447] "Attempting to delete unused images"
May 27 18:17:17.698835 kubelet[2670]: I0527 18:17:17.698689 2670 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 18:17:17.699313 kubelet[2670]: I0527 18:17:17.699250 2670 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-nd5gg","kube-system/coredns-674b8bbfcf-zv4dh","kube-system/kube-proxy-mjrfk","kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7","kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7","kube-system/cilium-j7brc","kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"]
May 27 18:17:17.699911 kubelet[2670]: E0527 18:17:17.699831 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-nd5gg"
May 27 18:17:17.699911 kubelet[2670]: E0527 18:17:17.699865 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-zv4dh"
May 27 18:17:17.699911 kubelet[2670]: E0527 18:17:17.699881 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjrfk"
May 27 18:17:17.701734 kubelet[2670]: E0527 18:17:17.700163 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-0-76b74bdce7"
May 27 18:17:17.701734 kubelet[2670]: E0527 18:17:17.700201 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ci-4344.0.0-0-76b74bdce7"
May 27 18:17:17.701734 kubelet[2670]: E0527 18:17:17.700247 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-j7brc"
May 27 18:17:17.701734 kubelet[2670]: E0527 18:17:17.700262 2670 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ci-4344.0.0-0-76b74bdce7"
May 27 18:17:17.701734 kubelet[2670]: I0527 18:17:17.700281 2670 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node"
May 27 18:17:18.366220 kubelet[2670]: E0527 18:17:18.366113 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:18.539154 kubelet[2670]: E0527 18:17:18.538683 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:18.614026 systemd-networkd[1457]: lxc_health: Gained IPv6LL
May 27 18:17:19.239754 kubelet[2670]: E0527 18:17:19.239527 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:19.594446 containerd[1523]: time="2025-05-27T18:17:19.594375696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3\" id:\"da3811c43e15bd4df9d96434ad9cc42d3e87415af72df75ae6622361d9df3808\" pid:5257 exited_at:{seconds:1748369839 nanos:593710062}"
May 27 18:17:20.243503 kubelet[2670]: E0527 18:17:20.243395 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 27 18:17:21.919666 containerd[1523]: time="2025-05-27T18:17:21.919516941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52aad71ca2e8830381a0f05446beed580025961e9651d7148230ff84ebaf83a3\" id:\"7956b2fa23ec456efd33aa25e7ada6b9633b0af0f2a018f76b50a7bfd3b732c9\" pid:5282 exited_at:{seconds:1748369841 nanos:919048705}"
May 27 18:17:22.020754 sshd[4430]: Connection closed by 139.178.68.195 port 55032
May 27 18:17:22.021695 sshd-session[4428]: pam_unix(sshd:session): session closed for user core
May 27 18:17:22.028019 systemd-logind[1493]: Session 29 logged out. Waiting for processes to exit.
May 27 18:17:22.028327 systemd[1]: sshd@28-143.110.225.216:22-139.178.68.195:55032.service: Deactivated successfully.
May 27 18:17:22.035084 systemd[1]: session-29.scope: Deactivated successfully.
May 27 18:17:22.041862 systemd-logind[1493]: Removed session 29.
May 27 18:17:24.538866 kubelet[2670]: E0527 18:17:24.538306 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"