May 14 18:08:25.927757 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 16:37:27 -00 2025
May 14 18:08:25.927808 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:08:25.927819 kernel: BIOS-provided physical RAM map:
May 14 18:08:25.927826 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 14 18:08:25.927833 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 14 18:08:25.927839 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 14 18:08:25.927847 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 14 18:08:25.927859 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 14 18:08:25.927869 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 18:08:25.927875 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 14 18:08:25.927882 kernel: NX (Execute Disable) protection: active
May 14 18:08:25.927888 kernel: APIC: Static calls initialized
May 14 18:08:25.927895 kernel: SMBIOS 2.8 present.
May 14 18:08:25.927902 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 14 18:08:25.927914 kernel: DMI: Memory slots populated: 1/1
May 14 18:08:25.927921 kernel: Hypervisor detected: KVM
May 14 18:08:25.927932 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 18:08:25.927939 kernel: kvm-clock: using sched offset of 5175619601 cycles
May 14 18:08:25.927961 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 18:08:25.927968 kernel: tsc: Detected 1995.307 MHz processor
May 14 18:08:25.927976 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 18:08:25.927984 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 18:08:25.927991 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 14 18:08:25.928001 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 14 18:08:25.928009 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 18:08:25.928016 kernel: ACPI: Early table checksum verification disabled
May 14 18:08:25.928024 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 14 18:08:25.928031 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:08:25.928038 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:08:25.928046 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:08:25.928053 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 14 18:08:25.928060 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:08:25.928069 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:08:25.928077 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:08:25.928091 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001)
May 14 18:08:25.928099 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 14 18:08:25.928106 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 14 18:08:25.928113 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 14 18:08:25.928121 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 14 18:08:25.928128 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 14 18:08:25.928141 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 14 18:08:25.928149 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 14 18:08:25.928156 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 14 18:08:25.928164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 14 18:08:25.928171 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
May 14 18:08:25.928181 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
May 14 18:08:25.928189 kernel: Zone ranges:
May 14 18:08:25.928197 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 18:08:25.928204 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 14 18:08:25.928212 kernel: Normal empty
May 14 18:08:25.928219 kernel: Device empty
May 14 18:08:25.928227 kernel: Movable zone start for each node
May 14 18:08:25.928235 kernel: Early memory node ranges
May 14 18:08:25.928242 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 14 18:08:25.928250 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 14 18:08:25.928272 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 14 18:08:25.928280 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:08:25.928288 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 14 18:08:25.928295 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 14 18:08:25.928303 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 18:08:25.928310 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 18:08:25.928322 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 18:08:25.928330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 18:08:25.928341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 18:08:25.928352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 18:08:25.928363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 18:08:25.928371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 18:08:25.928378 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 18:08:25.928390 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 18:08:25.928410 kernel: TSC deadline timer available
May 14 18:08:25.928418 kernel: CPU topo: Max. logical packages: 1
May 14 18:08:25.928425 kernel: CPU topo: Max. logical dies: 1
May 14 18:08:25.928433 kernel: CPU topo: Max. dies per package: 1
May 14 18:08:25.928440 kernel: CPU topo: Max. threads per core: 1
May 14 18:08:25.928450 kernel: CPU topo: Num. cores per package: 2
May 14 18:08:25.928458 kernel: CPU topo: Num. threads per package: 2
May 14 18:08:25.928465 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 14 18:08:25.928473 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 18:08:25.928481 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 14 18:08:25.928488 kernel: Booting paravirtualized kernel on KVM
May 14 18:08:25.928496 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 18:08:25.928504 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 14 18:08:25.928512 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 14 18:08:25.928522 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 14 18:08:25.928529 kernel: pcpu-alloc: [0] 0 1
May 14 18:08:25.928537 kernel: kvm-guest: PV spinlocks disabled, no host support
May 14 18:08:25.928553 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:08:25.928561 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 18:08:25.928569 kernel: random: crng init done
May 14 18:08:25.928577 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 18:08:25.928584 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 14 18:08:25.928594 kernel: Fallback order for Node 0: 0
May 14 18:08:25.928602 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
May 14 18:08:25.928609 kernel: Policy zone: DMA32
May 14 18:08:25.928617 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 18:08:25.928624 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 18:08:25.928632 kernel: Kernel/User page tables isolation: enabled
May 14 18:08:25.928640 kernel: ftrace: allocating 40065 entries in 157 pages
May 14 18:08:25.928648 kernel: ftrace: allocated 157 pages with 5 groups
May 14 18:08:25.928655 kernel: Dynamic Preempt: voluntary
May 14 18:08:25.928665 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 18:08:25.928675 kernel: rcu: RCU event tracing is enabled.
May 14 18:08:25.928683 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 18:08:25.928691 kernel: Trampoline variant of Tasks RCU enabled.
May 14 18:08:25.928699 kernel: Rude variant of Tasks RCU enabled.
May 14 18:08:25.928707 kernel: Tracing variant of Tasks RCU enabled.
May 14 18:08:25.928714 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 18:08:25.928722 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 18:08:25.928730 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:08:25.928744 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:08:25.928752 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:08:25.928760 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 14 18:08:25.928768 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 18:08:25.928775 kernel: Console: colour VGA+ 80x25
May 14 18:08:25.928783 kernel: printk: legacy console [tty0] enabled
May 14 18:08:25.928790 kernel: printk: legacy console [ttyS0] enabled
May 14 18:08:25.928798 kernel: ACPI: Core revision 20240827
May 14 18:08:25.928806 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 18:08:25.928823 kernel: APIC: Switch to symmetric I/O mode setup
May 14 18:08:25.928832 kernel: x2apic enabled
May 14 18:08:25.928840 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 18:08:25.928851 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 18:08:25.928863 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985ba32100, max_idle_ns: 881590654722 ns
May 14 18:08:25.928872 kernel: Calibrating delay loop (skipped) preset value.. 3990.61 BogoMIPS (lpj=1995307)
May 14 18:08:25.928880 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 14 18:08:25.928888 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 14 18:08:25.928896 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 18:08:25.928907 kernel: Spectre V2 : Mitigation: Retpolines
May 14 18:08:25.928916 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 14 18:08:25.928924 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 14 18:08:25.928933 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 14 18:08:25.928941 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 18:08:25.928961 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 18:08:25.928970 kernel: MDS: Mitigation: Clear CPU buffers
May 14 18:08:25.928980 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 14 18:08:25.928989 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 18:08:25.928997 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 18:08:25.929005 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 18:08:25.929013 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 18:08:25.929022 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 14 18:08:25.929030 kernel: Freeing SMP alternatives memory: 32K
May 14 18:08:25.929038 kernel: pid_max: default: 32768 minimum: 301
May 14 18:08:25.929050 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 18:08:25.929069 kernel: landlock: Up and running.
May 14 18:08:25.929081 kernel: SELinux: Initializing.
May 14 18:08:25.929094 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 14 18:08:25.929106 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 14 18:08:25.929117 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 14 18:08:25.929129 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 14 18:08:25.929141 kernel: signal: max sigframe size: 1776
May 14 18:08:25.929154 kernel: rcu: Hierarchical SRCU implementation.
May 14 18:08:25.929167 kernel: rcu: Max phase no-delay instances is 400.
May 14 18:08:25.929182 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 18:08:25.929191 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 14 18:08:25.929199 kernel: smp: Bringing up secondary CPUs ...
May 14 18:08:25.929209 kernel: smpboot: x86: Booting SMP configuration:
May 14 18:08:25.929230 kernel: .... node #0, CPUs: #1
May 14 18:08:25.929244 kernel: smp: Brought up 1 node, 2 CPUs
May 14 18:08:25.929256 kernel: smpboot: Total of 2 processors activated (7981.22 BogoMIPS)
May 14 18:08:25.929270 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54424K init, 2536K bss, 125140K reserved, 0K cma-reserved)
May 14 18:08:25.929283 kernel: devtmpfs: initialized
May 14 18:08:25.929302 kernel: x86/mm: Memory block size: 128MB
May 14 18:08:25.929315 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 18:08:25.929328 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 18:08:25.929341 kernel: pinctrl core: initialized pinctrl subsystem
May 14 18:08:25.929354 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:08:25.929367 kernel: audit: initializing netlink subsys (disabled)
May 14 18:08:25.929379 kernel: audit: type=2000 audit(1747246101.918:1): state=initialized audit_enabled=0 res=1
May 14 18:08:25.929391 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:08:25.929404 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 18:08:25.929421 kernel: cpuidle: using governor menu
May 14 18:08:25.929433 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:08:25.929447 kernel: dca service started, version 1.12.1
May 14 18:08:25.929461 kernel: PCI: Using configuration type 1 for base access
May 14 18:08:25.929474 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 18:08:25.929487 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:08:25.929500 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:08:25.929513 kernel: ACPI: Added _OSI(Module Device)
May 14 18:08:25.929526 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:08:25.929543 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:08:25.929557 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:08:25.929572 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:08:25.929586 kernel: ACPI: Interpreter enabled
May 14 18:08:25.929602 kernel: ACPI: PM: (supports S0 S5)
May 14 18:08:25.929617 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 18:08:25.929631 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 18:08:25.929644 kernel: PCI: Using E820 reservations for host bridge windows
May 14 18:08:25.929657 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 14 18:08:25.929675 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 18:08:25.929997 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 14 18:08:25.930110 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 14 18:08:25.930201 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 14 18:08:25.930218 kernel: acpiphp: Slot [3] registered
May 14 18:08:25.930227 kernel: acpiphp: Slot [4] registered
May 14 18:08:25.930235 kernel: acpiphp: Slot [5] registered
May 14 18:08:25.930248 kernel: acpiphp: Slot [6] registered
May 14 18:08:25.930257 kernel: acpiphp: Slot [7] registered
May 14 18:08:25.930265 kernel: acpiphp: Slot [8] registered
May 14 18:08:25.930273 kernel: acpiphp: Slot [9] registered
May 14 18:08:25.930281 kernel: acpiphp: Slot [10] registered
May 14 18:08:25.930290 kernel: acpiphp: Slot [11] registered
May 14 18:08:25.930298 kernel: acpiphp: Slot [12] registered
May 14 18:08:25.930307 kernel: acpiphp: Slot [13] registered
May 14 18:08:25.930315 kernel: acpiphp: Slot [14] registered
May 14 18:08:25.930323 kernel: acpiphp: Slot [15] registered
May 14 18:08:25.930334 kernel: acpiphp: Slot [16] registered
May 14 18:08:25.930342 kernel: acpiphp: Slot [17] registered
May 14 18:08:25.930351 kernel: acpiphp: Slot [18] registered
May 14 18:08:25.930359 kernel: acpiphp: Slot [19] registered
May 14 18:08:25.930367 kernel: acpiphp: Slot [20] registered
May 14 18:08:25.930375 kernel: acpiphp: Slot [21] registered
May 14 18:08:25.930384 kernel: acpiphp: Slot [22] registered
May 14 18:08:25.930392 kernel: acpiphp: Slot [23] registered
May 14 18:08:25.930400 kernel: acpiphp: Slot [24] registered
May 14 18:08:25.930412 kernel: acpiphp: Slot [25] registered
May 14 18:08:25.930421 kernel: acpiphp: Slot [26] registered
May 14 18:08:25.930429 kernel: acpiphp: Slot [27] registered
May 14 18:08:25.930437 kernel: acpiphp: Slot [28] registered
May 14 18:08:25.930445 kernel: acpiphp: Slot [29] registered
May 14 18:08:25.930453 kernel: acpiphp: Slot [30] registered
May 14 18:08:25.930462 kernel: acpiphp: Slot [31] registered
May 14 18:08:25.930470 kernel: PCI host bridge to bus 0000:00
May 14 18:08:25.930607 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 18:08:25.930695 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 18:08:25.930775 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 18:08:25.930854 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 14 18:08:25.930933 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 14 18:08:25.931025 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 18:08:25.931153 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 14 18:08:25.931281 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 14 18:08:25.931401 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
May 14 18:08:25.931493 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
May 14 18:08:25.931584 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
May 14 18:08:25.931673 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
May 14 18:08:25.931763 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
May 14 18:08:25.931886 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
May 14 18:08:25.932028 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
May 14 18:08:25.932125 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
May 14 18:08:25.932241 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 14 18:08:25.932337 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 14 18:08:25.932438 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 14 18:08:25.932556 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
May 14 18:08:25.932652 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
May 14 18:08:25.932741 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
May 14 18:08:25.932839 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
May 14 18:08:25.932935 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
May 14 18:08:25.933048 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 18:08:25.933160 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 18:08:25.933251 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
May 14 18:08:25.933354 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
May 14 18:08:25.933447 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
May 14 18:08:25.933596 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 18:08:25.933688 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
May 14 18:08:25.933776 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
May 14 18:08:25.933864 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
May 14 18:08:25.936123 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 14 18:08:25.936272 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
May 14 18:08:25.936390 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
May 14 18:08:25.936521 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 14 18:08:25.936640 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 18:08:25.936733 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
May 14 18:08:25.936823 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
May 14 18:08:25.936928 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
May 14 18:08:25.939223 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 18:08:25.939347 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
May 14 18:08:25.939445 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
May 14 18:08:25.939536 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
May 14 18:08:25.939650 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
May 14 18:08:25.939743 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
May 14 18:08:25.939874 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
May 14 18:08:25.939886 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 18:08:25.939895 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 18:08:25.939904 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 18:08:25.939912 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 18:08:25.939921 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 14 18:08:25.939929 kernel: iommu: Default domain type: Translated
May 14 18:08:25.939938 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 18:08:25.939970 kernel: PCI: Using ACPI for IRQ routing
May 14 18:08:25.939979 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 18:08:25.939987 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 14 18:08:25.939996 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 14 18:08:25.940096 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 14 18:08:25.940204 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 14 18:08:25.940334 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 18:08:25.940347 kernel: vgaarb: loaded
May 14 18:08:25.940356 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 18:08:25.940368 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 18:08:25.940377 kernel: clocksource: Switched to clocksource kvm-clock
May 14 18:08:25.940385 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:08:25.940394 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:08:25.940403 kernel: pnp: PnP ACPI init
May 14 18:08:25.940411 kernel: pnp: PnP ACPI: found 4 devices
May 14 18:08:25.940420 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 18:08:25.940429 kernel: NET: Registered PF_INET protocol family
May 14 18:08:25.940437 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 18:08:25.940449 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 14 18:08:25.940457 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:08:25.940466 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 14 18:08:25.940475 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 14 18:08:25.940484 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 14 18:08:25.940492 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 14 18:08:25.940500 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 14 18:08:25.940509 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:08:25.940520 kernel: NET: Registered PF_XDP protocol family
May 14 18:08:25.940615 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 18:08:25.940697 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 18:08:25.940778 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 18:08:25.940858 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 14 18:08:25.940938 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 14 18:08:25.941374 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 14 18:08:25.941520 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 14 18:08:25.941540 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 14 18:08:25.941659 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28658 usecs
May 14 18:08:25.941671 kernel: PCI: CLS 0 bytes, default 64
May 14 18:08:25.941680 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 14 18:08:25.941689 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985ba32100, max_idle_ns: 881590654722 ns
May 14 18:08:25.941698 kernel: Initialise system trusted keyrings
May 14 18:08:25.941713 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 14 18:08:25.941721 kernel: Key type asymmetric registered
May 14 18:08:25.941730 kernel: Asymmetric key parser 'x509' registered
May 14 18:08:25.941742 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 18:08:25.941751 kernel: io scheduler mq-deadline registered
May 14 18:08:25.941760 kernel: io scheduler kyber registered
May 14 18:08:25.941768 kernel: io scheduler bfq registered
May 14 18:08:25.941776 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 18:08:25.941792 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 14 18:08:25.941801 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 14 18:08:25.941809 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 14 18:08:25.941818 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:08:25.941829 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 18:08:25.941837 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 18:08:25.941846 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 18:08:25.941855 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 18:08:25.941863 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 18:08:25.942032 kernel: rtc_cmos 00:03: RTC can wake from S4
May 14 18:08:25.942208 kernel: rtc_cmos 00:03: registered as rtc0
May 14 18:08:25.942302 kernel: rtc_cmos 00:03: setting system clock to 2025-05-14T18:08:25 UTC (1747246105)
May 14 18:08:25.942392 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 14 18:08:25.942403 kernel: intel_pstate: CPU model not supported
May 14 18:08:25.942411 kernel: NET: Registered PF_INET6 protocol family
May 14 18:08:25.942420 kernel: Segment Routing with IPv6
May 14 18:08:25.942429 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:08:25.942437 kernel: NET: Registered PF_PACKET protocol family
May 14 18:08:25.942447 kernel: Key type dns_resolver registered
May 14 18:08:25.942455 kernel: IPI shorthand broadcast: enabled
May 14 18:08:25.942464 kernel: sched_clock: Marking stable (4137004429, 166050258)->(4337971556, -34916869)
May 14 18:08:25.944661 kernel: registered taskstats version 1
May 14 18:08:25.944679 kernel: Loading compiled-in X.509 certificates
May 14 18:08:25.944690 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 41e2a150aa08ec2528be2394819b3db677e5f4ef'
May 14 18:08:25.944699 kernel: Demotion targets for Node 0: null
May 14 18:08:25.944707 kernel: Key type .fscrypt registered
May 14 18:08:25.944716 kernel: Key type fscrypt-provisioning registered
May 14 18:08:25.944747 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 18:08:25.944758 kernel: ima: Allocated hash algorithm: sha1
May 14 18:08:25.944769 kernel: ima: No architecture policies found
May 14 18:08:25.944778 kernel: clk: Disabling unused clocks
May 14 18:08:25.944787 kernel: Warning: unable to open an initial console.
May 14 18:08:25.944796 kernel: Freeing unused kernel image (initmem) memory: 54424K
May 14 18:08:25.944805 kernel: Write protecting the kernel read-only data: 24576k
May 14 18:08:25.944814 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 14 18:08:25.944823 kernel: Run /init as init process
May 14 18:08:25.944832 kernel: with arguments:
May 14 18:08:25.944841 kernel: /init
May 14 18:08:25.944852 kernel: with environment:
May 14 18:08:25.944860 kernel: HOME=/
May 14 18:08:25.944869 kernel: TERM=linux
May 14 18:08:25.944877 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 18:08:25.944888 systemd[1]: Successfully made /usr/ read-only.
May 14 18:08:25.944902 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:08:25.944913 systemd[1]: Detected virtualization kvm.
May 14 18:08:25.944922 systemd[1]: Detected architecture x86-64.
May 14 18:08:25.944933 systemd[1]: Running in initrd.
May 14 18:08:25.944943 systemd[1]: No hostname configured, using default hostname.
May 14 18:08:25.944979 systemd[1]: Hostname set to .
May 14 18:08:25.944988 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:08:25.944997 systemd[1]: Queued start job for default target initrd.target.
May 14 18:08:25.945007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:08:25.945017 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:08:25.945029 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 18:08:25.945044 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:08:25.945058 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 18:08:25.945077 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 18:08:25.945092 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 18:08:25.945109 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 18:08:25.945123 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:08:25.945137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:08:25.945152 systemd[1]: Reached target paths.target - Path Units.
May 14 18:08:25.945167 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:08:25.945182 systemd[1]: Reached target swap.target - Swaps.
May 14 18:08:25.945197 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:08:25.945213 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:08:25.945229 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:08:25.945239 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 18:08:25.945249 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 18:08:25.945258 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:08:25.945267 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:08:25.945277 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:08:25.945286 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:08:25.945295 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 18:08:25.945304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:08:25.945316 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 18:08:25.945326 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 18:08:25.945336 systemd[1]: Starting systemd-fsck-usr.service...
May 14 18:08:25.945345 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:08:25.945354 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:08:25.945364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:08:25.945373 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 18:08:25.945433 systemd-journald[212]: Collecting audit messages is disabled.
May 14 18:08:25.945463 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:08:25.945473 systemd[1]: Finished systemd-fsck-usr.service.
May 14 18:08:25.945485 systemd-journald[212]: Journal started
May 14 18:08:25.945518 systemd-journald[212]: Runtime Journal (/run/log/journal/65b6c17f684043889ef8eb3c134b5391) is 4.9M, max 39.5M, 34.6M free.
May 14 18:08:25.949988 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:08:25.951352 systemd-modules-load[213]: Inserted module 'overlay'
May 14 18:08:25.963998 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:08:25.985996 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 18:08:25.988689 systemd-modules-load[213]: Inserted module 'br_netfilter'
May 14 18:08:26.017286 kernel: Bridge firewalling registered
May 14 18:08:26.016494 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:08:26.024313 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:08:26.030214 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 18:08:26.034160 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:08:26.038369 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:08:26.041005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:08:26.051047 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:08:26.065675 systemd-tmpfiles[230]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 18:08:26.070485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:08:26.077009 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:08:26.082219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:08:26.083997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:08:26.088620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:08:26.093267 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 18:08:26.128998 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:08:26.147152 systemd-resolved[247]: Positive Trust Anchors:
May 14 18:08:26.148028 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:08:26.148069 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:08:26.155227 systemd-resolved[247]: Defaulting to hostname 'linux'.
May 14 18:08:26.158313 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:08:26.161370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:08:26.258997 kernel: SCSI subsystem initialized
May 14 18:08:26.271026 kernel: Loading iSCSI transport class v2.0-870.
May 14 18:08:26.285011 kernel: iscsi: registered transport (tcp)
May 14 18:08:26.312439 kernel: iscsi: registered transport (qla4xxx)
May 14 18:08:26.312553 kernel: QLogic iSCSI HBA Driver
May 14 18:08:26.337366 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:08:26.397898 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:08:26.401646 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:08:26.466447 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 18:08:26.469700 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 18:08:26.535027 kernel: raid6: avx2x4 gen() 22882 MB/s
May 14 18:08:26.552021 kernel: raid6: avx2x2 gen() 27666 MB/s
May 14 18:08:26.569209 kernel: raid6: avx2x1 gen() 15820 MB/s
May 14 18:08:26.569314 kernel: raid6: using algorithm avx2x2 gen() 27666 MB/s
May 14 18:08:26.588176 kernel: raid6: .... xor() 15975 MB/s, rmw enabled
May 14 18:08:26.588264 kernel: raid6: using avx2x2 recovery algorithm
May 14 18:08:26.613013 kernel: xor: automatically using best checksumming function avx
May 14 18:08:26.786002 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 18:08:26.795040 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:08:26.798165 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:08:26.831428 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 14 18:08:26.838333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:08:26.843167 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 18:08:26.873453 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
May 14 18:08:26.906682 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:08:26.909461 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:08:26.983640 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:08:26.988583 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 18:08:27.062563 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 14 18:08:27.120074 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
May 14 18:08:27.121248 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 14 18:08:27.121424 kernel: scsi host0: Virtio SCSI HBA
May 14 18:08:27.121567 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 18:08:27.121586 kernel: GPT:9289727 != 125829119
May 14 18:08:27.121615 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 18:08:27.121632 kernel: GPT:9289727 != 125829119
May 14 18:08:27.121648 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 18:08:27.121665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:08:27.121677 kernel: cryptd: max_cpu_qlen set to 1000
May 14 18:08:27.121691 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 14 18:08:27.148723 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
May 14 18:08:27.148862 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 14 18:08:27.148883 kernel: AES CTR mode by8 optimization enabled
May 14 18:08:27.160996 kernel: libata version 3.00 loaded.
May 14 18:08:27.166983 kernel: ata_piix 0000:00:01.1: version 2.13
May 14 18:08:27.225899 kernel: scsi host1: ata_piix
May 14 18:08:27.226105 kernel: scsi host2: ata_piix
May 14 18:08:27.226238 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
May 14 18:08:27.226251 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
May 14 18:08:27.226269 kernel: ACPI: bus type USB registered
May 14 18:08:27.226293 kernel: usbcore: registered new interface driver usbfs
May 14 18:08:27.226307 kernel: usbcore: registered new interface driver hub
May 14 18:08:27.226325 kernel: usbcore: registered new device driver usb
May 14 18:08:27.193400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:08:27.203988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:08:27.209994 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:08:27.213990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:08:27.217303 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 18:08:27.287632 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 18:08:27.319375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:08:27.331524 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 18:08:27.341484 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:08:27.349250 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 18:08:27.350026 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 18:08:27.352391 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 18:08:27.373847 disk-uuid[606]: Primary Header is updated.
May 14 18:08:27.373847 disk-uuid[606]: Secondary Entries is updated.
May 14 18:08:27.373847 disk-uuid[606]: Secondary Header is updated.
May 14 18:08:27.380987 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:08:27.391031 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:08:27.426212 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 14 18:08:27.436298 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 14 18:08:27.436605 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 14 18:08:27.436838 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 14 18:08:27.438023 kernel: hub 1-0:1.0: USB hub found
May 14 18:08:27.438276 kernel: hub 1-0:1.0: 2 ports detected
May 14 18:08:27.574749 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 18:08:27.595078 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:08:27.596223 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:08:27.597857 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:08:27.600578 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 18:08:27.645661 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:08:28.400022 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:08:28.404422 disk-uuid[607]: The operation has completed successfully.
May 14 18:08:28.472525 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 18:08:28.472694 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 18:08:28.508821 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 18:08:28.523477 sh[631]: Success
May 14 18:08:28.550545 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 18:08:28.550654 kernel: device-mapper: uevent: version 1.0.3
May 14 18:08:28.551741 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 18:08:28.569004 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
May 14 18:08:28.642578 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 18:08:28.648124 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 18:08:28.659514 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 18:08:28.675378 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 18:08:28.675473 kernel: BTRFS: device fsid dedcf745-d4ff-44ac-b61c-5ec1bad114c7 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (643)
May 14 18:08:28.678413 kernel: BTRFS info (device dm-0): first mount of filesystem dedcf745-d4ff-44ac-b61c-5ec1bad114c7
May 14 18:08:28.680144 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 18:08:28.682238 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 18:08:28.691700 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 18:08:28.693008 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:08:28.694629 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 18:08:28.695575 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 18:08:28.700190 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 18:08:28.730012 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (671)
May 14 18:08:28.734104 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:08:28.734212 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:08:28.736514 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:08:28.748026 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:08:28.749214 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 18:08:28.751375 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 18:08:28.930896 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:08:28.936131 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:08:28.985105 systemd-networkd[820]: lo: Link UP
May 14 18:08:28.986261 systemd-networkd[820]: lo: Gained carrier
May 14 18:08:28.989566 ignition[715]: Ignition 2.21.0
May 14 18:08:28.989585 ignition[715]: Stage: fetch-offline
May 14 18:08:28.989632 ignition[715]: no configs at "/usr/lib/ignition/base.d"
May 14 18:08:28.989642 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:08:28.994173 systemd-networkd[820]: Enumeration completed
May 14 18:08:28.989737 ignition[715]: parsed url from cmdline: ""
May 14 18:08:28.994827 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 14 18:08:28.989741 ignition[715]: no config URL provided
May 14 18:08:28.994834 systemd-networkd[820]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 14 18:08:28.989746 ignition[715]: reading system config file "/usr/lib/ignition/user.ign"
May 14 18:08:28.995372 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:08:28.989753 ignition[715]: no config at "/usr/lib/ignition/user.ign"
May 14 18:08:28.997659 systemd-networkd[820]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:08:28.989760 ignition[715]: failed to fetch config: resource requires networking
May 14 18:08:28.997665 systemd-networkd[820]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:08:28.990076 ignition[715]: Ignition finished successfully
May 14 18:08:28.998668 systemd-networkd[820]: eth0: Link UP
May 14 18:08:28.998674 systemd-networkd[820]: eth0: Gained carrier
May 14 18:08:28.998694 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 14 18:08:28.998832 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:08:29.001516 systemd[1]: Reached target network.target - Network.
May 14 18:08:29.003410 systemd-networkd[820]: eth1: Link UP
May 14 18:08:29.003416 systemd-networkd[820]: eth1: Gained carrier
May 14 18:08:29.003439 systemd-networkd[820]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:08:29.004518 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 14 18:08:29.014052 systemd-networkd[820]: eth0: DHCPv4 address 164.90.152.250/20, gateway 164.90.144.1 acquired from 169.254.169.253
May 14 18:08:29.018080 systemd-networkd[820]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253
May 14 18:08:29.052731 ignition[825]: Ignition 2.21.0
May 14 18:08:29.052749 ignition[825]: Stage: fetch
May 14 18:08:29.053115 ignition[825]: no configs at "/usr/lib/ignition/base.d"
May 14 18:08:29.053132 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:08:29.053330 ignition[825]: parsed url from cmdline: ""
May 14 18:08:29.053337 ignition[825]: no config URL provided
May 14 18:08:29.053345 ignition[825]: reading system config file "/usr/lib/ignition/user.ign"
May 14 18:08:29.053360 ignition[825]: no config at "/usr/lib/ignition/user.ign"
May 14 18:08:29.053409 ignition[825]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 14 18:08:29.071483 ignition[825]: GET result: OK
May 14 18:08:29.072922 ignition[825]: parsing config with SHA512: 2905a3a11330c98a75bd0649e403cd41144d5a8d6790366c63e374538e1dcf6e2820a23056635f4aa7b4714e75c92d69a51f76aa7dfb5b30baf94db31069e3a2
May 14 18:08:29.086520 unknown[825]: fetched base config from "system"
May 14 18:08:29.088161 unknown[825]: fetched base config from "system"
May 14 18:08:29.088819 ignition[825]: fetch: fetch complete
May 14 18:08:29.088192 unknown[825]: fetched user config from "digitalocean"
May 14 18:08:29.088827 ignition[825]: fetch: fetch passed
May 14 18:08:29.088921 ignition[825]: Ignition finished successfully
May 14 18:08:29.094415 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 14 18:08:29.104230 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 18:08:29.149432 ignition[833]: Ignition 2.21.0
May 14 18:08:29.149455 ignition[833]: Stage: kargs
May 14 18:08:29.149725 ignition[833]: no configs at "/usr/lib/ignition/base.d"
May 14 18:08:29.149740 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:08:29.151606 ignition[833]: kargs: kargs passed
May 14 18:08:29.152151 ignition[833]: Ignition finished successfully
May 14 18:08:29.156028 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 18:08:29.159327 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 18:08:29.196215 ignition[839]: Ignition 2.21.0
May 14 18:08:29.196236 ignition[839]: Stage: disks
May 14 18:08:29.196474 ignition[839]: no configs at "/usr/lib/ignition/base.d"
May 14 18:08:29.196485 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:08:29.199748 ignition[839]: disks: disks passed
May 14 18:08:29.199850 ignition[839]: Ignition finished successfully
May 14 18:08:29.201777 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 18:08:29.203629 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 18:08:29.204798 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 18:08:29.206328 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:08:29.207687 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:08:29.209037 systemd[1]: Reached target basic.target - Basic System.
May 14 18:08:29.212317 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 18:08:29.242750 systemd-fsck[848]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 14 18:08:29.246188 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 18:08:29.249788 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 18:08:29.418989 kernel: EXT4-fs (vda9): mounted filesystem d6072e19-4548-4806-a012-87bb17c59f4c r/w with ordered data mode. Quota mode: none.
May 14 18:08:29.420816 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 18:08:29.422973 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 18:08:29.428919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:08:29.432933 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 18:08:29.440306 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
May 14 18:08:29.451513 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 14 18:08:29.456399 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 18:08:29.458100 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:08:29.468680 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 18:08:29.482272 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 18:08:29.487054 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (856)
May 14 18:08:29.499586 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:08:29.499701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:08:29.499719 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:08:29.547646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:08:29.575332 initrd-setup-root[887]: cut: /sysroot/etc/passwd: No such file or directory
May 14 18:08:29.587520 coreos-metadata[859]: May 14 18:08:29.587 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 14 18:08:29.589562 coreos-metadata[858]: May 14 18:08:29.589 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 14 18:08:29.594233 initrd-setup-root[894]: cut: /sysroot/etc/group: No such file or directory
May 14 18:08:29.599788 coreos-metadata[859]: May 14 18:08:29.599 INFO Fetch successful
May 14 18:08:29.602686 coreos-metadata[858]: May 14 18:08:29.600 INFO Fetch successful
May 14 18:08:29.608581 initrd-setup-root[901]: cut: /sysroot/etc/shadow: No such file or directory
May 14 18:08:29.611674 coreos-metadata[859]: May 14 18:08:29.609 INFO wrote hostname ci-4334.0.0-a-3f9ee7d7d0 to /sysroot/etc/hostname
May 14 18:08:29.612047 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 18:08:29.616757 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
May 14 18:08:29.616963 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
May 14 18:08:29.621987 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 18:08:29.753450 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 18:08:29.756602 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 18:08:29.758629 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 18:08:29.779708 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 18:08:29.781114 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:08:29.809184 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 18:08:29.819992 ignition[978]: INFO : Ignition 2.21.0
May 14 18:08:29.819992 ignition[978]: INFO : Stage: mount
May 14 18:08:29.819992 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:08:29.819992 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:08:29.822790 ignition[978]: INFO : mount: mount passed
May 14 18:08:29.822790 ignition[978]: INFO : Ignition finished successfully
May 14 18:08:29.823124 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 18:08:29.825927 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 18:08:29.846287 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:08:29.881381 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (989)
May 14 18:08:29.881451 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:08:29.883027 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:08:29.884296 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:08:29.891431 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:08:29.938801 ignition[1005]: INFO : Ignition 2.21.0
May 14 18:08:29.938801 ignition[1005]: INFO : Stage: files
May 14 18:08:29.942431 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:08:29.942431 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:08:29.942431 ignition[1005]: DEBUG : files: compiled without relabeling support, skipping
May 14 18:08:29.945231 ignition[1005]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 18:08:29.945231 ignition[1005]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 18:08:29.950156 ignition[1005]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 18:08:29.951261 ignition[1005]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 18:08:29.951261 ignition[1005]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 18:08:29.951065 unknown[1005]: wrote ssh authorized keys file for user: core
May 14 18:08:29.954295 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 18:08:29.954295 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 14 18:08:30.018582 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 18:08:30.086227 systemd-networkd[820]: eth0: Gained IPv6LL
May 14 18:08:30.150344 systemd-networkd[820]: eth1: Gained IPv6LL
May 14 18:08:30.517693 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 18:08:30.517693 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 18:08:30.517693 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 14 18:08:30.967106 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 18:08:31.032766 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 18:08:31.034983 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 18:08:31.034983 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 18:08:31.034983 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:08:31.038521 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:08:31.038521 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:08:31.038521 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:08:31.038521 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:08:31.038521 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:08:31.038521 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:08:31.044228 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:08:31.044228 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:08:31.044228 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:08:31.044228 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:08:31.044228 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 14 18:08:31.458661 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 18:08:31.780126 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:08:31.780126 ignition[1005]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 18:08:31.783244 ignition[1005]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:08:31.786563 ignition[1005]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:08:31.786563 ignition[1005]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 18:08:31.786563 ignition[1005]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 14 18:08:31.786563 ignition[1005]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 14 18:08:31.786563 ignition[1005]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:08:31.786563 ignition[1005]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:08:31.786563 ignition[1005]: INFO : files: files passed
May 14 18:08:31.786563 ignition[1005]: INFO : Ignition finished successfully
May 14 18:08:31.787638 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 18:08:31.789763 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 18:08:31.796651 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 18:08:31.808214 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 18:08:31.813318 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 18:08:31.826254 initrd-setup-root-after-ignition[1036]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:08:31.827721 initrd-setup-root-after-ignition[1036]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:08:31.830215 initrd-setup-root-after-ignition[1040]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:08:31.832246 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:08:31.834481 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 18:08:31.837302 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 18:08:31.911776 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 18:08:31.912671 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 18:08:31.914773 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 18:08:31.916158 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 18:08:31.917061 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 18:08:31.918271 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 18:08:31.942531 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:08:31.945402 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 18:08:31.968504 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 18:08:31.970483 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:08:31.971553 systemd[1]: Stopped target timers.target - Timer Units.
May 14 18:08:31.973149 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 18:08:31.973440 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:08:31.974944 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 18:08:31.975999 systemd[1]: Stopped target basic.target - Basic System.
May 14 18:08:31.977347 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 18:08:31.978496 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:08:31.979736 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 18:08:31.981140 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:08:31.982493 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 18:08:31.984156 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:08:31.985581 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 18:08:31.987016 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 18:08:31.988417 systemd[1]: Stopped target swap.target - Swaps.
May 14 18:08:31.989732 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 18:08:31.990009 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:08:31.991535 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 18:08:31.993186 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:08:31.994415 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 18:08:31.994712 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:08:31.995972 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 18:08:31.996229 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 18:08:31.997653 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 18:08:31.997906 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:08:31.999538 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 18:08:31.999765 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 18:08:32.000905 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 14 18:08:32.001119 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 18:08:32.005085 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 18:08:32.006361 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 18:08:32.006638 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:08:32.016359 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 18:08:32.017223 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 18:08:32.017508 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:08:32.018532 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 18:08:32.018715 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:08:32.034900 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 18:08:32.037145 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 18:08:32.050773 ignition[1060]: INFO : Ignition 2.21.0
May 14 18:08:32.050773 ignition[1060]: INFO : Stage: umount
May 14 18:08:32.055106 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:08:32.055106 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:08:32.055106 ignition[1060]: INFO : umount: umount passed
May 14 18:08:32.055106 ignition[1060]: INFO : Ignition finished successfully
May 14 18:08:32.055537 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 18:08:32.055771 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 18:08:32.058692 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 18:08:32.058908 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 18:08:32.062428 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 18:08:32.062545 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 18:08:32.063307 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 14 18:08:32.063417 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 14 18:08:32.064243 systemd[1]: Stopped target network.target - Network.
May 14 18:08:32.065457 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 18:08:32.065563 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:08:32.069895 systemd[1]: Stopped target paths.target - Path Units.
May 14 18:08:32.070575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 18:08:32.074110 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:08:32.075231 systemd[1]: Stopped target slices.target - Slice Units.
May 14 18:08:32.076999 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 18:08:32.078352 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 18:08:32.078419 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:08:32.079735 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 18:08:32.079865 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:08:32.081143 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 18:08:32.081252 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 18:08:32.082585 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 18:08:32.082652 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 18:08:32.084251 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 18:08:32.085267 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 18:08:32.087595 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 18:08:32.088945 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 18:08:32.089224 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 18:08:32.092635 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 18:08:32.092821 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 18:08:32.098918 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 18:08:32.101906 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 18:08:32.102106 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 18:08:32.105814 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 18:08:32.106809 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 14 18:08:32.108577 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 18:08:32.108642 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:08:32.109944 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 18:08:32.110137 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 18:08:32.113150 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 18:08:32.114215 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 18:08:32.114297 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:08:32.116399 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 18:08:32.116479 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 18:08:32.117853 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 18:08:32.118898 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 18:08:32.120177 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 18:08:32.120233 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:08:32.122060 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:08:32.125900 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 18:08:32.129191 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 18:08:32.144519 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 18:08:32.145262 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:08:32.147540 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 18:08:32.147681 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 18:08:32.150520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 18:08:32.150653 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 18:08:32.151564 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 18:08:32.151623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:08:32.152907 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 18:08:32.153065 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:08:32.154965 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 18:08:32.155031 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 18:08:32.156341 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 18:08:32.156428 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:08:32.158920 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 18:08:32.161311 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 18:08:32.161418 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:08:32.167091 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 18:08:32.167192 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:08:32.169383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:08:32.170194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:08:32.174005 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 14 18:08:32.174127 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 18:08:32.174208 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 18:08:32.187036 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 18:08:32.187240 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 18:08:32.189598 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 18:08:32.191759 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 18:08:32.224246 systemd[1]: Switching root.
May 14 18:08:32.305530 systemd-journald[212]: Journal stopped
May 14 18:08:33.755344 systemd-journald[212]: Received SIGTERM from PID 1 (systemd).
May 14 18:08:33.755447 kernel: SELinux: policy capability network_peer_controls=1
May 14 18:08:33.755468 kernel: SELinux: policy capability open_perms=1
May 14 18:08:33.755485 kernel: SELinux: policy capability extended_socket_class=1
May 14 18:08:33.755518 kernel: SELinux: policy capability always_check_network=0
May 14 18:08:33.755554 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 18:08:33.755575 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 18:08:33.755601 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 18:08:33.755620 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 18:08:33.755640 kernel: SELinux: policy capability userspace_initial_context=0
May 14 18:08:33.755668 kernel: audit: type=1403 audit(1747246112.512:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 18:08:33.755694 systemd[1]: Successfully loaded SELinux policy in 58.898ms.
May 14 18:08:33.755733 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.621ms.
May 14 18:08:33.755773 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:08:33.755798 systemd[1]: Detected virtualization kvm.
May 14 18:08:33.755834 systemd[1]: Detected architecture x86-64.
May 14 18:08:33.755853 systemd[1]: Detected first boot.
May 14 18:08:33.755877 systemd[1]: Hostname set to .
May 14 18:08:33.755900 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:08:33.755920 zram_generator::config[1104]: No configuration found.
May 14 18:08:33.756003 kernel: Guest personality initialized and is inactive
May 14 18:08:33.756026 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 14 18:08:33.756038 kernel: Initialized host personality
May 14 18:08:33.756048 kernel: NET: Registered PF_VSOCK protocol family
May 14 18:08:33.756060 systemd[1]: Populated /etc with preset unit settings.
May 14 18:08:33.756079 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 18:08:33.756092 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 18:08:33.756104 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 18:08:33.756117 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 18:08:33.756130 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 18:08:33.756149 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 18:08:33.756174 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 18:08:33.756187 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 18:08:33.756199 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 18:08:33.756211 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 18:08:33.756222 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 18:08:33.756234 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 18:08:33.756246 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:08:33.756265 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:08:33.756276 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 18:08:33.756288 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 18:08:33.756301 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 18:08:33.756313 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:08:33.756324 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 14 18:08:33.756343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:08:33.756354 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:08:33.756366 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 18:08:33.756377 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 18:08:33.756389 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 18:08:33.756401 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 18:08:33.756413 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:08:33.756426 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:08:33.756446 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:08:33.756460 systemd[1]: Reached target swap.target - Swaps.
May 14 18:08:33.756483 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 18:08:33.756495 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 18:08:33.756507 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 18:08:33.756519 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:08:33.756531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:08:33.756542 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:08:33.756553 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 18:08:33.756565 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 18:08:33.756577 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 18:08:33.756594 systemd[1]: Mounting media.mount - External Media Directory...
May 14 18:08:33.756606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:08:33.756633 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 18:08:33.756645 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 18:08:33.756656 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 18:08:33.756669 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 18:08:33.756680 systemd[1]: Reached target machines.target - Containers.
May 14 18:08:33.756697 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 18:08:33.756716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:08:33.756728 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:08:33.756747 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 18:08:33.756759 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:08:33.756770 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:08:33.756781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:08:33.756793 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 18:08:33.756805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:08:33.756817 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 18:08:33.756834 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 18:08:33.756846 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 18:08:33.756858 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 18:08:33.756869 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 18:08:33.756881 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:08:33.756893 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:08:33.756911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:08:33.756922 kernel: fuse: init (API version 7.41)
May 14 18:08:33.756933 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:08:33.756945 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 18:08:33.762244 kernel: loop: module loaded
May 14 18:08:33.762269 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 18:08:33.762283 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:08:33.762312 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 18:08:33.762326 systemd[1]: Stopped verity-setup.service.
May 14 18:08:33.762338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:08:33.762350 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 18:08:33.762362 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 18:08:33.762379 systemd[1]: Mounted media.mount - External Media Directory.
May 14 18:08:33.762399 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 18:08:33.762411 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 18:08:33.762424 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 18:08:33.762441 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:08:33.762454 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 18:08:33.762466 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 18:08:33.762479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:08:33.762491 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:08:33.762509 kernel: ACPI: bus type drm_connector registered
May 14 18:08:33.762522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:08:33.762533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:08:33.762546 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:08:33.762557 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:08:33.762569 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 18:08:33.762580 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 18:08:33.762592 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:08:33.762603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:08:33.762621 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:08:33.762633 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 18:08:33.762644 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 18:08:33.762662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:08:33.762674 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:08:33.762686 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 18:08:33.762698 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 18:08:33.762709 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 18:08:33.762721 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 18:08:33.762739 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:08:33.762751 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 18:08:33.762762 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 18:08:33.762774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:08:33.762787 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 18:08:33.762799 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:08:33.762874 systemd-journald[1174]: Collecting audit messages is disabled.
May 14 18:08:33.762908 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 18:08:33.762921 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 18:08:33.762937 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:08:33.762964 systemd-journald[1174]: Journal started
May 14 18:08:33.764712 systemd-journald[1174]: Runtime Journal (/run/log/journal/65b6c17f684043889ef8eb3c134b5391) is 4.9M, max 39.5M, 34.6M free.
May 14 18:08:33.764794 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:08:33.268713 systemd[1]: Queued start job for default target multi-user.target.
May 14 18:08:33.767938 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:08:33.290758 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 18:08:33.291453 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 18:08:33.790505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:08:33.805838 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 18:08:33.819542 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 18:08:33.826253 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 18:08:33.827467 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 18:08:33.831279 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 18:08:33.836894 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 18:08:33.840633 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 18:08:33.856191 systemd-journald[1174]: Time spent on flushing to /var/log/journal/65b6c17f684043889ef8eb3c134b5391 is 69.969ms for 1017 entries.
May 14 18:08:33.856191 systemd-journald[1174]: System Journal (/var/log/journal/65b6c17f684043889ef8eb3c134b5391) is 8M, max 195.6M, 187.6M free.
May 14 18:08:33.946692 systemd-journald[1174]: Received client request to flush runtime journal.
May 14 18:08:33.946765 kernel: loop0: detected capacity change from 0 to 205544
May 14 18:08:33.946783 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 18:08:33.946796 kernel: loop1: detected capacity change from 0 to 146240
May 14 18:08:33.854723 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:08:33.914225 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 18:08:33.948359 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 18:08:33.954280 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:08:33.956428 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 18:08:33.989231 kernel: loop2: detected capacity change from 0 to 113872
May 14 18:08:34.021775 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
May 14 18:08:34.021796 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
May 14 18:08:34.039222 kernel: loop3: detected capacity change from 0 to 8
May 14 18:08:34.046705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:08:34.050560 kernel: loop4: detected capacity change from 0 to 205544
May 14 18:08:34.072259 kernel: loop5: detected capacity change from 0 to 146240
May 14 18:08:34.102991 kernel: loop6: detected capacity change from 0 to 113872
May 14 18:08:34.149041 kernel: loop7: detected capacity change from 0 to 8
May 14 18:08:34.153772 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
May 14 18:08:34.155287 (sd-merge)[1252]: Merged extensions into '/usr'.
May 14 18:08:34.173232 systemd[1]: Reload requested from client PID 1208 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 18:08:34.173260 systemd[1]: Reloading...
May 14 18:08:34.307980 zram_generator::config[1274]: No configuration found.
May 14 18:08:34.474845 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:08:34.607189 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 18:08:34.607373 systemd[1]: Reloading finished in 433 ms.
May 14 18:08:34.629056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 18:08:34.632945 ldconfig[1203]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 18:08:34.641214 systemd[1]: Starting ensure-sysext.service...
May 14 18:08:34.647769 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:08:34.661916 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 18:08:34.680015 systemd[1]: Reload requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)...
May 14 18:08:34.680043 systemd[1]: Reloading...
May 14 18:08:34.709827 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 18:08:34.709882 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 18:08:34.712331 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 18:08:34.712653 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 18:08:34.714472 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 18:08:34.714799 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
May 14 18:08:34.714864 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
May 14 18:08:34.721887 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:08:34.721909 systemd-tmpfiles[1321]: Skipping /boot
May 14 18:08:34.771390 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:08:34.771408 systemd-tmpfiles[1321]: Skipping /boot
May 14 18:08:34.773982 zram_generator::config[1345]: No configuration found.
May 14 18:08:34.938861 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:08:35.074348 systemd[1]: Reloading finished in 393 ms.
May 14 18:08:35.095921 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 18:08:35.103052 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:08:35.112248 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:08:35.119183 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 18:08:35.128625 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 18:08:35.133797 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:08:35.137252 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:08:35.142250 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 18:08:35.150813 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:08:35.151515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:08:35.156666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:08:35.165871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:08:35.174379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:08:35.177526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:08:35.177739 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:08:35.177845 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:08:35.186130 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 18:08:35.188254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:08:35.192069 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:08:35.199308 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:08:35.199539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:08:35.212319 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:08:35.214200 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:08:35.214447 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:08:35.214608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:08:35.225098 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 18:08:35.233376 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:08:35.233645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:08:35.237284 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:08:35.238136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:08:35.238185 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:08:35.238280 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:08:35.238305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:08:35.239877 systemd[1]: Finished ensure-sysext.service.
May 14 18:08:35.254314 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 18:08:35.265387 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 18:08:35.266899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:08:35.267996 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:08:35.271633 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:08:35.294325 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 18:08:35.300738 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 18:08:35.304144 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:08:35.304347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:08:35.306711 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:08:35.306936 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:08:35.312632 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:08:35.313014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:08:35.314088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:08:35.320467 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 18:08:35.332308 systemd-udevd[1398]: Using default interface naming scheme 'v255'.
May 14 18:08:35.359122 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 18:08:35.382329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:08:35.389601 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:08:35.399514 augenrules[1446]: No rules
May 14 18:08:35.401916 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:08:35.402194 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:08:35.575293 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 18:08:35.576559 systemd[1]: Reached target time-set.target - System Time Set.
May 14 18:08:35.604411 systemd-resolved[1397]: Positive Trust Anchors:
May 14 18:08:35.608133 systemd-resolved[1397]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:08:35.608179 systemd-resolved[1397]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:08:35.623981 systemd-resolved[1397]: Using system hostname 'ci-4334.0.0-a-3f9ee7d7d0'.
May 14 18:08:35.631993 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:08:35.634160 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:08:35.636060 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:08:35.636707 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 18:08:35.637941 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 18:08:35.640061 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 14 18:08:35.641171 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 18:08:35.642979 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 18:08:35.644689 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 18:08:35.645965 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 18:08:35.646008 systemd[1]: Reached target paths.target - Path Units.
May 14 18:08:35.647010 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:08:35.650374 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 18:08:35.656546 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 18:08:35.664848 systemd-networkd[1447]: lo: Link UP
May 14 18:08:35.666554 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 18:08:35.668620 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 18:08:35.669429 systemd-networkd[1447]: lo: Gained carrier
May 14 18:08:35.670056 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 18:08:35.675406 systemd-networkd[1447]: Enumeration completed
May 14 18:08:35.680550 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 18:08:35.683426 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 18:08:35.688177 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:08:35.689603 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 18:08:35.694500 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
May 14 18:08:35.698289 systemd[1]: Reached target network.target - Network.
May 14 18:08:35.698974 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:08:35.699628 systemd[1]: Reached target basic.target - Basic System.
May 14 18:08:35.704103 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
May 14 18:08:35.706065 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 18:08:35.706102 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 18:08:35.709176 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 18:08:35.714715 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 14 18:08:35.724303 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 18:08:35.730302 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 18:08:35.735997 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 18:08:35.742350 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 18:08:35.743130 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 18:08:35.749479 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 14 18:08:35.758423 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 18:08:35.764898 kernel: ISO 9660 Extensions: RRIP_1991A
May 14 18:08:35.775045 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 18:08:35.778346 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 18:08:35.782823 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 18:08:35.790271 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 18:08:35.804647 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 18:08:35.812762 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 18:08:35.814919 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 18:08:35.816627 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 18:08:35.817658 systemd[1]: Starting update-engine.service - Update Engine...
May 14 18:08:35.822398 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 18:08:35.828257 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
May 14 18:08:35.832456 jq[1490]: false
May 14 18:08:35.833670 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 18:08:35.836526 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 18:08:35.836737 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 18:08:35.837097 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 18:08:35.837284 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 18:08:35.846832 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 18:08:35.860553 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Refreshing passwd entry cache
May 14 18:08:35.856292 oslogin_cache_refresh[1495]: Refreshing passwd entry cache
May 14 18:08:35.875222 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Failure getting users, quitting
May 14 18:08:35.875222 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:08:35.875206 oslogin_cache_refresh[1495]: Failure getting users, quitting
May 14 18:08:35.875404 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Refreshing group entry cache
May 14 18:08:35.875227 oslogin_cache_refresh[1495]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:08:35.875281 oslogin_cache_refresh[1495]: Refreshing group entry cache
May 14 18:08:35.876523 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Failure getting groups, quitting
May 14 18:08:35.876523 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:08:35.875917 oslogin_cache_refresh[1495]: Failure getting groups, quitting
May 14 18:08:35.875929 oslogin_cache_refresh[1495]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:08:35.891297 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 14 18:08:35.891552 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 14 18:08:35.907159 jq[1506]: true
May 14 18:08:35.907702 coreos-metadata[1487]: May 14 18:08:35.907 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 14 18:08:35.910041 coreos-metadata[1487]: May 14 18:08:35.909 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
May 14 18:08:35.920393 (ntainerd)[1525]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 18:08:35.968785 extend-filesystems[1494]: Found loop4
May 14 18:08:35.974222 extend-filesystems[1494]: Found loop5
May 14 18:08:35.974222 extend-filesystems[1494]: Found loop6
May 14 18:08:35.974222 extend-filesystems[1494]: Found loop7
May 14 18:08:35.974222 extend-filesystems[1494]: Found vda
May 14 18:08:35.974222 extend-filesystems[1494]: Found vda1
May 14 18:08:35.974222 extend-filesystems[1494]: Found vda2
May 14 18:08:35.974222 extend-filesystems[1494]: Found vda3
May 14 18:08:35.974222 extend-filesystems[1494]: Found usr
May 14 18:08:35.974222 extend-filesystems[1494]: Found vda4
May 14 18:08:35.974222 extend-filesystems[1494]: Found vda6
May 14 18:08:35.974222 extend-filesystems[1494]: Found vda7
May 14 18:08:35.974222 extend-filesystems[1494]: Found vda9
May 14 18:08:35.974222 extend-filesystems[1494]: Checking size of /dev/vda9
May 14 18:08:36.090113 kernel: mousedev: PS/2 mouse device common for all mice
May 14 18:08:36.090151 tar[1516]: linux-amd64/helm
May 14 18:08:36.091611 extend-filesystems[1494]: Resized partition /dev/vda9
May 14 18:08:35.998880 dbus-daemon[1488]: [system] SELinux support is enabled
May 14 18:08:35.990985 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 18:08:36.102155 update_engine[1505]: I20250514 18:08:35.994166 1505 main.cc:92] Flatcar Update Engine starting
May 14 18:08:36.102155 update_engine[1505]: I20250514 18:08:36.053519 1505 update_check_scheduler.cc:74] Next update check in 5m30s
May 14 18:08:36.102403 jq[1526]: true
May 14 18:08:36.102585 extend-filesystems[1543]: resize2fs 1.47.2 (1-Jan-2025)
May 14 18:08:36.127744 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
May 14 18:08:36.004103 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 18:08:36.027062 systemd[1]: motdgen.service: Deactivated successfully.
May 14 18:08:36.027362 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 18:08:36.029484 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 18:08:36.029551 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 18:08:36.030348 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 18:08:36.030467 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
May 14 18:08:36.030498 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 18:08:36.053839 systemd[1]: Started update-engine.service - Update Engine.
May 14 18:08:36.109257 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 18:08:36.117723 systemd-networkd[1447]: eth0: Configuring with /run/systemd/network/10-7e:7e:ff:49:dd:10.network.
May 14 18:08:36.127224 systemd-networkd[1447]: eth0: Link UP
May 14 18:08:36.134254 systemd-networkd[1447]: eth0: Gained carrier
May 14 18:08:36.152854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:08:36.158107 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
May 14 18:08:36.173389 systemd-networkd[1447]: eth1: Configuring with /run/systemd/network/10-ba:d6:f2:ea:83:3a.network.
May 14 18:08:36.177565 systemd-networkd[1447]: eth1: Link UP
May 14 18:08:36.180324 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
May 14 18:08:36.182194 systemd-networkd[1447]: eth1: Gained carrier
May 14 18:08:36.194046 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
May 14 18:08:36.213470 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 18:08:36.286598 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 14 18:08:36.286700 bash[1562]: Updated "/home/core/.ssh/authorized_keys"
May 14 18:08:36.295942 kernel: ACPI: button: Power Button [PWRF]
May 14 18:08:36.287048 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 18:08:36.300259 systemd[1]: Starting sshkeys.service...
May 14 18:08:36.308296 systemd-logind[1501]: New seat seat0.
May 14 18:08:36.356002 kernel: EXT4-fs (vda9): resized filesystem to 15121403
May 14 18:08:36.406677 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 18:08:36.418168 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 18:08:36.418168 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 8
May 14 18:08:36.418168 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
May 14 18:08:36.430732 extend-filesystems[1494]: Resized filesystem in /dev/vda9
May 14 18:08:36.430732 extend-filesystems[1494]: Found vdb
May 14 18:08:36.421111 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 18:08:36.421481 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 18:08:36.451153 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 18:08:36.466785 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 14 18:08:36.473102 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 14 18:08:36.546986 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 14 18:08:36.553316 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 14 18:08:36.554493 kernel: Console: switching to colour dummy device 80x25
May 14 18:08:36.556143 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 14 18:08:36.556218 kernel: [drm] features: -context_init
May 14 18:08:36.556451 locksmithd[1540]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 18:08:36.563986 kernel: [drm] number of scanouts: 1
May 14 18:08:36.564124 kernel: [drm] number of cap sets: 0
May 14 18:08:36.566986 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
May 14 18:08:36.598979 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 14 18:08:36.603182 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 18:08:36.618557 coreos-metadata[1573]: May 14 18:08:36.618 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 14 18:08:36.634296 coreos-metadata[1573]: May 14 18:08:36.633 INFO Fetch successful
May 14 18:08:36.641622 unknown[1573]: wrote ssh authorized keys file for user: core
May 14 18:08:36.698355 containerd[1525]: time="2025-05-14T18:08:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 18:08:36.703897 containerd[1525]: time="2025-05-14T18:08:36.703673761Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 14 18:08:36.711006 update-ssh-keys[1586]: Updated "/home/core/.ssh/authorized_keys"
May 14 18:08:36.712772 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 14 18:08:36.716916 systemd[1]: Finished sshkeys.service.
May 14 18:08:36.753343 containerd[1525]: time="2025-05-14T18:08:36.753291126Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.153µs"
May 14 18:08:36.754125 containerd[1525]: time="2025-05-14T18:08:36.754086992Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 18:08:36.754255 containerd[1525]: time="2025-05-14T18:08:36.754239409Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 18:08:36.754476 containerd[1525]: time="2025-05-14T18:08:36.754458232Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 18:08:36.756001 containerd[1525]: time="2025-05-14T18:08:36.754986320Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 18:08:36.756001 containerd[1525]: time="2025-05-14T18:08:36.755045207Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 18:08:36.756001 containerd[1525]: time="2025-05-14T18:08:36.755152205Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 18:08:36.756001 containerd[1525]: time="2025-05-14T18:08:36.755169790Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 18:08:36.756001 containerd[1525]: time="2025-05-14T18:08:36.755419734Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 18:08:36.756001 containerd[1525]: time="2025-05-14T18:08:36.755442562Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 18:08:36.756001 containerd[1525]: time="2025-05-14T18:08:36.755458749Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 18:08:36.756001 containerd[1525]: time="2025-05-14T18:08:36.755472349Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 18:08:36.756001 containerd[1525]: time="2025-05-14T18:08:36.755594366Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 18:08:36.756001 containerd[1525]: time="2025-05-14T18:08:36.755907959Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 18:08:36.758068 containerd[1525]: time="2025-05-14T18:08:36.758021161Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 18:08:36.758211 containerd[1525]: time="2025-05-14T18:08:36.758190175Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 18:08:36.758332 containerd[1525]: time="2025-05-14T18:08:36.758311034Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 14 18:08:36.760988 containerd[1525]: time="2025-05-14T18:08:36.758978676Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 14 18:08:36.761474 containerd[1525]: time="2025-05-14T18:08:36.761230471Z" level=info msg="metadata content store policy set" policy=shared
May 14 18:08:36.766042 containerd[1525]: time="2025-05-14T18:08:36.765993982Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768106270Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768204843Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768225254Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768254145Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768269296Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768290995Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768309141Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768340358Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768357573Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768371773Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768390186Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768590441Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768618648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 14 18:08:36.769011 containerd[1525]: time="2025-05-14T18:08:36.768647603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 14 18:08:36.769521 containerd[1525]: time="2025-05-14T18:08:36.768663253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 14 18:08:36.769521 containerd[1525]: time="2025-05-14T18:08:36.768678447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 14 18:08:36.769521 containerd[1525]: time="2025-05-14T18:08:36.768694298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 14 18:08:36.769521 containerd[1525]: time="2025-05-14T18:08:36.768711388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 14 18:08:36.769521 containerd[1525]: time="2025-05-14T18:08:36.768725659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 14 18:08:36.769521 containerd[1525]: time="2025-05-14T18:08:36.768741247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 14 18:08:36.769521 containerd[1525]: time="2025-05-14T18:08:36.768770496Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 14 18:08:36.769521 containerd[1525]: time="2025-05-14T18:08:36.768795427Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 14 18:08:36.769521 containerd[1525]: time="2025-05-14T18:08:36.768886056Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 14 18:08:36.769521 containerd[1525]: time="2025-05-14T18:08:36.768906527Z" level=info msg="Start snapshots syncer"
May 14 18:08:36.770121 containerd[1525]: time="2025-05-14T18:08:36.770071648Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 14 18:08:36.771257 containerd[1525]: time="2025-05-14T18:08:36.771179457Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 18:08:36.773033 containerd[1525]: time="2025-05-14T18:08:36.771523571Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 18:08:36.773033 containerd[1525]: time="2025-05-14T18:08:36.772840146Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:08:36.773353 containerd[1525]: time="2025-05-14T18:08:36.773324659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:08:36.773515 containerd[1525]: time="2025-05-14T18:08:36.773492870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:08:36.773614 containerd[1525]: time="2025-05-14T18:08:36.773595460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:08:36.773702 containerd[1525]: time="2025-05-14T18:08:36.773687803Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:08:36.773801 containerd[1525]: time="2025-05-14T18:08:36.773786692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:08:36.773891 containerd[1525]: time="2025-05-14T18:08:36.773877588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:08:36.774028 containerd[1525]: time="2025-05-14T18:08:36.774002018Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:08:36.774131 containerd[1525]: time="2025-05-14T18:08:36.774115495Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 18:08:36.774409 containerd[1525]: time="2025-05-14T18:08:36.774386621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:08:36.774496 containerd[1525]: time="2025-05-14T18:08:36.774477569Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.775675101Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.775719959Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.775735560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.775765364Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.775780761Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.775796235Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.775813864Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.775855429Z" level=info msg="runtime interface created" May 14 18:08:36.776069 containerd[1525]: 
time="2025-05-14T18:08:36.775864700Z" level=info msg="created NRI interface" May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.775877191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.775903572Z" level=info msg="Connect containerd service" May 14 18:08:36.776069 containerd[1525]: time="2025-05-14T18:08:36.776007310Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:08:36.790842 containerd[1525]: time="2025-05-14T18:08:36.790646972Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:08:36.914080 coreos-metadata[1487]: May 14 18:08:36.913 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 May 14 18:08:36.926794 coreos-metadata[1487]: May 14 18:08:36.924 INFO Fetch successful May 14 18:08:37.020600 containerd[1525]: time="2025-05-14T18:08:37.020536126Z" level=info msg="Start subscribing containerd event" May 14 18:08:37.020806 containerd[1525]: time="2025-05-14T18:08:37.020786644Z" level=info msg="Start recovering state" May 14 18:08:37.021090 containerd[1525]: time="2025-05-14T18:08:37.021068185Z" level=info msg="Start event monitor" May 14 18:08:37.021164 containerd[1525]: time="2025-05-14T18:08:37.021154929Z" level=info msg="Start cni network conf syncer for default" May 14 18:08:37.021201 containerd[1525]: time="2025-05-14T18:08:37.021193687Z" level=info msg="Start streaming server" May 14 18:08:37.021238 containerd[1525]: time="2025-05-14T18:08:37.021231002Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:08:37.021274 containerd[1525]: time="2025-05-14T18:08:37.021266892Z" level=info msg="runtime interface starting up..." 
May 14 18:08:37.021307 containerd[1525]: time="2025-05-14T18:08:37.021300611Z" level=info msg="starting plugins..." May 14 18:08:37.021350 containerd[1525]: time="2025-05-14T18:08:37.021341743Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:08:37.021439 containerd[1525]: time="2025-05-14T18:08:37.021402487Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:08:37.021501 containerd[1525]: time="2025-05-14T18:08:37.021485713Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 18:08:37.021620 containerd[1525]: time="2025-05-14T18:08:37.021605879Z" level=info msg="containerd successfully booted in 0.324070s" May 14 18:08:37.022176 systemd[1]: Started containerd.service - containerd container runtime. May 14 18:08:37.058998 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 18:08:37.061479 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:08:37.180214 kernel: EDAC MC: Ver: 3.0.0 May 14 18:08:37.229057 systemd-logind[1501]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 18:08:37.288853 systemd-logind[1501]: Watching system buttons on /dev/input/event2 (Power Button) May 14 18:08:37.290088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:08:37.388502 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 18:08:37.457454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:08:37.461891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:08:37.462145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:08:37.462877 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 14 18:08:37.466387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:08:37.469485 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 18:08:37.542604 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 18:08:37.546346 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 18:08:37.576137 systemd[1]: issuegen.service: Deactivated successfully. May 14 18:08:37.576417 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 18:08:37.580072 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 18:08:37.605145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:08:37.617731 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 18:08:37.621388 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 18:08:37.624454 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 18:08:37.624759 systemd[1]: Reached target getty.target - Login Prompts. May 14 18:08:37.727382 tar[1516]: linux-amd64/LICENSE May 14 18:08:37.727867 tar[1516]: linux-amd64/README.md May 14 18:08:37.748654 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:08:38.086268 systemd-networkd[1447]: eth0: Gained IPv6LL May 14 18:08:38.087132 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 14 18:08:38.087325 systemd-networkd[1447]: eth1: Gained IPv6LL May 14 18:08:38.088206 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 14 18:08:38.090424 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:08:38.092653 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:08:38.096409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 18:08:38.101381 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 18:08:38.140300 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 18:08:39.215827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:08:39.217114 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:08:39.218076 systemd[1]: Startup finished in 4.224s (kernel) + 6.853s (initrd) + 6.762s (userspace) = 17.841s. May 14 18:08:39.225009 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:08:39.683413 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 18:08:39.686735 systemd[1]: Started sshd@0-164.90.152.250:22-139.178.89.65:56556.service - OpenSSH per-connection server daemon (139.178.89.65:56556). May 14 18:08:39.799635 sshd[1678]: Accepted publickey for core from 139.178.89.65 port 56556 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:39.804883 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:39.818686 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 18:08:39.820347 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 18:08:39.836507 systemd-logind[1501]: New session 1 of user core. May 14 18:08:39.875065 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 18:08:39.881696 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 18:08:39.902667 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 18:08:39.908053 systemd-logind[1501]: New session c1 of user core. 
May 14 18:08:40.033863 kubelet[1668]: E0514 18:08:40.033664 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:08:40.037759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:08:40.037931 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:08:40.038600 systemd[1]: kubelet.service: Consumed 1.487s CPU time, 236.4M memory peak. May 14 18:08:40.092011 systemd[1683]: Queued start job for default target default.target. May 14 18:08:40.111628 systemd[1683]: Created slice app.slice - User Application Slice. May 14 18:08:40.111914 systemd[1683]: Reached target paths.target - Paths. May 14 18:08:40.112090 systemd[1683]: Reached target timers.target - Timers. May 14 18:08:40.113858 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 18:08:40.129621 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 18:08:40.129710 systemd[1683]: Reached target sockets.target - Sockets. May 14 18:08:40.129772 systemd[1683]: Reached target basic.target - Basic System. May 14 18:08:40.129806 systemd[1683]: Reached target default.target - Main User Target. May 14 18:08:40.129841 systemd[1683]: Startup finished in 209ms. May 14 18:08:40.130038 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 18:08:40.136314 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 18:08:40.216607 systemd[1]: Started sshd@1-164.90.152.250:22-139.178.89.65:56558.service - OpenSSH per-connection server daemon (139.178.89.65:56558). 
May 14 18:08:40.285304 sshd[1695]: Accepted publickey for core from 139.178.89.65 port 56558 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:40.288224 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:40.297684 systemd-logind[1501]: New session 2 of user core. May 14 18:08:40.304362 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 18:08:40.373536 sshd[1697]: Connection closed by 139.178.89.65 port 56558 May 14 18:08:40.374427 sshd-session[1695]: pam_unix(sshd:session): session closed for user core May 14 18:08:40.390191 systemd[1]: sshd@1-164.90.152.250:22-139.178.89.65:56558.service: Deactivated successfully. May 14 18:08:40.393852 systemd[1]: session-2.scope: Deactivated successfully. May 14 18:08:40.395647 systemd-logind[1501]: Session 2 logged out. Waiting for processes to exit. May 14 18:08:40.400139 systemd[1]: Started sshd@2-164.90.152.250:22-139.178.89.65:56564.service - OpenSSH per-connection server daemon (139.178.89.65:56564). May 14 18:08:40.401815 systemd-logind[1501]: Removed session 2. May 14 18:08:40.469815 sshd[1703]: Accepted publickey for core from 139.178.89.65 port 56564 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:40.472289 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:40.480335 systemd-logind[1501]: New session 3 of user core. May 14 18:08:40.495373 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 18:08:40.555118 sshd[1705]: Connection closed by 139.178.89.65 port 56564 May 14 18:08:40.555793 sshd-session[1703]: pam_unix(sshd:session): session closed for user core May 14 18:08:40.567792 systemd[1]: sshd@2-164.90.152.250:22-139.178.89.65:56564.service: Deactivated successfully. May 14 18:08:40.570817 systemd[1]: session-3.scope: Deactivated successfully. May 14 18:08:40.572219 systemd-logind[1501]: Session 3 logged out. 
Waiting for processes to exit. May 14 18:08:40.578391 systemd[1]: Started sshd@3-164.90.152.250:22-139.178.89.65:56568.service - OpenSSH per-connection server daemon (139.178.89.65:56568). May 14 18:08:40.579862 systemd-logind[1501]: Removed session 3. May 14 18:08:40.646012 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 56568 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:40.648148 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:40.655652 systemd-logind[1501]: New session 4 of user core. May 14 18:08:40.667343 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 18:08:40.732278 sshd[1713]: Connection closed by 139.178.89.65 port 56568 May 14 18:08:40.733171 sshd-session[1711]: pam_unix(sshd:session): session closed for user core May 14 18:08:40.757863 systemd[1]: sshd@3-164.90.152.250:22-139.178.89.65:56568.service: Deactivated successfully. May 14 18:08:40.760671 systemd[1]: session-4.scope: Deactivated successfully. May 14 18:08:40.763072 systemd-logind[1501]: Session 4 logged out. Waiting for processes to exit. May 14 18:08:40.767083 systemd[1]: Started sshd@4-164.90.152.250:22-139.178.89.65:56582.service - OpenSSH per-connection server daemon (139.178.89.65:56582). May 14 18:08:40.768781 systemd-logind[1501]: Removed session 4. May 14 18:08:40.836869 sshd[1719]: Accepted publickey for core from 139.178.89.65 port 56582 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:40.838835 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:40.845848 systemd-logind[1501]: New session 5 of user core. May 14 18:08:40.851250 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 18:08:40.929021 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 18:08:40.929885 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:08:40.949300 sudo[1722]: pam_unix(sudo:session): session closed for user root May 14 18:08:40.953869 sshd[1721]: Connection closed by 139.178.89.65 port 56582 May 14 18:08:40.954777 sshd-session[1719]: pam_unix(sshd:session): session closed for user core May 14 18:08:40.972839 systemd[1]: sshd@4-164.90.152.250:22-139.178.89.65:56582.service: Deactivated successfully. May 14 18:08:40.975879 systemd[1]: session-5.scope: Deactivated successfully. May 14 18:08:40.977405 systemd-logind[1501]: Session 5 logged out. Waiting for processes to exit. May 14 18:08:40.981942 systemd[1]: Started sshd@5-164.90.152.250:22-139.178.89.65:56590.service - OpenSSH per-connection server daemon (139.178.89.65:56590). May 14 18:08:40.982872 systemd-logind[1501]: Removed session 5. May 14 18:08:41.048215 sshd[1728]: Accepted publickey for core from 139.178.89.65 port 56590 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:41.050408 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:41.057155 systemd-logind[1501]: New session 6 of user core. May 14 18:08:41.069423 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 18:08:41.131222 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 18:08:41.132408 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:08:41.142215 sudo[1732]: pam_unix(sudo:session): session closed for user root May 14 18:08:41.150568 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 18:08:41.150899 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:08:41.163501 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:08:41.221984 augenrules[1754]: No rules May 14 18:08:41.223780 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:08:41.224220 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:08:41.226372 sudo[1731]: pam_unix(sudo:session): session closed for user root May 14 18:08:41.229603 sshd[1730]: Connection closed by 139.178.89.65 port 56590 May 14 18:08:41.230571 sshd-session[1728]: pam_unix(sshd:session): session closed for user core May 14 18:08:41.245135 systemd[1]: sshd@5-164.90.152.250:22-139.178.89.65:56590.service: Deactivated successfully. May 14 18:08:41.248599 systemd[1]: session-6.scope: Deactivated successfully. May 14 18:08:41.250041 systemd-logind[1501]: Session 6 logged out. Waiting for processes to exit. May 14 18:08:41.255295 systemd[1]: Started sshd@6-164.90.152.250:22-139.178.89.65:56604.service - OpenSSH per-connection server daemon (139.178.89.65:56604). May 14 18:08:41.256672 systemd-logind[1501]: Removed session 6. 
May 14 18:08:41.321367 sshd[1763]: Accepted publickey for core from 139.178.89.65 port 56604 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:08:41.323110 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:41.331691 systemd-logind[1501]: New session 7 of user core. May 14 18:08:41.338409 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 18:08:41.404324 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 18:08:41.405357 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:08:42.077344 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 18:08:42.108025 (dockerd)[1784]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 18:08:42.577783 dockerd[1784]: time="2025-05-14T18:08:42.577699315Z" level=info msg="Starting up" May 14 18:08:42.580355 dockerd[1784]: time="2025-05-14T18:08:42.580251552Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 18:08:42.630206 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1808604222-merged.mount: Deactivated successfully. May 14 18:08:42.744027 dockerd[1784]: time="2025-05-14T18:08:42.743728222Z" level=info msg="Loading containers: start." May 14 18:08:42.756813 kernel: Initializing XFRM netlink socket May 14 18:08:43.061805 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 14 18:08:43.063796 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 14 18:08:43.080343 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. 
May 14 18:08:43.132769 systemd-networkd[1447]: docker0: Link UP May 14 18:08:43.133203 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 14 18:08:43.139660 dockerd[1784]: time="2025-05-14T18:08:43.139470399Z" level=info msg="Loading containers: done." May 14 18:08:43.161313 dockerd[1784]: time="2025-05-14T18:08:43.161248132Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 18:08:43.161567 dockerd[1784]: time="2025-05-14T18:08:43.161374488Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 18:08:43.161567 dockerd[1784]: time="2025-05-14T18:08:43.161543499Z" level=info msg="Initializing buildkit" May 14 18:08:43.199282 dockerd[1784]: time="2025-05-14T18:08:43.199204445Z" level=info msg="Completed buildkit initialization" May 14 18:08:43.208938 dockerd[1784]: time="2025-05-14T18:08:43.208206615Z" level=info msg="Daemon has completed initialization" May 14 18:08:43.208938 dockerd[1784]: time="2025-05-14T18:08:43.208488804Z" level=info msg="API listen on /run/docker.sock" May 14 18:08:43.209294 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 18:08:44.279631 containerd[1525]: time="2025-05-14T18:08:44.279123531Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 18:08:44.918460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3448164812.mount: Deactivated successfully. 
May 14 18:08:46.347278 containerd[1525]: time="2025-05-14T18:08:46.347196600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:46.348822 containerd[1525]: time="2025-05-14T18:08:46.348546440Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 14 18:08:46.349972 containerd[1525]: time="2025-05-14T18:08:46.349889402Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:46.352837 containerd[1525]: time="2025-05-14T18:08:46.352788724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:46.354245 containerd[1525]: time="2025-05-14T18:08:46.354186337Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.075019484s" May 14 18:08:46.354409 containerd[1525]: time="2025-05-14T18:08:46.354391045Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 14 18:08:46.356738 containerd[1525]: time="2025-05-14T18:08:46.356671952Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 18:08:47.976033 containerd[1525]: time="2025-05-14T18:08:47.975479520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:47.977402 containerd[1525]: time="2025-05-14T18:08:47.977360387Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 14 18:08:47.978176 containerd[1525]: time="2025-05-14T18:08:47.978122837Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:47.981117 containerd[1525]: time="2025-05-14T18:08:47.981048288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:47.982915 containerd[1525]: time="2025-05-14T18:08:47.982835459Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.625914684s" May 14 18:08:47.982915 containerd[1525]: time="2025-05-14T18:08:47.982902563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 14 18:08:47.984128 containerd[1525]: time="2025-05-14T18:08:47.983428092Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 18:08:49.449595 containerd[1525]: time="2025-05-14T18:08:49.449495364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:49.451340 containerd[1525]: time="2025-05-14T18:08:49.451279170Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 14 18:08:49.452757 containerd[1525]: time="2025-05-14T18:08:49.452677614Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:49.456554 containerd[1525]: time="2025-05-14T18:08:49.456464114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:49.458044 containerd[1525]: time="2025-05-14T18:08:49.457995747Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.47453057s" May 14 18:08:49.458044 containerd[1525]: time="2025-05-14T18:08:49.458040418Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 14 18:08:49.458756 containerd[1525]: time="2025-05-14T18:08:49.458582406Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 18:08:50.058538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 18:08:50.064489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:08:50.284180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:08:50.298836 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:08:50.386139 kubelet[2065]: E0514 18:08:50.385897 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:08:50.394675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:08:50.394891 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:08:50.396607 systemd[1]: kubelet.service: Consumed 223ms CPU time, 93.9M memory peak. May 14 18:08:50.803267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264544119.mount: Deactivated successfully. May 14 18:08:51.468251 containerd[1525]: time="2025-05-14T18:08:51.468157173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:51.470252 containerd[1525]: time="2025-05-14T18:08:51.470185410Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 14 18:08:51.471862 containerd[1525]: time="2025-05-14T18:08:51.471760965Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:51.474868 containerd[1525]: time="2025-05-14T18:08:51.474784022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:51.476303 containerd[1525]: time="2025-05-14T18:08:51.475660212Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.01704135s" May 14 18:08:51.476303 containerd[1525]: time="2025-05-14T18:08:51.475717533Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 14 18:08:51.476508 containerd[1525]: time="2025-05-14T18:08:51.476458797Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 18:08:51.478292 systemd-resolved[1397]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. May 14 18:08:52.014256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105212564.mount: Deactivated successfully. May 14 18:08:53.241106 containerd[1525]: time="2025-05-14T18:08:53.241037865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:53.242664 containerd[1525]: time="2025-05-14T18:08:53.242605943Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 14 18:08:53.243598 containerd[1525]: time="2025-05-14T18:08:53.243522742Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:53.249276 containerd[1525]: time="2025-05-14T18:08:53.249177825Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.772684942s" May 14 18:08:53.249601 containerd[1525]: time="2025-05-14T18:08:53.249474724Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 18:08:53.249601 containerd[1525]: time="2025-05-14T18:08:53.249253726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:53.250665 containerd[1525]: time="2025-05-14T18:08:53.250437667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 18:08:53.769133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121152940.mount: Deactivated successfully. May 14 18:08:53.777977 containerd[1525]: time="2025-05-14T18:08:53.777888886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:08:53.779133 containerd[1525]: time="2025-05-14T18:08:53.779090784Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 18:08:53.780062 containerd[1525]: time="2025-05-14T18:08:53.779904379Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:08:53.782987 containerd[1525]: time="2025-05-14T18:08:53.782865818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:08:53.784629 containerd[1525]: time="2025-05-14T18:08:53.784491889Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 534.017167ms" May 14 18:08:53.784629 containerd[1525]: time="2025-05-14T18:08:53.784572944Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 18:08:53.785665 containerd[1525]: time="2025-05-14T18:08:53.785556209Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 18:08:54.534391 systemd-resolved[1397]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. May 14 18:08:54.543274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221198435.mount: Deactivated successfully. 
May 14 18:08:56.884038 containerd[1525]: time="2025-05-14T18:08:56.882849617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:56.884637 containerd[1525]: time="2025-05-14T18:08:56.884333570Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 14 18:08:56.885257 containerd[1525]: time="2025-05-14T18:08:56.885218014Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:56.888564 containerd[1525]: time="2025-05-14T18:08:56.888509753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:08:56.890166 containerd[1525]: time="2025-05-14T18:08:56.890116892Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.104508394s" May 14 18:08:56.890346 containerd[1525]: time="2025-05-14T18:08:56.890318091Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 14 18:08:59.899559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:08:59.899877 systemd[1]: kubelet.service: Consumed 223ms CPU time, 93.9M memory peak. May 14 18:08:59.903349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:08:59.947830 systemd[1]: Reload requested from client PID 2206 ('systemctl') (unit session-7.scope)... 
May 14 18:08:59.948112 systemd[1]: Reloading... May 14 18:09:00.122020 zram_generator::config[2255]: No configuration found. May 14 18:09:00.268606 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:09:00.441016 systemd[1]: Reloading finished in 492 ms. May 14 18:09:00.512279 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 18:09:00.512423 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 18:09:00.512745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:09:00.512819 systemd[1]: kubelet.service: Consumed 133ms CPU time, 83.5M memory peak. May 14 18:09:00.517072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:09:00.710445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:09:00.724837 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:09:00.796304 kubelet[2303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:09:00.796304 kubelet[2303]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:09:00.796304 kubelet[2303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 18:09:00.798494 kubelet[2303]: I0514 18:09:00.798009 2303 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:09:01.854211 kubelet[2303]: I0514 18:09:01.854147 2303 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:09:01.854211 kubelet[2303]: I0514 18:09:01.854195 2303 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:09:01.856094 kubelet[2303]: I0514 18:09:01.856034 2303 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:09:01.882907 kubelet[2303]: I0514 18:09:01.882858 2303 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:09:01.886332 kubelet[2303]: E0514 18:09:01.886144 2303 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://164.90.152.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.90.152.250:6443: connect: connection refused" logger="UnhandledError" May 14 18:09:01.900708 kubelet[2303]: I0514 18:09:01.900656 2303 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:09:01.907381 kubelet[2303]: I0514 18:09:01.907275 2303 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:09:01.907576 kubelet[2303]: I0514 18:09:01.907459 2303 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:09:01.907658 kubelet[2303]: I0514 18:09:01.907612 2303 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:09:01.908166 kubelet[2303]: I0514 18:09:01.907664 2303 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-3f9ee7d7d0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} May 14 18:09:01.908166 kubelet[2303]: I0514 18:09:01.908158 2303 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:09:01.908465 kubelet[2303]: I0514 18:09:01.908184 2303 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:09:01.908465 kubelet[2303]: I0514 18:09:01.908369 2303 state_mem.go:36] "Initialized new in-memory state store" May 14 18:09:01.912096 kubelet[2303]: I0514 18:09:01.911828 2303 kubelet.go:408] "Attempting to sync node with API server" May 14 18:09:01.912096 kubelet[2303]: I0514 18:09:01.911891 2303 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:09:01.912096 kubelet[2303]: I0514 18:09:01.911940 2303 kubelet.go:314] "Adding apiserver pod source" May 14 18:09:01.912096 kubelet[2303]: I0514 18:09:01.912105 2303 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:09:01.918320 kubelet[2303]: W0514 18:09:01.917067 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.90.152.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-3f9ee7d7d0&limit=500&resourceVersion=0": dial tcp 164.90.152.250:6443: connect: connection refused May 14 18:09:01.918320 kubelet[2303]: E0514 18:09:01.917171 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://164.90.152.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-3f9ee7d7d0&limit=500&resourceVersion=0\": dial tcp 164.90.152.250:6443: connect: connection refused" logger="UnhandledError" May 14 18:09:01.919105 kubelet[2303]: I0514 18:09:01.919080 2303 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:09:01.921086 kubelet[2303]: I0514 18:09:01.921054 2303 kubelet.go:837] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:09:01.921898 kubelet[2303]: W0514 18:09:01.921876 2303 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 18:09:01.923407 kubelet[2303]: I0514 18:09:01.923381 2303 server.go:1269] "Started kubelet" May 14 18:09:01.923930 kubelet[2303]: W0514 18:09:01.923882 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.90.152.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.90.152.250:6443: connect: connection refused May 14 18:09:01.924199 kubelet[2303]: E0514 18:09:01.924172 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.90.152.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.90.152.250:6443: connect: connection refused" logger="UnhandledError" May 14 18:09:01.929694 kubelet[2303]: I0514 18:09:01.929654 2303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:09:01.934113 kubelet[2303]: I0514 18:09:01.934033 2303 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:09:01.936097 kubelet[2303]: I0514 18:09:01.935858 2303 server.go:460] "Adding debug handlers to kubelet server" May 14 18:09:01.936335 kubelet[2303]: I0514 18:09:01.936084 2303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:09:01.945061 kubelet[2303]: I0514 18:09:01.944511 2303 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:09:01.945061 kubelet[2303]: I0514 18:09:01.939435 2303 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:09:01.946845 kubelet[2303]: I0514 18:09:01.946735 
2303 factory.go:221] Registration of the systemd container factory successfully May 14 18:09:01.948470 kubelet[2303]: I0514 18:09:01.946882 2303 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:09:01.949037 kubelet[2303]: W0514 18:09:01.948904 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.90.152.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.152.250:6443: connect: connection refused May 14 18:09:01.950003 kubelet[2303]: E0514 18:09:01.949233 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.90.152.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.90.152.250:6443: connect: connection refused" logger="UnhandledError" May 14 18:09:01.950003 kubelet[2303]: E0514 18:09:01.949372 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.152.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-3f9ee7d7d0?timeout=10s\": dial tcp 164.90.152.250:6443: connect: connection refused" interval="200ms" May 14 18:09:01.952630 kubelet[2303]: I0514 18:09:01.937515 2303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:09:01.952920 kubelet[2303]: I0514 18:09:01.939459 2303 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:09:01.953092 kubelet[2303]: I0514 18:09:01.953079 2303 reconciler.go:26] "Reconciler: start to sync state" May 14 18:09:01.953517 kubelet[2303]: E0514 18:09:01.949820 2303 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://164.90.152.250:6443/api/v1/namespaces/default/events\": dial tcp 164.90.152.250:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-3f9ee7d7d0.183f771bd882643e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-3f9ee7d7d0,UID:ci-4334.0.0-a-3f9ee7d7d0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-3f9ee7d7d0,},FirstTimestamp:2025-05-14 18:09:01.923288126 +0000 UTC m=+1.188407144,LastTimestamp:2025-05-14 18:09:01.923288126 +0000 UTC m=+1.188407144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-3f9ee7d7d0,}" May 14 18:09:01.953685 kubelet[2303]: E0514 18:09:01.939638 2303 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-3f9ee7d7d0\" not found" May 14 18:09:01.956721 kubelet[2303]: I0514 18:09:01.956668 2303 factory.go:221] Registration of the containerd container factory successfully May 14 18:09:01.974084 kubelet[2303]: E0514 18:09:01.973932 2303 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:09:01.978666 kubelet[2303]: I0514 18:09:01.978501 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:09:01.981030 kubelet[2303]: I0514 18:09:01.980991 2303 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:09:01.981649 kubelet[2303]: I0514 18:09:01.981209 2303 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:09:01.981649 kubelet[2303]: I0514 18:09:01.981242 2303 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:09:01.981649 kubelet[2303]: E0514 18:09:01.981334 2303 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:09:01.984584 kubelet[2303]: I0514 18:09:01.984542 2303 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:09:01.985503 kubelet[2303]: I0514 18:09:01.984782 2303 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:09:01.985503 kubelet[2303]: I0514 18:09:01.984833 2303 state_mem.go:36] "Initialized new in-memory state store" May 14 18:09:01.985693 kubelet[2303]: W0514 18:09:01.984699 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.90.152.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.152.250:6443: connect: connection refused May 14 18:09:01.985847 kubelet[2303]: E0514 18:09:01.985814 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://164.90.152.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.90.152.250:6443: connect: connection refused" logger="UnhandledError" May 14 18:09:01.991412 kubelet[2303]: I0514 18:09:01.991358 2303 policy_none.go:49] "None policy: Start" May 14 18:09:01.992984 kubelet[2303]: I0514 18:09:01.992935 2303 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:09:01.992984 kubelet[2303]: I0514 18:09:01.992986 2303 state_mem.go:35] "Initializing new in-memory state store" May 14 18:09:02.004988 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. May 14 18:09:02.019349 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 18:09:02.024488 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 18:09:02.040695 kubelet[2303]: I0514 18:09:02.040622 2303 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:09:02.041883 kubelet[2303]: I0514 18:09:02.041028 2303 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:09:02.041883 kubelet[2303]: I0514 18:09:02.041067 2303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:09:02.042592 kubelet[2303]: I0514 18:09:02.042544 2303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:09:02.046091 kubelet[2303]: E0514 18:09:02.046060 2303 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334.0.0-a-3f9ee7d7d0\" not found" May 14 18:09:02.097415 systemd[1]: Created slice kubepods-burstable-podd913bb4f8334670071b6f384a8fd5540.slice - libcontainer container kubepods-burstable-podd913bb4f8334670071b6f384a8fd5540.slice. May 14 18:09:02.111291 systemd[1]: Created slice kubepods-burstable-pod0f4b1817be920f3862533371993a5b1b.slice - libcontainer container kubepods-burstable-pod0f4b1817be920f3862533371993a5b1b.slice. May 14 18:09:02.120287 systemd[1]: Created slice kubepods-burstable-pod4bd48309a9bb273a59e3621a8b9dc4eb.slice - libcontainer container kubepods-burstable-pod4bd48309a9bb273a59e3621a8b9dc4eb.slice. 
May 14 18:09:02.142977 kubelet[2303]: I0514 18:09:02.142899 2303 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.143440 kubelet[2303]: E0514 18:09:02.143406 2303 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.90.152.250:6443/api/v1/nodes\": dial tcp 164.90.152.250:6443: connect: connection refused" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.150145 kubelet[2303]: E0514 18:09:02.150063 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.152.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-3f9ee7d7d0?timeout=10s\": dial tcp 164.90.152.250:6443: connect: connection refused" interval="400ms" May 14 18:09:02.154419 kubelet[2303]: I0514 18:09:02.154328 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f4b1817be920f3862533371993a5b1b-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"0f4b1817be920f3862533371993a5b1b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.154419 kubelet[2303]: I0514 18:09:02.154391 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4bd48309a9bb273a59e3621a8b9dc4eb-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"4bd48309a9bb273a59e3621a8b9dc4eb\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.154419 kubelet[2303]: I0514 18:09:02.154430 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d913bb4f8334670071b6f384a8fd5540-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"d913bb4f8334670071b6f384a8fd5540\") " 
pod="kube-system/kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.154684 kubelet[2303]: I0514 18:09:02.154455 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f4b1817be920f3862533371993a5b1b-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"0f4b1817be920f3862533371993a5b1b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.154684 kubelet[2303]: I0514 18:09:02.154498 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f4b1817be920f3862533371993a5b1b-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"0f4b1817be920f3862533371993a5b1b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.154684 kubelet[2303]: I0514 18:09:02.154526 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f4b1817be920f3862533371993a5b1b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"0f4b1817be920f3862533371993a5b1b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.154684 kubelet[2303]: I0514 18:09:02.154559 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d913bb4f8334670071b6f384a8fd5540-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"d913bb4f8334670071b6f384a8fd5540\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.154684 kubelet[2303]: I0514 18:09:02.154590 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d913bb4f8334670071b6f384a8fd5540-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"d913bb4f8334670071b6f384a8fd5540\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.154842 kubelet[2303]: I0514 18:09:02.154613 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f4b1817be920f3862533371993a5b1b-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"0f4b1817be920f3862533371993a5b1b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.345562 kubelet[2303]: I0514 18:09:02.345496 2303 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.346286 kubelet[2303]: E0514 18:09:02.346225 2303 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.90.152.250:6443/api/v1/nodes\": dial tcp 164.90.152.250:6443: connect: connection refused" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.408749 kubelet[2303]: E0514 18:09:02.408530 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:02.409875 containerd[1525]: time="2025-05-14T18:09:02.409593914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0,Uid:d913bb4f8334670071b6f384a8fd5540,Namespace:kube-system,Attempt:0,}" May 14 18:09:02.417738 kubelet[2303]: E0514 18:09:02.417299 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:02.424188 containerd[1525]: time="2025-05-14T18:09:02.423891696Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0,Uid:0f4b1817be920f3862533371993a5b1b,Namespace:kube-system,Attempt:0,}" May 14 18:09:02.425217 kubelet[2303]: E0514 18:09:02.424944 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:02.426343 containerd[1525]: time="2025-05-14T18:09:02.426280506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-3f9ee7d7d0,Uid:4bd48309a9bb273a59e3621a8b9dc4eb,Namespace:kube-system,Attempt:0,}" May 14 18:09:02.551729 kubelet[2303]: E0514 18:09:02.551666 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.152.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-3f9ee7d7d0?timeout=10s\": dial tcp 164.90.152.250:6443: connect: connection refused" interval="800ms" May 14 18:09:02.634007 containerd[1525]: time="2025-05-14T18:09:02.633663022Z" level=info msg="connecting to shim e6e1ca92c70bc685bef2f90b7ee5ea88c2f6214b9c97dc5efd5e5520e1da74d6" address="unix:///run/containerd/s/4f4a69f6d84cd2eb6b993a3497b267d22b8a702f57adc93678b66bbd0181551e" namespace=k8s.io protocol=ttrpc version=3 May 14 18:09:02.648488 containerd[1525]: time="2025-05-14T18:09:02.647885688Z" level=info msg="connecting to shim 5fd6f8fd4f8293965ee7956a102469a40ae1ae456532250963cc3608477b9d55" address="unix:///run/containerd/s/780e7e827191576638203c313624d2b74614054f46350dc838c3cdf19bf4491d" namespace=k8s.io protocol=ttrpc version=3 May 14 18:09:02.648823 containerd[1525]: time="2025-05-14T18:09:02.648782783Z" level=info msg="connecting to shim 4a23e6ef25d62f9aed9e087598473f5baffc46e21c037feb9d2c975ac4de6c48" address="unix:///run/containerd/s/fc3266d65dc508810e227a4f7f717cdb2b6dab7b2523c296130e60ef18519623" namespace=k8s.io protocol=ttrpc version=3 May 14 18:09:02.751308 kubelet[2303]: I0514 
18:09:02.750642 2303 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.751308 kubelet[2303]: E0514 18:09:02.751164 2303 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.90.152.250:6443/api/v1/nodes\": dial tcp 164.90.152.250:6443: connect: connection refused" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:02.787369 systemd[1]: Started cri-containerd-4a23e6ef25d62f9aed9e087598473f5baffc46e21c037feb9d2c975ac4de6c48.scope - libcontainer container 4a23e6ef25d62f9aed9e087598473f5baffc46e21c037feb9d2c975ac4de6c48. May 14 18:09:02.790276 systemd[1]: Started cri-containerd-5fd6f8fd4f8293965ee7956a102469a40ae1ae456532250963cc3608477b9d55.scope - libcontainer container 5fd6f8fd4f8293965ee7956a102469a40ae1ae456532250963cc3608477b9d55. May 14 18:09:02.792678 systemd[1]: Started cri-containerd-e6e1ca92c70bc685bef2f90b7ee5ea88c2f6214b9c97dc5efd5e5520e1da74d6.scope - libcontainer container e6e1ca92c70bc685bef2f90b7ee5ea88c2f6214b9c97dc5efd5e5520e1da74d6. 
May 14 18:09:02.840802 kubelet[2303]: W0514 18:09:02.840749 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.90.152.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.152.250:6443: connect: connection refused May 14 18:09:02.840938 kubelet[2303]: E0514 18:09:02.840804 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://164.90.152.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.90.152.250:6443: connect: connection refused" logger="UnhandledError" May 14 18:09:02.881070 kubelet[2303]: W0514 18:09:02.880379 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.90.152.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.90.152.250:6443: connect: connection refused May 14 18:09:02.881708 kubelet[2303]: E0514 18:09:02.881072 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.90.152.250:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.90.152.250:6443: connect: connection refused" logger="UnhandledError" May 14 18:09:02.890528 kubelet[2303]: W0514 18:09:02.890446 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.90.152.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-3f9ee7d7d0&limit=500&resourceVersion=0": dial tcp 164.90.152.250:6443: connect: connection refused May 14 18:09:02.890528 kubelet[2303]: E0514 18:09:02.890528 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://164.90.152.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-3f9ee7d7d0&limit=500&resourceVersion=0\": dial tcp 164.90.152.250:6443: connect: connection refused" logger="UnhandledError" May 14 18:09:02.980610 containerd[1525]: time="2025-05-14T18:09:02.980394323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0,Uid:d913bb4f8334670071b6f384a8fd5540,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a23e6ef25d62f9aed9e087598473f5baffc46e21c037feb9d2c975ac4de6c48\"" May 14 18:09:02.982395 containerd[1525]: time="2025-05-14T18:09:02.982281801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-3f9ee7d7d0,Uid:4bd48309a9bb273a59e3621a8b9dc4eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6e1ca92c70bc685bef2f90b7ee5ea88c2f6214b9c97dc5efd5e5520e1da74d6\"" May 14 18:09:02.985110 kubelet[2303]: E0514 18:09:02.984643 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:02.985515 kubelet[2303]: E0514 18:09:02.985453 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:02.990044 containerd[1525]: time="2025-05-14T18:09:02.989112906Z" level=info msg="CreateContainer within sandbox \"e6e1ca92c70bc685bef2f90b7ee5ea88c2f6214b9c97dc5efd5e5520e1da74d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 18:09:02.996139 containerd[1525]: time="2025-05-14T18:09:02.996038771Z" level=info msg="CreateContainer within sandbox \"4a23e6ef25d62f9aed9e087598473f5baffc46e21c037feb9d2c975ac4de6c48\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 18:09:03.007581 containerd[1525]: time="2025-05-14T18:09:03.007400822Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0,Uid:0f4b1817be920f3862533371993a5b1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fd6f8fd4f8293965ee7956a102469a40ae1ae456532250963cc3608477b9d55\"" May 14 18:09:03.011138 kubelet[2303]: E0514 18:09:03.010624 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:03.016226 containerd[1525]: time="2025-05-14T18:09:03.016066787Z" level=info msg="CreateContainer within sandbox \"5fd6f8fd4f8293965ee7956a102469a40ae1ae456532250963cc3608477b9d55\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 18:09:03.037707 containerd[1525]: time="2025-05-14T18:09:03.037632989Z" level=info msg="Container c643bf6429403d825a2f93c90d686f48c9bd4652eac81f6e2179571da942ae00: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:03.046373 containerd[1525]: time="2025-05-14T18:09:03.046180187Z" level=info msg="Container 85d91d1a9878b8d3201599db2812f90aa295ecb2efccb88119bd92cfa31ef92e: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:03.050058 containerd[1525]: time="2025-05-14T18:09:03.049934567Z" level=info msg="Container d98632ca0edf49dc59000c49375b87fe29ebd002cfd3e09cf63a51995290d2b9: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:03.069580 containerd[1525]: time="2025-05-14T18:09:03.069484633Z" level=info msg="CreateContainer within sandbox \"4a23e6ef25d62f9aed9e087598473f5baffc46e21c037feb9d2c975ac4de6c48\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c643bf6429403d825a2f93c90d686f48c9bd4652eac81f6e2179571da942ae00\"" May 14 18:09:03.070902 containerd[1525]: time="2025-05-14T18:09:03.070833565Z" level=info msg="StartContainer for \"c643bf6429403d825a2f93c90d686f48c9bd4652eac81f6e2179571da942ae00\"" May 14 18:09:03.073620 containerd[1525]: 
time="2025-05-14T18:09:03.073559295Z" level=info msg="connecting to shim c643bf6429403d825a2f93c90d686f48c9bd4652eac81f6e2179571da942ae00" address="unix:///run/containerd/s/fc3266d65dc508810e227a4f7f717cdb2b6dab7b2523c296130e60ef18519623" protocol=ttrpc version=3 May 14 18:09:03.079012 containerd[1525]: time="2025-05-14T18:09:03.078914307Z" level=info msg="CreateContainer within sandbox \"5fd6f8fd4f8293965ee7956a102469a40ae1ae456532250963cc3608477b9d55\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d98632ca0edf49dc59000c49375b87fe29ebd002cfd3e09cf63a51995290d2b9\"" May 14 18:09:03.080174 containerd[1525]: time="2025-05-14T18:09:03.080108823Z" level=info msg="StartContainer for \"d98632ca0edf49dc59000c49375b87fe29ebd002cfd3e09cf63a51995290d2b9\"" May 14 18:09:03.081896 containerd[1525]: time="2025-05-14T18:09:03.081829020Z" level=info msg="connecting to shim d98632ca0edf49dc59000c49375b87fe29ebd002cfd3e09cf63a51995290d2b9" address="unix:///run/containerd/s/780e7e827191576638203c313624d2b74614054f46350dc838c3cdf19bf4491d" protocol=ttrpc version=3 May 14 18:09:03.093173 containerd[1525]: time="2025-05-14T18:09:03.093055054Z" level=info msg="CreateContainer within sandbox \"e6e1ca92c70bc685bef2f90b7ee5ea88c2f6214b9c97dc5efd5e5520e1da74d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"85d91d1a9878b8d3201599db2812f90aa295ecb2efccb88119bd92cfa31ef92e\"" May 14 18:09:03.096754 containerd[1525]: time="2025-05-14T18:09:03.096633037Z" level=info msg="StartContainer for \"85d91d1a9878b8d3201599db2812f90aa295ecb2efccb88119bd92cfa31ef92e\"" May 14 18:09:03.101222 containerd[1525]: time="2025-05-14T18:09:03.101068701Z" level=info msg="connecting to shim 85d91d1a9878b8d3201599db2812f90aa295ecb2efccb88119bd92cfa31ef92e" address="unix:///run/containerd/s/4f4a69f6d84cd2eb6b993a3497b267d22b8a702f57adc93678b66bbd0181551e" protocol=ttrpc version=3 May 14 18:09:03.126526 systemd[1]: Started 
cri-containerd-c643bf6429403d825a2f93c90d686f48c9bd4652eac81f6e2179571da942ae00.scope - libcontainer container c643bf6429403d825a2f93c90d686f48c9bd4652eac81f6e2179571da942ae00. May 14 18:09:03.155558 systemd[1]: Started cri-containerd-85d91d1a9878b8d3201599db2812f90aa295ecb2efccb88119bd92cfa31ef92e.scope - libcontainer container 85d91d1a9878b8d3201599db2812f90aa295ecb2efccb88119bd92cfa31ef92e. May 14 18:09:03.160666 systemd[1]: Started cri-containerd-d98632ca0edf49dc59000c49375b87fe29ebd002cfd3e09cf63a51995290d2b9.scope - libcontainer container d98632ca0edf49dc59000c49375b87fe29ebd002cfd3e09cf63a51995290d2b9. May 14 18:09:03.273601 containerd[1525]: time="2025-05-14T18:09:03.273410108Z" level=info msg="StartContainer for \"c643bf6429403d825a2f93c90d686f48c9bd4652eac81f6e2179571da942ae00\" returns successfully" May 14 18:09:03.351999 containerd[1525]: time="2025-05-14T18:09:03.351884436Z" level=info msg="StartContainer for \"d98632ca0edf49dc59000c49375b87fe29ebd002cfd3e09cf63a51995290d2b9\" returns successfully" May 14 18:09:03.354987 kubelet[2303]: E0514 18:09:03.354903 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.152.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-3f9ee7d7d0?timeout=10s\": dial tcp 164.90.152.250:6443: connect: connection refused" interval="1.6s" May 14 18:09:03.369828 containerd[1525]: time="2025-05-14T18:09:03.369769381Z" level=info msg="StartContainer for \"85d91d1a9878b8d3201599db2812f90aa295ecb2efccb88119bd92cfa31ef92e\" returns successfully" May 14 18:09:03.409964 kubelet[2303]: W0514 18:09:03.409861 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.90.152.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.152.250:6443: connect: connection refused May 14 18:09:03.410509 kubelet[2303]: E0514 18:09:03.410422 2303 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.90.152.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.90.152.250:6443: connect: connection refused" logger="UnhandledError" May 14 18:09:03.554929 kubelet[2303]: I0514 18:09:03.553817 2303 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:04.013225 kubelet[2303]: E0514 18:09:04.013182 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:04.018796 kubelet[2303]: E0514 18:09:04.018686 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:04.026558 kubelet[2303]: E0514 18:09:04.026482 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:05.028244 kubelet[2303]: E0514 18:09:05.026137 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:05.029177 kubelet[2303]: E0514 18:09:05.028677 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:06.029433 kubelet[2303]: E0514 18:09:06.028927 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:06.242510 kubelet[2303]: E0514 18:09:06.242431 
2303 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334.0.0-a-3f9ee7d7d0\" not found" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:06.347564 kubelet[2303]: E0514 18:09:06.347146 2303 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4334.0.0-a-3f9ee7d7d0.183f771bd882643e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-3f9ee7d7d0,UID:ci-4334.0.0-a-3f9ee7d7d0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-3f9ee7d7d0,},FirstTimestamp:2025-05-14 18:09:01.923288126 +0000 UTC m=+1.188407144,LastTimestamp:2025-05-14 18:09:01.923288126 +0000 UTC m=+1.188407144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-3f9ee7d7d0,}" May 14 18:09:06.360469 kubelet[2303]: I0514 18:09:06.358972 2303 kubelet_node_status.go:75] "Successfully registered node" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:06.360469 kubelet[2303]: E0514 18:09:06.360135 2303 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4334.0.0-a-3f9ee7d7d0\": node \"ci-4334.0.0-a-3f9ee7d7d0\" not found" May 14 18:09:06.451365 kubelet[2303]: E0514 18:09:06.451222 2303 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4334.0.0-a-3f9ee7d7d0.183f771bdb86d127 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-3f9ee7d7d0,UID:ci-4334.0.0-a-3f9ee7d7d0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-3f9ee7d7d0,},FirstTimestamp:2025-05-14 18:09:01.973909799 +0000 UTC m=+1.239028807,LastTimestamp:2025-05-14 18:09:01.973909799 +0000 UTC m=+1.239028807,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-3f9ee7d7d0,}" May 14 18:09:06.846692 kubelet[2303]: E0514 18:09:06.846614 2303 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:06.846943 kubelet[2303]: E0514 18:09:06.846915 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:06.925007 kubelet[2303]: I0514 18:09:06.924938 2303 apiserver.go:52] "Watching apiserver" May 14 18:09:06.953291 kubelet[2303]: I0514 18:09:06.953233 2303 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 18:09:07.044723 kubelet[2303]: W0514 18:09:07.044665 2303 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:09:07.047693 kubelet[2303]: E0514 18:09:07.047240 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:08.034428 kubelet[2303]: E0514 18:09:08.034027 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:08.650414 kubelet[2303]: W0514 18:09:08.650362 
2303 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:09:08.652901 kubelet[2303]: E0514 18:09:08.651265 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:08.708740 systemd[1]: Reload requested from client PID 2573 ('systemctl') (unit session-7.scope)... May 14 18:09:08.708766 systemd[1]: Reloading... May 14 18:09:08.849005 zram_generator::config[2612]: No configuration found. May 14 18:09:09.009052 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:09:09.036225 kubelet[2303]: E0514 18:09:09.036115 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:09.171384 systemd[1]: Reloading finished in 461 ms. May 14 18:09:09.199483 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:09:09.219752 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:09:09.220365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:09:09.220486 systemd[1]: kubelet.service: Consumed 1.789s CPU time, 109.7M memory peak. May 14 18:09:09.224440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:09:09.432344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:09:09.449803 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:09:09.539206 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:09:09.540997 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:09:09.540997 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:09:09.540997 kubelet[2667]: I0514 18:09:09.540153 2667 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:09:09.549978 kubelet[2667]: I0514 18:09:09.549872 2667 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:09:09.549978 kubelet[2667]: I0514 18:09:09.549917 2667 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:09:09.550289 kubelet[2667]: I0514 18:09:09.550238 2667 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:09:09.553601 kubelet[2667]: I0514 18:09:09.553554 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 14 18:09:09.558265 kubelet[2667]: I0514 18:09:09.557916 2667 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:09:09.564346 kubelet[2667]: I0514 18:09:09.564305 2667 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:09:09.574079 kubelet[2667]: I0514 18:09:09.573212 2667 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 18:09:09.574079 kubelet[2667]: I0514 18:09:09.573422 2667 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:09:09.574079 kubelet[2667]: I0514 18:09:09.573571 2667 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:09:09.574419 kubelet[2667]: I0514 18:09:09.573603 2667 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-3f9ee7d7d0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}
,{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 18:09:09.576120 kubelet[2667]: I0514 18:09:09.576073 2667 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:09:09.576450 kubelet[2667]: I0514 18:09:09.576425 2667 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:09:09.576540 kubelet[2667]: I0514 18:09:09.576526 2667 state_mem.go:36] "Initialized new in-memory state store" May 14 18:09:09.576797 kubelet[2667]: I0514 18:09:09.576781 2667 kubelet.go:408] "Attempting to sync node with API server" May 14 18:09:09.577129 kubelet[2667]: I0514 18:09:09.577106 2667 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:09:09.577262 kubelet[2667]: I0514 18:09:09.577248 2667 kubelet.go:314] "Adding apiserver pod source" May 14 18:09:09.577292 kubelet[2667]: I0514 18:09:09.577276 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:09:09.579588 kubelet[2667]: I0514 18:09:09.579560 2667 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:09:09.580332 kubelet[2667]: I0514 18:09:09.580304 2667 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:09:09.583056 kubelet[2667]: I0514 18:09:09.583022 2667 server.go:1269] "Started kubelet" May 14 18:09:09.587038 kubelet[2667]: I0514 18:09:09.587005 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 
14 18:09:09.598733 kubelet[2667]: I0514 18:09:09.598655 2667 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:09:09.606445 kubelet[2667]: I0514 18:09:09.606393 2667 server.go:460] "Adding debug handlers to kubelet server" May 14 18:09:09.608625 kubelet[2667]: I0514 18:09:09.608538 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:09:09.609033 kubelet[2667]: I0514 18:09:09.609018 2667 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:09:09.609509 kubelet[2667]: I0514 18:09:09.609491 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:09:09.611630 kubelet[2667]: I0514 18:09:09.611601 2667 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:09:09.612025 kubelet[2667]: E0514 18:09:09.612002 2667 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-3f9ee7d7d0\" not found" May 14 18:09:09.614904 kubelet[2667]: I0514 18:09:09.614878 2667 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:09:09.615224 kubelet[2667]: I0514 18:09:09.615210 2667 reconciler.go:26] "Reconciler: start to sync state" May 14 18:09:09.617760 kubelet[2667]: I0514 18:09:09.617722 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:09:09.620081 kubelet[2667]: I0514 18:09:09.619856 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:09:09.622244 kubelet[2667]: I0514 18:09:09.622212 2667 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:09:09.622679 kubelet[2667]: I0514 18:09:09.622526 2667 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:09:09.623102 kubelet[2667]: I0514 18:09:09.622739 2667 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:09:09.623316 kubelet[2667]: E0514 18:09:09.622812 2667 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:09:09.625753 kubelet[2667]: E0514 18:09:09.625651 2667 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:09:09.626179 kubelet[2667]: I0514 18:09:09.626114 2667 factory.go:221] Registration of the containerd container factory successfully May 14 18:09:09.626179 kubelet[2667]: I0514 18:09:09.626134 2667 factory.go:221] Registration of the systemd container factory successfully May 14 18:09:09.709165 kubelet[2667]: I0514 18:09:09.707910 2667 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:09:09.709165 kubelet[2667]: I0514 18:09:09.707977 2667 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:09:09.709165 kubelet[2667]: I0514 18:09:09.708025 2667 state_mem.go:36] "Initialized new in-memory state store" May 14 18:09:09.709165 kubelet[2667]: I0514 18:09:09.708338 2667 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 18:09:09.709165 kubelet[2667]: I0514 18:09:09.708355 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 18:09:09.709165 kubelet[2667]: I0514 18:09:09.708426 2667 policy_none.go:49] "None policy: Start" May 14 18:09:09.711260 kubelet[2667]: I0514 18:09:09.709564 2667 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:09:09.711260 kubelet[2667]: I0514 18:09:09.709639 2667 state_mem.go:35] "Initializing new in-memory state store" May 14 18:09:09.711260 kubelet[2667]: 
I0514 18:09:09.710060 2667 state_mem.go:75] "Updated machine memory state" May 14 18:09:09.721095 kubelet[2667]: I0514 18:09:09.721048 2667 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:09:09.721373 kubelet[2667]: I0514 18:09:09.721350 2667 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:09:09.721465 kubelet[2667]: I0514 18:09:09.721379 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:09:09.723297 kubelet[2667]: I0514 18:09:09.722513 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:09:09.722699 sudo[2696]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 18:09:09.723067 sudo[2696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 18:09:09.753584 kubelet[2667]: W0514 18:09:09.752911 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:09:09.754262 kubelet[2667]: E0514 18:09:09.754211 2667 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4334.0.0-a-3f9ee7d7d0\" already exists" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.762620 kubelet[2667]: W0514 18:09:09.762579 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:09:09.762778 kubelet[2667]: E0514 18:09:09.762653 2667 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0\" already exists" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.763424 kubelet[2667]: W0514 18:09:09.763156 2667 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 18:09:09.816342 kubelet[2667]: I0514 18:09:09.816196 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f4b1817be920f3862533371993a5b1b-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"0f4b1817be920f3862533371993a5b1b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.816342 kubelet[2667]: I0514 18:09:09.816344 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f4b1817be920f3862533371993a5b1b-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"0f4b1817be920f3862533371993a5b1b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.816512 kubelet[2667]: I0514 18:09:09.816371 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d913bb4f8334670071b6f384a8fd5540-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"d913bb4f8334670071b6f384a8fd5540\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.816512 kubelet[2667]: I0514 18:09:09.816412 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d913bb4f8334670071b6f384a8fd5540-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"d913bb4f8334670071b6f384a8fd5540\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.816512 kubelet[2667]: I0514 18:09:09.816443 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d913bb4f8334670071b6f384a8fd5540-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"d913bb4f8334670071b6f384a8fd5540\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.816512 kubelet[2667]: I0514 18:09:09.816484 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f4b1817be920f3862533371993a5b1b-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"0f4b1817be920f3862533371993a5b1b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.816512 kubelet[2667]: I0514 18:09:09.816504 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f4b1817be920f3862533371993a5b1b-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"0f4b1817be920f3862533371993a5b1b\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.838375 kubelet[2667]: I0514 18:09:09.837924 2667 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.853197 kubelet[2667]: I0514 18:09:09.853059 2667 kubelet_node_status.go:111] "Node was previously registered" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.853372 kubelet[2667]: I0514 18:09:09.853277 2667 kubelet_node_status.go:75] "Successfully registered node" node="ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.918981 kubelet[2667]: I0514 18:09:09.917708 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f4b1817be920f3862533371993a5b1b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"0f4b1817be920f3862533371993a5b1b\") " 
pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:09.918981 kubelet[2667]: I0514 18:09:09.917809 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4bd48309a9bb273a59e3621a8b9dc4eb-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-3f9ee7d7d0\" (UID: \"4bd48309a9bb273a59e3621a8b9dc4eb\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-3f9ee7d7d0" May 14 18:09:10.056984 kubelet[2667]: E0514 18:09:10.056112 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:10.065013 kubelet[2667]: E0514 18:09:10.063785 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:10.065013 kubelet[2667]: E0514 18:09:10.063910 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:10.518402 sudo[2696]: pam_unix(sudo:session): session closed for user root May 14 18:09:10.578556 kubelet[2667]: I0514 18:09:10.578473 2667 apiserver.go:52] "Watching apiserver" May 14 18:09:10.615806 kubelet[2667]: I0514 18:09:10.615552 2667 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 18:09:10.684254 kubelet[2667]: E0514 18:09:10.684207 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:10.684900 kubelet[2667]: E0514 18:09:10.684858 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:10.685169 kubelet[2667]: E0514 18:09:10.685152 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:10.721082 kubelet[2667]: I0514 18:09:10.720993 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-3f9ee7d7d0" podStartSLOduration=1.720971365 podStartE2EDuration="1.720971365s" podCreationTimestamp="2025-05-14 18:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:09:10.720635679 +0000 UTC m=+1.262845533" watchObservedRunningTime="2025-05-14 18:09:10.720971365 +0000 UTC m=+1.263181086" May 14 18:09:10.752979 kubelet[2667]: I0514 18:09:10.752806 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334.0.0-a-3f9ee7d7d0" podStartSLOduration=2.752774183 podStartE2EDuration="2.752774183s" podCreationTimestamp="2025-05-14 18:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:09:10.736566093 +0000 UTC m=+1.278775827" watchObservedRunningTime="2025-05-14 18:09:10.752774183 +0000 UTC m=+1.294983926" May 14 18:09:11.688157 kubelet[2667]: E0514 18:09:11.686766 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:12.271539 sudo[1766]: pam_unix(sudo:session): session closed for user root May 14 18:09:12.275914 sshd[1765]: Connection closed by 139.178.89.65 port 56604 May 14 18:09:12.277063 sshd-session[1763]: pam_unix(sshd:session): session closed for user core 
May 14 18:09:12.282575 systemd[1]: sshd@6-164.90.152.250:22-139.178.89.65:56604.service: Deactivated successfully. May 14 18:09:12.286325 systemd[1]: session-7.scope: Deactivated successfully. May 14 18:09:12.286851 systemd[1]: session-7.scope: Consumed 5.598s CPU time, 226M memory peak. May 14 18:09:12.288796 systemd-logind[1501]: Session 7 logged out. Waiting for processes to exit. May 14 18:09:12.291517 systemd-logind[1501]: Removed session 7. May 14 18:09:13.308143 systemd-resolved[1397]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. May 14 18:09:14.066926 kubelet[2667]: E0514 18:09:14.066389 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:14.087655 kubelet[2667]: I0514 18:09:14.087354 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334.0.0-a-3f9ee7d7d0" podStartSLOduration=7.087311544 podStartE2EDuration="7.087311544s" podCreationTimestamp="2025-05-14 18:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:09:10.754390506 +0000 UTC m=+1.296600258" watchObservedRunningTime="2025-05-14 18:09:14.087311544 +0000 UTC m=+4.629521272" May 14 18:09:14.692928 kubelet[2667]: E0514 18:09:14.692877 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:15.692875 kubelet[2667]: E0514 18:09:15.692392 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:15.694885 kubelet[2667]: E0514 18:09:15.694822 2667 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:15.699289 kubelet[2667]: E0514 18:09:15.699252 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:15.888980 kubelet[2667]: I0514 18:09:15.888856 2667 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 18:09:15.889666 containerd[1525]: time="2025-05-14T18:09:15.889611517Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 18:09:15.891985 kubelet[2667]: I0514 18:09:15.891594 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 18:09:16.755255 systemd[1]: Created slice kubepods-besteffort-pod92df6902_9c5f_4b3c_8e68_4d048a94d31c.slice - libcontainer container kubepods-besteffort-pod92df6902_9c5f_4b3c_8e68_4d048a94d31c.slice. 
May 14 18:09:16.769128 kubelet[2667]: I0514 18:09:16.769075 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92df6902-9c5f-4b3c-8e68-4d048a94d31c-kube-proxy\") pod \"kube-proxy-kqgrd\" (UID: \"92df6902-9c5f-4b3c-8e68-4d048a94d31c\") " pod="kube-system/kube-proxy-kqgrd" May 14 18:09:16.769128 kubelet[2667]: I0514 18:09:16.769130 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92df6902-9c5f-4b3c-8e68-4d048a94d31c-lib-modules\") pod \"kube-proxy-kqgrd\" (UID: \"92df6902-9c5f-4b3c-8e68-4d048a94d31c\") " pod="kube-system/kube-proxy-kqgrd" May 14 18:09:16.769898 kubelet[2667]: I0514 18:09:16.769156 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmn2r\" (UniqueName: \"kubernetes.io/projected/92df6902-9c5f-4b3c-8e68-4d048a94d31c-kube-api-access-wmn2r\") pod \"kube-proxy-kqgrd\" (UID: \"92df6902-9c5f-4b3c-8e68-4d048a94d31c\") " pod="kube-system/kube-proxy-kqgrd" May 14 18:09:16.769898 kubelet[2667]: I0514 18:09:16.769190 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92df6902-9c5f-4b3c-8e68-4d048a94d31c-xtables-lock\") pod \"kube-proxy-kqgrd\" (UID: \"92df6902-9c5f-4b3c-8e68-4d048a94d31c\") " pod="kube-system/kube-proxy-kqgrd" May 14 18:09:16.787306 systemd[1]: Created slice kubepods-burstable-pod8e8defdf_3357_4334_b4b6_e6c23eaa7a8e.slice - libcontainer container kubepods-burstable-pod8e8defdf_3357_4334_b4b6_e6c23eaa7a8e.slice. 
May 14 18:09:16.871049 kubelet[2667]: I0514 18:09:16.870399 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-hostproc\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.871349 kubelet[2667]: I0514 18:09:16.871307 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cni-path\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.871473 kubelet[2667]: I0514 18:09:16.871448 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-etc-cni-netd\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.871727 kubelet[2667]: I0514 18:09:16.871682 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-lib-modules\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.871866 kubelet[2667]: I0514 18:09:16.871838 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4htcp\" (UniqueName: \"kubernetes.io/projected/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-kube-api-access-4htcp\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.872209 kubelet[2667]: I0514 18:09:16.872154 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-host-proc-sys-net\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.872330 kubelet[2667]: I0514 18:09:16.872319 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-hubble-tls\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.874988 kubelet[2667]: I0514 18:09:16.872663 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-run\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.874988 kubelet[2667]: I0514 18:09:16.872684 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-xtables-lock\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.874988 kubelet[2667]: I0514 18:09:16.872700 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-clustermesh-secrets\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.874988 kubelet[2667]: I0514 18:09:16.872715 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-config-path\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.874988 kubelet[2667]: I0514 18:09:16.872733 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-bpf-maps\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.874988 kubelet[2667]: I0514 18:09:16.872749 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-cgroup\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.875271 kubelet[2667]: I0514 18:09:16.872765 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-host-proc-sys-kernel\") pod \"cilium-dvkd2\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") " pod="kube-system/cilium-dvkd2" May 14 18:09:16.945114 systemd[1]: Created slice kubepods-besteffort-pod37ebb25f_fa56_4bb4_956a_4abdb7c70a4b.slice - libcontainer container kubepods-besteffort-pod37ebb25f_fa56_4bb4_956a_4abdb7c70a4b.slice. 
May 14 18:09:16.975192 kubelet[2667]: I0514 18:09:16.975125 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgf2\" (UniqueName: \"kubernetes.io/projected/37ebb25f-fa56-4bb4-956a-4abdb7c70a4b-kube-api-access-hkgf2\") pod \"cilium-operator-5d85765b45-wkbts\" (UID: \"37ebb25f-fa56-4bb4-956a-4abdb7c70a4b\") " pod="kube-system/cilium-operator-5d85765b45-wkbts" May 14 18:09:16.975795 kubelet[2667]: I0514 18:09:16.975721 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37ebb25f-fa56-4bb4-956a-4abdb7c70a4b-cilium-config-path\") pod \"cilium-operator-5d85765b45-wkbts\" (UID: \"37ebb25f-fa56-4bb4-956a-4abdb7c70a4b\") " pod="kube-system/cilium-operator-5d85765b45-wkbts" May 14 18:09:17.073457 kubelet[2667]: E0514 18:09:17.073358 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:17.075449 containerd[1525]: time="2025-05-14T18:09:17.075395352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kqgrd,Uid:92df6902-9c5f-4b3c-8e68-4d048a94d31c,Namespace:kube-system,Attempt:0,}" May 14 18:09:17.094887 kubelet[2667]: E0514 18:09:17.094722 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:17.097351 containerd[1525]: time="2025-05-14T18:09:17.097258051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dvkd2,Uid:8e8defdf-3357-4334-b4b6-e6c23eaa7a8e,Namespace:kube-system,Attempt:0,}" May 14 18:09:17.156664 containerd[1525]: time="2025-05-14T18:09:17.156598329Z" level=info msg="connecting to shim f90bc7db01940dc2a1defbe8c68682329a85dabd2143bb5d4bb658bb955ab67e" 
address="unix:///run/containerd/s/7fd164d245629dd33521d6da36827674a041219783be931f98e61440525e3321" namespace=k8s.io protocol=ttrpc version=3 May 14 18:09:17.173901 containerd[1525]: time="2025-05-14T18:09:17.173780263Z" level=info msg="connecting to shim 5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c" address="unix:///run/containerd/s/c028f20a074963e21b34cf49b8c38193c8e8d5e3893c1ad5069aae440680e96b" namespace=k8s.io protocol=ttrpc version=3 May 14 18:09:17.202276 systemd[1]: Started cri-containerd-f90bc7db01940dc2a1defbe8c68682329a85dabd2143bb5d4bb658bb955ab67e.scope - libcontainer container f90bc7db01940dc2a1defbe8c68682329a85dabd2143bb5d4bb658bb955ab67e. May 14 18:09:17.235399 systemd[1]: Started cri-containerd-5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c.scope - libcontainer container 5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c. May 14 18:09:17.249520 kubelet[2667]: E0514 18:09:17.249454 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:17.251593 containerd[1525]: time="2025-05-14T18:09:17.251179907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wkbts,Uid:37ebb25f-fa56-4bb4-956a-4abdb7c70a4b,Namespace:kube-system,Attempt:0,}" May 14 18:09:17.271556 containerd[1525]: time="2025-05-14T18:09:17.271516129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kqgrd,Uid:92df6902-9c5f-4b3c-8e68-4d048a94d31c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f90bc7db01940dc2a1defbe8c68682329a85dabd2143bb5d4bb658bb955ab67e\"" May 14 18:09:17.272985 kubelet[2667]: E0514 18:09:17.272755 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:17.276070 
containerd[1525]: time="2025-05-14T18:09:17.275594188Z" level=info msg="CreateContainer within sandbox \"f90bc7db01940dc2a1defbe8c68682329a85dabd2143bb5d4bb658bb955ab67e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 18:09:17.299608 containerd[1525]: time="2025-05-14T18:09:17.299530767Z" level=info msg="connecting to shim 3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4" address="unix:///run/containerd/s/79cb0079adbc18c43d0ab94ec8f7efcae860c83e8fc5cef96f3ff21dc140056b" namespace=k8s.io protocol=ttrpc version=3 May 14 18:09:17.304498 containerd[1525]: time="2025-05-14T18:09:17.304448299Z" level=info msg="Container cf84b1a20fd1da8da39f440290af6d7e8d02a2d5fd676d65d3eeb286099a704c: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:17.313760 containerd[1525]: time="2025-05-14T18:09:17.313696984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dvkd2,Uid:8e8defdf-3357-4334-b4b6-e6c23eaa7a8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\"" May 14 18:09:17.316000 kubelet[2667]: E0514 18:09:17.315907 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:17.322326 containerd[1525]: time="2025-05-14T18:09:17.322261127Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 18:09:17.332173 containerd[1525]: time="2025-05-14T18:09:17.330552078Z" level=info msg="CreateContainer within sandbox \"f90bc7db01940dc2a1defbe8c68682329a85dabd2143bb5d4bb658bb955ab67e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf84b1a20fd1da8da39f440290af6d7e8d02a2d5fd676d65d3eeb286099a704c\"" May 14 18:09:17.335877 containerd[1525]: time="2025-05-14T18:09:17.335808029Z" level=info msg="StartContainer for 
\"cf84b1a20fd1da8da39f440290af6d7e8d02a2d5fd676d65d3eeb286099a704c\"" May 14 18:09:17.344946 containerd[1525]: time="2025-05-14T18:09:17.344880107Z" level=info msg="connecting to shim cf84b1a20fd1da8da39f440290af6d7e8d02a2d5fd676d65d3eeb286099a704c" address="unix:///run/containerd/s/7fd164d245629dd33521d6da36827674a041219783be931f98e61440525e3321" protocol=ttrpc version=3 May 14 18:09:17.356316 systemd[1]: Started cri-containerd-3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4.scope - libcontainer container 3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4. May 14 18:09:17.395321 systemd[1]: Started cri-containerd-cf84b1a20fd1da8da39f440290af6d7e8d02a2d5fd676d65d3eeb286099a704c.scope - libcontainer container cf84b1a20fd1da8da39f440290af6d7e8d02a2d5fd676d65d3eeb286099a704c. May 14 18:09:17.476866 containerd[1525]: time="2025-05-14T18:09:17.476746771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wkbts,Uid:37ebb25f-fa56-4bb4-956a-4abdb7c70a4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\"" May 14 18:09:17.478063 containerd[1525]: time="2025-05-14T18:09:17.477785392Z" level=info msg="StartContainer for \"cf84b1a20fd1da8da39f440290af6d7e8d02a2d5fd676d65d3eeb286099a704c\" returns successfully" May 14 18:09:17.478945 kubelet[2667]: E0514 18:09:17.478475 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:17.701199 kubelet[2667]: E0514 18:09:17.700818 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:17.708579 kubelet[2667]: E0514 18:09:17.708457 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:17.725356 kubelet[2667]: I0514 18:09:17.724732 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kqgrd" podStartSLOduration=1.724707772 podStartE2EDuration="1.724707772s" podCreationTimestamp="2025-05-14 18:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:09:17.724538809 +0000 UTC m=+8.266748556" watchObservedRunningTime="2025-05-14 18:09:17.724707772 +0000 UTC m=+8.266917503" May 14 18:09:18.714397 kubelet[2667]: E0514 18:09:18.714314 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:21.378703 update_engine[1505]: I20250514 18:09:21.377774 1505 update_attempter.cc:509] Updating boot flags... May 14 18:09:23.556631 systemd-timesyncd[1415]: Timed out waiting for reply from 96.245.170.99:123 (2.flatcar.pool.ntp.org). May 14 18:09:24.240165 systemd-resolved[1397]: Clock change detected. Flushing caches. May 14 18:09:24.240855 systemd-timesyncd[1415]: Contacted time server 70.60.65.40:123 (2.flatcar.pool.ntp.org). May 14 18:09:24.240938 systemd-timesyncd[1415]: Initial clock synchronization to Wed 2025-05-14 18:09:24.239919 UTC. May 14 18:09:24.279016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332157094.mount: Deactivated successfully. 
May 14 18:09:27.530634 containerd[1525]: time="2025-05-14T18:09:27.458185243Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:09:27.533611 containerd[1525]: time="2025-05-14T18:09:27.495741752Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 14 18:09:27.534137 containerd[1525]: time="2025-05-14T18:09:27.534042101Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:09:27.537030 containerd[1525]: time="2025-05-14T18:09:27.536730646Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.591561347s" May 14 18:09:27.537030 containerd[1525]: time="2025-05-14T18:09:27.536854295Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 14 18:09:27.539156 containerd[1525]: time="2025-05-14T18:09:27.538705224Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 18:09:27.544364 containerd[1525]: time="2025-05-14T18:09:27.544286013Z" level=info msg="CreateContainer within sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 18:09:27.620446 containerd[1525]: time="2025-05-14T18:09:27.619963870Z" level=info msg="Container 1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:27.626350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989915156.mount: Deactivated successfully. May 14 18:09:27.632444 containerd[1525]: time="2025-05-14T18:09:27.632349790Z" level=info msg="CreateContainer within sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\"" May 14 18:09:27.638356 containerd[1525]: time="2025-05-14T18:09:27.638285308Z" level=info msg="StartContainer for \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\"" May 14 18:09:27.644763 containerd[1525]: time="2025-05-14T18:09:27.644197406Z" level=info msg="connecting to shim 1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f" address="unix:///run/containerd/s/c028f20a074963e21b34cf49b8c38193c8e8d5e3893c1ad5069aae440680e96b" protocol=ttrpc version=3 May 14 18:09:27.683840 systemd[1]: Started cri-containerd-1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f.scope - libcontainer container 1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f. May 14 18:09:27.789371 containerd[1525]: time="2025-05-14T18:09:27.789198674Z" level=info msg="StartContainer for \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\" returns successfully" May 14 18:09:27.806347 systemd[1]: cri-containerd-1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f.scope: Deactivated successfully. 
May 14 18:09:27.874469 containerd[1525]: time="2025-05-14T18:09:27.874378725Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\" id:\"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\" pid:3093 exited_at:{seconds:1747246167 nanos:811302094}" May 14 18:09:27.874765 containerd[1525]: time="2025-05-14T18:09:27.874491338Z" level=info msg="received exit event container_id:\"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\" id:\"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\" pid:3093 exited_at:{seconds:1747246167 nanos:811302094}" May 14 18:09:27.923390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f-rootfs.mount: Deactivated successfully. May 14 18:09:28.355501 systemd[1]: Started sshd@7-164.90.152.250:22-218.92.0.215:45716.service - OpenSSH per-connection server daemon (218.92.0.215:45716). 
May 14 18:09:28.373968 kubelet[2667]: E0514 18:09:28.373848 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:28.386695 containerd[1525]: time="2025-05-14T18:09:28.385862714Z" level=info msg="CreateContainer within sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 18:09:28.462501 containerd[1525]: time="2025-05-14T18:09:28.461911701Z" level=info msg="Container 3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:28.473376 containerd[1525]: time="2025-05-14T18:09:28.473291479Z" level=info msg="CreateContainer within sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\"" May 14 18:09:28.474519 containerd[1525]: time="2025-05-14T18:09:28.474353824Z" level=info msg="StartContainer for \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\"" May 14 18:09:28.478086 containerd[1525]: time="2025-05-14T18:09:28.477863415Z" level=info msg="connecting to shim 3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803" address="unix:///run/containerd/s/c028f20a074963e21b34cf49b8c38193c8e8d5e3893c1ad5069aae440680e96b" protocol=ttrpc version=3 May 14 18:09:28.511335 systemd[1]: Started cri-containerd-3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803.scope - libcontainer container 3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803. May 14 18:09:28.548469 sshd[3125]: Unable to negotiate with 218.92.0.215 port 45716: no matching key exchange method found. 
Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] May 14 18:09:28.549143 systemd[1]: sshd@7-164.90.152.250:22-218.92.0.215:45716.service: Deactivated successfully. May 14 18:09:28.572642 containerd[1525]: time="2025-05-14T18:09:28.572576853Z" level=info msg="StartContainer for \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\" returns successfully" May 14 18:09:28.606189 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 18:09:28.606771 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 18:09:28.607028 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 18:09:28.612779 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 18:09:28.633949 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 18:09:28.636276 systemd[1]: cri-containerd-3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803.scope: Deactivated successfully. May 14 18:09:28.673227 containerd[1525]: time="2025-05-14T18:09:28.673162606Z" level=info msg="received exit event container_id:\"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\" id:\"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\" pid:3141 exited_at:{seconds:1747246168 nanos:639806267}" May 14 18:09:28.675869 containerd[1525]: time="2025-05-14T18:09:28.675809538Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\" id:\"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\" pid:3141 exited_at:{seconds:1747246168 nanos:639806267}" May 14 18:09:28.695242 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 14 18:09:28.727118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803-rootfs.mount: Deactivated successfully. May 14 18:09:29.380108 kubelet[2667]: E0514 18:09:29.380068 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:29.384453 containerd[1525]: time="2025-05-14T18:09:29.384328953Z" level=info msg="CreateContainer within sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 18:09:29.433707 containerd[1525]: time="2025-05-14T18:09:29.433617247Z" level=info msg="Container 54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:29.440293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2806681979.mount: Deactivated successfully. 
May 14 18:09:29.453206 containerd[1525]: time="2025-05-14T18:09:29.453126467Z" level=info msg="CreateContainer within sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\"" May 14 18:09:29.455192 containerd[1525]: time="2025-05-14T18:09:29.454936466Z" level=info msg="StartContainer for \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\"" May 14 18:09:29.459675 containerd[1525]: time="2025-05-14T18:09:29.459382024Z" level=info msg="connecting to shim 54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9" address="unix:///run/containerd/s/c028f20a074963e21b34cf49b8c38193c8e8d5e3893c1ad5069aae440680e96b" protocol=ttrpc version=3 May 14 18:09:29.499804 systemd[1]: Started cri-containerd-54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9.scope - libcontainer container 54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9. May 14 18:09:29.579526 containerd[1525]: time="2025-05-14T18:09:29.579328513Z" level=info msg="StartContainer for \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\" returns successfully" May 14 18:09:29.581501 systemd[1]: cri-containerd-54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9.scope: Deactivated successfully. 
May 14 18:09:29.586303 containerd[1525]: time="2025-05-14T18:09:29.586219561Z" level=info msg="received exit event container_id:\"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\" id:\"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\" pid:3193 exited_at:{seconds:1747246169 nanos:585001200}" May 14 18:09:29.587151 containerd[1525]: time="2025-05-14T18:09:29.587095401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\" id:\"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\" pid:3193 exited_at:{seconds:1747246169 nanos:585001200}" May 14 18:09:29.659066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9-rootfs.mount: Deactivated successfully. May 14 18:09:30.401675 kubelet[2667]: E0514 18:09:30.401618 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:30.413401 containerd[1525]: time="2025-05-14T18:09:30.413139621Z" level=info msg="CreateContainer within sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 18:09:30.448790 containerd[1525]: time="2025-05-14T18:09:30.448704145Z" level=info msg="Container 9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:30.455210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833249024.mount: Deactivated successfully. 
May 14 18:09:30.487381 containerd[1525]: time="2025-05-14T18:09:30.487304323Z" level=info msg="CreateContainer within sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\"" May 14 18:09:30.491325 containerd[1525]: time="2025-05-14T18:09:30.491044394Z" level=info msg="StartContainer for \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\"" May 14 18:09:30.502431 containerd[1525]: time="2025-05-14T18:09:30.502124832Z" level=info msg="connecting to shim 9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d" address="unix:///run/containerd/s/c028f20a074963e21b34cf49b8c38193c8e8d5e3893c1ad5069aae440680e96b" protocol=ttrpc version=3 May 14 18:09:30.538822 systemd[1]: Started cri-containerd-9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d.scope - libcontainer container 9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d. May 14 18:09:30.612018 systemd[1]: cri-containerd-9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d.scope: Deactivated successfully. 
May 14 18:09:30.616148 containerd[1525]: time="2025-05-14T18:09:30.614190908Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e8defdf_3357_4334_b4b6_e6c23eaa7a8e.slice/cri-containerd-9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d.scope/memory.events\": no such file or directory" May 14 18:09:30.627898 containerd[1525]: time="2025-05-14T18:09:30.627508309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\" id:\"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\" pid:3243 exited_at:{seconds:1747246170 nanos:616220422}" May 14 18:09:30.643278 containerd[1525]: time="2025-05-14T18:09:30.643153132Z" level=info msg="received exit event container_id:\"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\" id:\"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\" pid:3243 exited_at:{seconds:1747246170 nanos:616220422}" May 14 18:09:30.648874 containerd[1525]: time="2025-05-14T18:09:30.648705534Z" level=info msg="StartContainer for \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\" returns successfully" May 14 18:09:30.719184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d-rootfs.mount: Deactivated successfully. 
May 14 18:09:30.936958 containerd[1525]: time="2025-05-14T18:09:30.936560008Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:09:30.939064 containerd[1525]: time="2025-05-14T18:09:30.938989409Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 14 18:09:30.940079 containerd[1525]: time="2025-05-14T18:09:30.939985833Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:09:30.942019 containerd[1525]: time="2025-05-14T18:09:30.941957705Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.401362921s" May 14 18:09:30.942438 containerd[1525]: time="2025-05-14T18:09:30.942252369Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 14 18:09:30.949320 containerd[1525]: time="2025-05-14T18:09:30.949233798Z" level=info msg="CreateContainer within sandbox \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 18:09:30.973565 containerd[1525]: time="2025-05-14T18:09:30.970209812Z" level=info msg="Container 
6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:30.983544 containerd[1525]: time="2025-05-14T18:09:30.983483256Z" level=info msg="CreateContainer within sandbox \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\"" May 14 18:09:30.984287 containerd[1525]: time="2025-05-14T18:09:30.984253045Z" level=info msg="StartContainer for \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\"" May 14 18:09:30.986369 containerd[1525]: time="2025-05-14T18:09:30.985611273Z" level=info msg="connecting to shim 6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49" address="unix:///run/containerd/s/79cb0079adbc18c43d0ab94ec8f7efcae860c83e8fc5cef96f3ff21dc140056b" protocol=ttrpc version=3 May 14 18:09:31.014839 systemd[1]: Started cri-containerd-6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49.scope - libcontainer container 6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49. 
May 14 18:09:31.084145 containerd[1525]: time="2025-05-14T18:09:31.084080758Z" level=info msg="StartContainer for \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" returns successfully" May 14 18:09:31.430262 kubelet[2667]: E0514 18:09:31.430208 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:31.438083 kubelet[2667]: E0514 18:09:31.438013 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:31.438974 containerd[1525]: time="2025-05-14T18:09:31.438131410Z" level=info msg="CreateContainer within sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 18:09:31.567321 containerd[1525]: time="2025-05-14T18:09:31.566688336Z" level=info msg="Container 13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:31.586902 containerd[1525]: time="2025-05-14T18:09:31.586803402Z" level=info msg="CreateContainer within sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\"" May 14 18:09:31.587805 containerd[1525]: time="2025-05-14T18:09:31.587741447Z" level=info msg="StartContainer for \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\"" May 14 18:09:31.591708 containerd[1525]: time="2025-05-14T18:09:31.591601679Z" level=info msg="connecting to shim 13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29" address="unix:///run/containerd/s/c028f20a074963e21b34cf49b8c38193c8e8d5e3893c1ad5069aae440680e96b" protocol=ttrpc 
version=3 May 14 18:09:31.672781 systemd[1]: Started cri-containerd-13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29.scope - libcontainer container 13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29. May 14 18:09:31.825019 containerd[1525]: time="2025-05-14T18:09:31.824831219Z" level=info msg="StartContainer for \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" returns successfully" May 14 18:09:32.168890 containerd[1525]: time="2025-05-14T18:09:32.168393619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" id:\"3848be51365c8f609d32adb53ae2383406f7c76c82256ca30d44021a094f0b6c\" pid:3344 exited_at:{seconds:1747246172 nanos:168015690}" May 14 18:09:32.260801 kubelet[2667]: I0514 18:09:32.260744 2667 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 18:09:32.333651 kubelet[2667]: I0514 18:09:32.333582 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-wkbts" podStartSLOduration=3.492744829 podStartE2EDuration="16.33355678s" podCreationTimestamp="2025-05-14 18:09:16 +0000 UTC" firstStartedPulling="2025-05-14 18:09:17.480912674 +0000 UTC m=+8.023122393" lastFinishedPulling="2025-05-14 18:09:30.944185462 +0000 UTC m=+20.863934344" observedRunningTime="2025-05-14 18:09:31.760828208 +0000 UTC m=+21.680577105" watchObservedRunningTime="2025-05-14 18:09:32.33355678 +0000 UTC m=+22.253305763" May 14 18:09:32.343678 systemd[1]: Created slice kubepods-burstable-podf92d9e79_66d5_4da5_b69b_75136b1a701f.slice - libcontainer container kubepods-burstable-podf92d9e79_66d5_4da5_b69b_75136b1a701f.slice. May 14 18:09:32.361435 systemd[1]: Created slice kubepods-burstable-pod5ad81bb4_33f1_462b_87ec_41b481b8feda.slice - libcontainer container kubepods-burstable-pod5ad81bb4_33f1_462b_87ec_41b481b8feda.slice. 
May 14 18:09:32.403494 kubelet[2667]: I0514 18:09:32.403207 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92d9e79-66d5-4da5-b69b-75136b1a701f-config-volume\") pod \"coredns-6f6b679f8f-69t9d\" (UID: \"f92d9e79-66d5-4da5-b69b-75136b1a701f\") " pod="kube-system/coredns-6f6b679f8f-69t9d" May 14 18:09:32.403494 kubelet[2667]: I0514 18:09:32.403262 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwn2c\" (UniqueName: \"kubernetes.io/projected/f92d9e79-66d5-4da5-b69b-75136b1a701f-kube-api-access-bwn2c\") pod \"coredns-6f6b679f8f-69t9d\" (UID: \"f92d9e79-66d5-4da5-b69b-75136b1a701f\") " pod="kube-system/coredns-6f6b679f8f-69t9d" May 14 18:09:32.403494 kubelet[2667]: I0514 18:09:32.403308 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ad81bb4-33f1-462b-87ec-41b481b8feda-config-volume\") pod \"coredns-6f6b679f8f-65jlw\" (UID: \"5ad81bb4-33f1-462b-87ec-41b481b8feda\") " pod="kube-system/coredns-6f6b679f8f-65jlw" May 14 18:09:32.403494 kubelet[2667]: I0514 18:09:32.403331 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcv4r\" (UniqueName: \"kubernetes.io/projected/5ad81bb4-33f1-462b-87ec-41b481b8feda-kube-api-access-gcv4r\") pod \"coredns-6f6b679f8f-65jlw\" (UID: \"5ad81bb4-33f1-462b-87ec-41b481b8feda\") " pod="kube-system/coredns-6f6b679f8f-65jlw" May 14 18:09:32.458445 kubelet[2667]: E0514 18:09:32.458375 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:32.459772 kubelet[2667]: E0514 18:09:32.459729 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:32.955390 kubelet[2667]: E0514 18:09:32.955306 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:32.958150 containerd[1525]: time="2025-05-14T18:09:32.958101435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69t9d,Uid:f92d9e79-66d5-4da5-b69b-75136b1a701f,Namespace:kube-system,Attempt:0,}" May 14 18:09:32.967174 kubelet[2667]: E0514 18:09:32.966871 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:32.968008 containerd[1525]: time="2025-05-14T18:09:32.967963131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-65jlw,Uid:5ad81bb4-33f1-462b-87ec-41b481b8feda,Namespace:kube-system,Attempt:0,}" May 14 18:09:33.463626 kubelet[2667]: E0514 18:09:33.463523 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:34.465904 kubelet[2667]: E0514 18:09:34.465864 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:34.807893 systemd-networkd[1447]: cilium_host: Link UP May 14 18:09:34.808144 systemd-networkd[1447]: cilium_net: Link UP May 14 18:09:34.808373 systemd-networkd[1447]: cilium_host: Gained carrier May 14 18:09:34.810387 systemd-networkd[1447]: cilium_net: Gained carrier May 14 18:09:34.979037 systemd-networkd[1447]: cilium_vxlan: Link UP May 14 18:09:34.979052 systemd-networkd[1447]: 
cilium_vxlan: Gained carrier May 14 18:09:35.348796 systemd-networkd[1447]: cilium_host: Gained IPv6LL May 14 18:09:35.426490 kernel: NET: Registered PF_ALG protocol family May 14 18:09:35.669750 systemd-networkd[1447]: cilium_net: Gained IPv6LL May 14 18:09:36.180908 systemd-networkd[1447]: cilium_vxlan: Gained IPv6LL May 14 18:09:36.519054 systemd-networkd[1447]: lxc_health: Link UP May 14 18:09:36.525331 systemd-networkd[1447]: lxc_health: Gained carrier May 14 18:09:37.046453 kernel: eth0: renamed from tmpa5b4d May 14 18:09:37.050107 systemd-networkd[1447]: lxc72fb78b6db83: Link UP May 14 18:09:37.053335 systemd-networkd[1447]: lxc72fb78b6db83: Gained carrier May 14 18:09:37.090181 systemd-networkd[1447]: lxc0586c5c2f605: Link UP May 14 18:09:37.101501 kernel: eth0: renamed from tmpd2fab May 14 18:09:37.104333 systemd-networkd[1447]: lxc0586c5c2f605: Gained carrier May 14 18:09:37.653536 systemd-networkd[1447]: lxc_health: Gained IPv6LL May 14 18:09:37.720998 kubelet[2667]: E0514 18:09:37.720656 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:37.751577 kubelet[2667]: I0514 18:09:37.750970 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dvkd2" podStartSLOduration=12.153052986 podStartE2EDuration="21.750914171s" podCreationTimestamp="2025-05-14 18:09:16 +0000 UTC" firstStartedPulling="2025-05-14 18:09:17.318033612 +0000 UTC m=+7.860243327" lastFinishedPulling="2025-05-14 18:09:27.538355635 +0000 UTC m=+17.458104512" observedRunningTime="2025-05-14 18:09:32.769659717 +0000 UTC m=+22.689408605" watchObservedRunningTime="2025-05-14 18:09:37.750914171 +0000 UTC m=+27.670663079" May 14 18:09:38.229167 systemd-networkd[1447]: lxc72fb78b6db83: Gained IPv6LL May 14 18:09:38.491305 kubelet[2667]: E0514 18:09:38.491151 2667 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:38.996941 systemd-networkd[1447]: lxc0586c5c2f605: Gained IPv6LL May 14 18:09:39.494354 kubelet[2667]: E0514 18:09:39.494302 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:42.980345 containerd[1525]: time="2025-05-14T18:09:42.979801792Z" level=info msg="connecting to shim a5b4d0e0b02c3142cdb56c170fe3e458322ee1e52f912ccfe19e3ee91b5fa8d2" address="unix:///run/containerd/s/245dda415405139462530485dcc087afc699f5ff8cad077702da3c4f21f98df5" namespace=k8s.io protocol=ttrpc version=3 May 14 18:09:42.986816 containerd[1525]: time="2025-05-14T18:09:42.986758591Z" level=info msg="connecting to shim d2fab2d95884d2e778ff2dd84a67f37d7c4028caa6c8bca17cc96253b38c9451" address="unix:///run/containerd/s/7acc3b367d2e782c83770c66dac9561d1257b30370090c3d0a54ff31af732f69" namespace=k8s.io protocol=ttrpc version=3 May 14 18:09:43.069077 systemd[1]: Started cri-containerd-a5b4d0e0b02c3142cdb56c170fe3e458322ee1e52f912ccfe19e3ee91b5fa8d2.scope - libcontainer container a5b4d0e0b02c3142cdb56c170fe3e458322ee1e52f912ccfe19e3ee91b5fa8d2. May 14 18:09:43.088259 systemd[1]: Started cri-containerd-d2fab2d95884d2e778ff2dd84a67f37d7c4028caa6c8bca17cc96253b38c9451.scope - libcontainer container d2fab2d95884d2e778ff2dd84a67f37d7c4028caa6c8bca17cc96253b38c9451. 
May 14 18:09:43.225138 containerd[1525]: time="2025-05-14T18:09:43.225072218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69t9d,Uid:f92d9e79-66d5-4da5-b69b-75136b1a701f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5b4d0e0b02c3142cdb56c170fe3e458322ee1e52f912ccfe19e3ee91b5fa8d2\"" May 14 18:09:43.226728 kubelet[2667]: E0514 18:09:43.226652 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:43.232593 containerd[1525]: time="2025-05-14T18:09:43.232269485Z" level=info msg="CreateContainer within sandbox \"a5b4d0e0b02c3142cdb56c170fe3e458322ee1e52f912ccfe19e3ee91b5fa8d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:09:43.248583 containerd[1525]: time="2025-05-14T18:09:43.248505008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-65jlw,Uid:5ad81bb4-33f1-462b-87ec-41b481b8feda,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2fab2d95884d2e778ff2dd84a67f37d7c4028caa6c8bca17cc96253b38c9451\"" May 14 18:09:43.250912 kubelet[2667]: E0514 18:09:43.250500 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:43.256284 containerd[1525]: time="2025-05-14T18:09:43.256220159Z" level=info msg="CreateContainer within sandbox \"d2fab2d95884d2e778ff2dd84a67f37d7c4028caa6c8bca17cc96253b38c9451\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:09:43.278869 containerd[1525]: time="2025-05-14T18:09:43.278537964Z" level=info msg="Container 14faa2d4127bfcb9f70ed07286b2845da4f50e929da53004b014fc21baf4ab7a: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:43.280741 containerd[1525]: time="2025-05-14T18:09:43.280695068Z" level=info msg="Container 
d3b7129bfd713e4e77793fbe03cda63a42e4a692a651e4be29a2e44282525179: CDI devices from CRI Config.CDIDevices: []" May 14 18:09:43.289340 containerd[1525]: time="2025-05-14T18:09:43.289275698Z" level=info msg="CreateContainer within sandbox \"a5b4d0e0b02c3142cdb56c170fe3e458322ee1e52f912ccfe19e3ee91b5fa8d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14faa2d4127bfcb9f70ed07286b2845da4f50e929da53004b014fc21baf4ab7a\"" May 14 18:09:43.291454 containerd[1525]: time="2025-05-14T18:09:43.290236763Z" level=info msg="StartContainer for \"14faa2d4127bfcb9f70ed07286b2845da4f50e929da53004b014fc21baf4ab7a\"" May 14 18:09:43.294258 containerd[1525]: time="2025-05-14T18:09:43.294203196Z" level=info msg="connecting to shim 14faa2d4127bfcb9f70ed07286b2845da4f50e929da53004b014fc21baf4ab7a" address="unix:///run/containerd/s/245dda415405139462530485dcc087afc699f5ff8cad077702da3c4f21f98df5" protocol=ttrpc version=3 May 14 18:09:43.300587 containerd[1525]: time="2025-05-14T18:09:43.300392878Z" level=info msg="CreateContainer within sandbox \"d2fab2d95884d2e778ff2dd84a67f37d7c4028caa6c8bca17cc96253b38c9451\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3b7129bfd713e4e77793fbe03cda63a42e4a692a651e4be29a2e44282525179\"" May 14 18:09:43.302531 containerd[1525]: time="2025-05-14T18:09:43.302153001Z" level=info msg="StartContainer for \"d3b7129bfd713e4e77793fbe03cda63a42e4a692a651e4be29a2e44282525179\"" May 14 18:09:43.304775 containerd[1525]: time="2025-05-14T18:09:43.304708259Z" level=info msg="connecting to shim d3b7129bfd713e4e77793fbe03cda63a42e4a692a651e4be29a2e44282525179" address="unix:///run/containerd/s/7acc3b367d2e782c83770c66dac9561d1257b30370090c3d0a54ff31af732f69" protocol=ttrpc version=3 May 14 18:09:43.342836 systemd[1]: Started cri-containerd-14faa2d4127bfcb9f70ed07286b2845da4f50e929da53004b014fc21baf4ab7a.scope - libcontainer container 14faa2d4127bfcb9f70ed07286b2845da4f50e929da53004b014fc21baf4ab7a. 
May 14 18:09:43.345288 systemd[1]: Started cri-containerd-d3b7129bfd713e4e77793fbe03cda63a42e4a692a651e4be29a2e44282525179.scope - libcontainer container d3b7129bfd713e4e77793fbe03cda63a42e4a692a651e4be29a2e44282525179. May 14 18:09:43.433802 containerd[1525]: time="2025-05-14T18:09:43.433624026Z" level=info msg="StartContainer for \"14faa2d4127bfcb9f70ed07286b2845da4f50e929da53004b014fc21baf4ab7a\" returns successfully" May 14 18:09:43.436469 containerd[1525]: time="2025-05-14T18:09:43.435657737Z" level=info msg="StartContainer for \"d3b7129bfd713e4e77793fbe03cda63a42e4a692a651e4be29a2e44282525179\" returns successfully" May 14 18:09:43.516570 kubelet[2667]: E0514 18:09:43.515906 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:43.523530 kubelet[2667]: E0514 18:09:43.523474 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:43.549124 kubelet[2667]: I0514 18:09:43.549016 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-65jlw" podStartSLOduration=27.548983192 podStartE2EDuration="27.548983192s" podCreationTimestamp="2025-05-14 18:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:09:43.546804439 +0000 UTC m=+33.466553321" watchObservedRunningTime="2025-05-14 18:09:43.548983192 +0000 UTC m=+33.468732079" May 14 18:09:43.582817 kubelet[2667]: I0514 18:09:43.582742 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-69t9d" podStartSLOduration=27.582709024 podStartE2EDuration="27.582709024s" podCreationTimestamp="2025-05-14 18:09:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:09:43.58096704 +0000 UTC m=+33.500715933" watchObservedRunningTime="2025-05-14 18:09:43.582709024 +0000 UTC m=+33.502457891" May 14 18:09:43.948725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3684795934.mount: Deactivated successfully. May 14 18:09:44.525260 kubelet[2667]: E0514 18:09:44.525108 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:44.527435 kubelet[2667]: E0514 18:09:44.526966 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:45.528539 kubelet[2667]: E0514 18:09:45.527868 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:45.528539 kubelet[2667]: E0514 18:09:45.527958 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:09:58.270257 systemd[1]: Started sshd@8-164.90.152.250:22-139.178.89.65:40270.service - OpenSSH per-connection server daemon (139.178.89.65:40270). May 14 18:09:58.367585 sshd[3999]: Accepted publickey for core from 139.178.89.65 port 40270 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:09:58.369952 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:09:58.377071 systemd-logind[1501]: New session 8 of user core. May 14 18:09:58.385799 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 14 18:09:59.036060 sshd[4001]: Connection closed by 139.178.89.65 port 40270 May 14 18:09:59.037347 sshd-session[3999]: pam_unix(sshd:session): session closed for user core May 14 18:09:59.042499 systemd-logind[1501]: Session 8 logged out. Waiting for processes to exit. May 14 18:09:59.042822 systemd[1]: sshd@8-164.90.152.250:22-139.178.89.65:40270.service: Deactivated successfully. May 14 18:09:59.047781 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:09:59.051543 systemd-logind[1501]: Removed session 8. May 14 18:10:04.056149 systemd[1]: Started sshd@9-164.90.152.250:22-139.178.89.65:40272.service - OpenSSH per-connection server daemon (139.178.89.65:40272). May 14 18:10:04.148677 sshd[4014]: Accepted publickey for core from 139.178.89.65 port 40272 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:10:04.152040 sshd-session[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:10:04.158807 systemd-logind[1501]: New session 9 of user core. May 14 18:10:04.166846 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:10:04.355315 sshd[4016]: Connection closed by 139.178.89.65 port 40272 May 14 18:10:04.355790 sshd-session[4014]: pam_unix(sshd:session): session closed for user core May 14 18:10:04.364307 systemd[1]: sshd@9-164.90.152.250:22-139.178.89.65:40272.service: Deactivated successfully. May 14 18:10:04.367705 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:10:04.369431 systemd-logind[1501]: Session 9 logged out. Waiting for processes to exit. May 14 18:10:04.372404 systemd-logind[1501]: Removed session 9. May 14 18:10:09.370935 systemd[1]: Started sshd@10-164.90.152.250:22-139.178.89.65:54822.service - OpenSSH per-connection server daemon (139.178.89.65:54822). 
May 14 18:10:09.442586 sshd[4029]: Accepted publickey for core from 139.178.89.65 port 54822 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:10:09.444485 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:10:09.453041 systemd-logind[1501]: New session 10 of user core. May 14 18:10:09.462861 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 18:10:09.630619 sshd[4031]: Connection closed by 139.178.89.65 port 54822 May 14 18:10:09.632707 sshd-session[4029]: pam_unix(sshd:session): session closed for user core May 14 18:10:09.642398 systemd[1]: sshd@10-164.90.152.250:22-139.178.89.65:54822.service: Deactivated successfully. May 14 18:10:09.648624 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:10:09.651212 systemd-logind[1501]: Session 10 logged out. Waiting for processes to exit. May 14 18:10:09.653383 systemd-logind[1501]: Removed session 10. May 14 18:10:14.646556 systemd[1]: Started sshd@11-164.90.152.250:22-139.178.89.65:54824.service - OpenSSH per-connection server daemon (139.178.89.65:54824). May 14 18:10:14.714514 sshd[4046]: Accepted publickey for core from 139.178.89.65 port 54824 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:10:14.716504 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:10:14.725167 systemd-logind[1501]: New session 11 of user core. May 14 18:10:14.729777 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 18:10:14.906976 sshd[4048]: Connection closed by 139.178.89.65 port 54824 May 14 18:10:14.907849 sshd-session[4046]: pam_unix(sshd:session): session closed for user core May 14 18:10:14.921959 systemd[1]: sshd@11-164.90.152.250:22-139.178.89.65:54824.service: Deactivated successfully. May 14 18:10:14.925896 systemd[1]: session-11.scope: Deactivated successfully. 
May 14 18:10:14.927725 systemd-logind[1501]: Session 11 logged out. Waiting for processes to exit.
May 14 18:10:14.933600 systemd[1]: Started sshd@12-164.90.152.250:22-139.178.89.65:54838.service - OpenSSH per-connection server daemon (139.178.89.65:54838).
May 14 18:10:14.935500 systemd-logind[1501]: Removed session 11.
May 14 18:10:15.013281 sshd[4060]: Accepted publickey for core from 139.178.89.65 port 54838 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:15.015891 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:15.024561 systemd-logind[1501]: New session 12 of user core.
May 14 18:10:15.029739 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 18:10:15.281468 sshd[4062]: Connection closed by 139.178.89.65 port 54838
May 14 18:10:15.284835 sshd-session[4060]: pam_unix(sshd:session): session closed for user core
May 14 18:10:15.298247 systemd[1]: sshd@12-164.90.152.250:22-139.178.89.65:54838.service: Deactivated successfully.
May 14 18:10:15.304546 systemd[1]: session-12.scope: Deactivated successfully.
May 14 18:10:15.307279 systemd-logind[1501]: Session 12 logged out. Waiting for processes to exit.
May 14 18:10:15.320337 systemd[1]: Started sshd@13-164.90.152.250:22-139.178.89.65:54846.service - OpenSSH per-connection server daemon (139.178.89.65:54846).
May 14 18:10:15.324497 systemd-logind[1501]: Removed session 12.
May 14 18:10:15.396209 sshd[4072]: Accepted publickey for core from 139.178.89.65 port 54846 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:15.398815 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:15.406639 systemd-logind[1501]: New session 13 of user core.
May 14 18:10:15.411764 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 18:10:15.585784 sshd[4074]: Connection closed by 139.178.89.65 port 54846
May 14 18:10:15.588068 sshd-session[4072]: pam_unix(sshd:session): session closed for user core
May 14 18:10:15.592990 systemd[1]: sshd@13-164.90.152.250:22-139.178.89.65:54846.service: Deactivated successfully.
May 14 18:10:15.598892 systemd[1]: session-13.scope: Deactivated successfully.
May 14 18:10:15.603982 systemd-logind[1501]: Session 13 logged out. Waiting for processes to exit.
May 14 18:10:15.605543 systemd-logind[1501]: Removed session 13.
May 14 18:10:19.253450 kubelet[2667]: E0514 18:10:19.253114 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:10:20.602761 systemd[1]: Started sshd@14-164.90.152.250:22-139.178.89.65:43642.service - OpenSSH per-connection server daemon (139.178.89.65:43642).
May 14 18:10:20.696379 sshd[4088]: Accepted publickey for core from 139.178.89.65 port 43642 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:20.699115 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:20.708698 systemd-logind[1501]: New session 14 of user core.
May 14 18:10:20.716855 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 18:10:20.879789 sshd[4090]: Connection closed by 139.178.89.65 port 43642
May 14 18:10:20.880604 sshd-session[4088]: pam_unix(sshd:session): session closed for user core
May 14 18:10:20.886262 systemd[1]: sshd@14-164.90.152.250:22-139.178.89.65:43642.service: Deactivated successfully.
May 14 18:10:20.889513 systemd[1]: session-14.scope: Deactivated successfully.
May 14 18:10:20.891401 systemd-logind[1501]: Session 14 logged out. Waiting for processes to exit.
May 14 18:10:20.894843 systemd-logind[1501]: Removed session 14.
May 14 18:10:25.896685 systemd[1]: Started sshd@15-164.90.152.250:22-139.178.89.65:43648.service - OpenSSH per-connection server daemon (139.178.89.65:43648).
May 14 18:10:25.963385 sshd[4102]: Accepted publickey for core from 139.178.89.65 port 43648 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:25.965554 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:25.972830 systemd-logind[1501]: New session 15 of user core.
May 14 18:10:25.984747 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 18:10:26.137805 sshd[4104]: Connection closed by 139.178.89.65 port 43648
May 14 18:10:26.140716 sshd-session[4102]: pam_unix(sshd:session): session closed for user core
May 14 18:10:26.153600 systemd[1]: sshd@15-164.90.152.250:22-139.178.89.65:43648.service: Deactivated successfully.
May 14 18:10:26.157430 systemd[1]: session-15.scope: Deactivated successfully.
May 14 18:10:26.158848 systemd-logind[1501]: Session 15 logged out. Waiting for processes to exit.
May 14 18:10:26.168578 systemd[1]: Started sshd@16-164.90.152.250:22-139.178.89.65:43652.service - OpenSSH per-connection server daemon (139.178.89.65:43652).
May 14 18:10:26.170033 systemd-logind[1501]: Removed session 15.
May 14 18:10:26.245555 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 43652 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:26.249932 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:26.259957 systemd-logind[1501]: New session 16 of user core.
May 14 18:10:26.266829 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 18:10:26.588481 sshd[4117]: Connection closed by 139.178.89.65 port 43652
May 14 18:10:26.589514 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
May 14 18:10:26.601328 systemd[1]: sshd@16-164.90.152.250:22-139.178.89.65:43652.service: Deactivated successfully.
May 14 18:10:26.605159 systemd[1]: session-16.scope: Deactivated successfully.
May 14 18:10:26.608367 systemd-logind[1501]: Session 16 logged out. Waiting for processes to exit.
May 14 18:10:26.612758 systemd[1]: Started sshd@17-164.90.152.250:22-139.178.89.65:59948.service - OpenSSH per-connection server daemon (139.178.89.65:59948).
May 14 18:10:26.613775 systemd-logind[1501]: Removed session 16.
May 14 18:10:26.703471 sshd[4127]: Accepted publickey for core from 139.178.89.65 port 59948 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:26.704860 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:26.712911 systemd-logind[1501]: New session 17 of user core.
May 14 18:10:26.718733 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 18:10:27.252848 kubelet[2667]: E0514 18:10:27.252657 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:10:28.776631 sshd[4129]: Connection closed by 139.178.89.65 port 59948
May 14 18:10:28.777601 sshd-session[4127]: pam_unix(sshd:session): session closed for user core
May 14 18:10:28.795743 systemd[1]: sshd@17-164.90.152.250:22-139.178.89.65:59948.service: Deactivated successfully.
May 14 18:10:28.804061 systemd[1]: session-17.scope: Deactivated successfully.
May 14 18:10:28.808322 systemd-logind[1501]: Session 17 logged out. Waiting for processes to exit.
May 14 18:10:28.820239 systemd[1]: Started sshd@18-164.90.152.250:22-139.178.89.65:59964.service - OpenSSH per-connection server daemon (139.178.89.65:59964).
May 14 18:10:28.823024 systemd-logind[1501]: Removed session 17.
May 14 18:10:28.903504 sshd[4147]: Accepted publickey for core from 139.178.89.65 port 59964 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:28.905633 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:28.913142 systemd-logind[1501]: New session 18 of user core.
May 14 18:10:28.921922 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 18:10:29.328303 sshd[4149]: Connection closed by 139.178.89.65 port 59964
May 14 18:10:29.328827 sshd-session[4147]: pam_unix(sshd:session): session closed for user core
May 14 18:10:29.346808 systemd[1]: sshd@18-164.90.152.250:22-139.178.89.65:59964.service: Deactivated successfully.
May 14 18:10:29.352383 systemd[1]: session-18.scope: Deactivated successfully.
May 14 18:10:29.354311 systemd-logind[1501]: Session 18 logged out. Waiting for processes to exit.
May 14 18:10:29.363338 systemd[1]: Started sshd@19-164.90.152.250:22-139.178.89.65:59974.service - OpenSSH per-connection server daemon (139.178.89.65:59974).
May 14 18:10:29.365146 systemd-logind[1501]: Removed session 18.
May 14 18:10:29.428998 sshd[4159]: Accepted publickey for core from 139.178.89.65 port 59974 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:29.431292 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:29.440181 systemd-logind[1501]: New session 19 of user core.
May 14 18:10:29.453906 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 18:10:29.608871 sshd[4161]: Connection closed by 139.178.89.65 port 59974
May 14 18:10:29.610002 sshd-session[4159]: pam_unix(sshd:session): session closed for user core
May 14 18:10:29.615818 systemd[1]: sshd@19-164.90.152.250:22-139.178.89.65:59974.service: Deactivated successfully.
May 14 18:10:29.619482 systemd[1]: session-19.scope: Deactivated successfully.
May 14 18:10:29.623404 systemd-logind[1501]: Session 19 logged out. Waiting for processes to exit.
May 14 18:10:29.626786 systemd-logind[1501]: Removed session 19.
May 14 18:10:34.631864 systemd[1]: Started sshd@20-164.90.152.250:22-139.178.89.65:59980.service - OpenSSH per-connection server daemon (139.178.89.65:59980).
May 14 18:10:34.702996 sshd[4174]: Accepted publickey for core from 139.178.89.65 port 59980 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:34.706153 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:34.716258 systemd-logind[1501]: New session 20 of user core.
May 14 18:10:34.725900 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 18:10:35.077482 sshd[4176]: Connection closed by 139.178.89.65 port 59980
May 14 18:10:35.078261 sshd-session[4174]: pam_unix(sshd:session): session closed for user core
May 14 18:10:35.084214 systemd-logind[1501]: Session 20 logged out. Waiting for processes to exit.
May 14 18:10:35.086759 systemd[1]: sshd@20-164.90.152.250:22-139.178.89.65:59980.service: Deactivated successfully.
May 14 18:10:35.092401 systemd[1]: session-20.scope: Deactivated successfully.
May 14 18:10:35.098054 systemd-logind[1501]: Removed session 20.
May 14 18:10:37.246908 kubelet[2667]: E0514 18:10:37.246768 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:10:39.246829 kubelet[2667]: E0514 18:10:39.246481 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:10:39.246829 kubelet[2667]: E0514 18:10:39.246708 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:10:40.095900 systemd[1]: Started sshd@21-164.90.152.250:22-139.178.89.65:54136.service - OpenSSH per-connection server daemon (139.178.89.65:54136).
May 14 18:10:40.183463 sshd[4190]: Accepted publickey for core from 139.178.89.65 port 54136 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:40.187211 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:40.195212 systemd-logind[1501]: New session 21 of user core.
May 14 18:10:40.201796 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 18:10:40.431504 sshd[4192]: Connection closed by 139.178.89.65 port 54136
May 14 18:10:40.432320 sshd-session[4190]: pam_unix(sshd:session): session closed for user core
May 14 18:10:40.440068 systemd[1]: sshd@21-164.90.152.250:22-139.178.89.65:54136.service: Deactivated successfully.
May 14 18:10:40.445499 systemd[1]: session-21.scope: Deactivated successfully.
May 14 18:10:40.447262 systemd-logind[1501]: Session 21 logged out. Waiting for processes to exit.
May 14 18:10:40.450342 systemd-logind[1501]: Removed session 21.
May 14 18:10:45.450076 systemd[1]: Started sshd@22-164.90.152.250:22-139.178.89.65:54142.service - OpenSSH per-connection server daemon (139.178.89.65:54142).
May 14 18:10:45.518554 sshd[4204]: Accepted publickey for core from 139.178.89.65 port 54142 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:45.520986 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:45.529090 systemd-logind[1501]: New session 22 of user core.
May 14 18:10:45.536831 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 18:10:45.724228 sshd[4206]: Connection closed by 139.178.89.65 port 54142
May 14 18:10:45.725243 sshd-session[4204]: pam_unix(sshd:session): session closed for user core
May 14 18:10:45.732771 systemd[1]: sshd@22-164.90.152.250:22-139.178.89.65:54142.service: Deactivated successfully.
May 14 18:10:45.737442 systemd[1]: session-22.scope: Deactivated successfully.
May 14 18:10:45.740696 systemd-logind[1501]: Session 22 logged out. Waiting for processes to exit.
May 14 18:10:45.744866 systemd-logind[1501]: Removed session 22.
May 14 18:10:46.248186 kubelet[2667]: E0514 18:10:46.246925 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:10:50.744213 systemd[1]: Started sshd@23-164.90.152.250:22-139.178.89.65:49820.service - OpenSSH per-connection server daemon (139.178.89.65:49820).
May 14 18:10:50.842227 sshd[4221]: Accepted publickey for core from 139.178.89.65 port 49820 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:50.844457 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:50.851453 systemd-logind[1501]: New session 23 of user core.
May 14 18:10:50.858862 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 18:10:51.006698 sshd[4223]: Connection closed by 139.178.89.65 port 49820
May 14 18:10:51.007529 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
May 14 18:10:51.019963 systemd[1]: sshd@23-164.90.152.250:22-139.178.89.65:49820.service: Deactivated successfully.
May 14 18:10:51.023824 systemd[1]: session-23.scope: Deactivated successfully.
May 14 18:10:51.026603 systemd-logind[1501]: Session 23 logged out. Waiting for processes to exit.
May 14 18:10:51.032040 systemd[1]: Started sshd@24-164.90.152.250:22-139.178.89.65:49824.service - OpenSSH per-connection server daemon (139.178.89.65:49824).
May 14 18:10:51.033557 systemd-logind[1501]: Removed session 23.
May 14 18:10:51.102724 sshd[4234]: Accepted publickey for core from 139.178.89.65 port 49824 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:51.105168 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:51.115228 systemd-logind[1501]: New session 24 of user core.
May 14 18:10:51.120728 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 18:10:53.071196 containerd[1525]: time="2025-05-14T18:10:53.071072799Z" level=info msg="StopContainer for \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" with timeout 30 (s)"
May 14 18:10:53.080734 containerd[1525]: time="2025-05-14T18:10:53.080693126Z" level=info msg="Stop container \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" with signal terminated"
May 14 18:10:53.123847 systemd[1]: cri-containerd-6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49.scope: Deactivated successfully.
May 14 18:10:53.128080 containerd[1525]: time="2025-05-14T18:10:53.126141561Z" level=info msg="received exit event container_id:\"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" id:\"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" pid:3283 exited_at:{seconds:1747246253 nanos:125330146}"
May 14 18:10:53.128389 containerd[1525]: time="2025-05-14T18:10:53.126405267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" id:\"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" pid:3283 exited_at:{seconds:1747246253 nanos:125330146}"
May 14 18:10:53.149968 containerd[1525]: time="2025-05-14T18:10:53.149911457Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 18:10:53.157237 containerd[1525]: time="2025-05-14T18:10:53.156789424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" id:\"615a86c59d4cb17ad7a9c97909f76892e8457f296f5788dc831c567b1119e70e\" pid:4262 exited_at:{seconds:1747246253 nanos:156302337}"
May 14 18:10:53.164653 containerd[1525]: time="2025-05-14T18:10:53.164451245Z" level=info msg="StopContainer for \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" with timeout 2 (s)"
May 14 18:10:53.166022 containerd[1525]: time="2025-05-14T18:10:53.165975436Z" level=info msg="Stop container \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" with signal terminated"
May 14 18:10:53.187082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49-rootfs.mount: Deactivated successfully.
May 14 18:10:53.192543 systemd-networkd[1447]: lxc_health: Link DOWN
May 14 18:10:53.192555 systemd-networkd[1447]: lxc_health: Lost carrier
May 14 18:10:53.211475 containerd[1525]: time="2025-05-14T18:10:53.209921549Z" level=info msg="StopContainer for \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" returns successfully"
May 14 18:10:53.212459 containerd[1525]: time="2025-05-14T18:10:53.212220123Z" level=info msg="StopPodSandbox for \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\""
May 14 18:10:53.216536 systemd[1]: cri-containerd-13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29.scope: Deactivated successfully.
May 14 18:10:53.217014 systemd[1]: cri-containerd-13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29.scope: Consumed 10.109s CPU time, 193.1M memory peak, 69M read from disk, 13.3M written to disk.
May 14 18:10:53.221842 containerd[1525]: time="2025-05-14T18:10:53.221797503Z" level=info msg="received exit event container_id:\"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" id:\"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" pid:3315 exited_at:{seconds:1747246253 nanos:221386659}"
May 14 18:10:53.223361 containerd[1525]: time="2025-05-14T18:10:53.223300478Z" level=info msg="Container to stop \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:53.223941 containerd[1525]: time="2025-05-14T18:10:53.223879749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" id:\"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" pid:3315 exited_at:{seconds:1747246253 nanos:221386659}"
May 14 18:10:53.236367 systemd[1]: cri-containerd-3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4.scope: Deactivated successfully.
May 14 18:10:53.241084 containerd[1525]: time="2025-05-14T18:10:53.240988002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" id:\"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" pid:2866 exit_status:137 exited_at:{seconds:1747246253 nanos:240349308}"
May 14 18:10:53.265926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29-rootfs.mount: Deactivated successfully.
May 14 18:10:53.285515 containerd[1525]: time="2025-05-14T18:10:53.285396705Z" level=info msg="StopContainer for \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" returns successfully"
May 14 18:10:53.286384 containerd[1525]: time="2025-05-14T18:10:53.286356942Z" level=info msg="StopPodSandbox for \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\""
May 14 18:10:53.286723 containerd[1525]: time="2025-05-14T18:10:53.286666641Z" level=info msg="Container to stop \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:53.286723 containerd[1525]: time="2025-05-14T18:10:53.286687125Z" level=info msg="Container to stop \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:53.286723 containerd[1525]: time="2025-05-14T18:10:53.286697587Z" level=info msg="Container to stop \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:53.287193 containerd[1525]: time="2025-05-14T18:10:53.286859674Z" level=info msg="Container to stop \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:53.287193 containerd[1525]: time="2025-05-14T18:10:53.286880006Z" level=info msg="Container to stop \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:53.300125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4-rootfs.mount: Deactivated successfully.
May 14 18:10:53.306362 systemd[1]: cri-containerd-5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c.scope: Deactivated successfully.
May 14 18:10:53.311133 containerd[1525]: time="2025-05-14T18:10:53.310894822Z" level=info msg="shim disconnected" id=3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4 namespace=k8s.io
May 14 18:10:53.311666 containerd[1525]: time="2025-05-14T18:10:53.311479200Z" level=warning msg="cleaning up after shim disconnected" id=3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4 namespace=k8s.io
May 14 18:10:53.318387 containerd[1525]: time="2025-05-14T18:10:53.311760713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 18:10:53.354724 containerd[1525]: time="2025-05-14T18:10:53.353568352Z" level=info msg="received exit event sandbox_id:\"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" exit_status:137 exited_at:{seconds:1747246253 nanos:240349308}"
May 14 18:10:53.359043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4-shm.mount: Deactivated successfully.
May 14 18:10:53.379832 containerd[1525]: time="2025-05-14T18:10:53.379768336Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" id:\"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" pid:2811 exit_status:137 exited_at:{seconds:1747246253 nanos:318964872}"
May 14 18:10:53.380706 containerd[1525]: time="2025-05-14T18:10:53.380667001Z" level=info msg="TearDown network for sandbox \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" successfully"
May 14 18:10:53.380880 containerd[1525]: time="2025-05-14T18:10:53.380861186Z" level=info msg="StopPodSandbox for \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" returns successfully"
May 14 18:10:53.398471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c-rootfs.mount: Deactivated successfully.
May 14 18:10:53.403460 containerd[1525]: time="2025-05-14T18:10:53.402623077Z" level=info msg="received exit event sandbox_id:\"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" exit_status:137 exited_at:{seconds:1747246253 nanos:318964872}"
May 14 18:10:53.404315 containerd[1525]: time="2025-05-14T18:10:53.404274297Z" level=info msg="TearDown network for sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" successfully"
May 14 18:10:53.404477 containerd[1525]: time="2025-05-14T18:10:53.404458236Z" level=info msg="StopPodSandbox for \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" returns successfully"
May 14 18:10:53.408489 containerd[1525]: time="2025-05-14T18:10:53.407494779Z" level=info msg="shim disconnected" id=5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c namespace=k8s.io
May 14 18:10:53.408489 containerd[1525]: time="2025-05-14T18:10:53.407550665Z" level=warning msg="cleaning up after shim disconnected" id=5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c namespace=k8s.io
May 14 18:10:53.408489 containerd[1525]: time="2025-05-14T18:10:53.407562948Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 18:10:53.516199 kubelet[2667]: I0514 18:10:53.516118 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37ebb25f-fa56-4bb4-956a-4abdb7c70a4b-cilium-config-path\") pod \"37ebb25f-fa56-4bb4-956a-4abdb7c70a4b\" (UID: \"37ebb25f-fa56-4bb4-956a-4abdb7c70a4b\") "
May 14 18:10:53.516199 kubelet[2667]: I0514 18:10:53.516209 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkgf2\" (UniqueName: \"kubernetes.io/projected/37ebb25f-fa56-4bb4-956a-4abdb7c70a4b-kube-api-access-hkgf2\") pod \"37ebb25f-fa56-4bb4-956a-4abdb7c70a4b\" (UID: \"37ebb25f-fa56-4bb4-956a-4abdb7c70a4b\") "
May 14 18:10:53.519406 kubelet[2667]: I0514 18:10:53.519344 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ebb25f-fa56-4bb4-956a-4abdb7c70a4b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "37ebb25f-fa56-4bb4-956a-4abdb7c70a4b" (UID: "37ebb25f-fa56-4bb4-956a-4abdb7c70a4b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 18:10:53.537834 kubelet[2667]: I0514 18:10:53.537725 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ebb25f-fa56-4bb4-956a-4abdb7c70a4b-kube-api-access-hkgf2" (OuterVolumeSpecName: "kube-api-access-hkgf2") pod "37ebb25f-fa56-4bb4-956a-4abdb7c70a4b" (UID: "37ebb25f-fa56-4bb4-956a-4abdb7c70a4b"). InnerVolumeSpecName "kube-api-access-hkgf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:10:53.617402 kubelet[2667]: I0514 18:10:53.617186 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-etc-cni-netd\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617402 kubelet[2667]: I0514 18:10:53.617267 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-bpf-maps\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617402 kubelet[2667]: I0514 18:10:53.617298 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-xtables-lock\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617402 kubelet[2667]: I0514 18:10:53.617328 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cni-path\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617402 kubelet[2667]: I0514 18:10:53.617367 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-hubble-tls\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617402 kubelet[2667]: I0514 18:10:53.617398 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-clustermesh-secrets\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617874 kubelet[2667]: I0514 18:10:53.617472 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-host-proc-sys-kernel\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617874 kubelet[2667]: I0514 18:10:53.617499 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-run\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617874 kubelet[2667]: I0514 18:10:53.617527 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-hostproc\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617874 kubelet[2667]: I0514 18:10:53.617556 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-config-path\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617874 kubelet[2667]: I0514 18:10:53.617585 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4htcp\" (UniqueName: \"kubernetes.io/projected/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-kube-api-access-4htcp\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.617874 kubelet[2667]: I0514 18:10:53.617611 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-lib-modules\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.618186 kubelet[2667]: I0514 18:10:53.617635 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-host-proc-sys-net\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.618186 kubelet[2667]: I0514 18:10:53.617658 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-cgroup\") pod \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\" (UID: \"8e8defdf-3357-4334-b4b6-e6c23eaa7a8e\") "
May 14 18:10:53.618186 kubelet[2667]: I0514 18:10:53.617717 2667 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37ebb25f-fa56-4bb4-956a-4abdb7c70a4b-cilium-config-path\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\""
May 14 18:10:53.618186 kubelet[2667]: I0514 18:10:53.617738 2667 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hkgf2\" (UniqueName: \"kubernetes.io/projected/37ebb25f-fa56-4bb4-956a-4abdb7c70a4b-kube-api-access-hkgf2\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\""
May 14 18:10:53.618186 kubelet[2667]: I0514 18:10:53.617826 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:53.618186 kubelet[2667]: I0514 18:10:53.617886 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:53.619970 kubelet[2667]: I0514 18:10:53.617909 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:53.619970 kubelet[2667]: I0514 18:10:53.617946 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:53.619970 kubelet[2667]: I0514 18:10:53.617967 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cni-path" (OuterVolumeSpecName: "cni-path") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:53.619970 kubelet[2667]: I0514 18:10:53.619109 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-hostproc" (OuterVolumeSpecName: "hostproc") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:53.631488 kubelet[2667]: I0514 18:10:53.630634 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:10:53.631488 kubelet[2667]: I0514 18:10:53.631243 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 14 18:10:53.631488 kubelet[2667]: I0514 18:10:53.631382 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:53.631488 kubelet[2667]: I0514 18:10:53.631492 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:53.631816 kubelet[2667]: I0514 18:10:53.631526 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:53.634723 kubelet[2667]: I0514 18:10:53.634649 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 18:10:53.634986 kubelet[2667]: I0514 18:10:53.634964 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 18:10:53.639525 kubelet[2667]: I0514 18:10:53.639457 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-kube-api-access-4htcp" (OuterVolumeSpecName: "kube-api-access-4htcp") pod "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" (UID: "8e8defdf-3357-4334-b4b6-e6c23eaa7a8e"). InnerVolumeSpecName "kube-api-access-4htcp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 18:10:53.718917 kubelet[2667]: I0514 18:10:53.718687 2667 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-xtables-lock\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.718917 kubelet[2667]: I0514 18:10:53.718750 2667 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cni-path\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.718917 kubelet[2667]: I0514 18:10:53.718760 2667 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-hubble-tls\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.718917 kubelet[2667]: I0514 18:10:53.718769 2667 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-clustermesh-secrets\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.718917 kubelet[2667]: I0514 18:10:53.718784 2667 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-host-proc-sys-kernel\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.718917 kubelet[2667]: I0514 
18:10:53.718794 2667 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-run\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.718917 kubelet[2667]: I0514 18:10:53.718803 2667 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-hostproc\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.718917 kubelet[2667]: I0514 18:10:53.718812 2667 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-config-path\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.719339 kubelet[2667]: I0514 18:10:53.718824 2667 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4htcp\" (UniqueName: \"kubernetes.io/projected/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-kube-api-access-4htcp\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.719339 kubelet[2667]: I0514 18:10:53.718833 2667 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-lib-modules\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.719339 kubelet[2667]: I0514 18:10:53.718844 2667 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-host-proc-sys-net\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.719339 kubelet[2667]: I0514 18:10:53.718856 2667 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-cilium-cgroup\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.719339 kubelet[2667]: 
I0514 18:10:53.718864 2667 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-etc-cni-netd\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.719339 kubelet[2667]: I0514 18:10:53.718874 2667 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e-bpf-maps\") on node \"ci-4334.0.0-a-3f9ee7d7d0\" DevicePath \"\"" May 14 18:10:53.733335 kubelet[2667]: I0514 18:10:53.733196 2667 scope.go:117] "RemoveContainer" containerID="6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49" May 14 18:10:53.740218 containerd[1525]: time="2025-05-14T18:10:53.739923338Z" level=info msg="RemoveContainer for \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\"" May 14 18:10:53.745465 systemd[1]: Removed slice kubepods-besteffort-pod37ebb25f_fa56_4bb4_956a_4abdb7c70a4b.slice - libcontainer container kubepods-besteffort-pod37ebb25f_fa56_4bb4_956a_4abdb7c70a4b.slice. 
May 14 18:10:53.748022 containerd[1525]: time="2025-05-14T18:10:53.747988392Z" level=info msg="RemoveContainer for \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" returns successfully"
May 14 18:10:53.748937 kubelet[2667]: I0514 18:10:53.748906 2667 scope.go:117] "RemoveContainer" containerID="6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49"
May 14 18:10:53.753820 containerd[1525]: time="2025-05-14T18:10:53.749276045Z" level=error msg="ContainerStatus for \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\": not found"
May 14 18:10:53.757921 kubelet[2667]: E0514 18:10:53.757362 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\": not found" containerID="6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49"
May 14 18:10:53.757921 kubelet[2667]: I0514 18:10:53.757446 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49"} err="failed to get container status \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\": rpc error: code = NotFound desc = an error occurred when try to find container \"6342eff418d0c01164dd37b31958394f4051cfe5078d3e0122de832b3c381f49\": not found"
May 14 18:10:53.757921 kubelet[2667]: I0514 18:10:53.757739 2667 scope.go:117] "RemoveContainer" containerID="13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29"
May 14 18:10:53.766697 containerd[1525]: time="2025-05-14T18:10:53.765547987Z" level=info msg="RemoveContainer for \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\""
May 14 18:10:53.773405 systemd[1]: Removed slice kubepods-burstable-pod8e8defdf_3357_4334_b4b6_e6c23eaa7a8e.slice - libcontainer container kubepods-burstable-pod8e8defdf_3357_4334_b4b6_e6c23eaa7a8e.slice.
May 14 18:10:53.773597 systemd[1]: kubepods-burstable-pod8e8defdf_3357_4334_b4b6_e6c23eaa7a8e.slice: Consumed 10.256s CPU time, 193.4M memory peak, 69M read from disk, 13.3M written to disk.
May 14 18:10:53.792375 containerd[1525]: time="2025-05-14T18:10:53.792239187Z" level=info msg="RemoveContainer for \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" returns successfully"
May 14 18:10:53.792984 kubelet[2667]: I0514 18:10:53.792953 2667 scope.go:117] "RemoveContainer" containerID="9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d"
May 14 18:10:53.797850 containerd[1525]: time="2025-05-14T18:10:53.797714644Z" level=info msg="RemoveContainer for \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\""
May 14 18:10:53.807989 containerd[1525]: time="2025-05-14T18:10:53.807869434Z" level=info msg="RemoveContainer for \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\" returns successfully"
May 14 18:10:53.810727 kubelet[2667]: I0514 18:10:53.810686 2667 scope.go:117] "RemoveContainer" containerID="54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9"
May 14 18:10:53.817452 containerd[1525]: time="2025-05-14T18:10:53.817385127Z" level=info msg="RemoveContainer for \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\""
May 14 18:10:53.825278 containerd[1525]: time="2025-05-14T18:10:53.825122101Z" level=info msg="RemoveContainer for \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\" returns successfully"
May 14 18:10:53.825751 kubelet[2667]: I0514 18:10:53.825697 2667 scope.go:117] "RemoveContainer" containerID="3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803"
May 14 18:10:53.831489 containerd[1525]: time="2025-05-14T18:10:53.831291462Z" level=info msg="RemoveContainer for \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\""
May 14 18:10:53.836374 containerd[1525]: time="2025-05-14T18:10:53.836320868Z" level=info msg="RemoveContainer for \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\" returns successfully"
May 14 18:10:53.837026 kubelet[2667]: I0514 18:10:53.836991 2667 scope.go:117] "RemoveContainer" containerID="1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f"
May 14 18:10:53.840872 containerd[1525]: time="2025-05-14T18:10:53.840807933Z" level=info msg="RemoveContainer for \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\""
May 14 18:10:53.845766 containerd[1525]: time="2025-05-14T18:10:53.845714167Z" level=info msg="RemoveContainer for \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\" returns successfully"
May 14 18:10:53.846030 kubelet[2667]: I0514 18:10:53.845991 2667 scope.go:117] "RemoveContainer" containerID="13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29"
May 14 18:10:53.846499 containerd[1525]: time="2025-05-14T18:10:53.846362614Z" level=error msg="ContainerStatus for \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\": not found"
May 14 18:10:53.846717 kubelet[2667]: E0514 18:10:53.846683 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\": not found" containerID="13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29"
May 14 18:10:53.846765 kubelet[2667]: I0514 18:10:53.846718 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29"} err="failed to get container status \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\": rpc error: code = NotFound desc = an error occurred when try to find container \"13ac41574ab9eb8848d8e4996beffc2e4057fba3ef7bd45af30d148302071a29\": not found"
May 14 18:10:53.846810 kubelet[2667]: I0514 18:10:53.846772 2667 scope.go:117] "RemoveContainer" containerID="9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d"
May 14 18:10:53.847140 containerd[1525]: time="2025-05-14T18:10:53.847027287Z" level=error msg="ContainerStatus for \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\": not found"
May 14 18:10:53.847265 kubelet[2667]: E0514 18:10:53.847229 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\": not found" containerID="9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d"
May 14 18:10:53.847323 kubelet[2667]: I0514 18:10:53.847308 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d"} err="failed to get container status \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\": rpc error: code = NotFound desc = an error occurred when try to find container \"9cc1c56524261a9827afddcd2c476dff95467e87fb0e5dbba0f77f38abbfd47d\": not found"
May 14 18:10:53.847368 kubelet[2667]: I0514 18:10:53.847329 2667 scope.go:117] "RemoveContainer" containerID="54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9"
May 14 18:10:53.847644 containerd[1525]: time="2025-05-14T18:10:53.847604532Z" level=error msg="ContainerStatus for \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\": not found"
May 14 18:10:53.847824 kubelet[2667]: E0514 18:10:53.847801 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\": not found" containerID="54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9"
May 14 18:10:53.847877 kubelet[2667]: I0514 18:10:53.847824 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9"} err="failed to get container status \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"54c8c110b40e104f557dadb7b7ec4b72cd02ae9665e0ab39b8cac12a30c6e2b9\": not found"
May 14 18:10:53.847877 kubelet[2667]: I0514 18:10:53.847843 2667 scope.go:117] "RemoveContainer" containerID="3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803"
May 14 18:10:53.848083 containerd[1525]: time="2025-05-14T18:10:53.848040954Z" level=error msg="ContainerStatus for \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\": not found"
May 14 18:10:53.848398 kubelet[2667]: E0514 18:10:53.848369 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\": not found" containerID="3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803"
May 14 18:10:53.848488 kubelet[2667]: I0514 18:10:53.848401 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803"} err="failed to get container status \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ea2d201d7021cc45fbc161d1232b81c87bdf1abc5aabf9799d96b4d492f3803\": not found"
May 14 18:10:53.848488 kubelet[2667]: I0514 18:10:53.848452 2667 scope.go:117] "RemoveContainer" containerID="1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f"
May 14 18:10:53.848718 containerd[1525]: time="2025-05-14T18:10:53.848675913Z" level=error msg="ContainerStatus for \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\": not found"
May 14 18:10:53.848818 kubelet[2667]: E0514 18:10:53.848798 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\": not found" containerID="1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f"
May 14 18:10:53.848861 kubelet[2667]: I0514 18:10:53.848820 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f"} err="failed to get container status \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1db17fe0bc1ef2816ae83c39327fcdb864606e3d08ccc8ec4d2ffe03835cd68f\": not found"
May 14 18:10:54.185257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c-shm.mount: Deactivated successfully.
May 14 18:10:54.185498 systemd[1]: var-lib-kubelet-pods-37ebb25f\x2dfa56\x2d4bb4\x2d956a\x2d4abdb7c70a4b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhkgf2.mount: Deactivated successfully.
May 14 18:10:54.185583 systemd[1]: var-lib-kubelet-pods-8e8defdf\x2d3357\x2d4334\x2db4b6\x2de6c23eaa7a8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4htcp.mount: Deactivated successfully.
May 14 18:10:54.185676 systemd[1]: var-lib-kubelet-pods-8e8defdf\x2d3357\x2d4334\x2db4b6\x2de6c23eaa7a8e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 14 18:10:54.185752 systemd[1]: var-lib-kubelet-pods-8e8defdf\x2d3357\x2d4334\x2db4b6\x2de6c23eaa7a8e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 14 18:10:54.250590 kubelet[2667]: I0514 18:10:54.249918 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ebb25f-fa56-4bb4-956a-4abdb7c70a4b" path="/var/lib/kubelet/pods/37ebb25f-fa56-4bb4-956a-4abdb7c70a4b/volumes"
May 14 18:10:54.250792 kubelet[2667]: I0514 18:10:54.250402 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" path="/var/lib/kubelet/pods/8e8defdf-3357-4334-b4b6-e6c23eaa7a8e/volumes"
May 14 18:10:54.967459 sshd[4236]: Connection closed by 139.178.89.65 port 49824
May 14 18:10:54.968808 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
May 14 18:10:54.982853 systemd[1]: sshd@24-164.90.152.250:22-139.178.89.65:49824.service: Deactivated successfully.
May 14 18:10:54.987157 systemd[1]: session-24.scope: Deactivated successfully.
May 14 18:10:54.987647 systemd[1]: session-24.scope: Consumed 1.198s CPU time, 28.9M memory peak.
May 14 18:10:54.988768 systemd-logind[1501]: Session 24 logged out. Waiting for processes to exit.
May 14 18:10:54.994396 systemd[1]: Started sshd@25-164.90.152.250:22-139.178.89.65:49828.service - OpenSSH per-connection server daemon (139.178.89.65:49828).
May 14 18:10:54.996140 systemd-logind[1501]: Removed session 24.
May 14 18:10:55.093363 sshd[4385]: Accepted publickey for core from 139.178.89.65 port 49828 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:55.095496 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:55.102755 systemd-logind[1501]: New session 25 of user core.
May 14 18:10:55.106760 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 18:10:55.395894 kubelet[2667]: E0514 18:10:55.395708 2667 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 18:10:55.974770 sshd[4387]: Connection closed by 139.178.89.65 port 49828
May 14 18:10:55.976870 sshd-session[4385]: pam_unix(sshd:session): session closed for user core
May 14 18:10:55.995855 systemd[1]: sshd@25-164.90.152.250:22-139.178.89.65:49828.service: Deactivated successfully.
May 14 18:10:56.002231 systemd[1]: session-25.scope: Deactivated successfully.
May 14 18:10:56.009162 systemd-logind[1501]: Session 25 logged out. Waiting for processes to exit.
May 14 18:10:56.011018 kubelet[2667]: E0514 18:10:56.010949 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" containerName="apply-sysctl-overwrites"
May 14 18:10:56.011018 kubelet[2667]: E0514 18:10:56.010997 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" containerName="cilium-agent"
May 14 18:10:56.011018 kubelet[2667]: E0514 18:10:56.011009 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" containerName="mount-cgroup"
May 14 18:10:56.011018 kubelet[2667]: E0514 18:10:56.011019 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" containerName="mount-bpf-fs"
May 14 18:10:56.011018 kubelet[2667]: E0514 18:10:56.011029 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" containerName="clean-cilium-state"
May 14 18:10:56.011376 kubelet[2667]: E0514 18:10:56.011037 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37ebb25f-fa56-4bb4-956a-4abdb7c70a4b" containerName="cilium-operator"
May 14 18:10:56.011376 kubelet[2667]: I0514 18:10:56.011073 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ebb25f-fa56-4bb4-956a-4abdb7c70a4b" containerName="cilium-operator"
May 14 18:10:56.011376 kubelet[2667]: I0514 18:10:56.011083 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8defdf-3357-4334-b4b6-e6c23eaa7a8e" containerName="cilium-agent"
May 14 18:10:56.021852 systemd[1]: Started sshd@26-164.90.152.250:22-139.178.89.65:49840.service - OpenSSH per-connection server daemon (139.178.89.65:49840).
May 14 18:10:56.027582 systemd-logind[1501]: Removed session 25.
May 14 18:10:56.041754 systemd[1]: Created slice kubepods-burstable-podaadd50cb_6705_44eb_be12_eac6105bc22a.slice - libcontainer container kubepods-burstable-podaadd50cb_6705_44eb_be12_eac6105bc22a.slice.
May 14 18:10:56.136215 kubelet[2667]: I0514 18:10:56.136162 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aadd50cb-6705-44eb-be12-eac6105bc22a-lib-modules\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136215 kubelet[2667]: I0514 18:10:56.136219 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56c68\" (UniqueName: \"kubernetes.io/projected/aadd50cb-6705-44eb-be12-eac6105bc22a-kube-api-access-56c68\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136503 kubelet[2667]: I0514 18:10:56.136251 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aadd50cb-6705-44eb-be12-eac6105bc22a-cilium-run\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136503 kubelet[2667]: I0514 18:10:56.136266 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aadd50cb-6705-44eb-be12-eac6105bc22a-bpf-maps\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136503 kubelet[2667]: I0514 18:10:56.136293 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aadd50cb-6705-44eb-be12-eac6105bc22a-xtables-lock\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136503 kubelet[2667]: I0514 18:10:56.136318 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aadd50cb-6705-44eb-be12-eac6105bc22a-cilium-config-path\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136503 kubelet[2667]: I0514 18:10:56.136335 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aadd50cb-6705-44eb-be12-eac6105bc22a-cilium-ipsec-secrets\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136503 kubelet[2667]: I0514 18:10:56.136352 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aadd50cb-6705-44eb-be12-eac6105bc22a-hubble-tls\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136654 kubelet[2667]: I0514 18:10:56.136368 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aadd50cb-6705-44eb-be12-eac6105bc22a-host-proc-sys-kernel\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136654 kubelet[2667]: I0514 18:10:56.136389 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aadd50cb-6705-44eb-be12-eac6105bc22a-etc-cni-netd\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136654 kubelet[2667]: I0514 18:10:56.136437 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aadd50cb-6705-44eb-be12-eac6105bc22a-hostproc\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136654 kubelet[2667]: I0514 18:10:56.136461 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aadd50cb-6705-44eb-be12-eac6105bc22a-host-proc-sys-net\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136654 kubelet[2667]: I0514 18:10:56.136481 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aadd50cb-6705-44eb-be12-eac6105bc22a-cilium-cgroup\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136654 kubelet[2667]: I0514 18:10:56.136511 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aadd50cb-6705-44eb-be12-eac6105bc22a-cni-path\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.136822 kubelet[2667]: I0514 18:10:56.136529 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aadd50cb-6705-44eb-be12-eac6105bc22a-clustermesh-secrets\") pod \"cilium-cfdnm\" (UID: \"aadd50cb-6705-44eb-be12-eac6105bc22a\") " pod="kube-system/cilium-cfdnm"
May 14 18:10:56.141253 sshd[4398]: Accepted publickey for core from 139.178.89.65 port 49840 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:56.143123 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:56.150922 systemd-logind[1501]: New session 26 of user core.
May 14 18:10:56.158928 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 18:10:56.225702 sshd[4400]: Connection closed by 139.178.89.65 port 49840
May 14 18:10:56.227623 sshd-session[4398]: pam_unix(sshd:session): session closed for user core
May 14 18:10:56.291731 systemd[1]: sshd@26-164.90.152.250:22-139.178.89.65:49840.service: Deactivated successfully.
May 14 18:10:56.297386 systemd[1]: session-26.scope: Deactivated successfully.
May 14 18:10:56.299051 systemd-logind[1501]: Session 26 logged out. Waiting for processes to exit.
May 14 18:10:56.305349 systemd[1]: Started sshd@27-164.90.152.250:22-139.178.89.65:49846.service - OpenSSH per-connection server daemon (139.178.89.65:49846).
May 14 18:10:56.308613 systemd-logind[1501]: Removed session 26.
May 14 18:10:56.359454 kubelet[2667]: E0514 18:10:56.359197 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:10:56.361944 containerd[1525]: time="2025-05-14T18:10:56.361697168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cfdnm,Uid:aadd50cb-6705-44eb-be12-eac6105bc22a,Namespace:kube-system,Attempt:0,}"
May 14 18:10:56.385500 sshd[4411]: Accepted publickey for core from 139.178.89.65 port 49846 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:10:56.391940 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:56.401358 systemd-logind[1501]: New session 27 of user core.
May 14 18:10:56.405572 containerd[1525]: time="2025-05-14T18:10:56.405383563Z" level=info msg="connecting to shim eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235" address="unix:///run/containerd/s/dea726b575a0cd07333ada918b05bd3599d1d3a1873fc8671148e1df9c333527" namespace=k8s.io protocol=ttrpc version=3 May 14 18:10:56.406746 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 18:10:56.444750 systemd[1]: Started cri-containerd-eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235.scope - libcontainer container eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235. May 14 18:10:56.490627 containerd[1525]: time="2025-05-14T18:10:56.490386213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cfdnm,Uid:aadd50cb-6705-44eb-be12-eac6105bc22a,Namespace:kube-system,Attempt:0,} returns sandbox id \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\"" May 14 18:10:56.492848 kubelet[2667]: E0514 18:10:56.492780 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:10:56.521790 containerd[1525]: time="2025-05-14T18:10:56.521463589Z" level=info msg="CreateContainer within sandbox \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 18:10:56.534040 containerd[1525]: time="2025-05-14T18:10:56.533726062Z" level=info msg="Container c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158: CDI devices from CRI Config.CDIDevices: []" May 14 18:10:56.547593 containerd[1525]: time="2025-05-14T18:10:56.545901748Z" level=info msg="CreateContainer within sandbox \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158\"" May 14 18:10:56.551479 containerd[1525]: time="2025-05-14T18:10:56.549658362Z" level=info msg="StartContainer for \"c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158\"" May 14 18:10:56.553843 containerd[1525]: time="2025-05-14T18:10:56.553779153Z" level=info msg="connecting to shim c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158" address="unix:///run/containerd/s/dea726b575a0cd07333ada918b05bd3599d1d3a1873fc8671148e1df9c333527" protocol=ttrpc version=3 May 14 18:10:56.600778 systemd[1]: Started cri-containerd-c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158.scope - libcontainer container c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158. May 14 18:10:56.668055 containerd[1525]: time="2025-05-14T18:10:56.668003811Z" level=info msg="StartContainer for \"c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158\" returns successfully" May 14 18:10:56.685694 systemd[1]: cri-containerd-c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158.scope: Deactivated successfully. May 14 18:10:56.686608 systemd[1]: cri-containerd-c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158.scope: Consumed 34ms CPU time, 9.6M memory peak, 3.2M read from disk. 
May 14 18:10:56.691312 containerd[1525]: time="2025-05-14T18:10:56.691074796Z" level=info msg="received exit event container_id:\"c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158\" id:\"c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158\" pid:4481 exited_at:{seconds:1747246256 nanos:690586912}" May 14 18:10:56.691742 containerd[1525]: time="2025-05-14T18:10:56.691703481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158\" id:\"c66ce1c93f33d5744299b237638c4f196310f15bd33712452648263689db6158\" pid:4481 exited_at:{seconds:1747246256 nanos:690586912}" May 14 18:10:56.778257 kubelet[2667]: E0514 18:10:56.777300 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:10:56.783649 containerd[1525]: time="2025-05-14T18:10:56.783507591Z" level=info msg="CreateContainer within sandbox \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 18:10:56.797367 containerd[1525]: time="2025-05-14T18:10:56.797299535Z" level=info msg="Container c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2: CDI devices from CRI Config.CDIDevices: []" May 14 18:10:56.812128 containerd[1525]: time="2025-05-14T18:10:56.812055357Z" level=info msg="CreateContainer within sandbox \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2\"" May 14 18:10:56.814172 containerd[1525]: time="2025-05-14T18:10:56.814089996Z" level=info msg="StartContainer for \"c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2\"" May 14 18:10:56.816928 containerd[1525]: 
time="2025-05-14T18:10:56.816830284Z" level=info msg="connecting to shim c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2" address="unix:///run/containerd/s/dea726b575a0cd07333ada918b05bd3599d1d3a1873fc8671148e1df9c333527" protocol=ttrpc version=3 May 14 18:10:56.856796 systemd[1]: Started cri-containerd-c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2.scope - libcontainer container c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2. May 14 18:10:56.913435 containerd[1525]: time="2025-05-14T18:10:56.913314102Z" level=info msg="StartContainer for \"c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2\" returns successfully" May 14 18:10:56.929673 systemd[1]: cri-containerd-c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2.scope: Deactivated successfully. May 14 18:10:56.930546 systemd[1]: cri-containerd-c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2.scope: Consumed 32ms CPU time, 7.6M memory peak, 2.2M read from disk. 
May 14 18:10:56.933713 containerd[1525]: time="2025-05-14T18:10:56.933646167Z" level=info msg="received exit event container_id:\"c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2\" id:\"c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2\" pid:4523 exited_at:{seconds:1747246256 nanos:933062539}" May 14 18:10:56.934366 containerd[1525]: time="2025-05-14T18:10:56.934323985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2\" id:\"c0e031c7955f9d8c8e58866ab94541d6ec774840a6eb39b9f6747db8ecf934f2\" pid:4523 exited_at:{seconds:1747246256 nanos:933062539}" May 14 18:10:57.784897 kubelet[2667]: E0514 18:10:57.784604 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:10:57.793100 containerd[1525]: time="2025-05-14T18:10:57.790107800Z" level=info msg="CreateContainer within sandbox \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 18:10:57.818035 containerd[1525]: time="2025-05-14T18:10:57.813122257Z" level=info msg="Container 17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0: CDI devices from CRI Config.CDIDevices: []" May 14 18:10:57.836687 containerd[1525]: time="2025-05-14T18:10:57.836515724Z" level=info msg="CreateContainer within sandbox \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0\"" May 14 18:10:57.842450 containerd[1525]: time="2025-05-14T18:10:57.842265966Z" level=info msg="StartContainer for \"17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0\"" May 14 18:10:57.847680 containerd[1525]: time="2025-05-14T18:10:57.847478647Z" 
level=info msg="connecting to shim 17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0" address="unix:///run/containerd/s/dea726b575a0cd07333ada918b05bd3599d1d3a1873fc8671148e1df9c333527" protocol=ttrpc version=3 May 14 18:10:57.901884 systemd[1]: Started cri-containerd-17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0.scope - libcontainer container 17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0. May 14 18:10:57.963607 containerd[1525]: time="2025-05-14T18:10:57.963546741Z" level=info msg="StartContainer for \"17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0\" returns successfully" May 14 18:10:57.970253 systemd[1]: cri-containerd-17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0.scope: Deactivated successfully. May 14 18:10:57.972387 containerd[1525]: time="2025-05-14T18:10:57.972167805Z" level=info msg="received exit event container_id:\"17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0\" id:\"17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0\" pid:4566 exited_at:{seconds:1747246257 nanos:971008682}" May 14 18:10:57.974029 containerd[1525]: time="2025-05-14T18:10:57.973253527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0\" id:\"17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0\" pid:4566 exited_at:{seconds:1747246257 nanos:971008682}" May 14 18:10:58.008874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17e06968209c6e6a472612337c2cb727bf87b17e7a9fa6a5f57e7ec8fa83ccc0-rootfs.mount: Deactivated successfully. 
May 14 18:10:58.791981 kubelet[2667]: E0514 18:10:58.791880 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:10:58.798017 containerd[1525]: time="2025-05-14T18:10:58.797961299Z" level=info msg="CreateContainer within sandbox \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 18:10:58.827555 containerd[1525]: time="2025-05-14T18:10:58.826798630Z" level=info msg="Container ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28: CDI devices from CRI Config.CDIDevices: []" May 14 18:10:58.840890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796643597.mount: Deactivated successfully. May 14 18:10:58.850788 containerd[1525]: time="2025-05-14T18:10:58.849135797Z" level=info msg="CreateContainer within sandbox \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28\"" May 14 18:10:58.851439 containerd[1525]: time="2025-05-14T18:10:58.851339374Z" level=info msg="StartContainer for \"ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28\"" May 14 18:10:58.853524 containerd[1525]: time="2025-05-14T18:10:58.853482491Z" level=info msg="connecting to shim ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28" address="unix:///run/containerd/s/dea726b575a0cd07333ada918b05bd3599d1d3a1873fc8671148e1df9c333527" protocol=ttrpc version=3 May 14 18:10:58.904812 systemd[1]: Started cri-containerd-ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28.scope - libcontainer container ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28. 
May 14 18:10:58.960091 systemd[1]: cri-containerd-ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28.scope: Deactivated successfully. May 14 18:10:58.972456 containerd[1525]: time="2025-05-14T18:10:58.971806708Z" level=info msg="StartContainer for \"ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28\" returns successfully" May 14 18:10:58.972456 containerd[1525]: time="2025-05-14T18:10:58.971963618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28\" id:\"ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28\" pid:4607 exited_at:{seconds:1747246258 nanos:965771886}" May 14 18:10:58.972456 containerd[1525]: time="2025-05-14T18:10:58.972021295Z" level=info msg="received exit event container_id:\"ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28\" id:\"ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28\" pid:4607 exited_at:{seconds:1747246258 nanos:965771886}" May 14 18:10:59.011572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab3c68dd7e1050ff5e27e07c8ea61f6e75a1fc765b93e3dba10cecab15848d28-rootfs.mount: Deactivated successfully. 
May 14 18:10:59.800483 kubelet[2667]: E0514 18:10:59.799870 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:10:59.805646 containerd[1525]: time="2025-05-14T18:10:59.805581763Z" level=info msg="CreateContainer within sandbox \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 18:10:59.828188 containerd[1525]: time="2025-05-14T18:10:59.827583124Z" level=info msg="Container f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a: CDI devices from CRI Config.CDIDevices: []" May 14 18:10:59.846129 containerd[1525]: time="2025-05-14T18:10:59.846013536Z" level=info msg="CreateContainer within sandbox \"eeeb0c0d2e07bf8a2dbd5471a7bcebcfe0f9d57fb9db0c2fce7882609bed7235\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a\"" May 14 18:10:59.849905 containerd[1525]: time="2025-05-14T18:10:59.849857335Z" level=info msg="StartContainer for \"f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a\"" May 14 18:10:59.853108 containerd[1525]: time="2025-05-14T18:10:59.853053180Z" level=info msg="connecting to shim f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a" address="unix:///run/containerd/s/dea726b575a0cd07333ada918b05bd3599d1d3a1873fc8671148e1df9c333527" protocol=ttrpc version=3 May 14 18:10:59.905763 systemd[1]: Started cri-containerd-f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a.scope - libcontainer container f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a. 
May 14 18:10:59.980335 containerd[1525]: time="2025-05-14T18:10:59.980262553Z" level=info msg="StartContainer for \"f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a\" returns successfully" May 14 18:11:00.121075 containerd[1525]: time="2025-05-14T18:11:00.120787297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a\" id:\"90ee0685643725a2e093475d703130186e4100efafbc4847939fd0f517f65709\" pid:4677 exited_at:{seconds:1747246260 nanos:119853186}" May 14 18:11:00.247656 kubelet[2667]: E0514 18:11:00.247380 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-65jlw" podUID="5ad81bb4-33f1-462b-87ec-41b481b8feda" May 14 18:11:00.586572 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) May 14 18:11:00.816170 kubelet[2667]: E0514 18:11:00.815374 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:11:02.247347 kubelet[2667]: E0514 18:11:02.247272 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:11:02.363036 kubelet[2667]: E0514 18:11:02.362909 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:11:03.021469 containerd[1525]: time="2025-05-14T18:11:03.019972734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a\" 
id:\"01647eaa79662311aa390589aefa5299533e263c757bc7dd10639219dd64cc89\" pid:4837 exit_status:1 exited_at:{seconds:1747246263 nanos:18236143}" May 14 18:11:04.671167 systemd-networkd[1447]: lxc_health: Link UP May 14 18:11:04.683153 systemd-networkd[1447]: lxc_health: Gained carrier May 14 18:11:05.321091 containerd[1525]: time="2025-05-14T18:11:05.320819951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a\" id:\"6b18d5736b4952f212f754282c712334cb42a75cfa2a93daf3b7513e45c2e2df\" pid:5199 exited_at:{seconds:1747246265 nanos:320142448}" May 14 18:11:06.357910 systemd-networkd[1447]: lxc_health: Gained IPv6LL May 14 18:11:06.375598 kubelet[2667]: E0514 18:11:06.375470 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:11:06.488643 kubelet[2667]: I0514 18:11:06.488497 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cfdnm" podStartSLOduration=11.488475027 podStartE2EDuration="11.488475027s" podCreationTimestamp="2025-05-14 18:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:11:00.857139363 +0000 UTC m=+110.776888264" watchObservedRunningTime="2025-05-14 18:11:06.488475027 +0000 UTC m=+116.408223920" May 14 18:11:06.876506 kubelet[2667]: E0514 18:11:06.876377 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:11:07.548522 containerd[1525]: time="2025-05-14T18:11:07.547215925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a\" 
id:\"4d16260fe5e4f1af23b40870ba7829776c39a61ccf88d1130b13d0893fd7e5e3\" pid:5235 exited_at:{seconds:1747246267 nanos:545665623}" May 14 18:11:07.879405 kubelet[2667]: E0514 18:11:07.879193 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:11:09.733082 containerd[1525]: time="2025-05-14T18:11:09.732928240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5c2ae1712821d93a7c4bf01eefcea1f583674a9eb269b24a33b56e84711af2a\" id:\"a4eb3fda91546db2df8161fc48c3bcca84f2d86a23c6d3c7ed23a23901c1fd68\" pid:5261 exited_at:{seconds:1747246269 nanos:731546651}" May 14 18:11:09.748450 sshd[4429]: Connection closed by 139.178.89.65 port 49846 May 14 18:11:09.749577 sshd-session[4411]: pam_unix(sshd:session): session closed for user core May 14 18:11:09.756750 systemd-logind[1501]: Session 27 logged out. Waiting for processes to exit. May 14 18:11:09.757563 systemd[1]: sshd@27-164.90.152.250:22-139.178.89.65:49846.service: Deactivated successfully. May 14 18:11:09.762602 systemd[1]: session-27.scope: Deactivated successfully. May 14 18:11:09.767590 systemd-logind[1501]: Removed session 27. 
May 14 18:11:10.250981 containerd[1525]: time="2025-05-14T18:11:10.250926523Z" level=info msg="StopPodSandbox for \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\"" May 14 18:11:10.251302 containerd[1525]: time="2025-05-14T18:11:10.251111657Z" level=info msg="TearDown network for sandbox \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" successfully" May 14 18:11:10.251302 containerd[1525]: time="2025-05-14T18:11:10.251130608Z" level=info msg="StopPodSandbox for \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" returns successfully" May 14 18:11:10.253510 containerd[1525]: time="2025-05-14T18:11:10.251530084Z" level=info msg="RemovePodSandbox for \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\"" May 14 18:11:10.253510 containerd[1525]: time="2025-05-14T18:11:10.251570330Z" level=info msg="Forcibly stopping sandbox \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\"" May 14 18:11:10.253510 containerd[1525]: time="2025-05-14T18:11:10.251661803Z" level=info msg="TearDown network for sandbox \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" successfully" May 14 18:11:10.255110 containerd[1525]: time="2025-05-14T18:11:10.255038620Z" level=info msg="Ensure that sandbox 3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4 in task-service has been cleanup successfully" May 14 18:11:10.261475 containerd[1525]: time="2025-05-14T18:11:10.261029584Z" level=info msg="RemovePodSandbox \"3d4c7ecbe03fa7a82038206f1b99defa08c4f3a0c5d01d5271282462c6c050c4\" returns successfully" May 14 18:11:10.264518 containerd[1525]: time="2025-05-14T18:11:10.261762664Z" level=info msg="StopPodSandbox for \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\"" May 14 18:11:10.264518 containerd[1525]: time="2025-05-14T18:11:10.261901878Z" level=info msg="TearDown network for sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" 
successfully" May 14 18:11:10.264518 containerd[1525]: time="2025-05-14T18:11:10.261913470Z" level=info msg="StopPodSandbox for \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" returns successfully" May 14 18:11:10.264518 containerd[1525]: time="2025-05-14T18:11:10.262274172Z" level=info msg="RemovePodSandbox for \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\"" May 14 18:11:10.264518 containerd[1525]: time="2025-05-14T18:11:10.262303700Z" level=info msg="Forcibly stopping sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\"" May 14 18:11:10.264518 containerd[1525]: time="2025-05-14T18:11:10.262380134Z" level=info msg="TearDown network for sandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" successfully" May 14 18:11:10.264518 containerd[1525]: time="2025-05-14T18:11:10.264162395Z" level=info msg="Ensure that sandbox 5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c in task-service has been cleanup successfully" May 14 18:11:10.267667 containerd[1525]: time="2025-05-14T18:11:10.267603425Z" level=info msg="RemovePodSandbox \"5f076acf022e394cddbb57c23cb837b10764c845255094a2390ce706a102562c\" returns successfully"