Sep 4 23:49:48.976313 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:03:18 -00 2025
Sep 4 23:49:48.976356 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:49:48.976378 kernel: BIOS-provided physical RAM map:
Sep 4 23:49:48.976391 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 23:49:48.976404 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 23:49:48.976417 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 23:49:48.976431 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 4 23:49:48.976445 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 4 23:49:48.976458 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 23:49:48.976471 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 23:49:48.976488 kernel: NX (Execute Disable) protection: active
Sep 4 23:49:48.976501 kernel: APIC: Static calls initialized
Sep 4 23:49:48.976522 kernel: SMBIOS 2.8 present.
Sep 4 23:49:48.976536 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 4 23:49:48.976553 kernel: Hypervisor detected: KVM
Sep 4 23:49:48.976567 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 23:49:48.976590 kernel: kvm-clock: using sched offset of 3587041767 cycles
Sep 4 23:49:48.976606 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 23:49:48.976621 kernel: tsc: Detected 1995.313 MHz processor
Sep 4 23:49:48.976636 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 23:49:48.976651 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 23:49:48.976665 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 4 23:49:48.976680 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 23:49:48.976695 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 23:49:48.976713 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:49:48.976728 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 4 23:49:48.976742 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:49:48.976757 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:49:48.976772 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:49:48.976787 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 4 23:49:48.976801 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:49:48.976815 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:49:48.976830 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:49:48.976848 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:49:48.976863 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 4 23:49:48.976877 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 4 23:49:48.976892 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 4 23:49:48.976906 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 4 23:49:48.976921 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 4 23:49:48.976936 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 4 23:49:48.976958 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 4 23:49:48.976976 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 4 23:49:48.976991 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 4 23:49:48.977007 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 4 23:49:48.977022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 4 23:49:48.977044 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 4 23:49:48.977105 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 4 23:49:48.977125 kernel: Zone ranges:
Sep 4 23:49:48.977141 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 23:49:48.977156 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 4 23:49:48.977171 kernel: Normal empty
Sep 4 23:49:48.977186 kernel: Movable zone start for each node
Sep 4 23:49:48.977203 kernel: Early memory node ranges
Sep 4 23:49:48.977218 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 23:49:48.977233 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 4 23:49:48.977249 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 4 23:49:48.977264 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 23:49:48.977283 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 23:49:48.977303 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 4 23:49:48.977319 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 23:49:48.977334 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 23:49:48.977350 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 23:49:48.977365 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 23:49:48.977380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 23:49:48.977396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 23:49:48.977411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 23:49:48.977430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 23:49:48.977444 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 23:49:48.977460 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 23:49:48.977475 kernel: TSC deadline timer available
Sep 4 23:49:48.977491 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 23:49:48.977507 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 23:49:48.977523 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 4 23:49:48.977543 kernel: Booting paravirtualized kernel on KVM
Sep 4 23:49:48.977558 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 23:49:48.977578 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 23:49:48.977593 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 4 23:49:48.977608 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 4 23:49:48.977624 kernel: pcpu-alloc: [0] 0 1
Sep 4 23:49:48.977639 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 4 23:49:48.977656 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:49:48.977673 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:49:48.977688 kernel: random: crng init done
Sep 4 23:49:48.977715 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:49:48.977730 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 23:49:48.977745 kernel: Fallback order for Node 0: 0
Sep 4 23:49:48.977761 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 4 23:49:48.977776 kernel: Policy zone: DMA32
Sep 4 23:49:48.977792 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:49:48.977809 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 127196K reserved, 0K cma-reserved)
Sep 4 23:49:48.977825 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:49:48.977840 kernel: Kernel/User page tables isolation: enabled
Sep 4 23:49:48.977859 kernel: ftrace: allocating 37943 entries in 149 pages
Sep 4 23:49:48.977875 kernel: ftrace: allocated 149 pages with 4 groups
Sep 4 23:49:48.977890 kernel: Dynamic Preempt: voluntary
Sep 4 23:49:48.977906 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:49:48.977929 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:49:48.977945 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:49:48.977961 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:49:48.977978 kernel: Rude variant of Tasks RCU enabled.
Sep 4 23:49:48.977993 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:49:48.978012 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:49:48.978028 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:49:48.978043 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 23:49:48.980650 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:49:48.980699 kernel: Console: colour VGA+ 80x25
Sep 4 23:49:48.980713 kernel: printk: console [tty0] enabled
Sep 4 23:49:48.980724 kernel: printk: console [ttyS0] enabled
Sep 4 23:49:48.980739 kernel: ACPI: Core revision 20230628
Sep 4 23:49:48.980755 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 23:49:48.980783 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 23:49:48.980798 kernel: x2apic enabled
Sep 4 23:49:48.980813 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 23:49:48.980828 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 23:49:48.980844 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c5814ab, max_idle_ns: 881590472177 ns
Sep 4 23:49:48.980859 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995313)
Sep 4 23:49:48.980874 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 4 23:49:48.980890 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 4 23:49:48.980921 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 23:49:48.980937 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 23:49:48.980953 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 23:49:48.980967 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 4 23:49:48.980984 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 23:49:48.980998 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 23:49:48.981012 kernel: MDS: Mitigation: Clear CPU buffers
Sep 4 23:49:48.981026 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 23:49:48.981042 kernel: active return thunk: its_return_thunk
Sep 4 23:49:48.981129 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 4 23:49:48.981144 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 23:49:48.981156 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 23:49:48.981170 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 23:49:48.981186 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 23:49:48.981201 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 4 23:49:48.981217 kernel: Freeing SMP alternatives memory: 32K
Sep 4 23:49:48.981233 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:49:48.981253 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:49:48.981269 kernel: landlock: Up and running.
Sep 4 23:49:48.981285 kernel: SELinux: Initializing.
Sep 4 23:49:48.981302 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 23:49:48.981318 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 23:49:48.981334 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 4 23:49:48.981351 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:49:48.981367 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:49:48.981384 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:49:48.981403 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 4 23:49:48.981420 kernel: signal: max sigframe size: 1776
Sep 4 23:49:48.981435 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:49:48.981453 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:49:48.981466 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 23:49:48.981481 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:49:48.981496 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 23:49:48.981513 kernel: .... node #0, CPUs: #1
Sep 4 23:49:48.981535 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:49:48.981552 kernel: smpboot: Max logical packages: 1
Sep 4 23:49:48.981566 kernel: smpboot: Total of 2 processors activated (7981.25 BogoMIPS)
Sep 4 23:49:48.981583 kernel: devtmpfs: initialized
Sep 4 23:49:48.981598 kernel: x86/mm: Memory block size: 128MB
Sep 4 23:49:48.981614 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:49:48.981629 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:49:48.981644 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:49:48.981661 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:49:48.981677 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:49:48.981695 kernel: audit: type=2000 audit(1757029787.509:1): state=initialized audit_enabled=0 res=1
Sep 4 23:49:48.981711 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:49:48.981728 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 23:49:48.981744 kernel: cpuidle: using governor menu
Sep 4 23:49:48.981759 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:49:48.981775 kernel: dca service started, version 1.12.1
Sep 4 23:49:48.981791 kernel: PCI: Using configuration type 1 for base access
Sep 4 23:49:48.981807 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 23:49:48.981822 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:49:48.981842 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:49:48.981858 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:49:48.981874 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:49:48.981890 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:49:48.981907 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:49:48.981923 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 23:49:48.981939 kernel: ACPI: Interpreter enabled
Sep 4 23:49:48.981955 kernel: ACPI: PM: (supports S0 S5)
Sep 4 23:49:48.981972 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 23:49:48.981989 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 23:49:48.982008 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 23:49:48.982028 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 23:49:48.982045 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 23:49:48.982519 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:49:48.982712 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 23:49:48.982886 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 23:49:48.982915 kernel: acpiphp: Slot [3] registered
Sep 4 23:49:48.982931 kernel: acpiphp: Slot [4] registered
Sep 4 23:49:48.982947 kernel: acpiphp: Slot [5] registered
Sep 4 23:49:48.982963 kernel: acpiphp: Slot [6] registered
Sep 4 23:49:48.982979 kernel: acpiphp: Slot [7] registered
Sep 4 23:49:48.982995 kernel: acpiphp: Slot [8] registered
Sep 4 23:49:48.983010 kernel: acpiphp: Slot [9] registered
Sep 4 23:49:48.983026 kernel: acpiphp: Slot [10] registered
Sep 4 23:49:48.983043 kernel: acpiphp: Slot [11] registered
Sep 4 23:49:48.986147 kernel: acpiphp: Slot [12] registered
Sep 4 23:49:48.986182 kernel: acpiphp: Slot [13] registered
Sep 4 23:49:48.986199 kernel: acpiphp: Slot [14] registered
Sep 4 23:49:48.986216 kernel: acpiphp: Slot [15] registered
Sep 4 23:49:48.986232 kernel: acpiphp: Slot [16] registered
Sep 4 23:49:48.986248 kernel: acpiphp: Slot [17] registered
Sep 4 23:49:48.986264 kernel: acpiphp: Slot [18] registered
Sep 4 23:49:48.986278 kernel: acpiphp: Slot [19] registered
Sep 4 23:49:48.986294 kernel: acpiphp: Slot [20] registered
Sep 4 23:49:48.986310 kernel: acpiphp: Slot [21] registered
Sep 4 23:49:48.986330 kernel: acpiphp: Slot [22] registered
Sep 4 23:49:48.986346 kernel: acpiphp: Slot [23] registered
Sep 4 23:49:48.986362 kernel: acpiphp: Slot [24] registered
Sep 4 23:49:48.986378 kernel: acpiphp: Slot [25] registered
Sep 4 23:49:48.986394 kernel: acpiphp: Slot [26] registered
Sep 4 23:49:48.986410 kernel: acpiphp: Slot [27] registered
Sep 4 23:49:48.986427 kernel: acpiphp: Slot [28] registered
Sep 4 23:49:48.986443 kernel: acpiphp: Slot [29] registered
Sep 4 23:49:48.986459 kernel: acpiphp: Slot [30] registered
Sep 4 23:49:48.986475 kernel: acpiphp: Slot [31] registered
Sep 4 23:49:48.986495 kernel: PCI host bridge to bus 0000:00
Sep 4 23:49:48.987150 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 23:49:48.987360 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 23:49:48.987511 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 23:49:48.987655 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 4 23:49:48.987800 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 4 23:49:48.987942 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 23:49:48.990879 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 23:49:48.991141 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 23:49:48.991359 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 4 23:49:48.991523 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 4 23:49:48.991685 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 4 23:49:48.991842 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 4 23:49:48.992012 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 4 23:49:48.996340 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 4 23:49:48.996580 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 4 23:49:48.996748 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 4 23:49:48.996935 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 4 23:49:48.998046 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 4 23:49:48.998310 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 4 23:49:48.998519 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 4 23:49:48.998690 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 4 23:49:48.998870 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 4 23:49:48.999041 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 4 23:49:49.001236 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 4 23:49:49.001424 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 23:49:49.001626 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 4 23:49:49.001791 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 4 23:49:49.001950 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 4 23:49:49.002287 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 4 23:49:49.002463 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 4 23:49:49.002620 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 4 23:49:49.002784 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 4 23:49:49.002976 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 4 23:49:49.004235 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 4 23:49:49.004416 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 4 23:49:49.004581 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 4 23:49:49.004735 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 4 23:49:49.004955 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 4 23:49:49.006192 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 4 23:49:49.006373 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 4 23:49:49.006529 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 4 23:49:49.006704 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 4 23:49:49.006863 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 4 23:49:49.006963 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 4 23:49:49.011133 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 4 23:49:49.011314 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 4 23:49:49.011429 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 4 23:49:49.011526 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 4 23:49:49.011544 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 23:49:49.011555 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 23:49:49.011565 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 23:49:49.011574 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 23:49:49.011583 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 23:49:49.011596 kernel: iommu: Default domain type: Translated
Sep 4 23:49:49.011605 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 23:49:49.011613 kernel: PCI: Using ACPI for IRQ routing
Sep 4 23:49:49.011622 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 23:49:49.011632 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 23:49:49.011641 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 4 23:49:49.011742 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 4 23:49:49.011842 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 4 23:49:49.011982 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 23:49:49.012004 kernel: vgaarb: loaded
Sep 4 23:49:49.012013 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 23:49:49.012023 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 23:49:49.012032 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 23:49:49.012041 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:49:49.012050 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:49:49.012091 kernel: pnp: PnP ACPI init
Sep 4 23:49:49.012100 kernel: pnp: PnP ACPI: found 4 devices
Sep 4 23:49:49.012109 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 23:49:49.012122 kernel: NET: Registered PF_INET protocol family
Sep 4 23:49:49.012131 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:49:49.012142 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 4 23:49:49.012151 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:49:49.012160 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 23:49:49.012170 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 4 23:49:49.012178 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 4 23:49:49.012188 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 23:49:49.012200 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 23:49:49.012209 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:49:49.012218 kernel: NET: Registered PF_XDP protocol family
Sep 4 23:49:49.012324 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 23:49:49.012416 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 23:49:49.012506 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 23:49:49.012594 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 4 23:49:49.012683 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 4 23:49:49.012787 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 4 23:49:49.012897 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 23:49:49.012910 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 23:49:49.013011 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 43654 usecs
Sep 4 23:49:49.013023 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:49:49.013032 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 23:49:49.013041 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c5814ab, max_idle_ns: 881590472177 ns
Sep 4 23:49:49.013050 kernel: Initialise system trusted keyrings
Sep 4 23:49:49.015158 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 4 23:49:49.015184 kernel: Key type asymmetric registered
Sep 4 23:49:49.015194 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:49:49.015204 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 23:49:49.015214 kernel: io scheduler mq-deadline registered
Sep 4 23:49:49.015223 kernel: io scheduler kyber registered
Sep 4 23:49:49.015232 kernel: io scheduler bfq registered
Sep 4 23:49:49.015242 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 23:49:49.015252 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 4 23:49:49.015261 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 23:49:49.015273 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 23:49:49.015282 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:49:49.015292 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 23:49:49.015301 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 23:49:49.015310 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 23:49:49.015320 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 23:49:49.015329 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 23:49:49.015532 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 4 23:49:49.015634 kernel: rtc_cmos 00:03: registered as rtc0
Sep 4 23:49:49.015729 kernel: rtc_cmos 00:03: setting system clock to 2025-09-04T23:49:48 UTC (1757029788)
Sep 4 23:49:49.015822 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 4 23:49:49.015834 kernel: intel_pstate: CPU model not supported
Sep 4 23:49:49.015843 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:49:49.015852 kernel: Segment Routing with IPv6
Sep 4 23:49:49.015861 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:49:49.015871 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:49:49.015879 kernel: Key type dns_resolver registered
Sep 4 23:49:49.015891 kernel: IPI shorthand broadcast: enabled
Sep 4 23:49:49.015901 kernel: sched_clock: Marking stable (1076004447, 134432728)->(1329456453, -119019278)
Sep 4 23:49:49.015910 kernel: registered taskstats version 1
Sep 4 23:49:49.015919 kernel: Loading compiled-in X.509 certificates
Sep 4 23:49:49.015928 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: f395d469db1520f53594f6c4948c5f8002e6cc8b'
Sep 4 23:49:49.015937 kernel: Key type .fscrypt registered
Sep 4 23:49:49.015945 kernel: Key type fscrypt-provisioning registered
Sep 4 23:49:49.015954 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:49:49.015966 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:49:49.015975 kernel: ima: No architecture policies found
Sep 4 23:49:49.015983 kernel: clk: Disabling unused clocks
Sep 4 23:49:49.015992 kernel: Freeing unused kernel image (initmem) memory: 43508K
Sep 4 23:49:49.016001 kernel: Write protecting the kernel read-only data: 38912k
Sep 4 23:49:49.016027 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 4 23:49:49.016039 kernel: Run /init as init process
Sep 4 23:49:49.016048 kernel: with arguments:
Sep 4 23:49:49.016088 kernel: /init
Sep 4 23:49:49.016097 kernel: with environment:
Sep 4 23:49:49.016110 kernel: HOME=/
Sep 4 23:49:49.016118 kernel: TERM=linux
Sep 4 23:49:49.016128 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:49:49.016140 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:49:49.016153 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:49:49.016164 systemd[1]: Detected virtualization kvm.
Sep 4 23:49:49.016173 systemd[1]: Detected architecture x86-64.
Sep 4 23:49:49.016185 systemd[1]: Running in initrd.
Sep 4 23:49:49.016194 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:49:49.016204 systemd[1]: Hostname set to .
Sep 4 23:49:49.016214 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:49:49.016224 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:49:49.016233 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:49:49.016243 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:49:49.016254 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:49:49.016266 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:49:49.016276 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:49:49.016286 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:49:49.016297 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:49:49.016307 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:49:49.016317 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:49:49.016327 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:49:49.016340 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:49:49.016349 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:49:49.016362 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:49:49.016372 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:49:49.016381 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:49:49.016394 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:49:49.016404 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:49:49.016414 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:49:49.016424 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:49:49.016437 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:49:49.016447 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:49:49.016456 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:49:49.016466 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:49:49.016477 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:49:49.016489 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:49:49.016499 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:49:49.016508 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:49:49.016518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:49:49.016528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:49:49.016538 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:49:49.016582 systemd-journald[183]: Collecting audit messages is disabled.
Sep 4 23:49:49.016610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:49:49.016621 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:49:49.016633 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:49:49.016645 systemd-journald[183]: Journal started
Sep 4 23:49:49.016673 systemd-journald[183]: Runtime Journal (/run/log/journal/5e73d040dba0412faa4e821c1ced2991) is 4.9M, max 39.3M, 34.4M free.
Sep 4 23:49:48.984680 systemd-modules-load[184]: Inserted module 'overlay'
Sep 4 23:49:49.067337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:49:49.067396 kernel: Bridge firewalling registered
Sep 4 23:49:49.067417 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:49:49.026372 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 4 23:49:49.068386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:49:49.075717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:49:49.076984 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:49:49.091437 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:49:49.094241 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:49:49.098284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:49:49.099728 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:49:49.118411 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:49:49.128344 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:49:49.130506 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:49:49.133772 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:49:49.136341 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:49:49.148279 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:49:49.152553 dracut-cmdline[214]: dracut-dracut-053 Sep 4 23:49:49.155791 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e Sep 4 23:49:49.196735 systemd-resolved[224]: Positive Trust Anchors: Sep 4 23:49:49.196750 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:49:49.196786 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:49:49.199770 systemd-resolved[224]: Defaulting to hostname 'linux'. Sep 4 23:49:49.202084 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:49:49.202709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:49:49.271120 kernel: SCSI subsystem initialized Sep 4 23:49:49.298108 kernel: Loading iSCSI transport class v2.0-870. Sep 4 23:49:49.312103 kernel: iscsi: registered transport (tcp) Sep 4 23:49:49.338400 kernel: iscsi: registered transport (qla4xxx) Sep 4 23:49:49.338509 kernel: QLogic iSCSI HBA Driver Sep 4 23:49:49.396836 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Sep 4 23:49:49.403372 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 23:49:49.433870 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 23:49:49.433952 kernel: device-mapper: uevent: version 1.0.3 Sep 4 23:49:49.435101 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 23:49:49.484114 kernel: raid6: avx2x4 gen() 30672 MB/s Sep 4 23:49:49.502177 kernel: raid6: avx2x2 gen() 27464 MB/s Sep 4 23:49:49.519287 kernel: raid6: avx2x1 gen() 18190 MB/s Sep 4 23:49:49.519390 kernel: raid6: using algorithm avx2x4 gen() 30672 MB/s Sep 4 23:49:49.538113 kernel: raid6: .... xor() 8961 MB/s, rmw enabled Sep 4 23:49:49.538194 kernel: raid6: using avx2x2 recovery algorithm Sep 4 23:49:49.573118 kernel: xor: automatically using best checksumming function avx Sep 4 23:49:49.796122 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 23:49:49.813252 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:49:49.825474 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:49:49.849495 systemd-udevd[403]: Using default interface naming scheme 'v255'. Sep 4 23:49:49.858450 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:49:49.867842 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 23:49:49.888509 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Sep 4 23:49:49.930256 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 23:49:49.937343 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:49:50.005304 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:49:50.010955 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 4 23:49:50.037383 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 23:49:50.040784 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:49:50.043268 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:49:50.046696 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:49:50.054443 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 23:49:50.090188 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Sep 4 23:49:50.092897 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:49:50.100442 kernel: scsi host0: Virtio SCSI HBA Sep 4 23:49:50.104220 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 4 23:49:50.121322 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 23:49:50.121397 kernel: GPT:9289727 != 125829119 Sep 4 23:49:50.121419 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 23:49:50.122128 kernel: GPT:9289727 != 125829119 Sep 4 23:49:50.124200 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 23:49:50.124293 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:49:50.145103 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 23:49:50.168712 kernel: ACPI: bus type USB registered Sep 4 23:49:50.168812 kernel: usbcore: registered new interface driver usbfs Sep 4 23:49:50.182110 kernel: usbcore: registered new interface driver hub Sep 4 23:49:50.183320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:49:50.183463 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:49:50.193104 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Sep 4 23:49:50.190158 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 4 23:49:50.190733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:49:50.201128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:49:50.206709 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Sep 4 23:49:50.207003 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 23:49:50.203221 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:49:50.212452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:49:50.217577 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:49:50.219163 kernel: usbcore: registered new device driver usb Sep 4 23:49:50.223083 kernel: AES CTR mode by8 optimization enabled Sep 4 23:49:50.255082 kernel: BTRFS: device fsid 185ffa67-4184-4488-b7c8-7c0711a63b2d devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (447) Sep 4 23:49:50.261092 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (458) Sep 4 23:49:50.273183 kernel: libata version 3.00 loaded. Sep 4 23:49:50.284446 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 4 23:49:50.288083 kernel: scsi host1: ata_piix Sep 4 23:49:50.291259 kernel: scsi host2: ata_piix Sep 4 23:49:50.291489 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Sep 4 23:49:50.291503 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Sep 4 23:49:50.326669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 23:49:50.363491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:49:50.375880 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 23:49:50.392858 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 4 23:49:50.401396 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 23:49:50.402186 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 23:49:50.409383 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 23:49:50.413016 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:49:50.419953 disk-uuid[532]: Primary Header is updated. Sep 4 23:49:50.419953 disk-uuid[532]: Secondary Entries is updated. Sep 4 23:49:50.419953 disk-uuid[532]: Secondary Header is updated. Sep 4 23:49:50.426124 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:49:50.433104 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:49:50.438157 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:49:50.508970 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 4 23:49:50.509329 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 4 23:49:50.509462 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 4 23:49:50.513088 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Sep 4 23:49:50.515091 kernel: hub 1-0:1.0: USB hub found Sep 4 23:49:50.518588 kernel: hub 1-0:1.0: 2 ports detected Sep 4 23:49:51.436132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:49:51.436494 disk-uuid[533]: The operation has completed successfully. Sep 4 23:49:51.492620 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 23:49:51.492810 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 23:49:51.543359 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Sep 4 23:49:51.548103 sh[562]: Success Sep 4 23:49:51.567189 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 23:49:51.637086 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 23:49:51.645248 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 23:49:51.651157 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 23:49:51.674652 kernel: BTRFS info (device dm-0): first mount of filesystem 185ffa67-4184-4488-b7c8-7c0711a63b2d Sep 4 23:49:51.674727 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:49:51.676568 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 23:49:51.678492 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 23:49:51.680834 kernel: BTRFS info (device dm-0): using free space tree Sep 4 23:49:51.688247 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 23:49:51.690212 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 23:49:51.700362 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 23:49:51.703106 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 23:49:51.722099 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:49:51.724275 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:49:51.724353 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:49:51.733125 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:49:51.739136 kernel: BTRFS info (device vda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:49:51.745947 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 4 23:49:51.755331 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 23:49:51.878329 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 23:49:51.892510 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:49:51.930932 ignition[648]: Ignition 2.20.0 Sep 4 23:49:51.930952 ignition[648]: Stage: fetch-offline Sep 4 23:49:51.931045 ignition[648]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:49:51.933219 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:49:51.931093 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 23:49:51.931279 ignition[648]: parsed url from cmdline: "" Sep 4 23:49:51.931285 ignition[648]: no config URL provided Sep 4 23:49:51.931293 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 23:49:51.931305 ignition[648]: no config at "/usr/lib/ignition/user.ign" Sep 4 23:49:51.931313 ignition[648]: failed to fetch config: resource requires networking Sep 4 23:49:51.931524 ignition[648]: Ignition finished successfully Sep 4 23:49:51.940905 systemd-networkd[743]: lo: Link UP Sep 4 23:49:51.940909 systemd-networkd[743]: lo: Gained carrier Sep 4 23:49:51.943757 systemd-networkd[743]: Enumeration completed Sep 4 23:49:51.944149 systemd-networkd[743]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 4 23:49:51.944153 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Sep 4 23:49:51.945373 systemd-networkd[743]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:49:51.945379 systemd-networkd[743]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 23:49:51.946006 systemd-networkd[743]: eth0: Link UP Sep 4 23:49:51.946010 systemd-networkd[743]: eth0: Gained carrier Sep 4 23:49:51.946019 systemd-networkd[743]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 4 23:49:51.946206 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:49:51.946888 systemd[1]: Reached target network.target - Network. Sep 4 23:49:51.953338 systemd-networkd[743]: eth1: Link UP Sep 4 23:49:51.953343 systemd-networkd[743]: eth1: Gained carrier Sep 4 23:49:51.953358 systemd-networkd[743]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:49:51.953457 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 4 23:49:51.968154 systemd-networkd[743]: eth0: DHCPv4 address 143.110.229.161/20, gateway 143.110.224.1 acquired from 169.254.169.253 Sep 4 23:49:51.971255 systemd-networkd[743]: eth1: DHCPv4 address 10.124.0.25/20 acquired from 169.254.169.253 Sep 4 23:49:51.987838 ignition[751]: Ignition 2.20.0 Sep 4 23:49:51.987855 ignition[751]: Stage: fetch Sep 4 23:49:51.988150 ignition[751]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:49:51.988164 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 23:49:51.988331 ignition[751]: parsed url from cmdline: "" Sep 4 23:49:51.988341 ignition[751]: no config URL provided Sep 4 23:49:51.988351 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 23:49:51.988366 ignition[751]: no config at "/usr/lib/ignition/user.ign" Sep 4 23:49:51.988409 ignition[751]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 4 23:49:52.006477 ignition[751]: GET result: OK Sep 4 23:49:52.007502 ignition[751]: parsing config with SHA512: 762d27036ab65a07cecc8597cb47aaa135208053c41e682740796ec5142512a5f510fdbe11b2d115cb06a168c4c9ce5b42c744a30c66251e95e9a1073f34d7c9
Sep 4 23:49:52.016630 unknown[751]: fetched base config from "system" Sep 4 23:49:52.016644 unknown[751]: fetched base config from "system" Sep 4 23:49:52.017278 ignition[751]: fetch: fetch complete Sep 4 23:49:52.016651 unknown[751]: fetched user config from "digitalocean" Sep 4 23:49:52.017285 ignition[751]: fetch: fetch passed Sep 4 23:49:52.019935 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 23:49:52.017354 ignition[751]: Ignition finished successfully Sep 4 23:49:52.026340 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 23:49:52.044735 ignition[759]: Ignition 2.20.0 Sep 4 23:49:52.044749 ignition[759]: Stage: kargs Sep 4 23:49:52.044943 ignition[759]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:49:52.044953 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 23:49:52.047222 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 23:49:52.045894 ignition[759]: kargs: kargs passed Sep 4 23:49:52.045952 ignition[759]: Ignition finished successfully Sep 4 23:49:52.056410 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 23:49:52.075705 ignition[765]: Ignition 2.20.0 Sep 4 23:49:52.075724 ignition[765]: Stage: disks Sep 4 23:49:52.076035 ignition[765]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:49:52.076073 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 23:49:52.080530 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 23:49:52.077665 ignition[765]: disks: disks passed Sep 4 23:49:52.085298 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 23:49:52.077756 ignition[765]: Ignition finished successfully Sep 4 23:49:52.086172 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:49:52.087196 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:49:52.088415 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:49:52.089775 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:49:52.103411 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 23:49:52.122836 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 23:49:52.127503 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 23:49:52.135281 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 23:49:52.266336 kernel: EXT4-fs (vda9): mounted filesystem 86dd2c20-900e-43ec-8fda-e9f0f484a013 r/w with ordered data mode. Quota mode: none. Sep 4 23:49:52.267812 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 23:49:52.269483 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 23:49:52.282330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 23:49:52.285374 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 23:49:52.289300 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Sep 4 23:49:52.297111 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (782) Sep 4 23:49:52.302103 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:49:52.302349 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 23:49:52.306091 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:49:52.306144 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:49:52.309802 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Sep 4 23:49:52.320179 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:49:52.309884 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:49:52.325130 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 23:49:52.326657 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 23:49:52.334318 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 23:49:52.396218 initrd-setup-root[808]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 23:49:52.412092 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Sep 4 23:49:52.427103 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 23:49:52.439613 coreos-metadata[785]: Sep 04 23:49:52.439 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 4 23:49:52.443536 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 23:49:52.453574 coreos-metadata[784]: Sep 04 23:49:52.453 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 4 23:49:52.457257 coreos-metadata[785]: Sep 04 23:49:52.457 INFO Fetch successful Sep 4 23:49:52.463911 coreos-metadata[784]: Sep 04 23:49:52.463 INFO Fetch successful Sep 4 23:49:52.468698 coreos-metadata[785]: Sep 04 23:49:52.468 INFO wrote hostname ci-4230.2.2-n-136bc82296 to /sysroot/etc/hostname Sep 4 23:49:52.469988 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 23:49:52.479290 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Sep 4 23:49:52.479444 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Sep 4 23:49:52.581570 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 23:49:52.590324 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 23:49:52.597766 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 4 23:49:52.604106 kernel: BTRFS info (device vda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:49:52.636410 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 23:49:52.654900 ignition[903]: INFO : Ignition 2.20.0 Sep 4 23:49:52.654900 ignition[903]: INFO : Stage: mount Sep 4 23:49:52.656607 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:49:52.656607 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 23:49:52.656607 ignition[903]: INFO : mount: mount passed Sep 4 23:49:52.656607 ignition[903]: INFO : Ignition finished successfully Sep 4 23:49:52.657232 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 23:49:52.670265 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 23:49:52.673955 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 23:49:52.689392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 23:49:52.703124 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (914) Sep 4 23:49:52.703204 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:49:52.705545 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:49:52.707637 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:49:52.713116 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:49:52.713791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 23:49:52.743584 ignition[931]: INFO : Ignition 2.20.0 Sep 4 23:49:52.743584 ignition[931]: INFO : Stage: files Sep 4 23:49:52.744930 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:49:52.744930 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 4 23:49:52.746261 ignition[931]: DEBUG : files: compiled without relabeling support, skipping Sep 4 23:49:52.746261 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 23:49:52.746261 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 23:49:52.748933 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 23:49:52.749714 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 23:49:52.749714 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 23:49:52.749368 unknown[931]: wrote ssh authorized keys file for user: core Sep 4 23:49:52.752067 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 23:49:52.752067 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 4 23:49:53.007624 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 23:49:53.425627 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 23:49:53.425627 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 23:49:53.428475 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 4 23:49:53.592324 systemd-networkd[743]: eth0: Gained IPv6LL Sep 4 23:49:53.631872 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 23:49:53.704307 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 23:49:53.704307 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 23:49:53.706576 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 4 23:49:53.977022 systemd-networkd[743]: eth1: Gained IPv6LL Sep 4 23:49:54.043295 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 23:49:54.564312 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 23:49:54.564312 ignition[931]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 23:49:54.566388 ignition[931]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 23:49:54.566388 ignition[931]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 23:49:54.566388 ignition[931]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 23:49:54.566388 ignition[931]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 4 23:49:54.570713 ignition[931]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 23:49:54.570713 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:49:54.570713 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 23:49:54.570713 ignition[931]: INFO : files: files passed Sep 4 23:49:54.570713 ignition[931]: INFO : Ignition finished successfully Sep 4 23:49:54.568136 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 23:49:54.576438 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 23:49:54.582334 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 23:49:54.584863 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 23:49:54.585797 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 23:49:54.602705 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:49:54.602705 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:49:54.606163 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:49:54.609351 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 23:49:54.610148 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 23:49:54.616284 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 23:49:54.650428 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 23:49:54.650558 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 23:49:54.652024 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 23:49:54.652926 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:49:54.654211 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:49:54.665337 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:49:54.680438 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:49:54.688383 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:49:54.705842 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:49:54.706921 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:49:54.708411 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:49:54.709630 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:49:54.709834 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:49:54.711451 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:49:54.712977 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:49:54.714154 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:49:54.715359 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:49:54.716769 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:49:54.718209 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:49:54.719526 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:49:54.720961 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:49:54.722364 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:49:54.723914 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:49:54.725039 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:49:54.725274 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:49:54.726723 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:49:54.727878 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:49:54.729213 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:49:54.729390 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:49:54.730743 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:49:54.731099 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:49:54.732577 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:49:54.732794 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:49:54.734374 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:49:54.734556 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:49:54.735738 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 4 23:49:54.735951 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:49:54.743530 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:49:54.744295 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:49:54.744607 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:49:54.756237 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:49:54.757130 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:49:54.757429 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:49:54.760379 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:49:54.762191 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:49:54.773432 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:49:54.773672 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:49:54.790875 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:49:54.793852 ignition[985]: INFO : Ignition 2.20.0
Sep 4 23:49:54.793852 ignition[985]: INFO : Stage: umount
Sep 4 23:49:54.796104 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:49:54.796104 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 4 23:49:54.798557 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:49:54.798745 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:49:54.801828 ignition[985]: INFO : umount: umount passed
Sep 4 23:49:54.801828 ignition[985]: INFO : Ignition finished successfully
Sep 4 23:49:54.803626 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:49:54.803889 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:49:54.806233 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:49:54.806414 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:49:54.807882 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:49:54.807975 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:49:54.809141 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 23:49:54.809217 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 23:49:54.810386 systemd[1]: Stopped target network.target - Network.
Sep 4 23:49:54.811566 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:49:54.811658 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:49:54.812930 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:49:54.813962 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:49:54.817153 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:49:54.818192 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:49:54.819684 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:49:54.820814 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:49:54.820897 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:49:54.821950 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:49:54.821999 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:49:54.823118 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:49:54.823188 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:49:54.824212 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:49:54.824265 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:49:54.825049 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:49:54.825105 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:49:54.826220 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:49:54.827602 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:49:54.836682 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:49:54.836875 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:49:54.843046 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:49:54.843573 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:49:54.843749 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:49:54.846367 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:49:54.847892 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:49:54.847981 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:49:54.854423 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:49:54.855254 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:49:54.855378 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:49:54.856760 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:49:54.856847 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:49:54.860467 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:49:54.860547 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:49:54.862029 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:49:54.862130 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:49:54.863705 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:49:54.867734 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:49:54.867845 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:49:54.883042 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:49:54.884083 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:49:54.885386 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:49:54.885530 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:49:54.888027 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:49:54.888472 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:49:54.889976 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:49:54.890032 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:49:54.891196 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:49:54.891274 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:49:54.892910 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:49:54.892976 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:49:54.894090 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:49:54.894159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:49:54.907463 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:49:54.909399 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:49:54.909500 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:49:54.912544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:49:54.912615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:49:54.915448 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 23:49:54.915521 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:49:54.916032 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:49:54.916176 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:49:54.918563 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:49:54.925309 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:49:54.936868 systemd[1]: Switching root.
Sep 4 23:49:55.022678 systemd-journald[183]: Journal stopped
Sep 4 23:49:56.594050 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:49:56.594211 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:49:56.594237 kernel: SELinux: policy capability open_perms=1
Sep 4 23:49:56.594263 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:49:56.594294 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:49:56.594314 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:49:56.594339 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:49:56.594360 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:49:56.594378 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:49:56.594397 kernel: audit: type=1403 audit(1757029795.284:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:49:56.594427 systemd[1]: Successfully loaded SELinux policy in 45.903ms.
Sep 4 23:49:56.594459 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.467ms.
Sep 4 23:49:56.594484 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:49:56.594505 systemd[1]: Detected virtualization kvm.
Sep 4 23:49:56.594525 systemd[1]: Detected architecture x86-64.
Sep 4 23:49:56.594545 systemd[1]: Detected first boot.
Sep 4 23:49:56.594564 systemd[1]: Hostname set to .
Sep 4 23:49:56.594584 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:49:56.594603 zram_generator::config[1030]: No configuration found.
Sep 4 23:49:56.594631 kernel: Guest personality initialized and is inactive
Sep 4 23:49:56.594655 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 23:49:56.594674 kernel: Initialized host personality
Sep 4 23:49:56.594693 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:49:56.594712 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:49:56.594733 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:49:56.594752 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:49:56.594786 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:49:56.594807 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:49:56.594828 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:49:56.594852 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:49:56.594872 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:49:56.594894 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:49:56.594921 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:49:56.594941 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:49:56.594961 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:49:56.594982 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:49:56.595003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:49:56.595023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:49:56.595047 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:49:56.596966 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:49:56.597000 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:49:56.597022 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:49:56.597043 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 23:49:56.597852 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:49:56.597889 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:49:56.597911 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:49:56.597930 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:49:56.597951 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:49:56.597972 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:49:56.597991 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:49:56.598011 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:49:56.598032 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:49:56.598052 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:49:56.598672 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:49:56.598696 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:49:56.598716 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:49:56.598737 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:49:56.598757 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:49:56.598795 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:49:56.598816 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:49:56.598838 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:49:56.598857 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:49:56.598882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:49:56.598902 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:49:56.598923 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:49:56.598943 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:49:56.598964 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:49:56.598985 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:49:56.599005 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:49:56.599026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:49:56.599049 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:49:56.600890 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:49:56.600918 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:49:56.600939 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:49:56.600959 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:49:56.600980 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:49:56.600999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:49:56.601020 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:49:56.601040 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:49:56.601096 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:49:56.601116 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:49:56.601136 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:49:56.601156 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:49:56.601176 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:49:56.601196 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:49:56.601216 kernel: ACPI: bus type drm_connector registered
Sep 4 23:49:56.601237 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:49:56.601262 kernel: fuse: init (API version 7.39)
Sep 4 23:49:56.601281 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:49:56.601302 kernel: loop: module loaded
Sep 4 23:49:56.601321 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:49:56.601342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:49:56.601364 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:49:56.601389 systemd[1]: Stopped verity-setup.service.
Sep 4 23:49:56.601410 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:49:56.601431 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:49:56.601451 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:49:56.601472 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:49:56.601496 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:49:56.601516 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:49:56.601537 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:49:56.601558 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:49:56.601578 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:49:56.601598 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:49:56.601618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:49:56.601638 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:49:56.601661 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:49:56.601681 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:49:56.601700 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:49:56.601720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:49:56.601783 systemd-journald[1107]: Collecting audit messages is disabled.
Sep 4 23:49:56.601823 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:49:56.601843 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:49:56.601864 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:49:56.601887 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:49:56.601908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:49:56.601928 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:49:56.601949 systemd-journald[1107]: Journal started
Sep 4 23:49:56.601994 systemd-journald[1107]: Runtime Journal (/run/log/journal/5e73d040dba0412faa4e821c1ced2991) is 4.9M, max 39.3M, 34.4M free.
Sep 4 23:49:56.107047 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:49:56.123344 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 23:49:56.606253 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:49:56.124133 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:49:56.610866 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:49:56.613004 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:49:56.614558 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:49:56.633392 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:49:56.643249 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:49:56.655242 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:49:56.658230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:49:56.658297 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:49:56.663453 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:49:56.675319 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 23:49:56.683284 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:49:56.684326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:49:56.693690 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:49:56.697341 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:49:56.700224 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:49:56.708355 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:49:56.709369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:49:56.716556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:49:56.719296 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:49:56.723300 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 23:49:56.729201 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:49:56.730353 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 23:49:56.732496 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 23:49:56.733664 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 23:49:56.757644 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 23:49:56.787833 systemd-journald[1107]: Time spent on flushing to /var/log/journal/5e73d040dba0412faa4e821c1ced2991 is 93.325ms for 1006 entries.
Sep 4 23:49:56.787833 systemd-journald[1107]: System Journal (/var/log/journal/5e73d040dba0412faa4e821c1ced2991) is 8M, max 195.6M, 187.6M free.
Sep 4 23:49:56.898572 systemd-journald[1107]: Received client request to flush runtime journal.
Sep 4 23:49:56.898667 kernel: loop0: detected capacity change from 0 to 8
Sep 4 23:49:56.898695 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:49:56.808998 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 23:49:56.812738 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 23:49:56.825408 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 23:49:56.866819 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 23:49:56.886216 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:49:56.892200 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 23:49:56.902176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:49:56.909597 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 23:49:56.921513 kernel: loop1: detected capacity change from 0 to 224512
Sep 4 23:49:56.928900 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 23:49:56.976553 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Sep 4 23:49:56.983534 kernel: loop2: detected capacity change from 0 to 147912
Sep 4 23:49:56.976584 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Sep 4 23:49:57.016977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:49:57.040160 kernel: loop3: detected capacity change from 0 to 138176
Sep 4 23:49:57.106119 kernel: loop4: detected capacity change from 0 to 8
Sep 4 23:49:57.111111 kernel: loop5: detected capacity change from 0 to 224512
Sep 4 23:49:57.128737 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 23:49:57.130096 kernel: loop6: detected capacity change from 0 to 147912
Sep 4 23:49:57.154108 kernel: loop7: detected capacity change from 0 to 138176
Sep 4 23:49:57.171191 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 4 23:49:57.172039 (sd-merge)[1181]: Merged extensions into '/usr'.
Sep 4 23:49:57.178685 systemd[1]: Reload requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 23:49:57.178715 systemd[1]: Reloading...
Sep 4 23:49:57.404516 zram_generator::config[1209]: No configuration found.
Sep 4 23:49:57.627110 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:49:57.758635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:49:57.865385 systemd[1]: Reloading finished in 685 ms.
Sep 4 23:49:57.885985 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 23:49:57.887943 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 23:49:57.917468 systemd[1]: Starting ensure-sysext.service...
Sep 4 23:49:57.926974 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:49:57.952245 systemd[1]: Reload requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
Sep 4 23:49:57.952306 systemd[1]: Reloading...
Sep 4 23:49:57.995868 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:49:57.997615 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:49:57.999662 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:49:58.000557 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Sep 4 23:49:58.000716 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Sep 4 23:49:58.008914 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:49:58.008932 systemd-tmpfiles[1253]: Skipping /boot
Sep 4 23:49:58.045400 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:49:58.045425 systemd-tmpfiles[1253]: Skipping /boot
Sep 4 23:49:58.148230 zram_generator::config[1285]: No configuration found.
Sep 4 23:49:58.346706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:49:58.458291 systemd[1]: Reloading finished in 505 ms.
Sep 4 23:49:58.473691 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 23:49:58.488138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:49:58.505649 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:49:58.513685 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:49:58.520572 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:49:58.533018 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:49:58.538412 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:49:58.551557 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:49:58.559451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:49:58.559765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:49:58.566484 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:49:58.570612 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:49:58.576264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:49:58.577587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:49:58.577820 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:49:58.586706 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:49:58.587527 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:49:58.591091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:49:58.591386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:49:58.591683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:49:58.591818 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:49:58.591961 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:49:58.606159 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:49:58.606651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:49:58.615626 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:49:58.616752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:49:58.617096 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:49:58.617411 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:49:58.621260 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:49:58.625508 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:49:58.646354 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 23:49:58.649468 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:49:58.662412 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 23:49:58.696170 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:49:58.696866 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:49:58.699699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:49:58.700852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:49:58.704843 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:49:58.709434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:49:58.710383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:49:58.713602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:49:58.717789 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:49:58.718170 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:49:58.736212 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 23:49:58.738015 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:49:58.742483 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 23:49:58.752880 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Sep 4 23:49:58.768957 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:49:58.791164 augenrules[1372]: No rules
Sep 4 23:49:58.793662 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:49:58.795816 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:49:58.825001 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:49:58.838429 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:49:58.949206 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 23:49:58.950367 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 23:49:58.971270 systemd-resolved[1329]: Positive Trust Anchors:
Sep 4 23:49:58.971741 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:49:58.971795 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:49:58.979448 systemd-resolved[1329]: Using system hostname 'ci-4230.2.2-n-136bc82296'.
Sep 4 23:49:58.981442 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:49:58.984399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:49:59.036828 systemd-networkd[1383]: lo: Link UP
Sep 4 23:49:59.036840 systemd-networkd[1383]: lo: Gained carrier
Sep 4 23:49:59.041252 systemd-networkd[1383]: Enumeration completed
Sep 4 23:49:59.041384 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:49:59.042887 systemd[1]: Reached target network.target - Network.
Sep 4 23:49:59.054414 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 23:49:59.065276 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 23:49:59.072915 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Sep 4 23:49:59.093274 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep 4 23:49:59.095286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:49:59.095434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:49:59.099668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:49:59.101747 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:49:59.110328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:49:59.111636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:49:59.111708 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:49:59.111754 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:49:59.111778 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:49:59.132103 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1398)
Sep 4 23:49:59.135124 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 23:49:59.142642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:49:59.142969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:49:59.165195 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 4 23:49:59.166724 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 23:49:59.175225 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep 4 23:49:59.177480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:49:59.177747 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:49:59.179600 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:49:59.181483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:49:59.188520 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:49:59.188621 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:49:59.193435 systemd-networkd[1383]: eth1: Configuring with /run/systemd/network/10-42:7c:61:fc:fa:4b.network.
Sep 4 23:49:59.198526 systemd-networkd[1383]: eth1: Link UP
Sep 4 23:49:59.198541 systemd-networkd[1383]: eth1: Gained carrier
Sep 4 23:49:59.201856 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Sep 4 23:49:59.242082 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 4 23:49:59.243897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 23:49:59.253421 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 23:49:59.260120 kernel: ACPI: button: Power Button [PWRF]
Sep 4 23:49:59.271909 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 23:49:59.279338 systemd-networkd[1383]: eth0: Configuring with /run/systemd/network/10-2e:b4:6d:4d:89:3e.network.
Sep 4 23:49:59.281739 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Sep 4 23:49:59.282500 systemd-networkd[1383]: eth0: Link UP
Sep 4 23:49:59.282510 systemd-networkd[1383]: eth0: Gained carrier
Sep 4 23:49:59.289676 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 4 23:49:59.295386 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Sep 4 23:49:59.303185 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 4 23:49:59.381666 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:49:59.398108 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 23:49:59.521322 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 4 23:49:59.529956 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 4 23:49:59.541483 kernel: Console: switching to colour dummy device 80x25
Sep 4 23:49:59.546383 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 4 23:49:59.546451 kernel: [drm] features: -context_init
Sep 4 23:49:59.558089 kernel: [drm] number of scanouts: 1
Sep 4 23:49:59.559939 kernel: [drm] number of cap sets: 0
Sep 4 23:49:59.559019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:49:59.559994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:49:59.564812 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:49:59.566111 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep 4 23:49:59.590606 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 4 23:49:59.590705 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 23:49:59.591446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:49:59.603332 kernel: EDAC MC: Ver: 3.0.0
Sep 4 23:49:59.608104 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 4 23:49:59.628355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:49:59.628628 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:49:59.642408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:49:59.644576 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 23:49:59.654261 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 23:49:59.676555 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:49:59.689319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:49:59.716862 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 23:49:59.719406 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:49:59.721625 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:49:59.723572 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 23:49:59.724133 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 23:49:59.725191 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 23:49:59.725751 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 23:49:59.725849 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 23:49:59.725954 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 23:49:59.725994 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:49:59.726114 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:49:59.728646 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 23:49:59.731697 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 23:49:59.750893 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 23:49:59.751510 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 23:49:59.752312 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 23:49:59.766043 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 23:49:59.768510 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 23:49:59.776343 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 23:49:59.779656 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 23:49:59.782002 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:49:59.782506 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:49:59.783140 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:49:59.783547 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:49:59.783602 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:49:59.790384 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 23:49:59.817881 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 23:49:59.825378 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 23:49:59.839321 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 23:49:59.846456 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 23:49:59.847425 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 23:49:59.851405 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 23:49:59.865315 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 23:49:59.869331 jq[1456]: false
Sep 4 23:49:59.879545 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 23:49:59.884869 coreos-metadata[1452]: Sep 04 23:49:59.883 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 4 23:49:59.890372 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 23:49:59.909123 coreos-metadata[1452]: Sep 04 23:49:59.907 INFO Fetch successful
Sep 4 23:49:59.910501 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 23:49:59.915924 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found loop4
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found loop5
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found loop6
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found loop7
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found vda
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found vda1
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found vda2
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found vda3
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found usr
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found vda4
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found vda6
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found vda7
Sep 4 23:49:59.934725 extend-filesystems[1457]: Found vda9
Sep 4 23:49:59.919517 dbus-daemon[1453]: [system] SELinux support is enabled
Sep 4 23:50:00.047283 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 4 23:49:59.916953 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 23:50:00.047379 extend-filesystems[1457]: Checking size of /dev/vda9
Sep 4 23:50:00.047379 extend-filesystems[1457]: Resized partition /dev/vda9
Sep 4 23:50:00.060476 update_engine[1464]: I20250904 23:49:59.960333 1464 main.cc:92] Flatcar Update Engine starting
Sep 4 23:50:00.060476 update_engine[1464]: I20250904 23:49:59.995536 1464 update_check_scheduler.cc:74] Next update check in 8m4s
Sep 4 23:49:59.920144 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 23:50:00.069718 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024)
Sep 4 23:49:59.933469 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 23:49:59.941359 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 23:49:59.963347 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 23:49:59.988698 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 23:49:59.989049 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 23:50:00.016240 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 23:50:00.016352 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 23:50:00.033868 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 23:50:00.035926 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Sep 4 23:50:00.035975 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 23:50:00.036802 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 23:50:00.053470 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 23:50:00.057941 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 23:50:00.058386 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 23:50:00.089637 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 23:50:00.091442 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 23:50:00.111223 jq[1465]: true
Sep 4 23:50:00.134851 tar[1472]: linux-amd64/LICENSE
Sep 4 23:50:00.134851 tar[1472]: linux-amd64/helm
Sep 4 23:50:00.151344 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1389)
Sep 4 23:50:00.156680 (ntainerd)[1487]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 23:50:00.190117 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 4 23:50:00.220832 jq[1490]: true
Sep 4 23:50:00.226176 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 23:50:00.226176 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 4 23:50:00.226176 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 4 23:50:00.235449 extend-filesystems[1457]: Resized filesystem in /dev/vda9
Sep 4 23:50:00.235449 extend-filesystems[1457]: Found vdb
Sep 4 23:50:00.232697 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 23:50:00.234246 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 23:50:00.241142 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 23:50:00.280425 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 23:50:00.382582 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 23:50:00.394726 systemd-logind[1463]: New seat seat0.
Sep 4 23:50:00.398951 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 4 23:50:00.398973 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 23:50:00.399310 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 23:50:00.451219 bash[1521]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:50:00.460188 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 23:50:00.475888 systemd[1]: Starting sshkeys.service...
Sep 4 23:50:00.544482 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 23:50:00.555897 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 23:50:00.632804 systemd-networkd[1383]: eth0: Gained IPv6LL
Sep 4 23:50:00.637298 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Sep 4 23:50:00.643670 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 23:50:00.648921 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 23:50:00.662551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:50:00.675404 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 23:50:00.686107 coreos-metadata[1527]: Sep 04 23:50:00.686 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 4 23:50:00.704709 coreos-metadata[1527]: Sep 04 23:50:00.703 INFO Fetch successful
Sep 4 23:50:00.752823 unknown[1527]: wrote ssh authorized keys file for user: core
Sep 4 23:50:00.784917 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 23:50:00.841582 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:50:00.843395 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 23:50:00.849132 systemd[1]: Finished sshkeys.service.
Sep 4 23:50:00.863141 containerd[1487]: time="2025-09-04T23:50:00.861658319Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 4 23:50:00.901173 containerd[1487]: time="2025-09-04T23:50:00.900563296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.913341243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.913389012Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.913416390Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.913596977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.913619989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.913683078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.913700030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.914002294Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.914036057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.914097086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915189 containerd[1487]: time="2025-09-04T23:50:00.914127462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915699 containerd[1487]: time="2025-09-04T23:50:00.914249780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915699 containerd[1487]: time="2025-09-04T23:50:00.914509704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915699 containerd[1487]: time="2025-09-04T23:50:00.914701098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:50:00.915699 containerd[1487]: time="2025-09-04T23:50:00.914718803Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 23:50:00.915699 containerd[1487]: time="2025-09-04T23:50:00.914830522Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 23:50:00.915699 containerd[1487]: time="2025-09-04T23:50:00.914902058Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 23:50:00.922853 containerd[1487]: time="2025-09-04T23:50:00.922793154Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 23:50:00.923664 containerd[1487]: time="2025-09-04T23:50:00.923038218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 23:50:00.923664 containerd[1487]: time="2025-09-04T23:50:00.923379961Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 23:50:00.923664 containerd[1487]: time="2025-09-04T23:50:00.923411929Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 23:50:00.923664 containerd[1487]: time="2025-09-04T23:50:00.923430407Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 23:50:00.923664 containerd[1487]: time="2025-09-04T23:50:00.923615577Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925100291Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925303264Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925323401Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925338674Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925354600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925369244Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925381615Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925398547Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925413028Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925427723Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925440410Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925453019Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925472086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.925838 containerd[1487]: time="2025-09-04T23:50:00.925487221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925533114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925550461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925563386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925576699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925588987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925601864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925616022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925638268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925650266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925662501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..."
type=io.containerd.grpc.v1 Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925677949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925701831Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925731491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925750631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:50:00.926179 containerd[1487]: time="2025-09-04T23:50:00.925767977Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:50:00.929101 containerd[1487]: time="2025-09-04T23:50:00.927127049Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:50:00.929101 containerd[1487]: time="2025-09-04T23:50:00.927161634Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:50:00.929101 containerd[1487]: time="2025-09-04T23:50:00.927173847Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 23:50:00.929101 containerd[1487]: time="2025-09-04T23:50:00.927187044Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:50:00.929101 containerd[1487]: time="2025-09-04T23:50:00.927198255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 4 23:50:00.929101 containerd[1487]: time="2025-09-04T23:50:00.927211923Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:50:00.929101 containerd[1487]: time="2025-09-04T23:50:00.927222380Z" level=info msg="NRI interface is disabled by configuration." Sep 4 23:50:00.929101 containerd[1487]: time="2025-09-04T23:50:00.927234690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 23:50:00.929331 containerd[1487]: time="2025-09-04T23:50:00.927626411Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:50:00.929331 containerd[1487]: time="2025-09-04T23:50:00.927679587Z" level=info msg="Connect containerd service" Sep 4 23:50:00.929331 containerd[1487]: time="2025-09-04T23:50:00.927730359Z" level=info msg="using legacy CRI server" Sep 4 23:50:00.929331 containerd[1487]: time="2025-09-04T23:50:00.927739909Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:50:00.929331 containerd[1487]: time="2025-09-04T23:50:00.927891144Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:50:00.931083 containerd[1487]: time="2025-09-04T23:50:00.930512351Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to 
load cni config" Sep 4 23:50:00.931365 containerd[1487]: time="2025-09-04T23:50:00.931325827Z" level=info msg="Start subscribing containerd event" Sep 4 23:50:00.931761 containerd[1487]: time="2025-09-04T23:50:00.931741730Z" level=info msg="Start recovering state" Sep 4 23:50:00.931985 containerd[1487]: time="2025-09-04T23:50:00.931970763Z" level=info msg="Start event monitor" Sep 4 23:50:00.936097 containerd[1487]: time="2025-09-04T23:50:00.933092974Z" level=info msg="Start snapshots syncer" Sep 4 23:50:00.936097 containerd[1487]: time="2025-09-04T23:50:00.933116355Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:50:00.936097 containerd[1487]: time="2025-09-04T23:50:00.933127212Z" level=info msg="Start streaming server" Sep 4 23:50:00.936097 containerd[1487]: time="2025-09-04T23:50:00.932279564Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 23:50:00.936097 containerd[1487]: time="2025-09-04T23:50:00.933367028Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:50:00.936097 containerd[1487]: time="2025-09-04T23:50:00.935166426Z" level=info msg="containerd successfully booted in 0.074675s" Sep 4 23:50:00.933550 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 23:50:01.198605 sshd_keygen[1496]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:50:01.210193 systemd-networkd[1383]: eth1: Gained IPv6LL Sep 4 23:50:01.214295 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Sep 4 23:50:01.294253 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:50:01.317828 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:50:01.339773 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:50:01.340327 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:50:01.355728 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 4 23:50:01.411463 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 23:50:01.428365 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 23:50:01.442890 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 23:50:01.446365 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 23:50:02.034335 tar[1472]: linux-amd64/README.md
Sep 4 23:50:02.063904 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 23:50:02.722026 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 23:50:02.743076 systemd[1]: Started sshd@0-143.110.229.161:22-147.75.109.163:42424.service - OpenSSH per-connection server daemon (147.75.109.163:42424).
Sep 4 23:50:02.894211 sshd[1572]: Accepted publickey for core from 147.75.109.163 port 42424 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA
Sep 4 23:50:02.897396 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:50:02.910359 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 23:50:02.923727 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 23:50:02.953288 systemd-logind[1463]: New session 1 of user core.
Sep 4 23:50:02.977036 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 23:50:03.002910 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 23:50:03.020968 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 23:50:03.037632 systemd-logind[1463]: New session c1 of user core.
Sep 4 23:50:03.041392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:50:03.049304 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 23:50:03.059993 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:50:03.327832 systemd[1578]: Queued start job for default target default.target.
Sep 4 23:50:03.337732 systemd[1578]: Created slice app.slice - User Application Slice.
Sep 4 23:50:03.337800 systemd[1578]: Reached target paths.target - Paths.
Sep 4 23:50:03.337881 systemd[1578]: Reached target timers.target - Timers.
Sep 4 23:50:03.341417 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 23:50:03.382555 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 23:50:03.382879 systemd[1578]: Reached target sockets.target - Sockets.
Sep 4 23:50:03.382971 systemd[1578]: Reached target basic.target - Basic System.
Sep 4 23:50:03.383029 systemd[1578]: Reached target default.target - Main User Target.
Sep 4 23:50:03.383099 systemd[1578]: Startup finished in 314ms.
Sep 4 23:50:03.383949 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 23:50:03.399532 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 23:50:03.404184 systemd[1]: Startup finished in 1.240s (kernel) + 6.547s (initrd) + 8.164s (userspace) = 15.952s.
Sep 4 23:50:03.508791 systemd[1]: Started sshd@1-143.110.229.161:22-147.75.109.163:42432.service - OpenSSH per-connection server daemon (147.75.109.163:42432).
Sep 4 23:50:03.579676 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 42432 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA
Sep 4 23:50:03.583006 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:50:03.595046 systemd-logind[1463]: New session 2 of user core.
Sep 4 23:50:03.596324 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 23:50:03.663173 sshd[1603]: Connection closed by 147.75.109.163 port 42432
Sep 4 23:50:03.667353 sshd-session[1601]: pam_unix(sshd:session): session closed for user core
Sep 4 23:50:03.687269 systemd[1]: sshd@1-143.110.229.161:22-147.75.109.163:42432.service: Deactivated successfully.
Sep 4 23:50:03.691106 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 23:50:03.695981 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit.
Sep 4 23:50:03.706871 systemd[1]: Started sshd@2-143.110.229.161:22-147.75.109.163:42442.service - OpenSSH per-connection server daemon (147.75.109.163:42442).
Sep 4 23:50:03.713584 systemd-logind[1463]: Removed session 2.
Sep 4 23:50:03.789708 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 42442 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA
Sep 4 23:50:03.790014 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:50:03.807944 systemd-logind[1463]: New session 3 of user core.
Sep 4 23:50:03.822010 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 23:50:03.895249 sshd[1611]: Connection closed by 147.75.109.163 port 42442
Sep 4 23:50:03.899750 sshd-session[1608]: pam_unix(sshd:session): session closed for user core
Sep 4 23:50:03.916775 systemd[1]: sshd@2-143.110.229.161:22-147.75.109.163:42442.service: Deactivated successfully.
Sep 4 23:50:03.922493 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 23:50:03.927105 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit.
Sep 4 23:50:03.942635 systemd[1]: Started sshd@3-143.110.229.161:22-147.75.109.163:42448.service - OpenSSH per-connection server daemon (147.75.109.163:42448).
Sep 4 23:50:03.946839 systemd-logind[1463]: Removed session 3.
Sep 4 23:50:04.053906 sshd[1616]: Accepted publickey for core from 147.75.109.163 port 42448 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA
Sep 4 23:50:04.056277 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:50:04.084485 systemd-logind[1463]: New session 4 of user core.
Sep 4 23:50:04.095991 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 23:50:04.188507 sshd[1619]: Connection closed by 147.75.109.163 port 42448
Sep 4 23:50:04.192282 sshd-session[1616]: pam_unix(sshd:session): session closed for user core
Sep 4 23:50:04.213050 systemd[1]: Started sshd@4-143.110.229.161:22-147.75.109.163:42456.service - OpenSSH per-connection server daemon (147.75.109.163:42456).
Sep 4 23:50:04.214628 systemd[1]: sshd@3-143.110.229.161:22-147.75.109.163:42448.service: Deactivated successfully.
Sep 4 23:50:04.222429 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 23:50:04.227460 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit.
Sep 4 23:50:04.238671 systemd-logind[1463]: Removed session 4.
Sep 4 23:50:04.333093 kubelet[1582]: E0904 23:50:04.332937 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:50:04.337384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:50:04.338341 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:50:04.339473 systemd[1]: kubelet.service: Consumed 1.784s CPU time, 265.3M memory peak.
Sep 4 23:50:04.347086 sshd[1624]: Accepted publickey for core from 147.75.109.163 port 42456 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA
Sep 4 23:50:04.348290 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:50:04.357987 systemd-logind[1463]: New session 5 of user core.
Sep 4 23:50:04.369096 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 23:50:04.463080 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 23:50:04.464299 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:50:04.486363 sudo[1631]: pam_unix(sudo:session): session closed for user root
Sep 4 23:50:04.490247 sshd[1630]: Connection closed by 147.75.109.163 port 42456
Sep 4 23:50:04.493012 sshd-session[1624]: pam_unix(sshd:session): session closed for user core
Sep 4 23:50:04.509495 systemd[1]: sshd@4-143.110.229.161:22-147.75.109.163:42456.service: Deactivated successfully.
Sep 4 23:50:04.512430 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 23:50:04.513919 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit.
Sep 4 23:50:04.521634 systemd[1]: Started sshd@5-143.110.229.161:22-147.75.109.163:42460.service - OpenSSH per-connection server daemon (147.75.109.163:42460).
Sep 4 23:50:04.523743 systemd-logind[1463]: Removed session 5.
Sep 4 23:50:04.580618 sshd[1636]: Accepted publickey for core from 147.75.109.163 port 42460 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA
Sep 4 23:50:04.584699 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:50:04.593089 systemd-logind[1463]: New session 6 of user core.
Sep 4 23:50:04.608548 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 23:50:04.678573 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 23:50:04.678972 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:50:04.685191 sudo[1641]: pam_unix(sudo:session): session closed for user root
Sep 4 23:50:04.697478 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 23:50:04.697964 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:50:04.721755 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:50:04.762629 augenrules[1663]: No rules
Sep 4 23:50:04.764662 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:50:04.765002 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:50:04.767041 sudo[1640]: pam_unix(sudo:session): session closed for user root
Sep 4 23:50:04.771178 sshd[1639]: Connection closed by 147.75.109.163 port 42460
Sep 4 23:50:04.773362 sshd-session[1636]: pam_unix(sshd:session): session closed for user core
Sep 4 23:50:04.783769 systemd[1]: sshd@5-143.110.229.161:22-147.75.109.163:42460.service: Deactivated successfully.
Sep 4 23:50:04.786488 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 23:50:04.789414 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit.
Sep 4 23:50:04.793884 systemd[1]: Started sshd@6-143.110.229.161:22-147.75.109.163:42466.service - OpenSSH per-connection server daemon (147.75.109.163:42466).
Sep 4 23:50:04.795737 systemd-logind[1463]: Removed session 6.
Sep 4 23:50:04.867853 sshd[1671]: Accepted publickey for core from 147.75.109.163 port 42466 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA
Sep 4 23:50:04.869975 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:50:04.880720 systemd-logind[1463]: New session 7 of user core.
Sep 4 23:50:04.887518 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 23:50:04.952529 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 23:50:04.953488 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:50:05.594665 (dockerd)[1693]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 23:50:05.595219 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 23:50:06.232111 dockerd[1693]: time="2025-09-04T23:50:06.231553094Z" level=info msg="Starting up"
Sep 4 23:50:06.370661 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3342816035-merged.mount: Deactivated successfully.
Sep 4 23:50:06.420845 dockerd[1693]: time="2025-09-04T23:50:06.420751857Z" level=info msg="Loading containers: start."
Sep 4 23:50:06.723009 kernel: Initializing XFRM netlink socket
Sep 4 23:50:06.766549 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Sep 4 23:50:06.768485 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Sep 4 23:50:06.783265 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Sep 4 23:50:06.877890 systemd-networkd[1383]: docker0: Link UP
Sep 4 23:50:06.879508 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Sep 4 23:50:06.932007 dockerd[1693]: time="2025-09-04T23:50:06.931945199Z" level=info msg="Loading containers: done."
Sep 4 23:50:06.965911 dockerd[1693]: time="2025-09-04T23:50:06.965116571Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 23:50:06.965911 dockerd[1693]: time="2025-09-04T23:50:06.965303861Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 4 23:50:06.965911 dockerd[1693]: time="2025-09-04T23:50:06.965533349Z" level=info msg="Daemon has completed initialization"
Sep 4 23:50:07.034038 dockerd[1693]: time="2025-09-04T23:50:07.033584513Z" level=info msg="API listen on /run/docker.sock"
Sep 4 23:50:07.033918 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 23:50:08.213764 containerd[1487]: time="2025-09-04T23:50:08.213035982Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 4 23:50:08.951870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536649061.mount: Deactivated successfully.
Sep 4 23:50:10.541256 containerd[1487]: time="2025-09-04T23:50:10.540169030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:10.542412 containerd[1487]: time="2025-09-04T23:50:10.542176603Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687"
Sep 4 23:50:10.543089 containerd[1487]: time="2025-09-04T23:50:10.543021317Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:10.547940 containerd[1487]: time="2025-09-04T23:50:10.547853583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:10.548884 containerd[1487]: time="2025-09-04T23:50:10.548542357Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.335410532s"
Sep 4 23:50:10.548884 containerd[1487]: time="2025-09-04T23:50:10.548580703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\""
Sep 4 23:50:10.549230 containerd[1487]: time="2025-09-04T23:50:10.549203779Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 4 23:50:12.372687 containerd[1487]: time="2025-09-04T23:50:12.372549538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:12.374662 containerd[1487]: time="2025-09-04T23:50:12.374551596Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128"
Sep 4 23:50:12.377186 containerd[1487]: time="2025-09-04T23:50:12.376244168Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:12.380625 containerd[1487]: time="2025-09-04T23:50:12.380565355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:12.383250 containerd[1487]: time="2025-09-04T23:50:12.383187306Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.833947047s"
Sep 4 23:50:12.383481 containerd[1487]: time="2025-09-04T23:50:12.383459077Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\""
Sep 4 23:50:12.384778 containerd[1487]: time="2025-09-04T23:50:12.384740875Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 4 23:50:13.790586 containerd[1487]: time="2025-09-04T23:50:13.788880386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:13.790586 containerd[1487]: time="2025-09-04T23:50:13.790346716Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036"
Sep 4 23:50:13.792409 containerd[1487]: time="2025-09-04T23:50:13.792346170Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:13.798987 containerd[1487]: time="2025-09-04T23:50:13.798907282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:13.801094 containerd[1487]: time="2025-09-04T23:50:13.801003471Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.416032611s"
Sep 4 23:50:13.801094 containerd[1487]: time="2025-09-04T23:50:13.801101370Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\""
Sep 4 23:50:13.803176 containerd[1487]: time="2025-09-04T23:50:13.803117430Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 4 23:50:14.413051 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:50:14.422080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:50:14.681416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:50:14.701724 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:50:14.840084 kubelet[1963]: E0904 23:50:14.839968 1963 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:50:14.848939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:50:14.852412 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:50:14.853749 systemd[1]: kubelet.service: Consumed 272ms CPU time, 107.6M memory peak.
Sep 4 23:50:15.270596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008241936.mount: Deactivated successfully.
Sep 4 23:50:16.089568 containerd[1487]: time="2025-09-04T23:50:16.089443550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:16.091445 containerd[1487]: time="2025-09-04T23:50:16.091090388Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170"
Sep 4 23:50:16.092725 containerd[1487]: time="2025-09-04T23:50:16.092252409Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:16.096664 containerd[1487]: time="2025-09-04T23:50:16.096611184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:16.097349 containerd[1487]: time="2025-09-04T23:50:16.097312041Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.294145007s"
Sep 4 23:50:16.097415 containerd[1487]: time="2025-09-04T23:50:16.097353483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\""
Sep 4 23:50:16.098408 containerd[1487]: time="2025-09-04T23:50:16.098381568Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 23:50:16.100552 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Sep 4 23:50:16.686450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3911075985.mount: Deactivated successfully.
Sep 4 23:50:17.986162 containerd[1487]: time="2025-09-04T23:50:17.984524756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:17.990851 containerd[1487]: time="2025-09-04T23:50:17.990102228Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 4 23:50:17.991155 containerd[1487]: time="2025-09-04T23:50:17.990961628Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:17.998967 containerd[1487]: time="2025-09-04T23:50:17.998393861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:18.001208 containerd[1487]: time="2025-09-04T23:50:18.001127479Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.902622376s"
Sep 4 23:50:18.002828 containerd[1487]: time="2025-09-04T23:50:18.001428478Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 4 23:50:18.003589 containerd[1487]: time="2025-09-04T23:50:18.003476150Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 23:50:18.599151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320164719.mount: Deactivated successfully.
Sep 4 23:50:18.610123 containerd[1487]: time="2025-09-04T23:50:18.609375047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:18.611116 containerd[1487]: time="2025-09-04T23:50:18.611012580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 4 23:50:18.612708 containerd[1487]: time="2025-09-04T23:50:18.612624895Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:18.617105 containerd[1487]: time="2025-09-04T23:50:18.616727104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:18.617997 containerd[1487]: time="2025-09-04T23:50:18.617938833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 614.400849ms" Sep 4 23:50:18.617997 containerd[1487]: time="2025-09-04T23:50:18.617998830Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 23:50:18.619748 containerd[1487]: time="2025-09-04T23:50:18.619670067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 23:50:19.192483 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Sep 4 23:50:19.290029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870155441.mount: Deactivated successfully. Sep 4 23:50:21.702327 containerd[1487]: time="2025-09-04T23:50:21.702235721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:21.704224 containerd[1487]: time="2025-09-04T23:50:21.704138132Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 4 23:50:21.705884 containerd[1487]: time="2025-09-04T23:50:21.705796227Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:21.711641 containerd[1487]: time="2025-09-04T23:50:21.711574428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:21.714133 containerd[1487]: time="2025-09-04T23:50:21.713537715Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id 
\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.093815734s" Sep 4 23:50:21.714133 containerd[1487]: time="2025-09-04T23:50:21.713608767Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 4 23:50:24.963475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 23:50:24.971584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:50:25.193438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:50:25.204734 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:50:25.267085 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:50:25.270351 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:50:25.270879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:50:25.271339 systemd[1]: kubelet.service: Consumed 177ms CPU time, 110.6M memory peak. Sep 4 23:50:25.279624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:50:25.331268 systemd[1]: Reload requested from client PID 2127 ('systemctl') (unit session-7.scope)... Sep 4 23:50:25.331515 systemd[1]: Reloading... Sep 4 23:50:25.507140 zram_generator::config[2174]: No configuration found. Sep 4 23:50:25.651026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:50:25.798078 systemd[1]: Reloading finished in 465 ms. 
Sep 4 23:50:25.886629 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:50:25.887975 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:50:25.888345 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:50:25.889272 systemd[1]: kubelet.service: Consumed 126ms CPU time, 97.3M memory peak. Sep 4 23:50:25.897682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:50:26.116375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:50:26.116844 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:50:26.181092 kubelet[2228]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:50:26.181092 kubelet[2228]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:50:26.181092 kubelet[2228]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:50:26.181092 kubelet[2228]: I0904 23:50:26.179855 2228 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:50:26.931319 kubelet[2228]: I0904 23:50:26.931156 2228 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:50:26.931319 kubelet[2228]: I0904 23:50:26.931217 2228 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:50:26.931844 kubelet[2228]: I0904 23:50:26.931784 2228 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:50:26.968082 kubelet[2228]: E0904 23:50:26.967969 2228 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.110.229.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.110.229.161:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:50:26.968708 kubelet[2228]: I0904 23:50:26.968483 2228 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:50:26.982884 kubelet[2228]: E0904 23:50:26.982817 2228 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:50:26.982884 kubelet[2228]: I0904 23:50:26.982869 2228 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:50:26.989362 kubelet[2228]: I0904 23:50:26.988842 2228 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:50:26.991777 kubelet[2228]: I0904 23:50:26.990760 2228 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:50:26.991777 kubelet[2228]: I0904 23:50:26.990854 2228 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-136bc82296","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:50:26.991777 kubelet[2228]: I0904 23:50:26.991417 2228 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 4 23:50:26.991777 kubelet[2228]: I0904 23:50:26.991436 2228 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:50:26.995646 kubelet[2228]: I0904 23:50:26.995369 2228 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:50:27.000219 kubelet[2228]: I0904 23:50:27.000164 2228 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:50:27.000451 kubelet[2228]: I0904 23:50:27.000436 2228 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:50:27.000577 kubelet[2228]: I0904 23:50:27.000567 2228 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:50:27.000628 kubelet[2228]: I0904 23:50:27.000621 2228 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:50:27.010565 kubelet[2228]: W0904 23:50:27.010333 2228 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.229.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-136bc82296&limit=500&resourceVersion=0": dial tcp 143.110.229.161:6443: connect: connection refused Sep 4 23:50:27.010565 kubelet[2228]: E0904 23:50:27.010432 2228 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.110.229.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-136bc82296&limit=500&resourceVersion=0\": dial tcp 143.110.229.161:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:50:27.011146 kubelet[2228]: W0904 23:50:27.011092 2228 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.229.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.110.229.161:6443: connect: connection refused Sep 4 23:50:27.011266 kubelet[2228]: E0904 23:50:27.011152 2228 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.110.229.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.110.229.161:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:50:27.013555 kubelet[2228]: I0904 23:50:27.012914 2228 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:50:27.017423 kubelet[2228]: I0904 23:50:27.016633 2228 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:50:27.017741 kubelet[2228]: W0904 23:50:27.017599 2228 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 23:50:27.018573 kubelet[2228]: I0904 23:50:27.018538 2228 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:50:27.018722 kubelet[2228]: I0904 23:50:27.018653 2228 server.go:1287] "Started kubelet" Sep 4 23:50:27.019391 kubelet[2228]: I0904 23:50:27.019345 2228 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:50:27.021552 kubelet[2228]: I0904 23:50:27.021514 2228 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:50:27.023098 kubelet[2228]: I0904 23:50:27.022485 2228 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:50:27.024180 kubelet[2228]: I0904 23:50:27.024096 2228 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:50:27.024654 kubelet[2228]: I0904 23:50:27.024627 2228 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:50:27.032151 kubelet[2228]: I0904 23:50:27.031714 2228 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:50:27.032386 kubelet[2228]: E0904 23:50:27.032184 2228 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"ci-4230.2.2-n-136bc82296\" not found" Sep 4 23:50:27.032603 kubelet[2228]: I0904 23:50:27.032572 2228 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:50:27.032703 kubelet[2228]: I0904 23:50:27.032662 2228 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:50:27.034018 kubelet[2228]: I0904 23:50:27.033986 2228 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:50:27.041559 kubelet[2228]: W0904 23:50:27.039723 2228 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.229.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.229.161:6443: connect: connection refused Sep 4 23:50:27.041559 kubelet[2228]: E0904 23:50:27.041248 2228 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.110.229.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.110.229.161:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:50:27.044405 kubelet[2228]: E0904 23:50:27.041352 2228 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.110.229.161:6443/api/v1/namespaces/default/events\": dial tcp 143.110.229.161:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-n-136bc82296.186239518b350cbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-n-136bc82296,UID:ci-4230.2.2-n-136bc82296,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-n-136bc82296,},FirstTimestamp:2025-09-04 23:50:27.018558655 +0000 UTC 
m=+0.894607656,LastTimestamp:2025-09-04 23:50:27.018558655 +0000 UTC m=+0.894607656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-n-136bc82296,}" Sep 4 23:50:27.044405 kubelet[2228]: E0904 23:50:27.043717 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.229.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-136bc82296?timeout=10s\": dial tcp 143.110.229.161:6443: connect: connection refused" interval="200ms" Sep 4 23:50:27.049175 kubelet[2228]: I0904 23:50:27.049144 2228 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:50:27.049445 kubelet[2228]: I0904 23:50:27.049426 2228 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:50:27.064491 kubelet[2228]: I0904 23:50:27.064264 2228 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:50:27.064491 kubelet[2228]: I0904 23:50:27.064380 2228 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:50:27.068969 kubelet[2228]: I0904 23:50:27.068810 2228 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:50:27.071047 kubelet[2228]: I0904 23:50:27.070999 2228 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:50:27.071308 kubelet[2228]: I0904 23:50:27.071291 2228 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 23:50:27.071391 kubelet[2228]: I0904 23:50:27.071379 2228 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:50:27.071574 kubelet[2228]: E0904 23:50:27.071545 2228 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:50:27.087502 kubelet[2228]: W0904 23:50:27.087434 2228 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.229.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.229.161:6443: connect: connection refused Sep 4 23:50:27.088276 kubelet[2228]: E0904 23:50:27.088171 2228 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.110.229.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.110.229.161:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:50:27.088454 kubelet[2228]: E0904 23:50:27.088380 2228 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:50:27.100960 kubelet[2228]: I0904 23:50:27.100917 2228 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:50:27.101476 kubelet[2228]: I0904 23:50:27.101125 2228 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:50:27.101476 kubelet[2228]: I0904 23:50:27.101151 2228 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:50:27.105674 kubelet[2228]: I0904 23:50:27.105249 2228 policy_none.go:49] "None policy: Start" Sep 4 23:50:27.105674 kubelet[2228]: I0904 23:50:27.105306 2228 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:50:27.105674 kubelet[2228]: I0904 23:50:27.105348 2228 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:50:27.117743 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 23:50:27.132915 kubelet[2228]: E0904 23:50:27.132875 2228 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-136bc82296\" not found" Sep 4 23:50:27.139213 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 23:50:27.142920 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 23:50:27.148596 kubelet[2228]: I0904 23:50:27.148419 2228 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:50:27.148782 kubelet[2228]: I0904 23:50:27.148753 2228 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:50:27.148836 kubelet[2228]: I0904 23:50:27.148779 2228 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:50:27.150039 kubelet[2228]: I0904 23:50:27.149945 2228 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:50:27.155087 kubelet[2228]: E0904 23:50:27.154203 2228 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:50:27.155087 kubelet[2228]: E0904 23:50:27.154273 2228 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-n-136bc82296\" not found" Sep 4 23:50:27.183767 systemd[1]: Created slice kubepods-burstable-pod64a37271a0c6f998d44bb96591564ed2.slice - libcontainer container kubepods-burstable-pod64a37271a0c6f998d44bb96591564ed2.slice. Sep 4 23:50:27.195436 kubelet[2228]: E0904 23:50:27.195367 2228 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-136bc82296\" not found" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.201117 systemd[1]: Created slice kubepods-burstable-pod7e36e6238c205d19e3fa62609a8a0881.slice - libcontainer container kubepods-burstable-pod7e36e6238c205d19e3fa62609a8a0881.slice. 
Sep 4 23:50:27.212839 kubelet[2228]: E0904 23:50:27.212791 2228 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-136bc82296\" not found" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.217192 systemd[1]: Created slice kubepods-burstable-pod4f9980a20d8fa98c111a3c739c26599d.slice - libcontainer container kubepods-burstable-pod4f9980a20d8fa98c111a3c739c26599d.slice. Sep 4 23:50:27.219470 kubelet[2228]: E0904 23:50:27.219434 2228 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-136bc82296\" not found" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.244860 kubelet[2228]: E0904 23:50:27.244795 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.229.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-136bc82296?timeout=10s\": dial tcp 143.110.229.161:6443: connect: connection refused" interval="400ms" Sep 4 23:50:27.251280 kubelet[2228]: I0904 23:50:27.251244 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.251713 kubelet[2228]: E0904 23:50:27.251682 2228 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.229.161:6443/api/v1/nodes\": dial tcp 143.110.229.161:6443: connect: connection refused" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.333264 kubelet[2228]: I0904 23:50:27.333130 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64a37271a0c6f998d44bb96591564ed2-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-136bc82296\" (UID: \"64a37271a0c6f998d44bb96591564ed2\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.333264 kubelet[2228]: I0904 23:50:27.333253 2228 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64a37271a0c6f998d44bb96591564ed2-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-136bc82296\" (UID: \"64a37271a0c6f998d44bb96591564ed2\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.333502 kubelet[2228]: I0904 23:50:27.333306 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f9980a20d8fa98c111a3c739c26599d-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" (UID: \"4f9980a20d8fa98c111a3c739c26599d\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.333502 kubelet[2228]: I0904 23:50:27.333340 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f9980a20d8fa98c111a3c739c26599d-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" (UID: \"4f9980a20d8fa98c111a3c739c26599d\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.333502 kubelet[2228]: I0904 23:50:27.333414 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f9980a20d8fa98c111a3c739c26599d-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" (UID: \"4f9980a20d8fa98c111a3c739c26599d\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.333623 kubelet[2228]: I0904 23:50:27.333497 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f9980a20d8fa98c111a3c739c26599d-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" (UID: \"4f9980a20d8fa98c111a3c739c26599d\") " 
pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.333623 kubelet[2228]: I0904 23:50:27.333523 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64a37271a0c6f998d44bb96591564ed2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-136bc82296\" (UID: \"64a37271a0c6f998d44bb96591564ed2\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.333623 kubelet[2228]: I0904 23:50:27.333595 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f9980a20d8fa98c111a3c739c26599d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" (UID: \"4f9980a20d8fa98c111a3c739c26599d\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.333746 kubelet[2228]: I0904 23:50:27.333656 2228 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e36e6238c205d19e3fa62609a8a0881-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-136bc82296\" (UID: \"7e36e6238c205d19e3fa62609a8a0881\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.454878 kubelet[2228]: I0904 23:50:27.454046 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.454878 kubelet[2228]: E0904 23:50:27.454544 2228 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.229.161:6443/api/v1/nodes\": dial tcp 143.110.229.161:6443: connect: connection refused" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.497371 kubelet[2228]: E0904 23:50:27.496771 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:27.499411 containerd[1487]: time="2025-09-04T23:50:27.499334213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-136bc82296,Uid:64a37271a0c6f998d44bb96591564ed2,Namespace:kube-system,Attempt:0,}" Sep 4 23:50:27.502157 systemd-resolved[1329]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Sep 4 23:50:27.514232 kubelet[2228]: E0904 23:50:27.513761 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:27.515811 containerd[1487]: time="2025-09-04T23:50:27.515353177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-136bc82296,Uid:7e36e6238c205d19e3fa62609a8a0881,Namespace:kube-system,Attempt:0,}" Sep 4 23:50:27.520202 kubelet[2228]: E0904 23:50:27.520166 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:27.521203 containerd[1487]: time="2025-09-04T23:50:27.520681393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-136bc82296,Uid:4f9980a20d8fa98c111a3c739c26599d,Namespace:kube-system,Attempt:0,}" Sep 4 23:50:27.646210 kubelet[2228]: E0904 23:50:27.646144 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.229.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-136bc82296?timeout=10s\": dial tcp 143.110.229.161:6443: connect: connection refused" interval="800ms" Sep 4 23:50:27.859223 kubelet[2228]: I0904 23:50:27.859181 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:27.859680 kubelet[2228]: E0904 23:50:27.859631 2228 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.229.161:6443/api/v1/nodes\": dial tcp 143.110.229.161:6443: connect: connection refused" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:28.011877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003097413.mount: Deactivated successfully. Sep 4 23:50:28.017958 containerd[1487]: time="2025-09-04T23:50:28.017872591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:50:28.019365 containerd[1487]: time="2025-09-04T23:50:28.019295955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 23:50:28.020779 containerd[1487]: time="2025-09-04T23:50:28.020719127Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:50:28.022109 containerd[1487]: time="2025-09-04T23:50:28.022035172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:50:28.022899 containerd[1487]: time="2025-09-04T23:50:28.022854456Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:50:28.025177 containerd[1487]: time="2025-09-04T23:50:28.025137387Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:50:28.025739 containerd[1487]: time="2025-09-04T23:50:28.025545901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 
23:50:28.026366 containerd[1487]: time="2025-09-04T23:50:28.026314285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:50:28.028343 containerd[1487]: time="2025-09-04T23:50:28.028229351Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.722481ms" Sep 4 23:50:28.030053 containerd[1487]: time="2025-09-04T23:50:28.029945003Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.460996ms" Sep 4 23:50:28.038311 containerd[1487]: time="2025-09-04T23:50:28.038255107Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.45255ms" Sep 4 23:50:28.115496 kubelet[2228]: W0904 23:50:28.115083 2228 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.229.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-136bc82296&limit=500&resourceVersion=0": dial tcp 143.110.229.161:6443: connect: connection refused Sep 4 23:50:28.115496 kubelet[2228]: E0904 23:50:28.115190 2228 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.110.229.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-136bc82296&limit=500&resourceVersion=0\": dial tcp 143.110.229.161:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:50:28.216940 containerd[1487]: time="2025-09-04T23:50:28.215304222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:28.216940 containerd[1487]: time="2025-09-04T23:50:28.216795681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:28.216940 containerd[1487]: time="2025-09-04T23:50:28.216818546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:28.217468 containerd[1487]: time="2025-09-04T23:50:28.217218711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:28.225833 containerd[1487]: time="2025-09-04T23:50:28.225559309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:28.225833 containerd[1487]: time="2025-09-04T23:50:28.225627485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:28.225833 containerd[1487]: time="2025-09-04T23:50:28.225640999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:28.225833 containerd[1487]: time="2025-09-04T23:50:28.225732660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:28.228396 containerd[1487]: time="2025-09-04T23:50:28.227900538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:28.228786 containerd[1487]: time="2025-09-04T23:50:28.228206840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:28.228786 containerd[1487]: time="2025-09-04T23:50:28.228569978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:28.232308 containerd[1487]: time="2025-09-04T23:50:28.232202144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:28.247630 systemd[1]: Started cri-containerd-abe423a6010365fcb15a502a236f587be06073c39ec1c9786cce0193fdff2929.scope - libcontainer container abe423a6010365fcb15a502a236f587be06073c39ec1c9786cce0193fdff2929. 
Sep 4 23:50:28.248957 kubelet[2228]: W0904 23:50:28.247778 2228 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.229.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.229.161:6443: connect: connection refused Sep 4 23:50:28.248957 kubelet[2228]: E0904 23:50:28.248050 2228 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.110.229.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.110.229.161:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:50:28.278409 systemd[1]: Started cri-containerd-bbe1811001503a5bbbcdddd6ec5f0571c12a263774c1b277f2a5344192b24646.scope - libcontainer container bbe1811001503a5bbbcdddd6ec5f0571c12a263774c1b277f2a5344192b24646. Sep 4 23:50:28.296244 systemd[1]: Started cri-containerd-50eded07748f20ed030b3c28d4439fea9cce318e8c0ddffb30df2831082fe04a.scope - libcontainer container 50eded07748f20ed030b3c28d4439fea9cce318e8c0ddffb30df2831082fe04a. 
Sep 4 23:50:28.373205 containerd[1487]: time="2025-09-04T23:50:28.372649707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-136bc82296,Uid:7e36e6238c205d19e3fa62609a8a0881,Namespace:kube-system,Attempt:0,} returns sandbox id \"abe423a6010365fcb15a502a236f587be06073c39ec1c9786cce0193fdff2929\"" Sep 4 23:50:28.376097 kubelet[2228]: E0904 23:50:28.375914 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:28.380758 containerd[1487]: time="2025-09-04T23:50:28.380494852Z" level=info msg="CreateContainer within sandbox \"abe423a6010365fcb15a502a236f587be06073c39ec1c9786cce0193fdff2929\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:50:28.397100 containerd[1487]: time="2025-09-04T23:50:28.397003077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-136bc82296,Uid:64a37271a0c6f998d44bb96591564ed2,Namespace:kube-system,Attempt:0,} returns sandbox id \"50eded07748f20ed030b3c28d4439fea9cce318e8c0ddffb30df2831082fe04a\"" Sep 4 23:50:28.400116 kubelet[2228]: E0904 23:50:28.399783 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:28.404190 containerd[1487]: time="2025-09-04T23:50:28.403835533Z" level=info msg="CreateContainer within sandbox \"50eded07748f20ed030b3c28d4439fea9cce318e8c0ddffb30df2831082fe04a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:50:28.404190 containerd[1487]: time="2025-09-04T23:50:28.404100951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-136bc82296,Uid:4f9980a20d8fa98c111a3c739c26599d,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"bbe1811001503a5bbbcdddd6ec5f0571c12a263774c1b277f2a5344192b24646\"" Sep 4 23:50:28.405084 kubelet[2228]: E0904 23:50:28.405002 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:28.416342 containerd[1487]: time="2025-09-04T23:50:28.416295855Z" level=info msg="CreateContainer within sandbox \"bbe1811001503a5bbbcdddd6ec5f0571c12a263774c1b277f2a5344192b24646\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:50:28.419039 containerd[1487]: time="2025-09-04T23:50:28.418970026Z" level=info msg="CreateContainer within sandbox \"abe423a6010365fcb15a502a236f587be06073c39ec1c9786cce0193fdff2929\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4f3c3c0e4fea9abaa69f5d6716fc5a5f4c0e43c36c743cfdfc1da4478110b8fe\"" Sep 4 23:50:28.419774 containerd[1487]: time="2025-09-04T23:50:28.419615093Z" level=info msg="StartContainer for \"4f3c3c0e4fea9abaa69f5d6716fc5a5f4c0e43c36c743cfdfc1da4478110b8fe\"" Sep 4 23:50:28.442596 containerd[1487]: time="2025-09-04T23:50:28.442452566Z" level=info msg="CreateContainer within sandbox \"bbe1811001503a5bbbcdddd6ec5f0571c12a263774c1b277f2a5344192b24646\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"80ec8c56a75ea19fabe50b9c7fbcf7d7433254dbf9626bf4577ed3904b10fc4c\"" Sep 4 23:50:28.443235 containerd[1487]: time="2025-09-04T23:50:28.443096902Z" level=info msg="CreateContainer within sandbox \"50eded07748f20ed030b3c28d4439fea9cce318e8c0ddffb30df2831082fe04a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b11a8abcaf09911c54c41536c69a4b282d6dd65bf625276e1fb2bea2393d232d\"" Sep 4 23:50:28.443907 containerd[1487]: time="2025-09-04T23:50:28.443873773Z" level=info msg="StartContainer for \"80ec8c56a75ea19fabe50b9c7fbcf7d7433254dbf9626bf4577ed3904b10fc4c\"" Sep 4 
23:50:28.444303 containerd[1487]: time="2025-09-04T23:50:28.444036486Z" level=info msg="StartContainer for \"b11a8abcaf09911c54c41536c69a4b282d6dd65bf625276e1fb2bea2393d232d\"" Sep 4 23:50:28.447110 kubelet[2228]: E0904 23:50:28.447020 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.229.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-136bc82296?timeout=10s\": dial tcp 143.110.229.161:6443: connect: connection refused" interval="1.6s" Sep 4 23:50:28.458651 kubelet[2228]: W0904 23:50:28.458522 2228 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.229.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.229.161:6443: connect: connection refused Sep 4 23:50:28.458651 kubelet[2228]: E0904 23:50:28.458624 2228 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.110.229.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.110.229.161:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:50:28.470773 systemd[1]: Started cri-containerd-4f3c3c0e4fea9abaa69f5d6716fc5a5f4c0e43c36c743cfdfc1da4478110b8fe.scope - libcontainer container 4f3c3c0e4fea9abaa69f5d6716fc5a5f4c0e43c36c743cfdfc1da4478110b8fe. Sep 4 23:50:28.508745 systemd[1]: Started cri-containerd-80ec8c56a75ea19fabe50b9c7fbcf7d7433254dbf9626bf4577ed3904b10fc4c.scope - libcontainer container 80ec8c56a75ea19fabe50b9c7fbcf7d7433254dbf9626bf4577ed3904b10fc4c. Sep 4 23:50:28.527508 systemd[1]: Started cri-containerd-b11a8abcaf09911c54c41536c69a4b282d6dd65bf625276e1fb2bea2393d232d.scope - libcontainer container b11a8abcaf09911c54c41536c69a4b282d6dd65bf625276e1fb2bea2393d232d. 
Sep 4 23:50:28.540760 kubelet[2228]: W0904 23:50:28.540694 2228 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.229.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.110.229.161:6443: connect: connection refused Sep 4 23:50:28.540930 kubelet[2228]: E0904 23:50:28.540770 2228 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.110.229.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.110.229.161:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:50:28.606462 containerd[1487]: time="2025-09-04T23:50:28.606263840Z" level=info msg="StartContainer for \"4f3c3c0e4fea9abaa69f5d6716fc5a5f4c0e43c36c743cfdfc1da4478110b8fe\" returns successfully" Sep 4 23:50:28.635803 containerd[1487]: time="2025-09-04T23:50:28.633416668Z" level=info msg="StartContainer for \"b11a8abcaf09911c54c41536c69a4b282d6dd65bf625276e1fb2bea2393d232d\" returns successfully" Sep 4 23:50:28.635803 containerd[1487]: time="2025-09-04T23:50:28.633560694Z" level=info msg="StartContainer for \"80ec8c56a75ea19fabe50b9c7fbcf7d7433254dbf9626bf4577ed3904b10fc4c\" returns successfully" Sep 4 23:50:28.661452 kubelet[2228]: I0904 23:50:28.661321 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:28.661866 kubelet[2228]: E0904 23:50:28.661777 2228 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.229.161:6443/api/v1/nodes\": dial tcp 143.110.229.161:6443: connect: connection refused" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:29.126042 kubelet[2228]: E0904 23:50:29.124286 2228 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-136bc82296\" not 
found" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:29.126042 kubelet[2228]: E0904 23:50:29.124467 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:29.126544 kubelet[2228]: E0904 23:50:29.126492 2228 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-136bc82296\" not found" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:29.126749 kubelet[2228]: E0904 23:50:29.126702 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:29.132360 kubelet[2228]: E0904 23:50:29.132323 2228 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-136bc82296\" not found" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:29.132510 kubelet[2228]: E0904 23:50:29.132465 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:30.134422 kubelet[2228]: E0904 23:50:30.134029 2228 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-136bc82296\" not found" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:30.134422 kubelet[2228]: E0904 23:50:30.134169 2228 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-136bc82296\" not found" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:30.134422 kubelet[2228]: E0904 23:50:30.134283 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Sep 4 23:50:30.134422 kubelet[2228]: E0904 23:50:30.134347 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:30.264182 kubelet[2228]: I0904 23:50:30.264131 2228 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:31.137192 kubelet[2228]: E0904 23:50:31.137136 2228 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-136bc82296\" not found" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:31.137813 kubelet[2228]: E0904 23:50:31.137335 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:31.199797 kubelet[2228]: E0904 23:50:31.199562 2228 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-n-136bc82296\" not found" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:31.278391 kubelet[2228]: I0904 23:50:31.278317 2228 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:31.333973 kubelet[2228]: I0904 23:50:31.333917 2228 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:31.353739 kubelet[2228]: E0904 23:50:31.353404 2228 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-136bc82296\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:31.353739 kubelet[2228]: I0904 23:50:31.353462 2228 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:31.361478 kubelet[2228]: E0904 
23:50:31.361424 2228 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:31.361478 kubelet[2228]: I0904 23:50:31.361474 2228 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-136bc82296" Sep 4 23:50:31.365535 kubelet[2228]: E0904 23:50:31.365488 2228 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-136bc82296\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.2-n-136bc82296" Sep 4 23:50:32.014236 kubelet[2228]: I0904 23:50:32.014117 2228 apiserver.go:52] "Watching apiserver" Sep 4 23:50:32.033805 kubelet[2228]: I0904 23:50:32.033535 2228 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:50:32.138705 kubelet[2228]: I0904 23:50:32.138638 2228 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:32.150438 kubelet[2228]: W0904 23:50:32.150386 2228 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:50:32.151212 kubelet[2228]: E0904 23:50:32.151045 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:32.574094 kubelet[2228]: I0904 23:50:32.573576 2228 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-136bc82296" Sep 4 23:50:32.583649 kubelet[2228]: W0904 23:50:32.583596 2228 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:50:32.584029 kubelet[2228]: E0904 23:50:32.584000 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:33.140140 kubelet[2228]: E0904 23:50:33.140097 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:33.140612 kubelet[2228]: E0904 23:50:33.140403 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:33.592843 systemd[1]: Reload requested from client PID 2505 ('systemctl') (unit session-7.scope)... Sep 4 23:50:33.592875 systemd[1]: Reloading... Sep 4 23:50:33.745102 zram_generator::config[2549]: No configuration found. Sep 4 23:50:33.965143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:50:34.178437 systemd[1]: Reloading finished in 584 ms. Sep 4 23:50:34.210185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:50:34.227958 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:50:34.228689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:50:34.228808 systemd[1]: kubelet.service: Consumed 1.412s CPU time, 127.1M memory peak. Sep 4 23:50:34.241769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:50:34.457268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:50:34.474900 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:50:34.584191 kubelet[2600]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:50:34.584922 kubelet[2600]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:50:34.585013 kubelet[2600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:50:34.585288 kubelet[2600]: I0904 23:50:34.585240 2600 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:50:34.608987 kubelet[2600]: I0904 23:50:34.608926 2600 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:50:34.611630 kubelet[2600]: I0904 23:50:34.609553 2600 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:50:34.611630 kubelet[2600]: I0904 23:50:34.610027 2600 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:50:34.614947 kubelet[2600]: I0904 23:50:34.614740 2600 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 4 23:50:34.620418 kubelet[2600]: I0904 23:50:34.619995 2600 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:50:34.634630 kubelet[2600]: E0904 23:50:34.631836 2600 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:50:34.634630 kubelet[2600]: I0904 23:50:34.631874 2600 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:50:34.637834 kubelet[2600]: I0904 23:50:34.637742 2600 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 23:50:34.640161 kubelet[2600]: I0904 23:50:34.639487 2600 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:50:34.640161 kubelet[2600]: I0904 23:50:34.639575 2600 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230.2.2-n-136bc82296","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:50:34.640161 kubelet[2600]: I0904 23:50:34.639857 2600 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:50:34.640161 kubelet[2600]: I0904 23:50:34.639873 2600 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:50:34.640550 kubelet[2600]: I0904 23:50:34.639956 2600 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:50:34.644157 kubelet[2600]: I0904 23:50:34.644107 2600 kubelet.go:446] 
"Attempting to sync node with API server" Sep 4 23:50:34.644157 kubelet[2600]: I0904 23:50:34.644168 2600 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:50:34.644370 kubelet[2600]: I0904 23:50:34.644194 2600 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:50:34.644370 kubelet[2600]: I0904 23:50:34.644208 2600 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:50:34.644733 sudo[2616]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:50:34.645875 sudo[2616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:50:34.651613 kubelet[2600]: I0904 23:50:34.649810 2600 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:50:34.652842 kubelet[2600]: I0904 23:50:34.652761 2600 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:50:34.662390 kubelet[2600]: I0904 23:50:34.662295 2600 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:50:34.662724 kubelet[2600]: I0904 23:50:34.662701 2600 server.go:1287] "Started kubelet" Sep 4 23:50:34.676580 kubelet[2600]: I0904 23:50:34.676536 2600 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:50:34.694223 kubelet[2600]: I0904 23:50:34.694144 2600 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:50:34.700101 kubelet[2600]: I0904 23:50:34.695973 2600 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:50:34.700754 kubelet[2600]: E0904 23:50:34.700710 2600 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:50:34.711850 kubelet[2600]: I0904 23:50:34.701998 2600 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:50:34.716310 kubelet[2600]: I0904 23:50:34.715350 2600 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:50:34.716310 kubelet[2600]: I0904 23:50:34.704200 2600 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:50:34.716310 kubelet[2600]: I0904 23:50:34.703636 2600 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:50:34.716310 kubelet[2600]: I0904 23:50:34.715806 2600 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:50:34.716310 kubelet[2600]: I0904 23:50:34.715996 2600 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:50:34.724732 kubelet[2600]: E0904 23:50:34.705099 2600 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-136bc82296\" not found" Sep 4 23:50:34.734990 kubelet[2600]: I0904 23:50:34.734938 2600 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:50:34.740047 kubelet[2600]: I0904 23:50:34.739510 2600 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:50:34.770811 kubelet[2600]: I0904 23:50:34.768130 2600 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:50:34.802611 kubelet[2600]: I0904 23:50:34.802246 2600 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 4 23:50:34.805848 kubelet[2600]: I0904 23:50:34.804432 2600 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:50:34.805848 kubelet[2600]: I0904 23:50:34.804489 2600 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:50:34.805848 kubelet[2600]: I0904 23:50:34.804520 2600 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 23:50:34.805848 kubelet[2600]: I0904 23:50:34.804531 2600 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:50:34.805848 kubelet[2600]: E0904 23:50:34.804613 2600 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:50:34.891091 kubelet[2600]: I0904 23:50:34.889809 2600 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:50:34.891091 kubelet[2600]: I0904 23:50:34.889839 2600 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:50:34.891091 kubelet[2600]: I0904 23:50:34.889874 2600 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:50:34.891091 kubelet[2600]: I0904 23:50:34.890440 2600 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:50:34.891091 kubelet[2600]: I0904 23:50:34.890462 2600 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:50:34.891091 kubelet[2600]: I0904 23:50:34.890513 2600 policy_none.go:49] "None policy: Start" Sep 4 23:50:34.891091 kubelet[2600]: I0904 23:50:34.890570 2600 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:50:34.891091 kubelet[2600]: I0904 23:50:34.890668 2600 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:50:34.891091 kubelet[2600]: I0904 23:50:34.890914 2600 state_mem.go:75] "Updated machine memory state" Sep 4 23:50:34.899043 kubelet[2600]: I0904 23:50:34.898483 2600 manager.go:519] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:50:34.899043 kubelet[2600]: I0904 23:50:34.898720 2600 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:50:34.899043 kubelet[2600]: I0904 23:50:34.898733 2600 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:50:34.900280 kubelet[2600]: I0904 23:50:34.900261 2600 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:50:34.910696 kubelet[2600]: E0904 23:50:34.910099 2600 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:50:34.913271 kubelet[2600]: I0904 23:50:34.913134 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:34.918103 kubelet[2600]: I0904 23:50:34.917483 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:34.918103 kubelet[2600]: I0904 23:50:34.917860 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-136bc82296" Sep 4 23:50:34.919815 kubelet[2600]: I0904 23:50:34.919342 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f9980a20d8fa98c111a3c739c26599d-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" (UID: \"4f9980a20d8fa98c111a3c739c26599d\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:34.919815 kubelet[2600]: I0904 23:50:34.919394 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f9980a20d8fa98c111a3c739c26599d-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4230.2.2-n-136bc82296\" (UID: \"4f9980a20d8fa98c111a3c739c26599d\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:34.919815 kubelet[2600]: I0904 23:50:34.919430 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f9980a20d8fa98c111a3c739c26599d-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" (UID: \"4f9980a20d8fa98c111a3c739c26599d\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:34.919815 kubelet[2600]: I0904 23:50:34.919460 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f9980a20d8fa98c111a3c739c26599d-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" (UID: \"4f9980a20d8fa98c111a3c739c26599d\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:34.919815 kubelet[2600]: I0904 23:50:34.919498 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f9980a20d8fa98c111a3c739c26599d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" (UID: \"4f9980a20d8fa98c111a3c739c26599d\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:34.920752 kubelet[2600]: I0904 23:50:34.919533 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e36e6238c205d19e3fa62609a8a0881-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-136bc82296\" (UID: \"7e36e6238c205d19e3fa62609a8a0881\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-136bc82296" Sep 4 23:50:34.937521 kubelet[2600]: W0904 23:50:34.937457 2600 warnings.go:70] metadata.name: this is 
used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:50:34.950087 kubelet[2600]: W0904 23:50:34.948794 2600 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:50:34.950087 kubelet[2600]: E0904 23:50:34.948913 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-136bc82296\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-n-136bc82296" Sep 4 23:50:34.950087 kubelet[2600]: W0904 23:50:34.949177 2600 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:50:34.950087 kubelet[2600]: E0904 23:50:34.949221 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-136bc82296\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.012117 kubelet[2600]: I0904 23:50:35.012072 2600 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.021381 kubelet[2600]: I0904 23:50:35.021318 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64a37271a0c6f998d44bb96591564ed2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-136bc82296\" (UID: \"64a37271a0c6f998d44bb96591564ed2\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.021604 kubelet[2600]: I0904 23:50:35.021424 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64a37271a0c6f998d44bb96591564ed2-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-136bc82296\" (UID: \"64a37271a0c6f998d44bb96591564ed2\") " 
pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.021604 kubelet[2600]: I0904 23:50:35.021450 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64a37271a0c6f998d44bb96591564ed2-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-136bc82296\" (UID: \"64a37271a0c6f998d44bb96591564ed2\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.031048 kubelet[2600]: I0904 23:50:35.030810 2600 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.031048 kubelet[2600]: I0904 23:50:35.030906 2600 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.240360 kubelet[2600]: E0904 23:50:35.239631 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:35.250393 kubelet[2600]: E0904 23:50:35.250197 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:35.250393 kubelet[2600]: E0904 23:50:35.250295 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:35.640842 sudo[2616]: pam_unix(sudo:session): session closed for user root Sep 4 23:50:35.663537 kubelet[2600]: I0904 23:50:35.663477 2600 apiserver.go:52] "Watching apiserver" Sep 4 23:50:35.715734 kubelet[2600]: I0904 23:50:35.715672 2600 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:50:35.804872 kubelet[2600]: I0904 23:50:35.804587 2600 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" podStartSLOduration=1.804556616 podStartE2EDuration="1.804556616s" podCreationTimestamp="2025-09-04 23:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:50:35.79009194 +0000 UTC m=+1.306324768" watchObservedRunningTime="2025-09-04 23:50:35.804556616 +0000 UTC m=+1.320789313" Sep 4 23:50:35.831772 kubelet[2600]: I0904 23:50:35.831204 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-n-136bc82296" podStartSLOduration=3.8311508229999998 podStartE2EDuration="3.831150823s" podCreationTimestamp="2025-09-04 23:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:50:35.806407404 +0000 UTC m=+1.322640120" watchObservedRunningTime="2025-09-04 23:50:35.831150823 +0000 UTC m=+1.347383511" Sep 4 23:50:35.847104 kubelet[2600]: I0904 23:50:35.846244 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.847104 kubelet[2600]: I0904 23:50:35.846868 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.848351 kubelet[2600]: E0904 23:50:35.847715 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:35.865087 kubelet[2600]: I0904 23:50:35.862183 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-n-136bc82296" podStartSLOduration=3.862154657 podStartE2EDuration="3.862154657s" podCreationTimestamp="2025-09-04 23:50:32 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:50:35.831511279 +0000 UTC m=+1.347743988" watchObservedRunningTime="2025-09-04 23:50:35.862154657 +0000 UTC m=+1.378387363" Sep 4 23:50:35.867091 kubelet[2600]: W0904 23:50:35.865818 2600 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:50:35.867091 kubelet[2600]: E0904 23:50:35.865901 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-136bc82296\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.867091 kubelet[2600]: E0904 23:50:35.866139 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:35.882233 kubelet[2600]: W0904 23:50:35.882182 2600 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:50:35.882447 kubelet[2600]: E0904 23:50:35.882290 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.2-n-136bc82296\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-136bc82296" Sep 4 23:50:35.882904 kubelet[2600]: E0904 23:50:35.882601 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:36.849181 kubelet[2600]: E0904 23:50:36.848622 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:36.849181 kubelet[2600]: E0904 23:50:36.848813 2600 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:36.849181 kubelet[2600]: E0904 23:50:36.849124 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:37.057362 systemd-timesyncd[1346]: Contacted time server 23.186.168.126:123 (2.flatcar.pool.ntp.org). Sep 4 23:50:37.057460 systemd-timesyncd[1346]: Initial clock synchronization to Thu 2025-09-04 23:50:37.257118 UTC. Sep 4 23:50:37.670937 sudo[1675]: pam_unix(sudo:session): session closed for user root Sep 4 23:50:37.675378 sshd[1674]: Connection closed by 147.75.109.163 port 42466 Sep 4 23:50:37.676281 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:37.685821 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:50:37.687423 systemd[1]: sshd@6-143.110.229.161:22-147.75.109.163:42466.service: Deactivated successfully. Sep 4 23:50:37.692024 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:50:37.692688 systemd[1]: session-7.scope: Consumed 6.646s CPU time, 219.9M memory peak. Sep 4 23:50:37.697757 systemd-logind[1463]: Removed session 7. 
Sep 4 23:50:37.856126 kubelet[2600]: E0904 23:50:37.852814 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:37.856126 kubelet[2600]: E0904 23:50:37.853639 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:37.989659 kubelet[2600]: I0904 23:50:37.989610 2600 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:50:37.990775 containerd[1487]: time="2025-09-04T23:50:37.990087009Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:50:37.991398 kubelet[2600]: I0904 23:50:37.990297 2600 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:50:38.688159 systemd[1]: Created slice kubepods-besteffort-pod0e062a0f_4ced_4c50_9c6d_eb5c0374fb34.slice - libcontainer container kubepods-besteffort-pod0e062a0f_4ced_4c50_9c6d_eb5c0374fb34.slice. Sep 4 23:50:38.715357 systemd[1]: Created slice kubepods-burstable-podd84075aa_d4c9_4b6b_8edd_20eaa7fa1270.slice - libcontainer container kubepods-burstable-podd84075aa_d4c9_4b6b_8edd_20eaa7fa1270.slice. 
Sep 4 23:50:38.847820 kubelet[2600]: I0904 23:50:38.847576 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-clustermesh-secrets\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.847820 kubelet[2600]: I0904 23:50:38.847673 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e062a0f-4ced-4c50-9c6d-eb5c0374fb34-lib-modules\") pod \"kube-proxy-l7ln2\" (UID: \"0e062a0f-4ced-4c50-9c6d-eb5c0374fb34\") " pod="kube-system/kube-proxy-l7ln2" Sep 4 23:50:38.847820 kubelet[2600]: I0904 23:50:38.847699 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-hostproc\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.847820 kubelet[2600]: I0904 23:50:38.847731 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-etc-cni-netd\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.847820 kubelet[2600]: I0904 23:50:38.847753 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8kw\" (UniqueName: \"kubernetes.io/projected/0e062a0f-4ced-4c50-9c6d-eb5c0374fb34-kube-api-access-ld8kw\") pod \"kube-proxy-l7ln2\" (UID: \"0e062a0f-4ced-4c50-9c6d-eb5c0374fb34\") " pod="kube-system/kube-proxy-l7ln2" Sep 4 23:50:38.847820 kubelet[2600]: I0904 23:50:38.847789 2600 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-lib-modules\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.848620 kubelet[2600]: I0904 23:50:38.848216 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-xtables-lock\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.848620 kubelet[2600]: I0904 23:50:38.848322 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-host-proc-sys-kernel\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.848620 kubelet[2600]: I0904 23:50:38.848362 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e062a0f-4ced-4c50-9c6d-eb5c0374fb34-kube-proxy\") pod \"kube-proxy-l7ln2\" (UID: \"0e062a0f-4ced-4c50-9c6d-eb5c0374fb34\") " pod="kube-system/kube-proxy-l7ln2" Sep 4 23:50:38.848620 kubelet[2600]: I0904 23:50:38.848389 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-bpf-maps\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.848620 kubelet[2600]: I0904 23:50:38.848418 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0e062a0f-4ced-4c50-9c6d-eb5c0374fb34-xtables-lock\") pod \"kube-proxy-l7ln2\" (UID: \"0e062a0f-4ced-4c50-9c6d-eb5c0374fb34\") " pod="kube-system/kube-proxy-l7ln2" Sep 4 23:50:38.848620 kubelet[2600]: I0904 23:50:38.848446 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-run\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.848959 kubelet[2600]: I0904 23:50:38.848471 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cni-path\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.848959 kubelet[2600]: I0904 23:50:38.848501 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-hubble-tls\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.848959 kubelet[2600]: I0904 23:50:38.848530 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-host-proc-sys-net\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.848959 kubelet[2600]: I0904 23:50:38.848580 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcxjs\" (UniqueName: \"kubernetes.io/projected/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-kube-api-access-mcxjs\") pod \"cilium-8skzg\" (UID: 
\"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.848959 kubelet[2600]: I0904 23:50:38.848614 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-cgroup\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:38.848959 kubelet[2600]: I0904 23:50:38.848640 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-config-path\") pod \"cilium-8skzg\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " pod="kube-system/cilium-8skzg" Sep 4 23:50:39.000281 kubelet[2600]: E0904 23:50:39.000236 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:39.007828 containerd[1487]: time="2025-09-04T23:50:39.007767502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7ln2,Uid:0e062a0f-4ced-4c50-9c6d-eb5c0374fb34,Namespace:kube-system,Attempt:0,}" Sep 4 23:50:39.022006 kubelet[2600]: E0904 23:50:39.021957 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:39.023081 containerd[1487]: time="2025-09-04T23:50:39.023024714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8skzg,Uid:d84075aa-d4c9-4b6b-8edd-20eaa7fa1270,Namespace:kube-system,Attempt:0,}" Sep 4 23:50:39.128152 containerd[1487]: time="2025-09-04T23:50:39.125372451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:39.128152 containerd[1487]: time="2025-09-04T23:50:39.125437082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:39.128152 containerd[1487]: time="2025-09-04T23:50:39.125451211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:39.128152 containerd[1487]: time="2025-09-04T23:50:39.125538467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:39.147716 systemd[1]: Created slice kubepods-besteffort-pod651449c6_af85_4739_bcef_4ebd79a4971d.slice - libcontainer container kubepods-besteffort-pod651449c6_af85_4739_bcef_4ebd79a4971d.slice. Sep 4 23:50:39.152983 containerd[1487]: time="2025-09-04T23:50:39.148962737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:39.152983 containerd[1487]: time="2025-09-04T23:50:39.149048058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:39.152983 containerd[1487]: time="2025-09-04T23:50:39.150234977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:39.152983 containerd[1487]: time="2025-09-04T23:50:39.150419631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:39.189704 systemd[1]: Started cri-containerd-6cad7fc7ffe6a4666fd664933dc32a9bc92a0c5acfbcef6440e352a47ddd2184.scope - libcontainer container 6cad7fc7ffe6a4666fd664933dc32a9bc92a0c5acfbcef6440e352a47ddd2184. 
Sep 4 23:50:39.210399 systemd[1]: Started cri-containerd-48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5.scope - libcontainer container 48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5. Sep 4 23:50:39.251901 kubelet[2600]: I0904 23:50:39.251710 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/651449c6-af85-4739-bcef-4ebd79a4971d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-56rjf\" (UID: \"651449c6-af85-4739-bcef-4ebd79a4971d\") " pod="kube-system/cilium-operator-6c4d7847fc-56rjf" Sep 4 23:50:39.251901 kubelet[2600]: I0904 23:50:39.251804 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltnn6\" (UniqueName: \"kubernetes.io/projected/651449c6-af85-4739-bcef-4ebd79a4971d-kube-api-access-ltnn6\") pod \"cilium-operator-6c4d7847fc-56rjf\" (UID: \"651449c6-af85-4739-bcef-4ebd79a4971d\") " pod="kube-system/cilium-operator-6c4d7847fc-56rjf" Sep 4 23:50:39.333804 containerd[1487]: time="2025-09-04T23:50:39.333581669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8skzg,Uid:d84075aa-d4c9-4b6b-8edd-20eaa7fa1270,Namespace:kube-system,Attempt:0,} returns sandbox id \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\"" Sep 4 23:50:39.334700 containerd[1487]: time="2025-09-04T23:50:39.334322430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7ln2,Uid:0e062a0f-4ced-4c50-9c6d-eb5c0374fb34,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cad7fc7ffe6a4666fd664933dc32a9bc92a0c5acfbcef6440e352a47ddd2184\"" Sep 4 23:50:39.334992 kubelet[2600]: E0904 23:50:39.334947 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:39.338441 kubelet[2600]: E0904 
23:50:39.338367 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:39.341911 containerd[1487]: time="2025-09-04T23:50:39.340390406Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 23:50:39.345513 containerd[1487]: time="2025-09-04T23:50:39.344277196Z" level=info msg="CreateContainer within sandbox \"6cad7fc7ffe6a4666fd664933dc32a9bc92a0c5acfbcef6440e352a47ddd2184\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:50:39.377966 containerd[1487]: time="2025-09-04T23:50:39.377813373Z" level=info msg="CreateContainer within sandbox \"6cad7fc7ffe6a4666fd664933dc32a9bc92a0c5acfbcef6440e352a47ddd2184\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f67371f9bfc6d441258470e8213887fa6d03f5451a14e92b1de04136a568130\"" Sep 4 23:50:39.379264 containerd[1487]: time="2025-09-04T23:50:39.379003500Z" level=info msg="StartContainer for \"4f67371f9bfc6d441258470e8213887fa6d03f5451a14e92b1de04136a568130\"" Sep 4 23:50:39.417866 systemd[1]: Started cri-containerd-4f67371f9bfc6d441258470e8213887fa6d03f5451a14e92b1de04136a568130.scope - libcontainer container 4f67371f9bfc6d441258470e8213887fa6d03f5451a14e92b1de04136a568130. 
Sep 4 23:50:39.457829 kubelet[2600]: E0904 23:50:39.457777 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:39.459097 containerd[1487]: time="2025-09-04T23:50:39.459021328Z" level=info msg="StartContainer for \"4f67371f9bfc6d441258470e8213887fa6d03f5451a14e92b1de04136a568130\" returns successfully" Sep 4 23:50:39.459389 containerd[1487]: time="2025-09-04T23:50:39.459361958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-56rjf,Uid:651449c6-af85-4739-bcef-4ebd79a4971d,Namespace:kube-system,Attempt:0,}" Sep 4 23:50:39.495874 containerd[1487]: time="2025-09-04T23:50:39.494530159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:39.495874 containerd[1487]: time="2025-09-04T23:50:39.495777765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:39.496299 containerd[1487]: time="2025-09-04T23:50:39.496231524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:39.499105 containerd[1487]: time="2025-09-04T23:50:39.496490743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:39.534508 systemd[1]: Started cri-containerd-45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268.scope - libcontainer container 45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268. 
Sep 4 23:50:39.596361 containerd[1487]: time="2025-09-04T23:50:39.595744560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-56rjf,Uid:651449c6-af85-4739-bcef-4ebd79a4971d,Namespace:kube-system,Attempt:0,} returns sandbox id \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\"" Sep 4 23:50:39.596984 kubelet[2600]: E0904 23:50:39.596928 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:39.861882 kubelet[2600]: E0904 23:50:39.860823 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:42.333880 kubelet[2600]: E0904 23:50:42.333718 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:42.364917 kubelet[2600]: I0904 23:50:42.362676 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l7ln2" podStartSLOduration=4.36264983 podStartE2EDuration="4.36264983s" podCreationTimestamp="2025-09-04 23:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:50:39.876121676 +0000 UTC m=+5.392354376" watchObservedRunningTime="2025-09-04 23:50:42.36264983 +0000 UTC m=+7.878882547" Sep 4 23:50:42.887948 kubelet[2600]: E0904 23:50:42.887465 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:50:45.457272 update_engine[1464]: I20250904 23:50:45.457182 1464 update_attempter.cc:509] Updating boot flags... 
Sep 4 23:50:45.546009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2977)
Sep 4 23:50:46.246896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620781955.mount: Deactivated successfully.
Sep 4 23:50:46.371047 kubelet[2600]: E0904 23:50:46.370536 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:47.591804 kubelet[2600]: E0904 23:50:47.591398 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:49.456235 containerd[1487]: time="2025-09-04T23:50:49.455896534Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:49.457419 containerd[1487]: time="2025-09-04T23:50:49.457168249Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 4 23:50:49.458613 containerd[1487]: time="2025-09-04T23:50:49.458520242Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:49.462712 containerd[1487]: time="2025-09-04T23:50:49.461756539Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.121026586s"
Sep 4 23:50:49.462712 containerd[1487]: time="2025-09-04T23:50:49.461822874Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 4 23:50:49.470551 containerd[1487]: time="2025-09-04T23:50:49.470489724Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 23:50:49.474265 containerd[1487]: time="2025-09-04T23:50:49.473662010Z" level=info msg="CreateContainer within sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:50:49.568969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124402529.mount: Deactivated successfully.
Sep 4 23:50:49.572944 containerd[1487]: time="2025-09-04T23:50:49.572871265Z" level=info msg="CreateContainer within sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\""
Sep 4 23:50:49.574150 containerd[1487]: time="2025-09-04T23:50:49.574105524Z" level=info msg="StartContainer for \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\""
Sep 4 23:50:49.736398 systemd[1]: Started cri-containerd-5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29.scope - libcontainer container 5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29.
Sep 4 23:50:49.803702 containerd[1487]: time="2025-09-04T23:50:49.803631178Z" level=info msg="StartContainer for \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\" returns successfully"
Sep 4 23:50:49.820509 systemd[1]: cri-containerd-5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29.scope: Deactivated successfully.
Sep 4 23:50:49.913375 kubelet[2600]: E0904 23:50:49.913331 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:49.943818 containerd[1487]: time="2025-09-04T23:50:49.898191150Z" level=info msg="shim disconnected" id=5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29 namespace=k8s.io
Sep 4 23:50:49.945859 containerd[1487]: time="2025-09-04T23:50:49.945756216Z" level=warning msg="cleaning up after shim disconnected" id=5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29 namespace=k8s.io
Sep 4 23:50:49.945859 containerd[1487]: time="2025-09-04T23:50:49.945848272Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:50:50.559818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29-rootfs.mount: Deactivated successfully.
Sep 4 23:50:50.877849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1367854898.mount: Deactivated successfully.
Sep 4 23:50:50.918310 kubelet[2600]: E0904 23:50:50.917577 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:50.923376 containerd[1487]: time="2025-09-04T23:50:50.923325039Z" level=info msg="CreateContainer within sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:50:50.970503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3136930847.mount: Deactivated successfully.
Sep 4 23:50:50.976005 containerd[1487]: time="2025-09-04T23:50:50.975955403Z" level=info msg="CreateContainer within sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\""
Sep 4 23:50:50.978455 containerd[1487]: time="2025-09-04T23:50:50.978298342Z" level=info msg="StartContainer for \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\""
Sep 4 23:50:51.036398 systemd[1]: Started cri-containerd-72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f.scope - libcontainer container 72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f.
Sep 4 23:50:51.102363 containerd[1487]: time="2025-09-04T23:50:51.102295514Z" level=info msg="StartContainer for \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\" returns successfully"
Sep 4 23:50:51.126789 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:50:51.127381 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:50:51.128110 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:50:51.135192 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:50:51.139980 systemd[1]: cri-containerd-72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f.scope: Deactivated successfully.
Sep 4 23:50:51.182074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:50:51.219991 containerd[1487]: time="2025-09-04T23:50:51.219912644Z" level=info msg="shim disconnected" id=72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f namespace=k8s.io
Sep 4 23:50:51.221291 containerd[1487]: time="2025-09-04T23:50:51.221252798Z" level=warning msg="cleaning up after shim disconnected" id=72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f namespace=k8s.io
Sep 4 23:50:51.221428 containerd[1487]: time="2025-09-04T23:50:51.221413687Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:50:51.897229 containerd[1487]: time="2025-09-04T23:50:51.897158305Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:51.899249 containerd[1487]: time="2025-09-04T23:50:51.899015289Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 4 23:50:51.901304 containerd[1487]: time="2025-09-04T23:50:51.901240533Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:50:51.912475 containerd[1487]: time="2025-09-04T23:50:51.912259387Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.441703349s"
Sep 4 23:50:51.912475 containerd[1487]: time="2025-09-04T23:50:51.912327240Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 4 23:50:51.922468 containerd[1487]: time="2025-09-04T23:50:51.922214173Z" level=info msg="CreateContainer within sandbox \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 23:50:51.926108 kubelet[2600]: E0904 23:50:51.925675 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:51.994123 containerd[1487]: time="2025-09-04T23:50:51.991357117Z" level=info msg="CreateContainer within sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:50:52.015430 containerd[1487]: time="2025-09-04T23:50:52.015085985Z" level=info msg="CreateContainer within sandbox \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\""
Sep 4 23:50:52.031745 containerd[1487]: time="2025-09-04T23:50:52.031450312Z" level=info msg="StartContainer for \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\""
Sep 4 23:50:52.034963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425832602.mount: Deactivated successfully.
Sep 4 23:50:52.038624 containerd[1487]: time="2025-09-04T23:50:52.038554565Z" level=info msg="CreateContainer within sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\""
Sep 4 23:50:52.041143 containerd[1487]: time="2025-09-04T23:50:52.039462151Z" level=info msg="StartContainer for \"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\""
Sep 4 23:50:52.111260 systemd[1]: Started cri-containerd-eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118.scope - libcontainer container eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118.
Sep 4 23:50:52.125326 systemd[1]: Started cri-containerd-17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a.scope - libcontainer container 17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a.
Sep 4 23:50:52.174591 containerd[1487]: time="2025-09-04T23:50:52.174431864Z" level=info msg="StartContainer for \"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\" returns successfully"
Sep 4 23:50:52.178892 systemd[1]: cri-containerd-eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118.scope: Deactivated successfully.
Sep 4 23:50:52.223101 containerd[1487]: time="2025-09-04T23:50:52.221602084Z" level=info msg="StartContainer for \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\" returns successfully"
Sep 4 23:50:52.245536 containerd[1487]: time="2025-09-04T23:50:52.245428941Z" level=info msg="shim disconnected" id=eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118 namespace=k8s.io
Sep 4 23:50:52.245870 containerd[1487]: time="2025-09-04T23:50:52.245552315Z" level=warning msg="cleaning up after shim disconnected" id=eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118 namespace=k8s.io
Sep 4 23:50:52.245870 containerd[1487]: time="2025-09-04T23:50:52.245566475Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:50:52.934227 kubelet[2600]: E0904 23:50:52.932611 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:52.935950 kubelet[2600]: E0904 23:50:52.935878 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:52.938728 containerd[1487]: time="2025-09-04T23:50:52.938682375Z" level=info msg="CreateContainer within sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:50:52.989882 containerd[1487]: time="2025-09-04T23:50:52.989807998Z" level=info msg="CreateContainer within sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\""
Sep 4 23:50:52.993429 containerd[1487]: time="2025-09-04T23:50:52.990482585Z" level=info msg="StartContainer for \"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\""
Sep 4 23:50:53.068333 systemd[1]: Started cri-containerd-323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4.scope - libcontainer container 323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4.
Sep 4 23:50:53.142862 containerd[1487]: time="2025-09-04T23:50:53.141293929Z" level=info msg="StartContainer for \"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\" returns successfully"
Sep 4 23:50:53.151019 systemd[1]: cri-containerd-323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4.scope: Deactivated successfully.
Sep 4 23:50:53.200181 containerd[1487]: time="2025-09-04T23:50:53.199950866Z" level=info msg="shim disconnected" id=323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4 namespace=k8s.io
Sep 4 23:50:53.200641 containerd[1487]: time="2025-09-04T23:50:53.200042541Z" level=warning msg="cleaning up after shim disconnected" id=323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4 namespace=k8s.io
Sep 4 23:50:53.200641 containerd[1487]: time="2025-09-04T23:50:53.200422606Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:50:53.560795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4-rootfs.mount: Deactivated successfully.
Sep 4 23:50:53.943346 kubelet[2600]: E0904 23:50:53.941942 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:53.943346 kubelet[2600]: E0904 23:50:53.942553 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:53.947673 containerd[1487]: time="2025-09-04T23:50:53.947617018Z" level=info msg="CreateContainer within sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:50:53.979624 containerd[1487]: time="2025-09-04T23:50:53.977972165Z" level=info msg="CreateContainer within sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\""
Sep 4 23:50:53.979624 containerd[1487]: time="2025-09-04T23:50:53.978817241Z" level=info msg="StartContainer for \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\""
Sep 4 23:50:54.002971 kubelet[2600]: I0904 23:50:54.002886 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-56rjf" podStartSLOduration=2.688957334 podStartE2EDuration="15.002856466s" podCreationTimestamp="2025-09-04 23:50:39 +0000 UTC" firstStartedPulling="2025-09-04 23:50:39.599685271 +0000 UTC m=+5.115917961" lastFinishedPulling="2025-09-04 23:50:51.913584411 +0000 UTC m=+17.429817093" observedRunningTime="2025-09-04 23:50:53.193129097 +0000 UTC m=+18.709361799" watchObservedRunningTime="2025-09-04 23:50:54.002856466 +0000 UTC m=+19.519089180"
Sep 4 23:50:54.042423 systemd[1]: Started cri-containerd-4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b.scope - libcontainer container 4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b.
Sep 4 23:50:54.089128 containerd[1487]: time="2025-09-04T23:50:54.089037160Z" level=info msg="StartContainer for \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\" returns successfully"
Sep 4 23:50:54.316492 kubelet[2600]: I0904 23:50:54.316449 2600 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 23:50:54.376507 systemd[1]: Created slice kubepods-burstable-pod599e3931_ba1e_4e05_a70a_5c7b61dc6c52.slice - libcontainer container kubepods-burstable-pod599e3931_ba1e_4e05_a70a_5c7b61dc6c52.slice.
Sep 4 23:50:54.386620 systemd[1]: Created slice kubepods-burstable-pod92b6e53d_49dc_45ae_bce0_87512b059bb7.slice - libcontainer container kubepods-burstable-pod92b6e53d_49dc_45ae_bce0_87512b059bb7.slice.
Sep 4 23:50:54.478157 kubelet[2600]: I0904 23:50:54.478097 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/599e3931-ba1e-4e05-a70a-5c7b61dc6c52-config-volume\") pod \"coredns-668d6bf9bc-rqsrp\" (UID: \"599e3931-ba1e-4e05-a70a-5c7b61dc6c52\") " pod="kube-system/coredns-668d6bf9bc-rqsrp"
Sep 4 23:50:54.478157 kubelet[2600]: I0904 23:50:54.478146 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdszf\" (UniqueName: \"kubernetes.io/projected/92b6e53d-49dc-45ae-bce0-87512b059bb7-kube-api-access-qdszf\") pod \"coredns-668d6bf9bc-blnjk\" (UID: \"92b6e53d-49dc-45ae-bce0-87512b059bb7\") " pod="kube-system/coredns-668d6bf9bc-blnjk"
Sep 4 23:50:54.478368 kubelet[2600]: I0904 23:50:54.478177 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92b6e53d-49dc-45ae-bce0-87512b059bb7-config-volume\") pod \"coredns-668d6bf9bc-blnjk\" (UID: \"92b6e53d-49dc-45ae-bce0-87512b059bb7\") " pod="kube-system/coredns-668d6bf9bc-blnjk"
Sep 4 23:50:54.478368 kubelet[2600]: I0904 23:50:54.478202 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrmxm\" (UniqueName: \"kubernetes.io/projected/599e3931-ba1e-4e05-a70a-5c7b61dc6c52-kube-api-access-rrmxm\") pod \"coredns-668d6bf9bc-rqsrp\" (UID: \"599e3931-ba1e-4e05-a70a-5c7b61dc6c52\") " pod="kube-system/coredns-668d6bf9bc-rqsrp"
Sep 4 23:50:54.682670 kubelet[2600]: E0904 23:50:54.681384 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:54.686112 containerd[1487]: time="2025-09-04T23:50:54.685581496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rqsrp,Uid:599e3931-ba1e-4e05-a70a-5c7b61dc6c52,Namespace:kube-system,Attempt:0,}"
Sep 4 23:50:54.692006 kubelet[2600]: E0904 23:50:54.691401 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:54.692194 containerd[1487]: time="2025-09-04T23:50:54.691936637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-blnjk,Uid:92b6e53d-49dc-45ae-bce0-87512b059bb7,Namespace:kube-system,Attempt:0,}"
Sep 4 23:50:54.952987 kubelet[2600]: E0904 23:50:54.951940 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:55.954093 kubelet[2600]: E0904 23:50:55.954002 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:56.695815 systemd-networkd[1383]: cilium_host: Link UP
Sep 4 23:50:56.696016 systemd-networkd[1383]: cilium_net: Link UP
Sep 4 23:50:56.696021 systemd-networkd[1383]: cilium_net: Gained carrier
Sep 4 23:50:56.698689 systemd-networkd[1383]: cilium_host: Gained carrier
Sep 4 23:50:56.848344 systemd-networkd[1383]: cilium_net: Gained IPv6LL
Sep 4 23:50:56.871351 systemd-networkd[1383]: cilium_vxlan: Link UP
Sep 4 23:50:56.871360 systemd-networkd[1383]: cilium_vxlan: Gained carrier
Sep 4 23:50:56.955735 kubelet[2600]: E0904 23:50:56.955590 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:57.295151 kernel: NET: Registered PF_ALG protocol family
Sep 4 23:50:57.593896 systemd-networkd[1383]: cilium_host: Gained IPv6LL
Sep 4 23:50:58.269170 systemd-networkd[1383]: lxc_health: Link UP
Sep 4 23:50:58.269647 systemd-networkd[1383]: lxc_health: Gained carrier
Sep 4 23:50:58.449833 systemd-networkd[1383]: lxc561cd27441e4: Link UP
Sep 4 23:50:58.454229 kernel: eth0: renamed from tmp7d20a
Sep 4 23:50:58.461115 kernel: eth0: renamed from tmp306d0
Sep 4 23:50:58.466417 systemd-networkd[1383]: lxcb5823ca0d2e6: Link UP
Sep 4 23:50:58.466752 systemd-networkd[1383]: lxc561cd27441e4: Gained carrier
Sep 4 23:50:58.473299 systemd-networkd[1383]: lxcb5823ca0d2e6: Gained carrier
Sep 4 23:50:58.745315 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL
Sep 4 23:50:59.025788 kubelet[2600]: E0904 23:50:59.025652 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:50:59.080111 kubelet[2600]: I0904 23:50:59.078333 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8skzg" podStartSLOduration=10.949289546 podStartE2EDuration="21.078303326s" podCreationTimestamp="2025-09-04 23:50:38 +0000 UTC" firstStartedPulling="2025-09-04 23:50:39.338430236 +0000 UTC m=+4.854662924" lastFinishedPulling="2025-09-04 23:50:49.467444002 +0000 UTC m=+14.983676704" observedRunningTime="2025-09-04 23:50:54.988840476 +0000 UTC m=+20.505073177" watchObservedRunningTime="2025-09-04 23:50:59.078303326 +0000 UTC m=+24.594536034"
Sep 4 23:50:59.960309 systemd-networkd[1383]: lxc561cd27441e4: Gained IPv6LL
Sep 4 23:50:59.966537 kubelet[2600]: E0904 23:50:59.963813 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:51:00.155140 systemd-networkd[1383]: lxcb5823ca0d2e6: Gained IPv6LL
Sep 4 23:51:00.216333 systemd-networkd[1383]: lxc_health: Gained IPv6LL
Sep 4 23:51:00.969053 kubelet[2600]: E0904 23:51:00.967629 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:51:04.322667 containerd[1487]: time="2025-09-04T23:51:04.321357715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:51:04.322667 containerd[1487]: time="2025-09-04T23:51:04.321461588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:51:04.322667 containerd[1487]: time="2025-09-04T23:51:04.321491294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:51:04.322667 containerd[1487]: time="2025-09-04T23:51:04.321638111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:51:04.392535 systemd[1]: Started cri-containerd-7d20aae062e065578c34d67bb6b836ba843b768cf8016ab539964fedc35e5c5e.scope - libcontainer container 7d20aae062e065578c34d67bb6b836ba843b768cf8016ab539964fedc35e5c5e.
Sep 4 23:51:04.407672 containerd[1487]: time="2025-09-04T23:51:04.407550701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:51:04.408183 containerd[1487]: time="2025-09-04T23:51:04.407628315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:51:04.408183 containerd[1487]: time="2025-09-04T23:51:04.407646826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:51:04.408183 containerd[1487]: time="2025-09-04T23:51:04.407739646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:51:04.468233 systemd[1]: Started cri-containerd-306d0dc0ad5e16cfe464ffd0c5fe08b27802e0b870cde25983960aa885332c51.scope - libcontainer container 306d0dc0ad5e16cfe464ffd0c5fe08b27802e0b870cde25983960aa885332c51.
Sep 4 23:51:04.520675 containerd[1487]: time="2025-09-04T23:51:04.520038230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rqsrp,Uid:599e3931-ba1e-4e05-a70a-5c7b61dc6c52,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d20aae062e065578c34d67bb6b836ba843b768cf8016ab539964fedc35e5c5e\""
Sep 4 23:51:04.524979 kubelet[2600]: E0904 23:51:04.523228 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:51:04.531469 containerd[1487]: time="2025-09-04T23:51:04.531321313Z" level=info msg="CreateContainer within sandbox \"7d20aae062e065578c34d67bb6b836ba843b768cf8016ab539964fedc35e5c5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:51:04.568007 containerd[1487]: time="2025-09-04T23:51:04.567926602Z" level=info msg="CreateContainer within sandbox \"7d20aae062e065578c34d67bb6b836ba843b768cf8016ab539964fedc35e5c5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b24d8c7384e90e3b122ad2baaa4ba8fd50c0d2b0c37d3f244e7a79cc5c641f6f\""
Sep 4 23:51:04.570138 containerd[1487]: time="2025-09-04T23:51:04.569526557Z" level=info msg="StartContainer for \"b24d8c7384e90e3b122ad2baaa4ba8fd50c0d2b0c37d3f244e7a79cc5c641f6f\""
Sep 4 23:51:04.620017 systemd[1]: Started cri-containerd-b24d8c7384e90e3b122ad2baaa4ba8fd50c0d2b0c37d3f244e7a79cc5c641f6f.scope - libcontainer container b24d8c7384e90e3b122ad2baaa4ba8fd50c0d2b0c37d3f244e7a79cc5c641f6f.
Sep 4 23:51:04.639775 containerd[1487]: time="2025-09-04T23:51:04.639683572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-blnjk,Uid:92b6e53d-49dc-45ae-bce0-87512b059bb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"306d0dc0ad5e16cfe464ffd0c5fe08b27802e0b870cde25983960aa885332c51\""
Sep 4 23:51:04.642909 kubelet[2600]: E0904 23:51:04.642221 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:51:04.646710 containerd[1487]: time="2025-09-04T23:51:04.645891083Z" level=info msg="CreateContainer within sandbox \"306d0dc0ad5e16cfe464ffd0c5fe08b27802e0b870cde25983960aa885332c51\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:51:04.672881 containerd[1487]: time="2025-09-04T23:51:04.670341590Z" level=info msg="CreateContainer within sandbox \"306d0dc0ad5e16cfe464ffd0c5fe08b27802e0b870cde25983960aa885332c51\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"643425e397fba788f7086c269889bd1934e0513ee40643ba6c6ccffb76b777a9\""
Sep 4 23:51:04.673437 containerd[1487]: time="2025-09-04T23:51:04.673383481Z" level=info msg="StartContainer for \"643425e397fba788f7086c269889bd1934e0513ee40643ba6c6ccffb76b777a9\""
Sep 4 23:51:04.710846 containerd[1487]: time="2025-09-04T23:51:04.710767648Z" level=info msg="StartContainer for \"b24d8c7384e90e3b122ad2baaa4ba8fd50c0d2b0c37d3f244e7a79cc5c641f6f\" returns successfully"
Sep 4 23:51:04.735850 systemd[1]: Started cri-containerd-643425e397fba788f7086c269889bd1934e0513ee40643ba6c6ccffb76b777a9.scope - libcontainer container 643425e397fba788f7086c269889bd1934e0513ee40643ba6c6ccffb76b777a9.
Sep 4 23:51:04.797970 containerd[1487]: time="2025-09-04T23:51:04.797808218Z" level=info msg="StartContainer for \"643425e397fba788f7086c269889bd1934e0513ee40643ba6c6ccffb76b777a9\" returns successfully"
Sep 4 23:51:04.995895 kubelet[2600]: E0904 23:51:04.995327 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:51:05.000332 kubelet[2600]: E0904 23:51:05.000284 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:51:05.031382 kubelet[2600]: I0904 23:51:05.030940 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rqsrp" podStartSLOduration=26.0309182 podStartE2EDuration="26.0309182s" podCreationTimestamp="2025-09-04 23:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:51:05.026381742 +0000 UTC m=+30.542614452" watchObservedRunningTime="2025-09-04 23:51:05.0309182 +0000 UTC m=+30.547150922"
Sep 4 23:51:05.065521 kubelet[2600]: I0904 23:51:05.064870 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-blnjk" podStartSLOduration=26.06474728 podStartE2EDuration="26.06474728s" podCreationTimestamp="2025-09-04 23:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:51:05.064278755 +0000 UTC m=+30.580511521" watchObservedRunningTime="2025-09-04 23:51:05.06474728 +0000 UTC m=+30.580979977"
Sep 4 23:51:06.003250 kubelet[2600]: E0904 23:51:06.002791 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:51:07.005961 kubelet[2600]: E0904 23:51:07.005248 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:51:15.002781 kubelet[2600]: E0904 23:51:15.002730 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:51:15.032110 kubelet[2600]: E0904 23:51:15.030214 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:51:15.700710 systemd[1]: Started sshd@7-143.110.229.161:22-147.75.109.163:34654.service - OpenSSH per-connection server daemon (147.75.109.163:34654).
Sep 4 23:51:15.807671 sshd[4005]: Accepted publickey for core from 147.75.109.163 port 34654 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA
Sep 4 23:51:15.809411 sshd-session[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:51:15.817444 systemd-logind[1463]: New session 8 of user core.
Sep 4 23:51:15.832449 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 23:51:16.516274 sshd[4007]: Connection closed by 147.75.109.163 port 34654
Sep 4 23:51:16.517486 sshd-session[4005]: pam_unix(sshd:session): session closed for user core
Sep 4 23:51:16.528732 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit.
Sep 4 23:51:16.531446 systemd[1]: sshd@7-143.110.229.161:22-147.75.109.163:34654.service: Deactivated successfully.
Sep 4 23:51:16.534751 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 23:51:16.538870 systemd-logind[1463]: Removed session 8.
Sep 4 23:51:21.543615 systemd[1]: Started sshd@8-143.110.229.161:22-147.75.109.163:51292.service - OpenSSH per-connection server daemon (147.75.109.163:51292). Sep 4 23:51:21.615106 sshd[4020]: Accepted publickey for core from 147.75.109.163 port 51292 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:21.618429 sshd-session[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:21.629285 systemd-logind[1463]: New session 9 of user core. Sep 4 23:51:21.635435 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:51:21.819400 sshd[4022]: Connection closed by 147.75.109.163 port 51292 Sep 4 23:51:21.820543 sshd-session[4020]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:21.825265 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:51:21.825811 systemd[1]: sshd@8-143.110.229.161:22-147.75.109.163:51292.service: Deactivated successfully. Sep 4 23:51:21.829921 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:51:21.833917 systemd-logind[1463]: Removed session 9. Sep 4 23:51:26.848632 systemd[1]: Started sshd@9-143.110.229.161:22-147.75.109.163:51294.service - OpenSSH per-connection server daemon (147.75.109.163:51294). Sep 4 23:51:26.928118 sshd[4036]: Accepted publickey for core from 147.75.109.163 port 51294 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:26.929885 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:26.937643 systemd-logind[1463]: New session 10 of user core. Sep 4 23:51:26.944490 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 4 23:51:27.106537 sshd[4038]: Connection closed by 147.75.109.163 port 51294 Sep 4 23:51:27.109445 sshd-session[4036]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:27.114273 systemd[1]: sshd@9-143.110.229.161:22-147.75.109.163:51294.service: Deactivated successfully. Sep 4 23:51:27.119026 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:51:27.124188 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. Sep 4 23:51:27.126713 systemd-logind[1463]: Removed session 10. Sep 4 23:51:32.136469 systemd[1]: Started sshd@10-143.110.229.161:22-147.75.109.163:52012.service - OpenSSH per-connection server daemon (147.75.109.163:52012). Sep 4 23:51:32.188705 sshd[4052]: Accepted publickey for core from 147.75.109.163 port 52012 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:32.190273 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:32.195439 systemd-logind[1463]: New session 11 of user core. Sep 4 23:51:32.204407 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 23:51:32.351916 sshd[4054]: Connection closed by 147.75.109.163 port 52012 Sep 4 23:51:32.353520 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:32.368016 systemd[1]: sshd@10-143.110.229.161:22-147.75.109.163:52012.service: Deactivated successfully. Sep 4 23:51:32.370700 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:51:32.372153 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. Sep 4 23:51:32.382216 systemd[1]: Started sshd@11-143.110.229.161:22-147.75.109.163:52020.service - OpenSSH per-connection server daemon (147.75.109.163:52020). Sep 4 23:51:32.384578 systemd-logind[1463]: Removed session 11. 
Sep 4 23:51:32.440974 sshd[4066]: Accepted publickey for core from 147.75.109.163 port 52020 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:32.443109 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:32.450249 systemd-logind[1463]: New session 12 of user core. Sep 4 23:51:32.457426 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 23:51:32.648630 sshd[4069]: Connection closed by 147.75.109.163 port 52020 Sep 4 23:51:32.649802 sshd-session[4066]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:32.663161 systemd[1]: sshd@11-143.110.229.161:22-147.75.109.163:52020.service: Deactivated successfully. Sep 4 23:51:32.666690 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 23:51:32.672597 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. Sep 4 23:51:32.682492 systemd[1]: Started sshd@12-143.110.229.161:22-147.75.109.163:52024.service - OpenSSH per-connection server daemon (147.75.109.163:52024). Sep 4 23:51:32.686464 systemd-logind[1463]: Removed session 12. Sep 4 23:51:32.758116 sshd[4078]: Accepted publickey for core from 147.75.109.163 port 52024 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:32.760644 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:32.775820 systemd-logind[1463]: New session 13 of user core. Sep 4 23:51:32.781402 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 23:51:32.952482 sshd[4081]: Connection closed by 147.75.109.163 port 52024 Sep 4 23:51:32.953370 sshd-session[4078]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:32.959497 systemd[1]: sshd@12-143.110.229.161:22-147.75.109.163:52024.service: Deactivated successfully. Sep 4 23:51:32.961822 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 23:51:32.963034 systemd-logind[1463]: Session 13 logged out. 
Waiting for processes to exit. Sep 4 23:51:32.964547 systemd-logind[1463]: Removed session 13. Sep 4 23:51:37.981515 systemd[1]: Started sshd@13-143.110.229.161:22-147.75.109.163:52040.service - OpenSSH per-connection server daemon (147.75.109.163:52040). Sep 4 23:51:38.038278 sshd[4096]: Accepted publickey for core from 147.75.109.163 port 52040 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:38.040909 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:38.049692 systemd-logind[1463]: New session 14 of user core. Sep 4 23:51:38.056965 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 23:51:38.220669 sshd[4098]: Connection closed by 147.75.109.163 port 52040 Sep 4 23:51:38.221588 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:38.226832 systemd[1]: sshd@13-143.110.229.161:22-147.75.109.163:52040.service: Deactivated successfully. Sep 4 23:51:38.229619 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 23:51:38.231049 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. Sep 4 23:51:38.233191 systemd-logind[1463]: Removed session 14. Sep 4 23:51:43.243471 systemd[1]: Started sshd@14-143.110.229.161:22-147.75.109.163:58476.service - OpenSSH per-connection server daemon (147.75.109.163:58476). Sep 4 23:51:43.298106 sshd[4112]: Accepted publickey for core from 147.75.109.163 port 58476 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:43.299913 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:43.306049 systemd-logind[1463]: New session 15 of user core. Sep 4 23:51:43.311351 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 4 23:51:43.475763 sshd[4114]: Connection closed by 147.75.109.163 port 58476 Sep 4 23:51:43.474858 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:43.480457 systemd[1]: sshd@14-143.110.229.161:22-147.75.109.163:58476.service: Deactivated successfully. Sep 4 23:51:43.480863 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Sep 4 23:51:43.486017 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 23:51:43.489891 systemd-logind[1463]: Removed session 15. Sep 4 23:51:48.503731 systemd[1]: Started sshd@15-143.110.229.161:22-147.75.109.163:58484.service - OpenSSH per-connection server daemon (147.75.109.163:58484). Sep 4 23:51:48.571708 sshd[4125]: Accepted publickey for core from 147.75.109.163 port 58484 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:48.574046 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:48.582466 systemd-logind[1463]: New session 16 of user core. Sep 4 23:51:48.590543 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 23:51:48.749809 sshd[4127]: Connection closed by 147.75.109.163 port 58484 Sep 4 23:51:48.751014 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:48.763260 systemd[1]: sshd@15-143.110.229.161:22-147.75.109.163:58484.service: Deactivated successfully. Sep 4 23:51:48.766336 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 23:51:48.768937 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. Sep 4 23:51:48.775889 systemd[1]: Started sshd@16-143.110.229.161:22-147.75.109.163:58492.service - OpenSSH per-connection server daemon (147.75.109.163:58492). Sep 4 23:51:48.779467 systemd-logind[1463]: Removed session 16. 
Sep 4 23:51:48.842544 sshd[4137]: Accepted publickey for core from 147.75.109.163 port 58492 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:48.844566 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:48.850833 systemd-logind[1463]: New session 17 of user core. Sep 4 23:51:48.858506 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 23:51:49.195477 sshd[4140]: Connection closed by 147.75.109.163 port 58492 Sep 4 23:51:49.197220 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:49.268742 systemd[1]: sshd@16-143.110.229.161:22-147.75.109.163:58492.service: Deactivated successfully. Sep 4 23:51:49.272523 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 23:51:49.276874 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. Sep 4 23:51:49.286775 systemd[1]: Started sshd@17-143.110.229.161:22-147.75.109.163:58508.service - OpenSSH per-connection server daemon (147.75.109.163:58508). Sep 4 23:51:49.290107 systemd-logind[1463]: Removed session 17. Sep 4 23:51:49.384372 sshd[4149]: Accepted publickey for core from 147.75.109.163 port 58508 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:49.387322 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:49.396639 systemd-logind[1463]: New session 18 of user core. Sep 4 23:51:49.402595 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 23:51:50.149009 sshd[4152]: Connection closed by 147.75.109.163 port 58508 Sep 4 23:51:50.153338 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:50.171531 systemd[1]: sshd@17-143.110.229.161:22-147.75.109.163:58508.service: Deactivated successfully. Sep 4 23:51:50.178931 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:51:50.182630 systemd-logind[1463]: Session 18 logged out. 
Waiting for processes to exit. Sep 4 23:51:50.196670 systemd[1]: Started sshd@18-143.110.229.161:22-147.75.109.163:47054.service - OpenSSH per-connection server daemon (147.75.109.163:47054). Sep 4 23:51:50.203561 systemd-logind[1463]: Removed session 18. Sep 4 23:51:50.257644 sshd[4167]: Accepted publickey for core from 147.75.109.163 port 47054 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:50.260868 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:50.270641 systemd-logind[1463]: New session 19 of user core. Sep 4 23:51:50.286443 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 23:51:50.615851 sshd[4170]: Connection closed by 147.75.109.163 port 47054 Sep 4 23:51:50.617158 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:50.630663 systemd[1]: sshd@18-143.110.229.161:22-147.75.109.163:47054.service: Deactivated successfully. Sep 4 23:51:50.637153 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:51:50.644210 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit. Sep 4 23:51:50.659159 systemd[1]: Started sshd@19-143.110.229.161:22-147.75.109.163:47062.service - OpenSSH per-connection server daemon (147.75.109.163:47062). Sep 4 23:51:50.662928 systemd-logind[1463]: Removed session 19. Sep 4 23:51:50.726925 sshd[4179]: Accepted publickey for core from 147.75.109.163 port 47062 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:50.729344 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:50.737545 systemd-logind[1463]: New session 20 of user core. Sep 4 23:51:50.744430 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 4 23:51:50.897163 sshd[4182]: Connection closed by 147.75.109.163 port 47062 Sep 4 23:51:50.898294 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:50.903540 systemd[1]: sshd@19-143.110.229.161:22-147.75.109.163:47062.service: Deactivated successfully. Sep 4 23:51:50.907320 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:51:50.908633 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:51:50.910017 systemd-logind[1463]: Removed session 20. Sep 4 23:51:54.806562 kubelet[2600]: E0904 23:51:54.806053 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:51:55.925647 systemd[1]: Started sshd@20-143.110.229.161:22-147.75.109.163:47076.service - OpenSSH per-connection server daemon (147.75.109.163:47076). Sep 4 23:51:55.985089 sshd[4194]: Accepted publickey for core from 147.75.109.163 port 47076 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:51:55.987170 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:55.993521 systemd-logind[1463]: New session 21 of user core. Sep 4 23:51:55.999418 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 23:51:56.132755 sshd[4196]: Connection closed by 147.75.109.163 port 47076 Sep 4 23:51:56.134710 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:56.142174 systemd[1]: sshd@20-143.110.229.161:22-147.75.109.163:47076.service: Deactivated successfully. Sep 4 23:51:56.145336 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:51:56.147016 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:51:56.148842 systemd-logind[1463]: Removed session 21. 
Sep 4 23:51:57.805818 kubelet[2600]: E0904 23:51:57.805770 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:52:01.154567 systemd[1]: Started sshd@21-143.110.229.161:22-147.75.109.163:56162.service - OpenSSH per-connection server daemon (147.75.109.163:56162). Sep 4 23:52:01.212006 sshd[4212]: Accepted publickey for core from 147.75.109.163 port 56162 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:52:01.214668 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:01.222024 systemd-logind[1463]: New session 22 of user core. Sep 4 23:52:01.228453 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 23:52:01.391555 sshd[4214]: Connection closed by 147.75.109.163 port 56162 Sep 4 23:52:01.391399 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:01.395805 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:52:01.396338 systemd[1]: sshd@21-143.110.229.161:22-147.75.109.163:56162.service: Deactivated successfully. Sep 4 23:52:01.400046 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:52:01.404097 systemd-logind[1463]: Removed session 22. Sep 4 23:52:05.807033 kubelet[2600]: E0904 23:52:05.806154 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:52:06.419951 systemd[1]: Started sshd@22-143.110.229.161:22-147.75.109.163:56178.service - OpenSSH per-connection server daemon (147.75.109.163:56178). 
Sep 4 23:52:06.485316 sshd[4226]: Accepted publickey for core from 147.75.109.163 port 56178 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:52:06.487800 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:06.495314 systemd-logind[1463]: New session 23 of user core. Sep 4 23:52:06.503448 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 23:52:06.719809 sshd[4228]: Connection closed by 147.75.109.163 port 56178 Sep 4 23:52:06.721020 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:06.727300 systemd[1]: sshd@22-143.110.229.161:22-147.75.109.163:56178.service: Deactivated successfully. Sep 4 23:52:06.732394 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 23:52:06.736221 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. Sep 4 23:52:06.739523 systemd-logind[1463]: Removed session 23. Sep 4 23:52:07.806441 kubelet[2600]: E0904 23:52:07.806213 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 4 23:52:11.744632 systemd[1]: Started sshd@23-143.110.229.161:22-147.75.109.163:44478.service - OpenSSH per-connection server daemon (147.75.109.163:44478). Sep 4 23:52:11.819256 sshd[4242]: Accepted publickey for core from 147.75.109.163 port 44478 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:52:11.821011 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:11.829346 systemd-logind[1463]: New session 24 of user core. Sep 4 23:52:11.833595 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 4 23:52:12.015256 sshd[4244]: Connection closed by 147.75.109.163 port 44478 Sep 4 23:52:12.016301 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:12.032474 systemd[1]: sshd@23-143.110.229.161:22-147.75.109.163:44478.service: Deactivated successfully. Sep 4 23:52:12.036227 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:52:12.040389 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:52:12.063918 systemd[1]: Started sshd@24-143.110.229.161:22-147.75.109.163:44482.service - OpenSSH per-connection server daemon (147.75.109.163:44482). Sep 4 23:52:12.065991 systemd-logind[1463]: Removed session 24. Sep 4 23:52:12.122808 sshd[4255]: Accepted publickey for core from 147.75.109.163 port 44482 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:52:12.125117 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:12.139434 systemd-logind[1463]: New session 25 of user core. Sep 4 23:52:12.146559 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 4 23:52:14.403620 containerd[1487]: time="2025-09-04T23:52:14.403465290Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:52:14.421460 containerd[1487]: time="2025-09-04T23:52:14.421382429Z" level=info msg="StopContainer for \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\" with timeout 30 (s)" Sep 4 23:52:14.424155 containerd[1487]: time="2025-09-04T23:52:14.424098499Z" level=info msg="Stop container \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\" with signal terminated" Sep 4 23:52:14.452559 containerd[1487]: time="2025-09-04T23:52:14.452497854Z" level=info msg="StopContainer for \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\" with timeout 2 (s)" Sep 4 23:52:14.453526 containerd[1487]: time="2025-09-04T23:52:14.453464187Z" level=info msg="Stop container \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\" with signal terminated" Sep 4 23:52:14.481191 systemd-networkd[1383]: lxc_health: Link DOWN Sep 4 23:52:14.481208 systemd-networkd[1383]: lxc_health: Lost carrier Sep 4 23:52:14.506176 systemd[1]: cri-containerd-17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a.scope: Deactivated successfully. Sep 4 23:52:14.526579 systemd[1]: cri-containerd-4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b.scope: Deactivated successfully. Sep 4 23:52:14.528824 systemd[1]: cri-containerd-4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b.scope: Consumed 9.613s CPU time, 191.1M memory peak, 67M read from disk, 13.3M written to disk. Sep 4 23:52:14.547395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a-rootfs.mount: Deactivated successfully. 
Sep 4 23:52:14.557141 containerd[1487]: time="2025-09-04T23:52:14.556418134Z" level=info msg="shim disconnected" id=17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a namespace=k8s.io Sep 4 23:52:14.557141 containerd[1487]: time="2025-09-04T23:52:14.556486184Z" level=warning msg="cleaning up after shim disconnected" id=17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a namespace=k8s.io Sep 4 23:52:14.557141 containerd[1487]: time="2025-09-04T23:52:14.556494685Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:52:14.592303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b-rootfs.mount: Deactivated successfully. Sep 4 23:52:14.595039 containerd[1487]: time="2025-09-04T23:52:14.594877272Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:52:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:52:14.600993 containerd[1487]: time="2025-09-04T23:52:14.600888868Z" level=info msg="StopContainer for \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\" returns successfully" Sep 4 23:52:14.608369 containerd[1487]: time="2025-09-04T23:52:14.608283935Z" level=info msg="shim disconnected" id=4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b namespace=k8s.io Sep 4 23:52:14.609008 containerd[1487]: time="2025-09-04T23:52:14.608741176Z" level=warning msg="cleaning up after shim disconnected" id=4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b namespace=k8s.io Sep 4 23:52:14.609008 containerd[1487]: time="2025-09-04T23:52:14.608801527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:52:14.620299 containerd[1487]: time="2025-09-04T23:52:14.619266301Z" level=info msg="StopPodSandbox for \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\"" Sep 4 
23:52:14.623687 containerd[1487]: time="2025-09-04T23:52:14.623566428Z" level=info msg="Container to stop \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:52:14.632972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268-shm.mount: Deactivated successfully. Sep 4 23:52:14.658336 containerd[1487]: time="2025-09-04T23:52:14.658090900Z" level=info msg="StopContainer for \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\" returns successfully" Sep 4 23:52:14.659436 containerd[1487]: time="2025-09-04T23:52:14.658913888Z" level=info msg="StopPodSandbox for \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\"" Sep 4 23:52:14.659436 containerd[1487]: time="2025-09-04T23:52:14.658962728Z" level=info msg="Container to stop \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:52:14.659436 containerd[1487]: time="2025-09-04T23:52:14.659001089Z" level=info msg="Container to stop \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:52:14.659436 containerd[1487]: time="2025-09-04T23:52:14.659010978Z" level=info msg="Container to stop \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:52:14.659436 containerd[1487]: time="2025-09-04T23:52:14.659024478Z" level=info msg="Container to stop \"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:52:14.659436 containerd[1487]: time="2025-09-04T23:52:14.659041587Z" level=info msg="Container to stop 
\"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:52:14.662391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5-shm.mount: Deactivated successfully. Sep 4 23:52:14.667976 systemd[1]: cri-containerd-45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268.scope: Deactivated successfully. Sep 4 23:52:14.684511 systemd[1]: cri-containerd-48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5.scope: Deactivated successfully. Sep 4 23:52:14.729381 containerd[1487]: time="2025-09-04T23:52:14.729288221Z" level=info msg="shim disconnected" id=45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268 namespace=k8s.io Sep 4 23:52:14.729381 containerd[1487]: time="2025-09-04T23:52:14.729366170Z" level=warning msg="cleaning up after shim disconnected" id=45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268 namespace=k8s.io Sep 4 23:52:14.729381 containerd[1487]: time="2025-09-04T23:52:14.729378851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:52:14.732284 containerd[1487]: time="2025-09-04T23:52:14.731096499Z" level=info msg="shim disconnected" id=48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5 namespace=k8s.io Sep 4 23:52:14.732284 containerd[1487]: time="2025-09-04T23:52:14.731176780Z" level=warning msg="cleaning up after shim disconnected" id=48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5 namespace=k8s.io Sep 4 23:52:14.732284 containerd[1487]: time="2025-09-04T23:52:14.731192237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:52:14.760458 containerd[1487]: time="2025-09-04T23:52:14.760359835Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:52:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" 
runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:52:14.763642 containerd[1487]: time="2025-09-04T23:52:14.763553780Z" level=info msg="TearDown network for sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" successfully" Sep 4 23:52:14.763929 containerd[1487]: time="2025-09-04T23:52:14.763900524Z" level=info msg="StopPodSandbox for \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" returns successfully" Sep 4 23:52:14.767373 containerd[1487]: time="2025-09-04T23:52:14.767308470Z" level=info msg="TearDown network for sandbox \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\" successfully" Sep 4 23:52:14.768495 containerd[1487]: time="2025-09-04T23:52:14.768423773Z" level=info msg="StopPodSandbox for \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\" returns successfully" Sep 4 23:52:14.880271 kubelet[2600]: I0904 23:52:14.880151 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-run\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.880271 kubelet[2600]: I0904 23:52:14.880217 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-cgroup\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.880271 kubelet[2600]: I0904 23:52:14.880253 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-host-proc-sys-net\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.880902 kubelet[2600]: I0904 23:52:14.880296 2600 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-clustermesh-secrets\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.880902 kubelet[2600]: I0904 23:52:14.880343 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-hubble-tls\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.880902 kubelet[2600]: I0904 23:52:14.880368 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cni-path\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.880902 kubelet[2600]: I0904 23:52:14.880397 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcxjs\" (UniqueName: \"kubernetes.io/projected/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-kube-api-access-mcxjs\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.880902 kubelet[2600]: I0904 23:52:14.880431 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-config-path\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.880902 kubelet[2600]: I0904 23:52:14.880459 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/651449c6-af85-4739-bcef-4ebd79a4971d-cilium-config-path\") pod 
\"651449c6-af85-4739-bcef-4ebd79a4971d\" (UID: \"651449c6-af85-4739-bcef-4ebd79a4971d\") " Sep 4 23:52:14.881113 kubelet[2600]: I0904 23:52:14.880486 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-hostproc\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.881113 kubelet[2600]: I0904 23:52:14.880515 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltnn6\" (UniqueName: \"kubernetes.io/projected/651449c6-af85-4739-bcef-4ebd79a4971d-kube-api-access-ltnn6\") pod \"651449c6-af85-4739-bcef-4ebd79a4971d\" (UID: \"651449c6-af85-4739-bcef-4ebd79a4971d\") " Sep 4 23:52:14.881113 kubelet[2600]: I0904 23:52:14.880543 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-xtables-lock\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.881113 kubelet[2600]: I0904 23:52:14.880571 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-bpf-maps\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.881113 kubelet[2600]: I0904 23:52:14.880598 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-etc-cni-netd\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.881113 kubelet[2600]: I0904 23:52:14.880625 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-host-proc-sys-kernel\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.881256 kubelet[2600]: I0904 23:52:14.880683 2600 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-lib-modules\") pod \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\" (UID: \"d84075aa-d4c9-4b6b-8edd-20eaa7fa1270\") " Sep 4 23:52:14.881256 kubelet[2600]: I0904 23:52:14.880813 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:52:14.881256 kubelet[2600]: I0904 23:52:14.880868 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:52:14.881256 kubelet[2600]: I0904 23:52:14.880891 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:52:14.881256 kubelet[2600]: I0904 23:52:14.880914 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:52:14.891241 kubelet[2600]: I0904 23:52:14.890578 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:52:14.899733 kubelet[2600]: I0904 23:52:14.899178 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:52:14.899733 kubelet[2600]: I0904 23:52:14.899299 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cni-path" (OuterVolumeSpecName: "cni-path") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:52:14.906805 kubelet[2600]: I0904 23:52:14.906718 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-kube-api-access-mcxjs" (OuterVolumeSpecName: "kube-api-access-mcxjs") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "kube-api-access-mcxjs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:52:14.910890 kubelet[2600]: I0904 23:52:14.909899 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:52:14.914234 kubelet[2600]: I0904 23:52:14.913839 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651449c6-af85-4739-bcef-4ebd79a4971d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "651449c6-af85-4739-bcef-4ebd79a4971d" (UID: "651449c6-af85-4739-bcef-4ebd79a4971d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:52:14.914234 kubelet[2600]: I0904 23:52:14.913920 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-hostproc" (OuterVolumeSpecName: "hostproc") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:52:14.918285 kubelet[2600]: I0904 23:52:14.918229 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651449c6-af85-4739-bcef-4ebd79a4971d-kube-api-access-ltnn6" (OuterVolumeSpecName: "kube-api-access-ltnn6") pod "651449c6-af85-4739-bcef-4ebd79a4971d" (UID: "651449c6-af85-4739-bcef-4ebd79a4971d"). InnerVolumeSpecName "kube-api-access-ltnn6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:52:14.918611 kubelet[2600]: I0904 23:52:14.918533 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:52:14.918611 kubelet[2600]: I0904 23:52:14.918559 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:52:14.918611 kubelet[2600]: I0904 23:52:14.918575 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:52:14.918611 kubelet[2600]: I0904 23:52:14.918591 2600 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" (UID: "d84075aa-d4c9-4b6b-8edd-20eaa7fa1270"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:52:14.981421 kubelet[2600]: I0904 23:52:14.981097 2600 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-run\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981421 kubelet[2600]: I0904 23:52:14.981146 2600 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-cgroup\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981421 kubelet[2600]: I0904 23:52:14.981163 2600 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-clustermesh-secrets\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981421 kubelet[2600]: I0904 23:52:14.981178 2600 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-hubble-tls\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981421 kubelet[2600]: I0904 23:52:14.981200 2600 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-host-proc-sys-net\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981421 kubelet[2600]: I0904 23:52:14.981219 2600 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cilium-config-path\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981421 kubelet[2600]: I0904 23:52:14.981234 2600 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/651449c6-af85-4739-bcef-4ebd79a4971d-cilium-config-path\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981421 kubelet[2600]: I0904 23:52:14.981250 2600 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-hostproc\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981788 kubelet[2600]: I0904 23:52:14.981266 2600 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-cni-path\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981788 kubelet[2600]: I0904 23:52:14.981279 2600 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mcxjs\" (UniqueName: \"kubernetes.io/projected/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-kube-api-access-mcxjs\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981788 kubelet[2600]: I0904 23:52:14.981296 2600 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ltnn6\" (UniqueName: \"kubernetes.io/projected/651449c6-af85-4739-bcef-4ebd79a4971d-kube-api-access-ltnn6\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981788 kubelet[2600]: I0904 23:52:14.981311 2600 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-xtables-lock\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981788 kubelet[2600]: I0904 
23:52:14.981325 2600 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-etc-cni-netd\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981788 kubelet[2600]: I0904 23:52:14.981336 2600 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-host-proc-sys-kernel\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981788 kubelet[2600]: I0904 23:52:14.981346 2600 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-bpf-maps\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.981788 kubelet[2600]: I0904 23:52:14.981355 2600 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270-lib-modules\") on node \"ci-4230.2.2-n-136bc82296\" DevicePath \"\"" Sep 4 23:52:14.990800 kubelet[2600]: E0904 23:52:14.990700 2600 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:52:15.191638 kubelet[2600]: I0904 23:52:15.190946 2600 scope.go:117] "RemoveContainer" containerID="17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a" Sep 4 23:52:15.204010 containerd[1487]: time="2025-09-04T23:52:15.203870704Z" level=info msg="RemoveContainer for \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\"" Sep 4 23:52:15.218778 containerd[1487]: time="2025-09-04T23:52:15.218320833Z" level=info msg="RemoveContainer for \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\" returns successfully" Sep 4 23:52:15.220949 kubelet[2600]: I0904 23:52:15.220416 2600 scope.go:117] "RemoveContainer" 
containerID="17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a" Sep 4 23:52:15.221589 containerd[1487]: time="2025-09-04T23:52:15.221000365Z" level=error msg="ContainerStatus for \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\": not found" Sep 4 23:52:15.222998 systemd[1]: Removed slice kubepods-besteffort-pod651449c6_af85_4739_bcef_4ebd79a4971d.slice - libcontainer container kubepods-besteffort-pod651449c6_af85_4739_bcef_4ebd79a4971d.slice. Sep 4 23:52:15.224499 kubelet[2600]: E0904 23:52:15.223888 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\": not found" containerID="17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a" Sep 4 23:52:15.225915 kubelet[2600]: I0904 23:52:15.223981 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a"} err="failed to get container status \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"17f0a5602d4a16b87975a90bbc290c07a67c9069260a100fa89af7180be95b7a\": not found" Sep 4 23:52:15.225915 kubelet[2600]: I0904 23:52:15.225439 2600 scope.go:117] "RemoveContainer" containerID="4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b" Sep 4 23:52:15.230727 containerd[1487]: time="2025-09-04T23:52:15.230651053Z" level=info msg="RemoveContainer for \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\"" Sep 4 23:52:15.231240 systemd[1]: Removed slice kubepods-burstable-podd84075aa_d4c9_4b6b_8edd_20eaa7fa1270.slice - libcontainer 
container kubepods-burstable-podd84075aa_d4c9_4b6b_8edd_20eaa7fa1270.slice. Sep 4 23:52:15.231562 systemd[1]: kubepods-burstable-podd84075aa_d4c9_4b6b_8edd_20eaa7fa1270.slice: Consumed 9.742s CPU time, 191.5M memory peak, 67.1M read from disk, 13.3M written to disk. Sep 4 23:52:15.236854 containerd[1487]: time="2025-09-04T23:52:15.235787602Z" level=info msg="RemoveContainer for \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\" returns successfully" Sep 4 23:52:15.237776 kubelet[2600]: I0904 23:52:15.237625 2600 scope.go:117] "RemoveContainer" containerID="323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4" Sep 4 23:52:15.241321 containerd[1487]: time="2025-09-04T23:52:15.240539100Z" level=info msg="RemoveContainer for \"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\"" Sep 4 23:52:15.244902 containerd[1487]: time="2025-09-04T23:52:15.244803082Z" level=info msg="RemoveContainer for \"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\" returns successfully" Sep 4 23:52:15.245499 kubelet[2600]: I0904 23:52:15.245246 2600 scope.go:117] "RemoveContainer" containerID="eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118" Sep 4 23:52:15.253731 containerd[1487]: time="2025-09-04T23:52:15.253644750Z" level=info msg="RemoveContainer for \"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\"" Sep 4 23:52:15.260556 containerd[1487]: time="2025-09-04T23:52:15.260503571Z" level=info msg="RemoveContainer for \"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\" returns successfully" Sep 4 23:52:15.262642 kubelet[2600]: I0904 23:52:15.262500 2600 scope.go:117] "RemoveContainer" containerID="72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f" Sep 4 23:52:15.267655 containerd[1487]: time="2025-09-04T23:52:15.267169417Z" level=info msg="RemoveContainer for \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\"" Sep 4 23:52:15.273803 containerd[1487]: 
time="2025-09-04T23:52:15.273572942Z" level=info msg="RemoveContainer for \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\" returns successfully" Sep 4 23:52:15.274448 kubelet[2600]: I0904 23:52:15.274403 2600 scope.go:117] "RemoveContainer" containerID="5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29" Sep 4 23:52:15.281693 containerd[1487]: time="2025-09-04T23:52:15.281633025Z" level=info msg="RemoveContainer for \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\"" Sep 4 23:52:15.292207 containerd[1487]: time="2025-09-04T23:52:15.292128017Z" level=info msg="RemoveContainer for \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\" returns successfully" Sep 4 23:52:15.294332 kubelet[2600]: I0904 23:52:15.294274 2600 scope.go:117] "RemoveContainer" containerID="4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b" Sep 4 23:52:15.294665 containerd[1487]: time="2025-09-04T23:52:15.294576516Z" level=error msg="ContainerStatus for \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\": not found" Sep 4 23:52:15.294858 kubelet[2600]: E0904 23:52:15.294828 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\": not found" containerID="4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b" Sep 4 23:52:15.294930 kubelet[2600]: I0904 23:52:15.294868 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b"} err="failed to get container status \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"4e3e0a799b3e35f0ae2ce108acc3dce4ca26c5c7245f628f3495e1e69524d43b\": not found" Sep 4 23:52:15.294930 kubelet[2600]: I0904 23:52:15.294896 2600 scope.go:117] "RemoveContainer" containerID="323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4" Sep 4 23:52:15.295495 containerd[1487]: time="2025-09-04T23:52:15.295376216Z" level=error msg="ContainerStatus for \"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\": not found" Sep 4 23:52:15.295706 kubelet[2600]: E0904 23:52:15.295570 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\": not found" containerID="323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4" Sep 4 23:52:15.295706 kubelet[2600]: I0904 23:52:15.295632 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4"} err="failed to get container status \"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"323d62d8f09b3d567865e871a30546999263c509cfde2488859ee5d4bf9f8eb4\": not found" Sep 4 23:52:15.295706 kubelet[2600]: I0904 23:52:15.295659 2600 scope.go:117] "RemoveContainer" containerID="eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118" Sep 4 23:52:15.296174 containerd[1487]: time="2025-09-04T23:52:15.295913296Z" level=error msg="ContainerStatus for \"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\": not found" Sep 4 23:52:15.296223 kubelet[2600]: E0904 23:52:15.296180 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\": not found" containerID="eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118" Sep 4 23:52:15.296223 kubelet[2600]: I0904 23:52:15.296202 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118"} err="failed to get container status \"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\": rpc error: code = NotFound desc = an error occurred when try to find container \"eab6146a5df6c4393e72802e21c4c86eb9f5b255f39a8e0134e1eaa7320e4118\": not found" Sep 4 23:52:15.296223 kubelet[2600]: I0904 23:52:15.296220 2600 scope.go:117] "RemoveContainer" containerID="72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f" Sep 4 23:52:15.296954 kubelet[2600]: E0904 23:52:15.296541 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\": not found" containerID="72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f" Sep 4 23:52:15.296954 kubelet[2600]: I0904 23:52:15.296593 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f"} err="failed to get container status \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\": rpc error: code = NotFound desc = an error occurred when try to find container \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\": not found" Sep 4 
23:52:15.296954 kubelet[2600]: I0904 23:52:15.296616 2600 scope.go:117] "RemoveContainer" containerID="5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29" Sep 4 23:52:15.297154 containerd[1487]: time="2025-09-04T23:52:15.296413914Z" level=error msg="ContainerStatus for \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72937808511ffc6632c5bb261b7b13d74377c84885ce45106cdd4b1fd914e62f\": not found" Sep 4 23:52:15.297154 containerd[1487]: time="2025-09-04T23:52:15.296831603Z" level=error msg="ContainerStatus for \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\": not found" Sep 4 23:52:15.297240 kubelet[2600]: E0904 23:52:15.296938 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\": not found" containerID="5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29" Sep 4 23:52:15.297240 kubelet[2600]: I0904 23:52:15.296978 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29"} err="failed to get container status \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ffea88c7246f266f343777efc6aad49818a5c837ae116b189d87d7aa1ce7d29\": not found" Sep 4 23:52:15.343035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268-rootfs.mount: Deactivated successfully. 
Sep 4 23:52:15.343670 systemd[1]: var-lib-kubelet-pods-651449c6\x2daf85\x2d4739\x2dbcef\x2d4ebd79a4971d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dltnn6.mount: Deactivated successfully. Sep 4 23:52:15.343829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5-rootfs.mount: Deactivated successfully. Sep 4 23:52:15.343925 systemd[1]: var-lib-kubelet-pods-d84075aa\x2dd4c9\x2d4b6b\x2d8edd\x2d20eaa7fa1270-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmcxjs.mount: Deactivated successfully. Sep 4 23:52:15.344026 systemd[1]: var-lib-kubelet-pods-d84075aa\x2dd4c9\x2d4b6b\x2d8edd\x2d20eaa7fa1270-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:52:15.344144 systemd[1]: var-lib-kubelet-pods-d84075aa\x2dd4c9\x2d4b6b\x2d8edd\x2d20eaa7fa1270-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 23:52:16.269543 sshd[4258]: Connection closed by 147.75.109.163 port 44482 Sep 4 23:52:16.275843 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:16.301862 systemd[1]: sshd@24-143.110.229.161:22-147.75.109.163:44482.service: Deactivated successfully. Sep 4 23:52:16.306746 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:52:16.307157 systemd[1]: session-25.scope: Consumed 1.416s CPU time, 28.2M memory peak. Sep 4 23:52:16.308839 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:52:16.320687 systemd[1]: Started sshd@25-143.110.229.161:22-147.75.109.163:44494.service - OpenSSH per-connection server daemon (147.75.109.163:44494). Sep 4 23:52:16.323250 systemd-logind[1463]: Removed session 25. 
Sep 4 23:52:16.408598 sshd[4416]: Accepted publickey for core from 147.75.109.163 port 44494 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA Sep 4 23:52:16.411675 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:16.421219 systemd-logind[1463]: New session 26 of user core. Sep 4 23:52:16.430631 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 23:52:16.817123 kubelet[2600]: I0904 23:52:16.815371 2600 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651449c6-af85-4739-bcef-4ebd79a4971d" path="/var/lib/kubelet/pods/651449c6-af85-4739-bcef-4ebd79a4971d/volumes" Sep 4 23:52:16.817123 kubelet[2600]: I0904 23:52:16.815985 2600 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" path="/var/lib/kubelet/pods/d84075aa-d4c9-4b6b-8edd-20eaa7fa1270/volumes" Sep 4 23:52:17.466896 sshd[4419]: Connection closed by 147.75.109.163 port 44494 Sep 4 23:52:17.467407 sshd-session[4416]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:17.483547 systemd[1]: sshd@25-143.110.229.161:22-147.75.109.163:44494.service: Deactivated successfully. Sep 4 23:52:17.487917 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 23:52:17.493541 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit. Sep 4 23:52:17.503779 systemd[1]: Started sshd@26-143.110.229.161:22-147.75.109.163:44504.service - OpenSSH per-connection server daemon (147.75.109.163:44504). Sep 4 23:52:17.509491 systemd-logind[1463]: Removed session 26. 
Sep 4 23:52:17.544340 kubelet[2600]: I0904 23:52:17.544226 2600 memory_manager.go:355] "RemoveStaleState removing state" podUID="651449c6-af85-4739-bcef-4ebd79a4971d" containerName="cilium-operator" Sep 4 23:52:17.544340 kubelet[2600]: I0904 23:52:17.544287 2600 memory_manager.go:355] "RemoveStaleState removing state" podUID="d84075aa-d4c9-4b6b-8edd-20eaa7fa1270" containerName="cilium-agent" Sep 4 23:52:17.564871 kubelet[2600]: I0904 23:52:17.564667 2600 setters.go:602] "Node became not ready" node="ci-4230.2.2-n-136bc82296" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T23:52:17Z","lastTransitionTime":"2025-09-04T23:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 23:52:17.574671 systemd[1]: Created slice kubepods-burstable-podfce95b40_258c_4f56_8588_92a52df96e89.slice - libcontainer container kubepods-burstable-podfce95b40_258c_4f56_8588_92a52df96e89.slice. 
Sep 4 23:52:17.610473 kubelet[2600]: I0904 23:52:17.610410 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fce95b40-258c-4f56-8588-92a52df96e89-cni-path\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.613598 kubelet[2600]: I0904 23:52:17.610738 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fce95b40-258c-4f56-8588-92a52df96e89-etc-cni-netd\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.613598 kubelet[2600]: I0904 23:52:17.610817 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fce95b40-258c-4f56-8588-92a52df96e89-xtables-lock\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.613598 kubelet[2600]: I0904 23:52:17.610867 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fce95b40-258c-4f56-8588-92a52df96e89-clustermesh-secrets\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.613598 kubelet[2600]: I0904 23:52:17.610894 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fce95b40-258c-4f56-8588-92a52df96e89-host-proc-sys-kernel\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.613598 kubelet[2600]: I0904 23:52:17.612794 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fce95b40-258c-4f56-8588-92a52df96e89-hubble-tls\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.613598 kubelet[2600]: I0904 23:52:17.612847 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fce95b40-258c-4f56-8588-92a52df96e89-hostproc\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.613892 sshd[4429]: Accepted publickey for core from 147.75.109.163 port 44504 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA
Sep 4 23:52:17.614324 kubelet[2600]: I0904 23:52:17.612869 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fce95b40-258c-4f56-8588-92a52df96e89-cilium-run\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.614324 kubelet[2600]: I0904 23:52:17.612954 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fce95b40-258c-4f56-8588-92a52df96e89-cilium-config-path\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.614324 kubelet[2600]: I0904 23:52:17.613085 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlkj\" (UniqueName: \"kubernetes.io/projected/fce95b40-258c-4f56-8588-92a52df96e89-kube-api-access-5zlkj\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.614324 kubelet[2600]: I0904 23:52:17.613108 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fce95b40-258c-4f56-8588-92a52df96e89-cilium-cgroup\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.614324 kubelet[2600]: I0904 23:52:17.613148 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fce95b40-258c-4f56-8588-92a52df96e89-cilium-ipsec-secrets\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.614324 kubelet[2600]: I0904 23:52:17.613165 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fce95b40-258c-4f56-8588-92a52df96e89-bpf-maps\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.614612 kubelet[2600]: I0904 23:52:17.613180 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fce95b40-258c-4f56-8588-92a52df96e89-lib-modules\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.614612 kubelet[2600]: I0904 23:52:17.613313 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fce95b40-258c-4f56-8588-92a52df96e89-host-proc-sys-net\") pod \"cilium-xzp2v\" (UID: \"fce95b40-258c-4f56-8588-92a52df96e89\") " pod="kube-system/cilium-xzp2v"
Sep 4 23:52:17.616581 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:17.633179 systemd-logind[1463]: New session 27 of user core.
Sep 4 23:52:17.641403 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 23:52:17.711511 sshd[4432]: Connection closed by 147.75.109.163 port 44504
Sep 4 23:52:17.712471 sshd-session[4429]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:17.774870 systemd[1]: sshd@26-143.110.229.161:22-147.75.109.163:44504.service: Deactivated successfully.
Sep 4 23:52:17.777690 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 23:52:17.780760 systemd-logind[1463]: Session 27 logged out. Waiting for processes to exit.
Sep 4 23:52:17.791582 systemd[1]: Started sshd@27-143.110.229.161:22-147.75.109.163:44518.service - OpenSSH per-connection server daemon (147.75.109.163:44518).
Sep 4 23:52:17.799735 systemd-logind[1463]: Removed session 27.
Sep 4 23:52:17.866374 sshd[4443]: Accepted publickey for core from 147.75.109.163 port 44518 ssh2: RSA SHA256:cSynQpeZyVHGpQvxjz1yZ77OmLC1i0AR3C5x6uiJiwA
Sep 4 23:52:17.869805 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:17.879811 systemd-logind[1463]: New session 28 of user core.
Sep 4 23:52:17.884375 kubelet[2600]: E0904 23:52:17.880941 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:17.887937 containerd[1487]: time="2025-09-04T23:52:17.884580104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xzp2v,Uid:fce95b40-258c-4f56-8588-92a52df96e89,Namespace:kube-system,Attempt:0,}"
Sep 4 23:52:17.887538 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 23:52:17.948199 containerd[1487]: time="2025-09-04T23:52:17.944469512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:52:17.948199 containerd[1487]: time="2025-09-04T23:52:17.944539039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:52:17.948199 containerd[1487]: time="2025-09-04T23:52:17.944552349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:17.948199 containerd[1487]: time="2025-09-04T23:52:17.944680143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:52:17.984459 systemd[1]: Started cri-containerd-8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b.scope - libcontainer container 8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b.
Sep 4 23:52:18.048656 containerd[1487]: time="2025-09-04T23:52:18.048321554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xzp2v,Uid:fce95b40-258c-4f56-8588-92a52df96e89,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\""
Sep 4 23:52:18.050201 kubelet[2600]: E0904 23:52:18.050090 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:18.067756 containerd[1487]: time="2025-09-04T23:52:18.067673434Z" level=info msg="CreateContainer within sandbox \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:52:18.097375 containerd[1487]: time="2025-09-04T23:52:18.097276294Z" level=info msg="CreateContainer within sandbox \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c07f994eb7af11243e0543ae3896cf5957980cc222cd8501fe0a22f33cef8c06\""
Sep 4 23:52:18.101329 containerd[1487]: time="2025-09-04T23:52:18.098795922Z" level=info msg="StartContainer for \"c07f994eb7af11243e0543ae3896cf5957980cc222cd8501fe0a22f33cef8c06\""
Sep 4 23:52:18.145385 systemd[1]: Started cri-containerd-c07f994eb7af11243e0543ae3896cf5957980cc222cd8501fe0a22f33cef8c06.scope - libcontainer container c07f994eb7af11243e0543ae3896cf5957980cc222cd8501fe0a22f33cef8c06.
Sep 4 23:52:18.200193 containerd[1487]: time="2025-09-04T23:52:18.198904720Z" level=info msg="StartContainer for \"c07f994eb7af11243e0543ae3896cf5957980cc222cd8501fe0a22f33cef8c06\" returns successfully"
Sep 4 23:52:18.225290 systemd[1]: cri-containerd-c07f994eb7af11243e0543ae3896cf5957980cc222cd8501fe0a22f33cef8c06.scope: Deactivated successfully.
Sep 4 23:52:18.226263 systemd[1]: cri-containerd-c07f994eb7af11243e0543ae3896cf5957980cc222cd8501fe0a22f33cef8c06.scope: Consumed 33ms CPU time, 9.3M memory peak, 3M read from disk.
Sep 4 23:52:18.237391 kubelet[2600]: E0904 23:52:18.235615 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:18.316483 containerd[1487]: time="2025-09-04T23:52:18.316229771Z" level=info msg="shim disconnected" id=c07f994eb7af11243e0543ae3896cf5957980cc222cd8501fe0a22f33cef8c06 namespace=k8s.io
Sep 4 23:52:18.316483 containerd[1487]: time="2025-09-04T23:52:18.316338997Z" level=warning msg="cleaning up after shim disconnected" id=c07f994eb7af11243e0543ae3896cf5957980cc222cd8501fe0a22f33cef8c06 namespace=k8s.io
Sep 4 23:52:18.316483 containerd[1487]: time="2025-09-04T23:52:18.316355130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:52:19.253152 kubelet[2600]: E0904 23:52:19.252235 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:19.260193 containerd[1487]: time="2025-09-04T23:52:19.256992266Z" level=info msg="CreateContainer within sandbox \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:52:19.286316 containerd[1487]: time="2025-09-04T23:52:19.286033082Z" level=info msg="CreateContainer within sandbox \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7e2e3b299d30412928bb16a6f0d1c9266f2b05f804fb3d12bc388d68a389326f\""
Sep 4 23:52:19.288422 containerd[1487]: time="2025-09-04T23:52:19.288261236Z" level=info msg="StartContainer for \"7e2e3b299d30412928bb16a6f0d1c9266f2b05f804fb3d12bc388d68a389326f\""
Sep 4 23:52:19.357497 systemd[1]: Started cri-containerd-7e2e3b299d30412928bb16a6f0d1c9266f2b05f804fb3d12bc388d68a389326f.scope - libcontainer container 7e2e3b299d30412928bb16a6f0d1c9266f2b05f804fb3d12bc388d68a389326f.
Sep 4 23:52:19.404912 containerd[1487]: time="2025-09-04T23:52:19.404357781Z" level=info msg="StartContainer for \"7e2e3b299d30412928bb16a6f0d1c9266f2b05f804fb3d12bc388d68a389326f\" returns successfully"
Sep 4 23:52:19.428110 systemd[1]: cri-containerd-7e2e3b299d30412928bb16a6f0d1c9266f2b05f804fb3d12bc388d68a389326f.scope: Deactivated successfully.
Sep 4 23:52:19.430208 systemd[1]: cri-containerd-7e2e3b299d30412928bb16a6f0d1c9266f2b05f804fb3d12bc388d68a389326f.scope: Consumed 30ms CPU time, 7.3M memory peak, 1.9M read from disk.
Sep 4 23:52:19.466852 containerd[1487]: time="2025-09-04T23:52:19.466674563Z" level=info msg="shim disconnected" id=7e2e3b299d30412928bb16a6f0d1c9266f2b05f804fb3d12bc388d68a389326f namespace=k8s.io
Sep 4 23:52:19.466852 containerd[1487]: time="2025-09-04T23:52:19.466776711Z" level=warning msg="cleaning up after shim disconnected" id=7e2e3b299d30412928bb16a6f0d1c9266f2b05f804fb3d12bc388d68a389326f namespace=k8s.io
Sep 4 23:52:19.466852 containerd[1487]: time="2025-09-04T23:52:19.466791363Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:52:19.743292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e2e3b299d30412928bb16a6f0d1c9266f2b05f804fb3d12bc388d68a389326f-rootfs.mount: Deactivated successfully.
Sep 4 23:52:19.805582 kubelet[2600]: E0904 23:52:19.804902 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-rqsrp" podUID="599e3931-ba1e-4e05-a70a-5c7b61dc6c52"
Sep 4 23:52:19.992754 kubelet[2600]: E0904 23:52:19.992658 2600 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 23:52:20.262115 kubelet[2600]: E0904 23:52:20.261914 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:20.270685 containerd[1487]: time="2025-09-04T23:52:20.269954434Z" level=info msg="CreateContainer within sandbox \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:52:20.312924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount930251266.mount: Deactivated successfully.
Sep 4 23:52:20.318607 containerd[1487]: time="2025-09-04T23:52:20.318433868Z" level=info msg="CreateContainer within sandbox \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea772e32f9f748b3706ba54fd8a65f6bbadaf7e7fbc813e3d02f7a220b052f1d\""
Sep 4 23:52:20.320635 containerd[1487]: time="2025-09-04T23:52:20.320575874Z" level=info msg="StartContainer for \"ea772e32f9f748b3706ba54fd8a65f6bbadaf7e7fbc813e3d02f7a220b052f1d\""
Sep 4 23:52:20.379477 systemd[1]: Started cri-containerd-ea772e32f9f748b3706ba54fd8a65f6bbadaf7e7fbc813e3d02f7a220b052f1d.scope - libcontainer container ea772e32f9f748b3706ba54fd8a65f6bbadaf7e7fbc813e3d02f7a220b052f1d.
Sep 4 23:52:20.446014 containerd[1487]: time="2025-09-04T23:52:20.445378657Z" level=info msg="StartContainer for \"ea772e32f9f748b3706ba54fd8a65f6bbadaf7e7fbc813e3d02f7a220b052f1d\" returns successfully"
Sep 4 23:52:20.449435 systemd[1]: cri-containerd-ea772e32f9f748b3706ba54fd8a65f6bbadaf7e7fbc813e3d02f7a220b052f1d.scope: Deactivated successfully.
Sep 4 23:52:20.505545 containerd[1487]: time="2025-09-04T23:52:20.504721743Z" level=info msg="shim disconnected" id=ea772e32f9f748b3706ba54fd8a65f6bbadaf7e7fbc813e3d02f7a220b052f1d namespace=k8s.io
Sep 4 23:52:20.505545 containerd[1487]: time="2025-09-04T23:52:20.504784232Z" level=warning msg="cleaning up after shim disconnected" id=ea772e32f9f748b3706ba54fd8a65f6bbadaf7e7fbc813e3d02f7a220b052f1d namespace=k8s.io
Sep 4 23:52:20.505545 containerd[1487]: time="2025-09-04T23:52:20.504809484Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:52:20.743290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea772e32f9f748b3706ba54fd8a65f6bbadaf7e7fbc813e3d02f7a220b052f1d-rootfs.mount: Deactivated successfully.
Sep 4 23:52:21.286854 kubelet[2600]: E0904 23:52:21.286657 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:21.302688 containerd[1487]: time="2025-09-04T23:52:21.301639496Z" level=info msg="CreateContainer within sandbox \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:52:21.338283 containerd[1487]: time="2025-09-04T23:52:21.337730430Z" level=info msg="CreateContainer within sandbox \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52e27b218a8c27c34429a19f38deeef74a8e945ad235a65e0d44637a7181b3cf\""
Sep 4 23:52:21.338867 containerd[1487]: time="2025-09-04T23:52:21.338746139Z" level=info msg="StartContainer for \"52e27b218a8c27c34429a19f38deeef74a8e945ad235a65e0d44637a7181b3cf\""
Sep 4 23:52:21.408386 systemd[1]: Started cri-containerd-52e27b218a8c27c34429a19f38deeef74a8e945ad235a65e0d44637a7181b3cf.scope - libcontainer container 52e27b218a8c27c34429a19f38deeef74a8e945ad235a65e0d44637a7181b3cf.
Sep 4 23:52:21.465654 systemd[1]: cri-containerd-52e27b218a8c27c34429a19f38deeef74a8e945ad235a65e0d44637a7181b3cf.scope: Deactivated successfully.
Sep 4 23:52:21.469847 containerd[1487]: time="2025-09-04T23:52:21.469698687Z" level=info msg="StartContainer for \"52e27b218a8c27c34429a19f38deeef74a8e945ad235a65e0d44637a7181b3cf\" returns successfully"
Sep 4 23:52:21.524216 containerd[1487]: time="2025-09-04T23:52:21.522239759Z" level=info msg="shim disconnected" id=52e27b218a8c27c34429a19f38deeef74a8e945ad235a65e0d44637a7181b3cf namespace=k8s.io
Sep 4 23:52:21.524216 containerd[1487]: time="2025-09-04T23:52:21.524205718Z" level=warning msg="cleaning up after shim disconnected" id=52e27b218a8c27c34429a19f38deeef74a8e945ad235a65e0d44637a7181b3cf namespace=k8s.io
Sep 4 23:52:21.524216 containerd[1487]: time="2025-09-04T23:52:21.524222606Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:52:21.742262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52e27b218a8c27c34429a19f38deeef74a8e945ad235a65e0d44637a7181b3cf-rootfs.mount: Deactivated successfully.
Sep 4 23:52:21.805286 kubelet[2600]: E0904 23:52:21.805082 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-rqsrp" podUID="599e3931-ba1e-4e05-a70a-5c7b61dc6c52"
Sep 4 23:52:22.299358 kubelet[2600]: E0904 23:52:22.299305 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:22.307481 containerd[1487]: time="2025-09-04T23:52:22.307403131Z" level=info msg="CreateContainer within sandbox \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:52:22.351874 containerd[1487]: time="2025-09-04T23:52:22.351501566Z" level=info msg="CreateContainer within sandbox \"8ab6452c2e793d9f50f5ead9ccc7e33560c55cf71b67404ad67f74eccd23508b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b49125175ef481ce4d46f505b4a1d6737f3b19b67b73470987fd095a48eced6\""
Sep 4 23:52:22.354663 containerd[1487]: time="2025-09-04T23:52:22.353103657Z" level=info msg="StartContainer for \"4b49125175ef481ce4d46f505b4a1d6737f3b19b67b73470987fd095a48eced6\""
Sep 4 23:52:22.412497 systemd[1]: Started cri-containerd-4b49125175ef481ce4d46f505b4a1d6737f3b19b67b73470987fd095a48eced6.scope - libcontainer container 4b49125175ef481ce4d46f505b4a1d6737f3b19b67b73470987fd095a48eced6.
Sep 4 23:52:22.466041 containerd[1487]: time="2025-09-04T23:52:22.465555086Z" level=info msg="StartContainer for \"4b49125175ef481ce4d46f505b4a1d6737f3b19b67b73470987fd095a48eced6\" returns successfully"
Sep 4 23:52:23.000296 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 4 23:52:23.306003 kubelet[2600]: E0904 23:52:23.305292 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:23.805048 kubelet[2600]: E0904 23:52:23.804955 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-rqsrp" podUID="599e3931-ba1e-4e05-a70a-5c7b61dc6c52"
Sep 4 23:52:24.308402 kubelet[2600]: E0904 23:52:24.308251 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:24.616761 systemd[1]: run-containerd-runc-k8s.io-4b49125175ef481ce4d46f505b4a1d6737f3b19b67b73470987fd095a48eced6-runc.Cuf3kl.mount: Deactivated successfully.
Sep 4 23:52:25.805472 kubelet[2600]: E0904 23:52:25.805426 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:26.672837 systemd-networkd[1383]: lxc_health: Link UP
Sep 4 23:52:26.698790 systemd-networkd[1383]: lxc_health: Gained carrier
Sep 4 23:52:26.896122 systemd[1]: run-containerd-runc-k8s.io-4b49125175ef481ce4d46f505b4a1d6737f3b19b67b73470987fd095a48eced6-runc.xob7VR.mount: Deactivated successfully.
Sep 4 23:52:27.883755 kubelet[2600]: E0904 23:52:27.883713 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:27.922888 kubelet[2600]: I0904 23:52:27.922627 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xzp2v" podStartSLOduration=10.92260771 podStartE2EDuration="10.92260771s" podCreationTimestamp="2025-09-04 23:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:52:23.328368099 +0000 UTC m=+108.844600801" watchObservedRunningTime="2025-09-04 23:52:27.92260771 +0000 UTC m=+113.438840412"
Sep 4 23:52:28.319454 kubelet[2600]: E0904 23:52:28.319040 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:28.409038 systemd-networkd[1383]: lxc_health: Gained IPv6LL
Sep 4 23:52:29.321252 kubelet[2600]: E0904 23:52:29.320830 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 4 23:52:33.596884 sshd[4446]: Connection closed by 147.75.109.163 port 44518
Sep 4 23:52:33.598731 sshd-session[4443]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:33.612533 systemd[1]: sshd@27-143.110.229.161:22-147.75.109.163:44518.service: Deactivated successfully.
Sep 4 23:52:33.615770 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 23:52:33.618806 systemd-logind[1463]: Session 28 logged out. Waiting for processes to exit.
Sep 4 23:52:33.623442 systemd-logind[1463]: Removed session 28.
Sep 4 23:52:34.707102 containerd[1487]: time="2025-09-04T23:52:34.707033809Z" level=info msg="StopPodSandbox for \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\""
Sep 4 23:52:34.707850 containerd[1487]: time="2025-09-04T23:52:34.707166738Z" level=info msg="TearDown network for sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" successfully"
Sep 4 23:52:34.707850 containerd[1487]: time="2025-09-04T23:52:34.707179198Z" level=info msg="StopPodSandbox for \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" returns successfully"
Sep 4 23:52:34.707850 containerd[1487]: time="2025-09-04T23:52:34.707585429Z" level=info msg="RemovePodSandbox for \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\""
Sep 4 23:52:34.707850 containerd[1487]: time="2025-09-04T23:52:34.707607113Z" level=info msg="Forcibly stopping sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\""
Sep 4 23:52:34.707850 containerd[1487]: time="2025-09-04T23:52:34.707655311Z" level=info msg="TearDown network for sandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" successfully"
Sep 4 23:52:34.713811 containerd[1487]: time="2025-09-04T23:52:34.713727646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:52:34.713989 containerd[1487]: time="2025-09-04T23:52:34.713858930Z" level=info msg="RemovePodSandbox \"48ba4d1169412ed568ab0c5bc5e5f456ab74a37a23918aec9befa87d3a83fdf5\" returns successfully"
Sep 4 23:52:34.714910 containerd[1487]: time="2025-09-04T23:52:34.714648086Z" level=info msg="StopPodSandbox for \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\""
Sep 4 23:52:34.714910 containerd[1487]: time="2025-09-04T23:52:34.714749628Z" level=info msg="TearDown network for sandbox \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\" successfully"
Sep 4 23:52:34.714910 containerd[1487]: time="2025-09-04T23:52:34.714761622Z" level=info msg="StopPodSandbox for \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\" returns successfully"
Sep 4 23:52:34.715157 containerd[1487]: time="2025-09-04T23:52:34.715132293Z" level=info msg="RemovePodSandbox for \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\""
Sep 4 23:52:34.715193 containerd[1487]: time="2025-09-04T23:52:34.715163644Z" level=info msg="Forcibly stopping sandbox \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\""
Sep 4 23:52:34.715272 containerd[1487]: time="2025-09-04T23:52:34.715228321Z" level=info msg="TearDown network for sandbox \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\" successfully"
Sep 4 23:52:34.718475 containerd[1487]: time="2025-09-04T23:52:34.718367382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:52:34.718475 containerd[1487]: time="2025-09-04T23:52:34.718448547Z" level=info msg="RemovePodSandbox \"45dd28e4116eba95db3d26c0ddd5adc7e883069fa80a9c75b2999f0868a3c268\" returns successfully"