Sep 12 10:09:51.964997 kernel: Linux version 6.6.105-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 08:42:12 -00 2025
Sep 12 10:09:51.965047 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:09:51.965064 kernel: BIOS-provided physical RAM map:
Sep 12 10:09:51.965071 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 10:09:51.965078 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 10:09:51.965085 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 10:09:51.965096 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 12 10:09:51.965108 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 12 10:09:51.965118 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 10:09:51.965127 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 10:09:51.965143 kernel: NX (Execute Disable) protection: active
Sep 12 10:09:51.965153 kernel: APIC: Static calls initialized
Sep 12 10:09:51.965172 kernel: SMBIOS 2.8 present.
Sep 12 10:09:51.965183 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 12 10:09:51.965195 kernel: Hypervisor detected: KVM
Sep 12 10:09:51.965207 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 10:09:51.965229 kernel: kvm-clock: using sched offset of 3404307310 cycles
Sep 12 10:09:51.965243 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 10:09:51.965255 kernel: tsc: Detected 2494.140 MHz processor
Sep 12 10:09:51.965268 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 10:09:51.965277 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 10:09:51.965285 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 12 10:09:51.965293 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 10:09:51.965301 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 10:09:51.965315 kernel: ACPI: Early table checksum verification disabled
Sep 12 10:09:51.965323 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 12 10:09:51.965331 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:51.965340 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:51.965348 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:51.965356 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 12 10:09:51.965364 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:51.965372 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:51.965380 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:51.965391 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:51.965399 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 12 10:09:51.965407 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 12 10:09:51.965414 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 12 10:09:51.965422 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 12 10:09:51.965430 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 12 10:09:51.965438 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 12 10:09:51.965450 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 12 10:09:51.965462 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 10:09:51.965470 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 10:09:51.965479 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 10:09:51.965487 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 12 10:09:51.965499 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 12 10:09:51.965508 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 12 10:09:51.965519 kernel: Zone ranges:
Sep 12 10:09:51.965528 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 10:09:51.965536 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 12 10:09:51.965544 kernel: Normal empty
Sep 12 10:09:51.965553 kernel: Movable zone start for each node
Sep 12 10:09:51.965561 kernel: Early memory node ranges
Sep 12 10:09:51.965569 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 10:09:51.965577 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 12 10:09:51.965586 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 12 10:09:51.965597 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 10:09:51.965606 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 10:09:51.965617 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 12 10:09:51.965626 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 10:09:51.965647 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 10:09:51.966836 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 10:09:51.966846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 10:09:51.966854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 10:09:51.966863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 10:09:51.966872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 10:09:51.966889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 10:09:51.966897 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 10:09:51.966906 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 10:09:51.966915 kernel: TSC deadline timer available
Sep 12 10:09:51.966924 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 10:09:51.966933 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 10:09:51.966941 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 12 10:09:51.966954 kernel: Booting paravirtualized kernel on KVM
Sep 12 10:09:51.966962 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 10:09:51.966976 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 10:09:51.966984 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 10:09:51.966993 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 10:09:51.967002 kernel: pcpu-alloc: [0] 0 1
Sep 12 10:09:51.967010 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 12 10:09:51.967021 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:09:51.967030 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 10:09:51.967039 kernel: random: crng init done
Sep 12 10:09:51.967051 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 10:09:51.967060 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 10:09:51.967069 kernel: Fallback order for Node 0: 0
Sep 12 10:09:51.967077 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 12 10:09:51.967085 kernel: Policy zone: DMA32
Sep 12 10:09:51.967094 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 10:09:51.967103 kernel: Memory: 1969152K/2096612K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 127200K reserved, 0K cma-reserved)
Sep 12 10:09:51.967112 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 10:09:51.967121 kernel: Kernel/User page tables isolation: enabled
Sep 12 10:09:51.967133 kernel: ftrace: allocating 37946 entries in 149 pages
Sep 12 10:09:51.967142 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 10:09:51.967150 kernel: Dynamic Preempt: voluntary
Sep 12 10:09:51.967159 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 10:09:51.967169 kernel: rcu: RCU event tracing is enabled.
Sep 12 10:09:51.967178 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 10:09:51.967186 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 10:09:51.967195 kernel: Rude variant of Tasks RCU enabled.
Sep 12 10:09:51.967204 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 10:09:51.967216 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 10:09:51.967225 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 10:09:51.967237 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 10:09:51.967249 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
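The BIOS-e820 map at the top of this boot is the raw memory inventory everything in this block derives from: the DMA/DMA32 zone ranges, NUMA node 0, and the 515803-page zonelist. As a rough cross-check, here is a minimal Python sketch with the two usable ranges hard-coded from the log entries above; it is an illustration only, not part of the boot flow.

```python
# Total the "usable" ranges from the BIOS-e820 map logged above.
# End addresses are inclusive, as the kernel prints them.
usable = [
    (0x0000000000000000, 0x000000000009fbff),
    (0x0000000000100000, 0x000000007ffdafff),
]

total = sum(end - start + 1 for start, end in usable)
print(f"usable RAM: {total} bytes (~{total / 2**20:.1f} MiB)")
```

This comes out to roughly 2047 MiB, which agrees to within a few pages with the 2096612K total the kernel reports in its "Memory: 1969152K/2096612K available" line above.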
Sep 12 10:09:51.967265 kernel: Console: colour VGA+ 80x25
Sep 12 10:09:51.967276 kernel: printk: console [tty0] enabled
Sep 12 10:09:51.967289 kernel: printk: console [ttyS0] enabled
Sep 12 10:09:51.967303 kernel: ACPI: Core revision 20230628
Sep 12 10:09:51.967312 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 10:09:51.967325 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 10:09:51.967334 kernel: x2apic enabled
Sep 12 10:09:51.967342 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 10:09:51.967351 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 10:09:51.967360 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 12 10:09:51.967368 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep 12 10:09:51.967377 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 12 10:09:51.967386 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 12 10:09:51.967409 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 10:09:51.967418 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 10:09:51.967427 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 10:09:51.967439 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 12 10:09:51.967449 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 10:09:51.967458 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 10:09:51.967467 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 10:09:51.967476 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 10:09:51.967487 kernel: active return thunk: its_return_thunk
Sep 12 10:09:51.967509 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 10:09:51.967520 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 10:09:51.967530 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 10:09:51.967539 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 10:09:51.967548 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 10:09:51.967557 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 10:09:51.967566 kernel: Freeing SMP alternatives memory: 32K
Sep 12 10:09:51.967575 kernel: pid_max: default: 32768 minimum: 301
Sep 12 10:09:51.967590 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 10:09:51.967599 kernel: landlock: Up and running.
Sep 12 10:09:51.967608 kernel: SELinux: Initializing.
Sep 12 10:09:51.967617 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 10:09:51.967626 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 10:09:51.970408 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 12 10:09:51.970438 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 10:09:51.970457 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 10:09:51.970484 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 10:09:51.970515 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 12 10:09:51.970529 kernel: signal: max sigframe size: 1776
Sep 12 10:09:51.970546 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 10:09:51.970560 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 10:09:51.970580 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 10:09:51.970596 kernel: smp: Bringing up secondary CPUs ...
Sep 12 10:09:51.970608 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 10:09:51.970620 kernel: .... node #0, CPUs: #1
Sep 12 10:09:51.970649 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 10:09:51.970667 kernel: smpboot: Max logical packages: 1
Sep 12 10:09:51.970680 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep 12 10:09:51.970694 kernel: devtmpfs: initialized
Sep 12 10:09:51.970707 kernel: x86/mm: Memory block size: 128MB
Sep 12 10:09:51.970720 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 10:09:51.970733 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 10:09:51.970747 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 10:09:51.970762 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 10:09:51.970777 kernel: audit: initializing netlink subsys (disabled)
Sep 12 10:09:51.970795 kernel: audit: type=2000 audit(1757671791.274:1): state=initialized audit_enabled=0 res=1
Sep 12 10:09:51.970804 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 10:09:51.970813 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 10:09:51.970822 kernel: cpuidle: using governor menu
Sep 12 10:09:51.970832 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 10:09:51.970841 kernel: dca service started, version 1.12.1
Sep 12 10:09:51.970850 kernel: PCI: Using configuration type 1 for base access
Sep 12 10:09:51.970860 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 10:09:51.970869 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 10:09:51.970883 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 10:09:51.970892 kernel: ACPI: Added _OSI(Module Device)
Sep 12 10:09:51.970901 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 10:09:51.970910 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 10:09:51.970919 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 10:09:51.970928 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 10:09:51.970938 kernel: ACPI: Interpreter enabled
Sep 12 10:09:51.970947 kernel: ACPI: PM: (supports S0 S5)
Sep 12 10:09:51.970956 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 10:09:51.970968 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 10:09:51.970977 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 10:09:51.970986 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 10:09:51.970995 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 10:09:51.971295 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 10:09:51.971409 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 10:09:51.971555 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 10:09:51.971579 kernel: acpiphp: Slot [3] registered
Sep 12 10:09:51.971588 kernel: acpiphp: Slot [4] registered
Sep 12 10:09:51.971598 kernel: acpiphp: Slot [5] registered
Sep 12 10:09:51.971607 kernel: acpiphp: Slot [6] registered
Sep 12 10:09:51.971616 kernel: acpiphp: Slot [7] registered
Sep 12 10:09:51.971625 kernel: acpiphp: Slot [8] registered
Sep 12 10:09:51.971663 kernel: acpiphp: Slot [9] registered
Sep 12 10:09:51.971673 kernel: acpiphp: Slot [10] registered
Sep 12 10:09:51.971682 kernel: acpiphp: Slot [11] registered
Sep 12 10:09:51.971691 kernel: acpiphp: Slot [12] registered
Sep 12 10:09:51.971705 kernel: acpiphp: Slot [13] registered
Sep 12 10:09:51.971714 kernel: acpiphp: Slot [14] registered
Sep 12 10:09:51.971724 kernel: acpiphp: Slot [15] registered
Sep 12 10:09:51.971733 kernel: acpiphp: Slot [16] registered
Sep 12 10:09:51.971742 kernel: acpiphp: Slot [17] registered
Sep 12 10:09:51.971751 kernel: acpiphp: Slot [18] registered
Sep 12 10:09:51.971760 kernel: acpiphp: Slot [19] registered
Sep 12 10:09:51.971769 kernel: acpiphp: Slot [20] registered
Sep 12 10:09:51.971778 kernel: acpiphp: Slot [21] registered
Sep 12 10:09:51.971791 kernel: acpiphp: Slot [22] registered
Sep 12 10:09:51.971800 kernel: acpiphp: Slot [23] registered
Sep 12 10:09:51.971809 kernel: acpiphp: Slot [24] registered
Sep 12 10:09:51.971818 kernel: acpiphp: Slot [25] registered
Sep 12 10:09:51.971827 kernel: acpiphp: Slot [26] registered
Sep 12 10:09:51.971836 kernel: acpiphp: Slot [27] registered
Sep 12 10:09:51.971845 kernel: acpiphp: Slot [28] registered
Sep 12 10:09:51.971854 kernel: acpiphp: Slot [29] registered
Sep 12 10:09:51.971863 kernel: acpiphp: Slot [30] registered
Sep 12 10:09:51.971872 kernel: acpiphp: Slot [31] registered
Sep 12 10:09:51.971884 kernel: PCI host bridge to bus 0000:00
Sep 12 10:09:51.972050 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 10:09:51.972211 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 10:09:51.972367 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 10:09:51.972544 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 10:09:51.973339 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 12 10:09:51.973492 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 10:09:51.974865 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 10:09:51.975059 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 12 10:09:51.975284 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 12 10:09:51.975435 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 12 10:09:51.975609 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 12 10:09:51.976973 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 12 10:09:51.977179 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 12 10:09:51.977343 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 12 10:09:51.977516 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 12 10:09:51.977621 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 12 10:09:51.978841 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 12 10:09:51.978954 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 12 10:09:51.979063 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 12 10:09:51.979205 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 12 10:09:51.979308 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 12 10:09:51.979408 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 12 10:09:51.979544 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 12 10:09:51.981753 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 12 10:09:51.981901 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 10:09:51.982042 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 12 10:09:51.982144 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 12 10:09:51.982276 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 12 10:09:51.982379 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 12 10:09:51.982499 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 10:09:51.982598 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 12 10:09:51.982807 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 12 10:09:51.982918 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 12 10:09:51.983082 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 12 10:09:51.983236 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 12 10:09:51.983352 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 12 10:09:51.983451 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 12 10:09:51.983599 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 12 10:09:51.983738 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 10:09:51.983857 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 12 10:09:51.983959 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 12 10:09:51.984108 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 12 10:09:51.984246 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 12 10:09:51.984352 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 12 10:09:51.984466 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 12 10:09:51.984599 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 12 10:09:51.986862 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 12 10:09:51.986992 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 12 10:09:51.987006 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 10:09:51.987021 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 10:09:51.987032 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 10:09:51.987042 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 10:09:51.987052 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 10:09:51.987068 kernel: iommu: Default domain type: Translated
Sep 12 10:09:51.987077 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 10:09:51.987087 kernel: PCI: Using ACPI for IRQ routing
Sep 12 10:09:51.987096 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 10:09:51.987106 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 10:09:51.987115 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 12 10:09:51.987249 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 12 10:09:51.987352 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 12 10:09:51.987465 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 10:09:51.987478 kernel: vgaarb: loaded
Sep 12 10:09:51.987487 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 10:09:51.987500 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 10:09:51.987510 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 10:09:51.987519 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 10:09:51.987529 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 10:09:51.987538 kernel: pnp: PnP ACPI init
Sep 12 10:09:51.987547 kernel: pnp: PnP ACPI: found 4 devices
Sep 12 10:09:51.987561 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 10:09:51.987570 kernel: NET: Registered PF_INET protocol family
Sep 12 10:09:51.987580 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 10:09:51.987589 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 10:09:51.987599 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 10:09:51.987608 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 10:09:51.987617 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 10:09:51.987627 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 10:09:51.987647 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 10:09:51.987660 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 10:09:51.987670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 10:09:51.987679 kernel: NET: Registered PF_XDP protocol family
Sep 12 10:09:51.987783 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 10:09:51.987873 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 10:09:51.987961 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 10:09:51.988052 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 10:09:51.988140 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 12 10:09:51.988255 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 12 10:09:51.988365 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 10:09:51.988380 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 10:09:51.988484 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 30925 usecs
Sep 12 10:09:51.988496 kernel: PCI: CLS 0 bytes, default 64
Sep 12 10:09:51.988506 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 10:09:51.988515 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 12 10:09:51.988524 kernel: Initialise system trusted keyrings
Sep 12 10:09:51.988538 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 10:09:51.988547 kernel: Key type asymmetric registered
Sep 12 10:09:51.988556 kernel: Asymmetric key parser 'x509' registered
Sep 12 10:09:51.988566 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 10:09:51.988575 kernel: io scheduler mq-deadline registered
Sep 12 10:09:51.988584 kernel: io scheduler kyber registered
Sep 12 10:09:51.988593 kernel: io scheduler bfq registered
Sep 12 10:09:51.988602 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 10:09:51.988612 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 12 10:09:51.988621 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 12 10:09:51.988657 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 12 10:09:51.988667 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 10:09:51.988677 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 10:09:51.988686 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 10:09:51.988696 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 10:09:51.988705 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 10:09:51.988715 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 10:09:51.988854 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 12 10:09:51.988955 kernel: rtc_cmos 00:03: registered as rtc0
Sep 12 10:09:51.989123 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T10:09:51 UTC (1757671791)
Sep 12 10:09:51.989250 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 12 10:09:51.989263 kernel: intel_pstate: CPU model not supported
Sep 12 10:09:51.989273 kernel: NET: Registered PF_INET6 protocol family
Sep 12 10:09:51.989282 kernel: Segment Routing with IPv6
Sep 12 10:09:51.989292 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 10:09:51.989301 kernel: NET: Registered PF_PACKET protocol family
Sep 12 10:09:51.989316 kernel: Key type dns_resolver registered
Sep 12 10:09:51.989326 kernel: IPI shorthand broadcast: enabled
Sep 12 10:09:51.989336 kernel: sched_clock: Marking stable (918005769, 111292064)->(1135831804, -106533971)
Sep 12 10:09:51.989345 kernel: registered taskstats version 1
Sep 12 10:09:51.989355 kernel: Loading compiled-in X.509 certificates
Sep 12 10:09:51.989364 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.105-flatcar: 0972efc09ee0bcd53f8cdb5573e11871ce7b16a9'
Sep 12 10:09:51.989373 kernel: Key type .fscrypt registered
Sep 12 10:09:51.989382 kernel: Key type fscrypt-provisioning registered
Sep 12 10:09:51.989392 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 10:09:51.989404 kernel: ima: Allocated hash algorithm: sha1
Sep 12 10:09:51.989413 kernel: ima: No architecture policies found
Sep 12 10:09:51.989423 kernel: clk: Disabling unused clocks
Sep 12 10:09:51.989432 kernel: Freeing unused kernel image (initmem) memory: 43508K
Sep 12 10:09:51.989445 kernel: Write protecting the kernel read-only data: 38912k
Sep 12 10:09:51.989489 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 12 10:09:51.989503 kernel: Run /init as init process
Sep 12 10:09:51.989512 kernel: with arguments:
Sep 12 10:09:51.989522 kernel: /init
Sep 12 10:09:51.989535 kernel: with environment:
Sep 12 10:09:51.989545 kernel: HOME=/
Sep 12 10:09:51.989554 kernel: TERM=linux
Sep 12 10:09:51.989564 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 10:09:51.989580 systemd[1]: Successfully made /usr/ read-only.
Sep 12 10:09:51.989594 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 10:09:51.989605 systemd[1]: Detected virtualization kvm.
Sep 12 10:09:51.989615 systemd[1]: Detected architecture x86-64.
Sep 12 10:09:51.989628 systemd[1]: Running in initrd.
Sep 12 10:09:51.989766 systemd[1]: No hostname configured, using default hostname.
Sep 12 10:09:51.989777 systemd[1]: Hostname set to .
Sep 12 10:09:51.989788 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 10:09:51.989798 systemd[1]: Queued start job for default target initrd.target.
Sep 12 10:09:51.989809 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:09:51.989819 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:09:51.989831 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 10:09:51.989846 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 10:09:51.989856 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 10:09:51.989867 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 10:09:51.989879 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 10:09:51.989889 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 10:09:51.989900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:09:51.989913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 10:09:51.989924 systemd[1]: Reached target paths.target - Path Units.
Sep 12 10:09:51.989934 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 10:09:51.989948 systemd[1]: Reached target swap.target - Swaps.
Sep 12 10:09:51.989958 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 10:09:51.989968 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 10:09:51.989982 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 10:09:51.989993 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 10:09:51.990003 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 10:09:51.990013 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:09:51.990024 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:09:51.990034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:09:51.990044 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 10:09:51.990054 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 10:09:51.990064 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 10:09:51.990079 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 10:09:51.990090 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 10:09:51.990100 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 10:09:51.990111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 10:09:51.990121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:09:51.990131 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 10:09:51.990142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 10:09:51.990156 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 10:09:51.990166 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 10:09:51.990225 systemd-journald[183]: Collecting audit messages is disabled.
Sep 12 10:09:51.990270 systemd-journald[183]: Journal started
Sep 12 10:09:51.990294 systemd-journald[183]: Runtime Journal (/run/log/journal/375aa927bf8b4370b48c98b2e7256d36) is 4.9M, max 39.3M, 34.4M free.
Sep 12 10:09:51.973707 systemd-modules-load[184]: Inserted module 'overlay'
Sep 12 10:09:51.993142 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 10:09:51.996726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:09:52.000364 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 10:09:52.017669 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 10:09:52.020049 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 10:09:52.022084 kernel: Bridge firewalling registered
Sep 12 10:09:52.020261 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 12 10:09:52.022884 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 10:09:52.026902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 10:09:52.028596 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 10:09:52.040941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 10:09:52.055024 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 10:09:52.055806 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 10:09:52.057286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 10:09:52.063975 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 10:09:52.066452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:09:52.070975 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 10:09:52.083869 dracut-cmdline[217]: dracut-dracut-053
Sep 12 10:09:52.088307 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:09:52.123700 systemd-resolved[222]: Positive Trust Anchors:
Sep 12 10:09:52.123725 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 10:09:52.123774 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 10:09:52.130906 systemd-resolved[222]: Defaulting to hostname 'linux'.
Sep 12 10:09:52.132328 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 10:09:52.132863 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 10:09:52.200711 kernel: SCSI subsystem initialized
Sep 12 10:09:52.210677 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 10:09:52.222689 kernel: iscsi: registered transport (tcp)
Sep 12 10:09:52.247701 kernel: iscsi: registered transport (qla4xxx)
Sep 12 10:09:52.247827 kernel: QLogic iSCSI HBA Driver
Sep 12 10:09:52.308070 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 10:09:52.312886 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 10:09:52.343177 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 10:09:52.343284 kernel: device-mapper: uevent: version 1.0.3
Sep 12 10:09:52.344315 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 10:09:52.391738 kernel: raid6: avx2x4 gen() 19959 MB/s
Sep 12 10:09:52.408715 kernel: raid6: avx2x2 gen() 22146 MB/s
Sep 12 10:09:52.425715 kernel: raid6: avx2x1 gen() 20009 MB/s
Sep 12 10:09:52.425818 kernel: raid6: using algorithm avx2x2 gen() 22146 MB/s
Sep 12 10:09:52.443870 kernel: raid6: .... xor() 19609 MB/s, rmw enabled
Sep 12 10:09:52.443983 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 10:09:52.467726 kernel: xor: automatically using best checksumming function avx
Sep 12 10:09:52.633687 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 10:09:52.649719 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 10:09:52.657120 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 10:09:52.691831 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Sep 12 10:09:52.701565 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 10:09:52.710143 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 10:09:52.733011 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Sep 12 10:09:52.781453 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 10:09:52.787995 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 10:09:52.856220 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 10:09:52.867950 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 10:09:52.897744 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 10:09:52.901393 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 10:09:52.902771 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 10:09:52.903734 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 10:09:52.909915 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 10:09:52.939566 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 10:09:52.963669 kernel: libata version 3.00 loaded.
Sep 12 10:09:52.977725 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 12 10:09:52.981406 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 12 10:09:52.982706 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 12 10:09:53.004667 kernel: scsi host0: ata_piix
Sep 12 10:09:53.010274 kernel: scsi host2: ata_piix
Sep 12 10:09:53.011007 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 12 10:09:53.011036 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 10:09:53.011056 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 12 10:09:53.011071 kernel: scsi host1: Virtio SCSI HBA
Sep 12 10:09:53.022900 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 10:09:53.022973 kernel: GPT:9289727 != 125829119
Sep 12 10:09:53.022988 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 10:09:53.023000 kernel: GPT:9289727 != 125829119
Sep 12 10:09:53.023015 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 10:09:53.023048 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 10:09:53.036955 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 10:09:53.040714 kernel: ACPI: bus type USB registered
Sep 12 10:09:53.040805 kernel: usbcore: registered new interface driver usbfs
Sep 12 10:09:53.040825 kernel: usbcore: registered new interface driver hub
Sep 12 10:09:53.040843 kernel: usbcore: registered new device driver usb
Sep 12 10:09:53.038015 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 10:09:53.043694 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 12 10:09:53.046348 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Sep 12 10:09:53.043384 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 10:09:53.043921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 10:09:53.044837 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:09:53.047996 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:09:53.055085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:09:53.056438 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 10:09:53.102015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:09:53.110983 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 10:09:53.131960 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 10:09:53.193665 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 10:09:53.193757 kernel: AES CTR mode by8 optimization enabled
Sep 12 10:09:53.237666 kernel: BTRFS: device fsid 2566299d-dd4a-4826-ba43-7397a17991fb devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (464)
Sep 12 10:09:53.248713 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (465)
Sep 12 10:09:53.265977 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 10:09:53.275616 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 10:09:53.283291 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 12 10:09:53.283611 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 12 10:09:53.285497 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 12 10:09:53.285783 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 12 10:09:53.288048 kernel: hub 1-0:1.0: USB hub found
Sep 12 10:09:53.288302 kernel: hub 1-0:1.0: 2 ports detected
Sep 12 10:09:53.288058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 10:09:53.297828 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 10:09:53.298527 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 10:09:53.305931 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 10:09:53.315059 disk-uuid[552]: Primary Header is updated.
Sep 12 10:09:53.315059 disk-uuid[552]: Secondary Entries is updated.
Sep 12 10:09:53.315059 disk-uuid[552]: Secondary Header is updated.
Sep 12 10:09:53.324478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 10:09:54.335152 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 10:09:54.335509 disk-uuid[553]: The operation has completed successfully.
Sep 12 10:09:54.405516 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 10:09:54.405688 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 10:09:54.438950 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 10:09:54.445125 sh[564]: Success
Sep 12 10:09:54.463598 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 12 10:09:54.531674 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 10:09:54.545799 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 10:09:54.547052 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
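The GPT complaints above ("Primary header thinks Alt. header is not at the end of the disk", "Use GNU Parted to correct GPT errors") are what disk-uuid.service then clears up: the image carries its backup GPT header at sector 9289727, while the provisioned volume actually ends at sector 125829119. The log does not show which tool disk-uuid.service invokes internally; as a hedged standalone sketch, an equivalent repair can be done with sgdisk, whose -e flag moves the backup GPT structures to the true end of the disk. The device path is an assumption taken from the log.

```python
# Sketch of an equivalent userspace repair for a backup GPT header that is
# not at the end of the disk. Assumes sgdisk is installed and /dev/vda is
# the affected disk, as in the log above; must run as root.
import subprocess

DEVICE = "/dev/vda"  # assumption: the virtio disk the kernel complained about

subprocess.run(["sgdisk", "-e", DEVICE], check=True)  # move backup structures to end of disk
subprocess.run(["sgdisk", "-p", DEVICE], check=True)  # print the repaired partition table
```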
Sep 12 10:09:54.577212 kernel: BTRFS info (device dm-0): first mount of filesystem 2566299d-dd4a-4826-ba43-7397a17991fb
Sep 12 10:09:54.577299 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 10:09:54.577321 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 10:09:54.578301 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 10:09:54.578985 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 10:09:54.588689 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 10:09:54.590495 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 10:09:54.599137 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 10:09:54.603038 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 10:09:54.619037 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b
Sep 12 10:09:54.619105 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 10:09:54.619119 kernel: BTRFS info (device vda6): using free space tree
Sep 12 10:09:54.623672 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 10:09:54.630732 kernel: BTRFS info (device vda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b
Sep 12 10:09:54.636520 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 10:09:54.646677 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 10:09:54.746395 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 10:09:54.753891 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 10:09:54.787010 ignition[652]: Ignition 2.20.0
Sep 12 10:09:54.787024 ignition[652]: Stage: fetch-offline
Sep 12 10:09:54.787063 ignition[652]: no configs at "/usr/lib/ignition/base.d"
Sep 12 10:09:54.787071 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 10:09:54.787192 ignition[652]: parsed url from cmdline: ""
Sep 12 10:09:54.787196 ignition[652]: no config URL provided
Sep 12 10:09:54.787202 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 10:09:54.787210 ignition[652]: no config at "/usr/lib/ignition/user.ign"
Sep 12 10:09:54.787216 ignition[652]: failed to fetch config: resource requires networking
Sep 12 10:09:54.787414 ignition[652]: Ignition finished successfully
Sep 12 10:09:54.792722 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 10:09:54.800938 systemd-networkd[745]: lo: Link UP
Sep 12 10:09:54.800954 systemd-networkd[745]: lo: Gained carrier
Sep 12 10:09:54.804338 systemd-networkd[745]: Enumeration completed
Sep 12 10:09:54.804726 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 12 10:09:54.804731 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 12 10:09:54.804974 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 10:09:54.805583 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 10:09:54.805587 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 10:09:54.806182 systemd[1]: Reached target network.target - Network.
Sep 12 10:09:54.806262 systemd-networkd[745]: eth0: Link UP
Sep 12 10:09:54.806267 systemd-networkd[745]: eth0: Gained carrier
Sep 12 10:09:54.806275 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 12 10:09:54.810456 systemd-networkd[745]: eth1: Link UP
Sep 12 10:09:54.810462 systemd-networkd[745]: eth1: Gained carrier
Sep 12 10:09:54.810479 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 10:09:54.813883 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 10:09:54.820734 systemd-networkd[745]: eth0: DHCPv4 address 64.23.164.42/20, gateway 64.23.160.1 acquired from 169.254.169.253
Sep 12 10:09:54.828776 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253
Sep 12 10:09:54.832972 ignition[753]: Ignition 2.20.0
Sep 12 10:09:54.832987 ignition[753]: Stage: fetch
Sep 12 10:09:54.833292 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Sep 12 10:09:54.833303 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 10:09:54.833402 ignition[753]: parsed url from cmdline: ""
Sep 12 10:09:54.833406 ignition[753]: no config URL provided
Sep 12 10:09:54.833411 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 10:09:54.833419 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Sep 12 10:09:54.833444 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 12 10:09:54.849978 ignition[753]: GET result: OK
Sep 12 10:09:54.850707 ignition[753]: parsing config with SHA512: 0a09cbb491979a8312a43beb94827696415a4bb63809442f3b1eda6adedb48ae640dacfc1b58993a14a92b9f1d1d18fc983cfac913218baf8d985f0ba1b93bc1
Sep 12 10:09:54.855869 unknown[753]: fetched base config from "system"
Sep 12 10:09:54.855880 unknown[753]: fetched base config from "system"
Sep 12 10:09:54.855887 unknown[753]: fetched user config from "digitalocean"
Sep 12 10:09:54.857409 ignition[753]: fetch: fetch complete
Sep 12 10:09:54.857422 ignition[753]: fetch: fetch passed
Sep 12 10:09:54.859354 ignition[753]: Ignition finished successfully
Sep 12 10:09:54.861631 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 10:09:54.865958 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 10:09:54.892774 ignition[760]: Ignition 2.20.0
Sep 12 10:09:54.892792 ignition[760]: Stage: kargs
Sep 12 10:09:54.893227 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Sep 12 10:09:54.893247 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 10:09:54.896452 ignition[760]: kargs: kargs passed
Sep 12 10:09:54.896516 ignition[760]: Ignition finished successfully
Sep 12 10:09:54.897687 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 10:09:54.900891 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
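In the fetch stage above, Ignition finds no local config, waits for networking, then pulls user-data from the DigitalOcean metadata service and logs a SHA512 of the payload it parsed. A minimal sketch of that GET, with the endpoint taken verbatim from the log; it only works from inside a droplet, and the hash here is just an integrity fingerprint like the one Ignition prints, not a reimplementation of Ignition's config handling.

```python
# Fetch droplet user-data the way the log shows Ignition doing it.
# The link-local metadata IP is only reachable from inside a droplet.
import hashlib
import urllib.request

URL = "http://169.254.169.254/metadata/v1/user-data"  # endpoint from the log

with urllib.request.urlopen(URL, timeout=5) as resp:
    user_data = resp.read()

print(f"fetched {len(user_data)} bytes")
print("sha512:", hashlib.sha512(user_data).hexdigest())
```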
Sep 12 10:09:54.923365 ignition[766]: Ignition 2.20.0
Sep 12 10:09:54.924785 ignition[766]: Stage: disks
Sep 12 10:09:54.925176 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Sep 12 10:09:54.925196 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 10:09:54.926379 ignition[766]: disks: disks passed
Sep 12 10:09:54.928593 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 10:09:54.926437 ignition[766]: Ignition finished successfully
Sep 12 10:09:54.930130 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 10:09:54.931024 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 10:09:54.931573 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 10:09:54.932310 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 10:09:54.933132 systemd[1]: Reached target basic.target - Basic System.
Sep 12 10:09:54.940937 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 10:09:54.959522 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 10:09:54.963891 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 10:09:54.971850 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 10:09:55.079691 kernel: EXT4-fs (vda9): mounted filesystem 4caafea7-bbab-4a47-b77b-37af606fc08b r/w with ordered data mode. Quota mode: none.
Sep 12 10:09:55.080275 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 10:09:55.081678 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 10:09:55.096963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 10:09:55.100027 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 10:09:55.104986 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Sep 12 10:09:55.109662 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (782)
Sep 12 10:09:55.112915 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b
Sep 12 10:09:55.113003 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 10:09:55.113043 kernel: BTRFS info (device vda6): using free space tree
Sep 12 10:09:55.114978 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 10:09:55.117378 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 10:09:55.117450 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 10:09:55.126144 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 10:09:55.139670 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 10:09:55.139655 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 10:09:55.154567 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 10:09:55.213621 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 10:09:55.216010 coreos-metadata[785]: Sep 12 10:09:55.215 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 10:09:55.222709 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Sep 12 10:09:55.225319 coreos-metadata[784]: Sep 12 10:09:55.225 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 10:09:55.227814 coreos-metadata[785]: Sep 12 10:09:55.227 INFO Fetch successful
Sep 12 10:09:55.232529 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 10:09:55.235989 coreos-metadata[785]: Sep 12 10:09:55.235 INFO wrote hostname ci-4230.2.2-n-d7464eacd8 to /sysroot/etc/hostname
Sep 12 10:09:55.237784 coreos-metadata[784]: Sep 12 10:09:55.236 INFO Fetch successful
Sep 12 10:09:55.239065 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 10:09:55.241344 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 10:09:55.248063 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Sep 12 10:09:55.248730 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Sep 12 10:09:55.361853 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 10:09:55.367804 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 10:09:55.369858 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 10:09:55.384776 kernel: BTRFS info (device vda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b
Sep 12 10:09:55.406190 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 10:09:55.411477 ignition[905]: INFO : Ignition 2.20.0
Sep 12 10:09:55.412402 ignition[905]: INFO : Stage: mount
Sep 12 10:09:55.412402 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 10:09:55.412402 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 10:09:55.414613 ignition[905]: INFO : mount: mount passed
Sep 12 10:09:55.414613 ignition[905]: INFO : Ignition finished successfully
Sep 12 10:09:55.415495 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 10:09:55.420854 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 10:09:55.576617 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 10:09:55.581913 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 10:09:55.593700 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (915)
Sep 12 10:09:55.596167 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b
Sep 12 10:09:55.596239 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 10:09:55.596253 kernel: BTRFS info (device vda6): using free space tree
Sep 12 10:09:55.601711 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 10:09:55.602592 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
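Two coreos-metadata instances fetch the droplet's metadata document, and one writes the hostname into the not-yet-pivoted root at /sysroot/etc/hostname, as the entries above show. A minimal sketch of that step, assuming the document carries a top-level "hostname" key (the key name is an assumption, not visible in the log):

    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # from the log
    SYSROOT = "/sysroot"  # initrd target root, per the log's /sysroot/etc/hostname

    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        doc = json.load(resp)

    hostname = doc["hostname"]  # assumed key in DigitalOcean's v1.json
    with open(f"{SYSROOT}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to {SYSROOT}/etc/hostname")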
Sep 12 10:09:55.627184 ignition[931]: INFO : Ignition 2.20.0
Sep 12 10:09:55.627926 ignition[931]: INFO : Stage: files
Sep 12 10:09:55.629729 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 10:09:55.629729 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 10:09:55.630833 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 10:09:55.632426 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 10:09:55.633268 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 10:09:55.637149 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 10:09:55.637988 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 10:09:55.638719 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 10:09:55.638664 unknown[931]: wrote ssh authorized keys file for user: core
Sep 12 10:09:55.640277 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 10:09:55.640907 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 10:09:55.686309 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 10:09:55.794355 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 10:09:55.795205 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 10:09:55.795205 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 12 10:09:56.068775 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 10:09:56.431051 systemd-networkd[745]: eth1: Gained IPv6LL
Sep 12 10:09:56.555254 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 10:09:56.555254 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 10:09:56.556711 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 10:09:56.556711 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 10:09:56.556711 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 10:09:56.556711 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 10:09:56.556711 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 10:09:56.556711 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 10:09:56.556711 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 10:09:56.556711 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 10:09:56.564891 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 10:09:56.564891 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 10:09:56.564891 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 10:09:56.564891 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 10:09:56.564891 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 10:09:56.558751 systemd-networkd[745]: eth0: Gained IPv6LL
Sep 12 10:09:56.762779 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 10:09:57.046204 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 10:09:57.046204 ignition[931]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 10:09:57.048654 ignition[931]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 10:09:57.048654 ignition[931]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 10:09:57.048654 ignition[931]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 10:09:57.048654 ignition[931]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 10:09:57.048654 ignition[931]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 10:09:57.054549 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 10:09:57.054549 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 10:09:57.054549 ignition[931]: INFO : files: files passed
Sep 12 10:09:57.054549 ignition[931]: INFO : Ignition finished successfully
Sep 12 10:09:57.052264 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 10:09:57.060135 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 10:09:57.071978 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 10:09:57.078258 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 10:09:57.078674 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
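Each createFiles op above acts on the mounted /sysroot: op(b) fetches a sysext image into /opt/extensions, and op(a) plants a symlink in /etc/extensions so systemd-sysext will find it after the pivot. Note the link target deliberately omits the /sysroot prefix, since it must resolve in the final root. An illustrative sketch of those two operations (paths come from the log; the code itself is not Ignition's):

    import os
    import urllib.request

    SYSROOT = "/sysroot"
    RAW_URL = "https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw"
    RAW_PATH = f"{SYSROOT}/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
    LINK_PATH = f"{SYSROOT}/etc/extensions/kubernetes.raw"

    # op(b): download the extension image into the target root
    os.makedirs(os.path.dirname(RAW_PATH), exist_ok=True)
    urllib.request.urlretrieve(RAW_URL, RAW_PATH)

    # op(a): the link lives under /sysroot but targets the final-root path
    os.makedirs(os.path.dirname(LINK_PATH), exist_ok=True)
    os.symlink("/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw", LINK_PATH)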
Sep 12 10:09:57.086261 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:09:57.086261 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:09:57.089439 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:09:57.092195 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 10:09:57.093185 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 10:09:57.097994 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 10:09:57.136699 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 10:09:57.137505 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 10:09:57.139173 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 10:09:57.140180 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 10:09:57.141092 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 10:09:57.147987 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 10:09:57.163185 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 10:09:57.174906 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 10:09:57.187588 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 10:09:57.188738 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 10:09:57.189236 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 10:09:57.189616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 10:09:57.190939 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 10:09:57.191719 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 10:09:57.192170 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 10:09:57.193155 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 10:09:57.193846 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 10:09:57.194670 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 10:09:57.195563 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 10:09:57.196410 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 10:09:57.197399 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 10:09:57.198191 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 10:09:57.199001 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 10:09:57.199749 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 10:09:57.199889 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 10:09:57.200778 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 10:09:57.201358 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:09:57.202100 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 10:09:57.202216 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:09:57.202959 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 10:09:57.203102 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 10:09:57.204218 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 10:09:57.204392 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 10:09:57.205323 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 10:09:57.205436 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 10:09:57.206178 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 10:09:57.206316 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 10:09:57.213492 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 10:09:57.216978 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 10:09:57.217445 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 10:09:57.217773 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 10:09:57.219168 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 10:09:57.219289 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 10:09:57.227881 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 10:09:57.228029 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 10:09:57.247504 ignition[985]: INFO : Ignition 2.20.0
Sep 12 10:09:57.250522 ignition[985]: INFO : Stage: umount
Sep 12 10:09:57.250522 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 10:09:57.250522 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 10:09:57.250522 ignition[985]: INFO : umount: umount passed
Sep 12 10:09:57.250522 ignition[985]: INFO : Ignition finished successfully
Sep 12 10:09:57.251678 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 10:09:57.256536 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 10:09:57.256708 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 10:09:57.258440 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 10:09:57.258555 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 10:09:57.260560 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 10:09:57.260678 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 10:09:57.261437 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 10:09:57.261489 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 10:09:57.262198 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 10:09:57.262244 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 10:09:57.262886 systemd[1]: Stopped target network.target - Network.
Sep 12 10:09:57.263495 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 10:09:57.263549 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 10:09:57.264328 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 10:09:57.265150 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 10:09:57.268731 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:09:57.269416 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 10:09:57.270235 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 10:09:57.271196 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 10:09:57.271263 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 10:09:57.271917 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 10:09:57.271962 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 10:09:57.272625 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 10:09:57.272711 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 10:09:57.273526 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 10:09:57.273589 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 10:09:57.274181 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 10:09:57.274225 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 10:09:57.275111 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 10:09:57.275815 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 10:09:57.278455 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 10:09:57.278593 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 10:09:57.283582 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 10:09:57.284286 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 10:09:57.284398 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 10:09:57.286616 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 10:09:57.288168 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 10:09:57.288298 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 10:09:57.290602 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 10:09:57.290886 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 10:09:57.290952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:09:57.297842 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 10:09:57.299248 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 10:09:57.299371 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 10:09:57.300571 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 10:09:57.301474 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:09:57.302143 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 10:09:57.302218 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 10:09:57.302800 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 10:09:57.305362 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 10:09:57.316058 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 10:09:57.316229 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 10:09:57.325910 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 10:09:57.326187 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 10:09:57.327536 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 10:09:57.327605 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:09:57.328514 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 10:09:57.328570 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:09:57.329390 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 10:09:57.329467 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 10:09:57.330530 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 10:09:57.330604 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 10:09:57.331897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 10:09:57.331971 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 10:09:57.355001 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 10:09:57.355562 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 10:09:57.355702 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 10:09:57.357166 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 10:09:57.357253 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 10:09:57.358704 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 10:09:57.358797 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 10:09:57.360102 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 10:09:57.360180 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:09:57.362141 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 10:09:57.362278 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 10:09:57.363488 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 10:09:57.370907 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 10:09:57.382601 systemd[1]: Switching root.
Sep 12 10:09:57.436385 systemd-journald[183]: Journal stopped
Sep 12 10:09:58.844994 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 12 10:09:58.845111 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 10:09:58.845128 kernel: SELinux: policy capability open_perms=1
Sep 12 10:09:58.845146 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 10:09:58.845159 kernel: SELinux: policy capability always_check_network=0
Sep 12 10:09:58.845171 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 10:09:58.845184 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 10:09:58.845204 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 10:09:58.845216 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 10:09:58.845227 kernel: audit: type=1403 audit(1757671797.581:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 10:09:58.845246 systemd[1]: Successfully loaded SELinux policy in 38.305ms.
Sep 12 10:09:58.845265 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.998ms.
Sep 12 10:09:58.845282 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 10:09:58.845295 systemd[1]: Detected virtualization kvm.
Sep 12 10:09:58.845308 systemd[1]: Detected architecture x86-64.
Sep 12 10:09:58.845321 systemd[1]: Detected first boot.
Sep 12 10:09:58.845338 systemd[1]: Hostname set to <ci-4230.2.2-n-d7464eacd8>.
Sep 12 10:09:58.845351 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 10:09:58.845363 zram_generator::config[1030]: No configuration found.
Sep 12 10:09:58.845378 kernel: Guest personality initialized and is inactive
Sep 12 10:09:58.845393 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 10:09:58.845405 kernel: Initialized host personality
Sep 12 10:09:58.845416 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 10:09:58.845428 systemd[1]: Populated /etc with preset unit settings.
Sep 12 10:09:58.845446 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 10:09:58.845459 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 10:09:58.845472 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 10:09:58.845484 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 10:09:58.845497 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 10:09:58.845516 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 10:09:58.845536 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 10:09:58.845556 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 10:09:58.845573 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 10:09:58.845591 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 10:09:58.845609 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 10:09:58.845622 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 10:09:58.857969 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:09:58.858020 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:09:58.858035 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 10:09:58.858049 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 10:09:58.858064 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 10:09:58.858079 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 10:09:58.858092 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 10:09:58.858185 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:09:58.858198 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 10:09:58.858211 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
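"Initializing machine ID from VM UUID" means systemd seeds the transient machine ID from the hypervisor-provided DMI product UUID on first boot (it is committed to disk later, by systemd-machine-id-commit.service further below). A sketch of the idea, assuming the usual sysfs location; systemd's exact normalization may differ:

    # Derive a 32-hex-digit machine-id candidate from the KVM guest's DMI UUID.
    # /sys/class/dmi/id/product_uuid is the conventional location (root-readable).
    with open("/sys/class/dmi/id/product_uuid") as f:
        vm_uuid = f.read().strip()

    machine_id = vm_uuid.replace("-", "").lower()
    assert len(machine_id) == 32
    print(machine_id)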
Sep 12 10:09:58.858224 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 10:09:58.858236 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 10:09:58.858249 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 10:09:58.858261 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 10:09:58.858274 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 10:09:58.858286 systemd[1]: Reached target swap.target - Swaps.
Sep 12 10:09:58.858299 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 10:09:58.858315 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 10:09:58.858328 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 10:09:58.858340 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:09:58.858354 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:09:58.858367 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:09:58.858380 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 10:09:58.858394 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 10:09:58.858407 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 10:09:58.858419 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 10:09:58.858435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:09:58.858447 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 10:09:58.858461 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 10:09:58.858474 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 10:09:58.858487 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 10:09:58.858500 systemd[1]: Reached target machines.target - Containers.
Sep 12 10:09:58.858512 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 10:09:58.858525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:09:58.858540 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 10:09:58.858552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 10:09:58.858565 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:09:58.858577 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 10:09:58.858590 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:09:58.858602 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 10:09:58.858615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:09:58.858628 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 10:09:58.858745 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 10:09:58.858760 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 10:09:58.858773 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 10:09:58.858786 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 10:09:58.858805 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:09:58.858818 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 10:09:58.858830 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 10:09:58.858843 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 10:09:58.858855 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 10:09:58.858870 kernel: loop: module loaded
Sep 12 10:09:58.858885 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 10:09:58.858898 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 10:09:58.858911 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 10:09:58.858927 systemd[1]: Stopped verity-setup.service.
Sep 12 10:09:58.858943 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:09:58.858955 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 10:09:58.858967 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 10:09:58.858980 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 10:09:58.858993 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 10:09:58.859008 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 10:09:58.859021 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 10:09:58.859034 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 10:09:58.859046 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 10:09:58.859058 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 10:09:58.859071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:09:58.859083 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:09:58.859096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:09:58.859108 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:09:58.859124 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:09:58.859137 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:09:58.859150 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 10:09:58.859163 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 10:09:58.859175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 10:09:58.859188 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 10:09:58.859201 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 10:09:58.859214 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 10:09:58.859229 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 10:09:58.859242 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 10:09:58.859255 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 10:09:58.859267 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 10:09:58.859284 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 10:09:58.859299 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 10:09:58.859312 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 10:09:58.859326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:09:58.859339 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 10:09:58.859355 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 10:09:58.859368 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 10:09:58.859381 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 10:09:58.859393 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 10:09:58.859406 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 10:09:58.859419 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 10:09:58.859431 kernel: fuse: init (API version 7.39)
Sep 12 10:09:58.859444 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 10:09:58.859458 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 10:09:58.859473 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 10:09:58.859486 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 10:09:58.859499 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 10:09:58.859512 kernel: loop0: detected capacity change from 0 to 138176
Sep 12 10:09:58.859524 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 10:09:58.859539 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 10:09:58.859558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:09:58.865749 systemd-journald[1107]: Collecting audit messages is disabled.
Sep 12 10:09:58.865861 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 10:09:58.865890 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 10:09:58.865916 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 10:09:58.866311 systemd-journald[1107]: Journal started
Sep 12 10:09:58.866350 systemd-journald[1107]: Runtime Journal (/run/log/journal/375aa927bf8b4370b48c98b2e7256d36) is 4.9M, max 39.3M, 34.4M free.
Sep 12 10:09:58.328759 systemd[1]: Queued start job for default target multi-user.target.
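The Runtime Journal line reflects journald's size accounting: the runtime journal lives on the /run tmpfs, and its cap defaults to a fraction of that filesystem (10% per journald.conf(5), bounded by an absolute cap), which is consistent with "max 39.3M" on a small VM. A sketch of that computation; the exact rounding journald applies is not reproduced here:

    import os

    def runtime_journal_cap(path: str = "/run/log/journal",
                            fraction: float = 0.10,
                            hard_cap: int = 4 * 1024**3) -> int:
        # Approximate RuntimeMaxUse's default: a fraction of the backing
        # filesystem, bounded by a hard cap.
        st = os.statvfs(path)
        fs_bytes = st.f_frsize * st.f_blocks
        return min(int(fs_bytes * fraction), hard_cap)

    print(f"{runtime_journal_cap() / 2**20:.1f}M")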
Sep 12 10:09:58.867835 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 10:09:58.340617 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 10:09:58.341233 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 10:09:58.780645 systemd-tmpfiles[1125]: ACLs are not supported, ignoring.
Sep 12 10:09:58.783109 systemd-tmpfiles[1125]: ACLs are not supported, ignoring.
Sep 12 10:09:58.889229 kernel: ACPI: bus type drm_connector registered
Sep 12 10:09:58.889934 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 10:09:58.891940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 10:09:58.914744 kernel: loop1: detected capacity change from 0 to 221472
Sep 12 10:09:58.934169 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 10:09:58.970479 systemd-journald[1107]: Time spent on flushing to /var/log/journal/375aa927bf8b4370b48c98b2e7256d36 is 69.041ms for 1013 entries.
Sep 12 10:09:58.970479 systemd-journald[1107]: System Journal (/var/log/journal/375aa927bf8b4370b48c98b2e7256d36) is 8M, max 195.6M, 187.6M free.
Sep 12 10:09:59.048457 systemd-journald[1107]: Received client request to flush runtime journal.
Sep 12 10:09:59.048503 kernel: loop2: detected capacity change from 0 to 8
Sep 12 10:09:59.048520 kernel: loop3: detected capacity change from 0 to 147912
Sep 12 10:09:59.006559 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 10:09:59.015124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 10:09:59.054101 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 10:09:59.072488 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 10:09:59.091015 kernel: loop4: detected capacity change from 0 to 138176
Sep 12 10:09:59.095092 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 10:09:59.138680 kernel: loop5: detected capacity change from 0 to 221472
Sep 12 10:09:59.143377 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Sep 12 10:09:59.143941 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Sep 12 10:09:59.158823 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 10:09:59.170681 kernel: loop6: detected capacity change from 0 to 8
Sep 12 10:09:59.171890 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 10:09:59.177771 kernel: loop7: detected capacity change from 0 to 147912
Sep 12 10:09:59.201207 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 12 10:09:59.202193 (sd-merge)[1180]: Merged extensions into '/usr'.
Sep 12 10:09:59.223897 systemd[1]: Reload requested from client PID 1134 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 10:09:59.223927 systemd[1]: Reloading...
Sep 12 10:09:59.435498 zram_generator::config[1213]: No configuration found.
Sep 12 10:09:59.719203 ldconfig[1131]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
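(sd-merge) stacking the four extension images over /usr is an overlayfs operation: each image is attached read-only (the loopN capacity-change lines above) and its /usr tree becomes a lower layer above the base /usr. A simplified sketch of assembling such a mount; the staging paths are hypothetical, not the ones systemd-sysext actually uses:

    # Build an overlayfs mount command for sysext-style merging.
    # In overlayfs, the first lowerdir is the topmost layer, so the
    # base /usr goes last.
    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-digitalocean"]

    lowerdirs = [f"/run/sysext-staging/{name}/usr" for name in reversed(extensions)]  # hypothetical paths
    lowerdirs.append("/usr")

    cmd = ["mount", "-t", "overlay", "overlay",
           "-o", "ro,lowerdir=" + ":".join(lowerdirs), "/usr"]
    print(" ".join(cmd))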
Sep 12 10:09:59.808440 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:09:59.907441 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 10:09:59.907589 systemd[1]: Reloading finished in 678 ms.
Sep 12 10:09:59.925155 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 10:09:59.926110 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 10:09:59.940026 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 10:09:59.959973 systemd[1]: Starting ensure-sysext.service...
Sep 12 10:09:59.964255 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 10:09:59.983521 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 10:09:59.996949 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
Sep 12 10:09:59.996989 systemd[1]: Reloading...
Sep 12 10:10:00.054786 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 10:10:00.055647 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 10:10:00.059779 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 10:10:00.060268 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Sep 12 10:10:00.060407 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Sep 12 10:10:00.077282 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 10:10:00.077954 systemd-tmpfiles[1255]: Skipping /boot
Sep 12 10:10:00.115292 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 10:10:00.115456 systemd-tmpfiles[1255]: Skipping /boot
Sep 12 10:10:00.130691 zram_generator::config[1284]: No configuration found.
Sep 12 10:10:00.329059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:10:00.422073 systemd[1]: Reloading finished in 422 ms.
Sep 12 10:10:00.443766 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 10:10:00.467020 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 10:10:00.486240 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 10:10:00.499067 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 10:10:00.517405 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 10:10:00.527342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 10:10:00.539234 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 10:10:00.549382 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 10:10:00.558990 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:10:00.559387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:10:00.574359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:10:00.583272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:10:00.594165 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:10:00.595455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:10:00.598929 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:10:00.599143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:10:00.615376 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 10:10:00.622299 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:10:00.622690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:10:00.623062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:10:00.623214 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:10:00.623407 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:10:00.638726 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:10:00.639443 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:10:00.650691 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 10:10:00.655423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:10:00.655810 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:10:00.656237 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:10:00.670970 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 10:10:00.674520 systemd[1]: Finished ensure-sysext.service.
Sep 12 10:10:00.687245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:10:00.687686 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:10:00.691292 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:10:00.691700 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:10:00.707295 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 10:10:00.708002 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 10:10:00.722475 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 10:10:00.731624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:10:00.732615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:10:00.738578 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 10:10:00.739367 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 10:10:00.755186 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 10:10:00.770186 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 10:10:00.771414 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 10:10:00.773786 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 10:10:00.780656 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Sep 12 10:10:00.854046 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 10:10:00.871169 augenrules[1371]: No rules
Sep 12 10:10:00.872135 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 10:10:00.872874 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 10:10:00.922038 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 10:10:00.941057 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 10:10:00.950344 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 10:10:01.304662 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Sep 12 10:10:01.312284 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep 12 10:10:01.315018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:10:01.315491 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:10:01.323052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:10:01.334279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:10:01.344140 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:10:01.344959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:10:01.345038 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:10:01.345085 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 10:10:01.345110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:10:01.378522 systemd-networkd[1380]: lo: Link UP
Sep 12 10:10:01.379706 systemd-networkd[1380]: lo: Gained carrier
Sep 12 10:10:01.381684 systemd-networkd[1380]: Enumeration completed
Sep 12 10:10:01.382851 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 10:10:01.401707 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1401)
Sep 12 10:10:01.402077 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 10:10:01.411024 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 10:10:01.440845 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 10:10:01.493416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:10:01.494561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:10:01.508038 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 10:10:01.509172 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 10:10:01.520740 systemd-resolved[1333]: Positive Trust Anchors:
Sep 12 10:10:01.520761 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 10:10:01.520817 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 10:10:01.523762 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:10:01.524023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:10:01.534689 systemd-resolved[1333]: Using system hostname 'ci-4230.2.2-n-d7464eacd8'.
Sep 12 10:10:01.544492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:10:01.545449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:10:01.556577 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 10:10:01.568587 systemd[1]: Reached target network.target - Network.
Sep 12 10:10:01.570782 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 10:10:01.573885 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 10:10:01.574028 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 10:10:01.578696 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 12 10:10:01.592971 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep 12 10:10:01.594843 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 10:10:01.608214 systemd-networkd[1380]: eth0: Configuring with /run/systemd/network/10-92:4e:68:c7:52:05.network.
Sep 12 10:10:01.610354 systemd-networkd[1380]: eth0: Link UP
Sep 12 10:10:01.610366 systemd-networkd[1380]: eth0: Gained carrier
Sep 12 10:10:01.621342 systemd-timesyncd[1361]: Network configuration changed, trying to establish connection.
Sep 12 10:10:01.648233 systemd-networkd[1380]: eth1: Configuring with /run/systemd/network/10-06:ef:8c:3c:3b:72.network.
Sep 12 10:10:01.649880 systemd-networkd[1380]: eth1: Link UP
Sep 12 10:10:01.649894 systemd-networkd[1380]: eth1: Gained carrier
Sep 12 10:10:01.739818 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 12 10:10:01.753803 kernel: ACPI: button: Power Button [PWRF]
Sep 12 10:10:01.767701 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 12 10:10:02.572223 systemd-timesyncd[1361]: Contacted time server 44.4.53.1:123 (0.flatcar.pool.ntp.org).
Sep 12 10:10:02.572323 systemd-timesyncd[1361]: Initial clock synchronization to Fri 2025-09-12 10:10:02.572019 UTC.
Sep 12 10:10:02.572656 systemd-resolved[1333]: Clock change detected. Flushing caches.
Sep 12 10:10:02.591972 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 10:10:02.601286 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 12 10:10:02.601404 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 12 10:10:02.602512 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 10:10:02.607384 kernel: Console: switching to colour dummy device 80x25
Sep 12 10:10:02.607973 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 12 10:10:02.608054 kernel: [drm] features: -context_init
Sep 12 10:10:02.609010 kernel: [drm] number of scanouts: 1
Sep 12 10:10:02.609092 kernel: [drm] number of cap sets: 0
Sep 12 10:10:02.613993 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep 12 10:10:02.623983 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 12 10:10:02.624093 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 10:10:02.626351 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 10:10:02.652182 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 12 10:10:02.734019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:10:02.754299 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 10:10:02.834547 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 10:10:02.859026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 10:10:02.859509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:10:02.872563 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 10:10:02.961912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:10:03.049323 kernel: EDAC MC: Ver: 3.0.0
Sep 12 10:10:03.050769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:10:03.085785 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 10:10:03.094527 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
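After the pivot, networkd matches eth0 and eth1 through per-MAC unit files generated under /run/systemd/network, rather than by the "potentially unpredictable interface name" the initrd's networkd complained about earlier. Only the file names appear in the log, so the unit body below is an assumption about what such a generated file plausibly contains:

    # Render a per-MAC unit like /run/systemd/network/10-92:4e:68:c7:52:05.network.
    # Only the path pattern comes from the log; the [Match]/[Network] body is assumed.
    mac = "92:4e:68:c7:52:05"  # eth0's MAC, taken from the file name above

    unit = (
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        "DHCP=ipv4\n"
    )

    path = f"/run/systemd/network/10-{mac}.network"
    print(path)
    print(unit)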
Sep 12 10:10:03.130983 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 10:10:03.189102 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 10:10:03.192838 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:10:03.193428 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 10:10:03.194159 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 10:10:03.194713 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 10:10:03.195521 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 10:10:03.196056 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 10:10:03.196283 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 10:10:03.196411 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 10:10:03.196469 systemd[1]: Reached target paths.target - Path Units. Sep 12 10:10:03.196569 systemd[1]: Reached target timers.target - Timer Units. Sep 12 10:10:03.220381 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 10:10:03.225714 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 10:10:03.233532 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 10:10:03.236343 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 10:10:03.238981 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 10:10:03.251719 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 10:10:03.255087 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 10:10:03.278299 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 10:10:03.285147 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 10:10:03.289023 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 10:10:03.286787 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 10:10:03.287584 systemd[1]: Reached target basic.target - Basic System. Sep 12 10:10:03.291565 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 10:10:03.291806 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 10:10:03.305275 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 10:10:03.323533 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 10:10:03.330325 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 10:10:03.344213 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 10:10:03.361557 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 10:10:03.366496 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 10:10:03.381367 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
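The lvmetad warnings mean lvm.conf still requests the metadata caching daemon while no lvmetad is running, so the tools fall back to scanning block devices directly; activation still succeeds, as the "Finished lvm2-activation" lines show. On LVM2 builds that retain the knob, the warning can be silenced by disabling the daemon explicitly, sketched below:

    # /etc/lvm/lvm.conf (excerpt; assumes an lvmetad-capable LVM2)
    global {
        # 0 = always scan block devices directly instead of querying lvmetad
        use_lvmetad = 0
    }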
Sep 12 10:10:03.388613 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 10:10:03.401366 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 10:10:03.414382 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 10:10:03.433347 jq[1456]: false Sep 12 10:10:03.433925 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 10:10:03.439677 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 10:10:03.440762 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 10:10:03.446306 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 10:10:03.482324 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 10:10:03.486460 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 10:10:03.497202 coreos-metadata[1452]: Sep 12 10:10:03.496 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 10:10:03.502063 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 10:10:03.502592 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 10:10:03.505875 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 10:10:03.507176 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 10:10:03.527215 extend-filesystems[1457]: Found loop4 Sep 12 10:10:03.527215 extend-filesystems[1457]: Found loop5 Sep 12 10:10:03.527215 extend-filesystems[1457]: Found loop6 Sep 12 10:10:03.527215 extend-filesystems[1457]: Found loop7 Sep 12 10:10:03.527215 extend-filesystems[1457]: Found vda Sep 12 10:10:03.527215 extend-filesystems[1457]: Found vda1 Sep 12 10:10:03.527215 extend-filesystems[1457]: Found vda2 Sep 12 10:10:03.527215 extend-filesystems[1457]: Found vda3 Sep 12 10:10:03.527215 extend-filesystems[1457]: Found usr Sep 12 10:10:03.527215 extend-filesystems[1457]: Found vda4 Sep 12 10:10:03.527215 extend-filesystems[1457]: Found vda6 Sep 12 10:10:03.527215 extend-filesystems[1457]: Found vda7 Sep 12 10:10:03.527215 extend-filesystems[1457]: Found vda9 Sep 12 10:10:03.527215 extend-filesystems[1457]: Checking size of /dev/vda9 Sep 12 10:10:03.560041 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 10:10:03.559656 dbus-daemon[1453]: [system] SELinux support is enabled Sep 12 10:10:03.665583 update_engine[1464]: I20250912 10:10:03.577474 1464 main.cc:92] Flatcar Update Engine starting Sep 12 10:10:03.665583 update_engine[1464]: I20250912 10:10:03.585260 1464 update_check_scheduler.cc:74] Next update check in 5m46s Sep 12 10:10:03.582354 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 10:10:03.666233 jq[1467]: true Sep 12 10:10:03.582430 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 10:10:03.583471 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
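update_engine starts here and schedules its first poll for 5m46s out. On Flatcar, the release channel and the reboot behaviour that locksmithd applies (its strategy is logged below as "reboot") conventionally live in one file; the values here are assumptions, not read from this host:

    # /etc/flatcar/update.conf (illustrative)
    GROUP=stable
    REBOOT_STRATEGY=reboot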
Sep 12 10:10:03.583627 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Sep 12 10:10:03.583658 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 10:10:03.677246 extend-filesystems[1457]: Resized partition /dev/vda9 Sep 12 10:10:03.588665 systemd[1]: Started update-engine.service - Update Engine. Sep 12 10:10:03.604394 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 10:10:03.632124 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 10:10:03.691196 coreos-metadata[1452]: Sep 12 10:10:03.690 INFO Fetch successful Sep 12 10:10:03.691296 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024) Sep 12 10:10:03.636685 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 10:10:03.668874 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 10:10:03.722792 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 12 10:10:03.676160 systemd-networkd[1380]: eth1: Gained IPv6LL Sep 12 10:10:03.726675 tar[1473]: linux-amd64/helm Sep 12 10:10:03.700374 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 10:10:03.705821 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 10:10:03.723365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:10:03.732343 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 10:10:03.773640 jq[1488]: true Sep 12 10:10:03.859868 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1382) Sep 12 10:10:03.986391 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 10:10:03.991673 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 10:10:04.007701 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 10:10:04.088907 systemd-logind[1463]: New seat seat0. Sep 12 10:10:04.122256 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 10:10:04.132814 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 10:10:04.136727 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 10:10:04.137548 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 10:10:04.188734 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 12 10:10:04.224014 extend-filesystems[1493]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 10:10:04.224014 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 12 10:10:04.224014 extend-filesystems[1493]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 12 10:10:04.229822 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 10:10:04.261746 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 10:10:04.262039 extend-filesystems[1457]: Resized filesystem in /dev/vda9 Sep 12 10:10:04.262039 extend-filesystems[1457]: Found vdb Sep 12 10:10:04.230355 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
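Two things are worth unpacking from this stretch. First, the online resize grows /dev/vda9 from 553472 to 15121403 blocks at 4 KiB each, i.e. from roughly 2.27 GB to 61.9 GB (about 57.7 GiB); done by hand, the same operation is a plain resize2fs /dev/vda9 against the mounted filesystem. Second, the metadata agent's fetch is an ordinary HTTP GET against the DigitalOcean link-local endpoint, reproducible for debugging as:

    curl -s http://169.254.169.254/metadata/v1.json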
Sep 12 10:10:04.284200 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Sep 12 10:10:04.290534 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 10:10:04.297846 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 10:10:04.327622 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 10:10:04.348026 systemd[1]: Starting sshkeys.service... Sep 12 10:10:04.378308 systemd-networkd[1380]: eth0: Gained IPv6LL Sep 12 10:10:04.387843 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 10:10:04.388740 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 10:10:04.405658 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 10:10:04.425290 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 10:10:04.442012 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 10:10:04.493500 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 10:10:04.504253 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 10:10:04.516575 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 10:10:04.517474 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 10:10:04.553994 coreos-metadata[1555]: Sep 12 10:10:04.553 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 10:10:04.573535 coreos-metadata[1555]: Sep 12 10:10:04.572 INFO Fetch successful Sep 12 10:10:04.590388 unknown[1555]: wrote ssh authorized keys file for user: core Sep 12 10:10:04.619947 containerd[1481]: time="2025-09-12T10:10:04.619620830Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 12 10:10:04.643262 update-ssh-keys[1562]: Updated "/home/core/.ssh/authorized_keys" Sep 12 10:10:04.648418 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 10:10:04.658568 systemd[1]: Finished sshkeys.service. Sep 12 10:10:04.695971 containerd[1481]: time="2025-09-12T10:10:04.695556154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:10:04.700695 containerd[1481]: time="2025-09-12T10:10:04.700620533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.105-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.702196532Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.702286879Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.702529692Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.702558813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.702664176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.702684574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.703184774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.703216752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.703240616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.703256408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.703441466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:10:04.704977 containerd[1481]: time="2025-09-12T10:10:04.703873902Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:10:04.707549 containerd[1481]: time="2025-09-12T10:10:04.707478824Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:10:04.709443 containerd[1481]: time="2025-09-12T10:10:04.709380930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 10:10:04.709881 containerd[1481]: time="2025-09-12T10:10:04.709816559Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 10:10:04.710296 containerd[1481]: time="2025-09-12T10:10:04.710267913Z" level=info msg="metadata content store policy set" policy=shared Sep 12 10:10:04.723278 containerd[1481]: time="2025-09-12T10:10:04.723176069Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 10:10:04.723595 containerd[1481]: time="2025-09-12T10:10:04.723537399Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 10:10:04.723751 containerd[1481]: time="2025-09-12T10:10:04.723731401Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 10:10:04.724259 containerd[1481]: time="2025-09-12T10:10:04.723827296Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 10:10:04.724259 containerd[1481]: time="2025-09-12T10:10:04.723858410Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 12 10:10:04.724259 containerd[1481]: time="2025-09-12T10:10:04.724162325Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 10:10:04.726967 containerd[1481]: time="2025-09-12T10:10:04.726137581Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 10:10:04.726967 containerd[1481]: time="2025-09-12T10:10:04.726910792Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 10:10:04.727559 containerd[1481]: time="2025-09-12T10:10:04.727517461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 10:10:04.727738 containerd[1481]: time="2025-09-12T10:10:04.727712831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 10:10:04.727815 containerd[1481]: time="2025-09-12T10:10:04.727801807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 10:10:04.727870 containerd[1481]: time="2025-09-12T10:10:04.727859939Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 10:10:04.728918 containerd[1481]: time="2025-09-12T10:10:04.728889862Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 10:10:04.729049 containerd[1481]: time="2025-09-12T10:10:04.729036239Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 10:10:04.729103 containerd[1481]: time="2025-09-12T10:10:04.729093798Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 10:10:04.730176 containerd[1481]: time="2025-09-12T10:10:04.730144345Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 10:10:04.730274 containerd[1481]: time="2025-09-12T10:10:04.730262327Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 10:10:04.730322 containerd[1481]: time="2025-09-12T10:10:04.730312972Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 10:10:04.730413 containerd[1481]: time="2025-09-12T10:10:04.730401417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730478081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730531152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730552445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730570023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730589632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730607584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730629373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730649622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730677583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730693997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730711114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730728678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730748970Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730830356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.731976 containerd[1481]: time="2025-09-12T10:10:04.730859019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.732439 containerd[1481]: time="2025-09-12T10:10:04.730875357Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 10:10:04.732439 containerd[1481]: time="2025-09-12T10:10:04.730964106Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 10:10:04.732439 containerd[1481]: time="2025-09-12T10:10:04.730996740Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 10:10:04.732439 containerd[1481]: time="2025-09-12T10:10:04.731010616Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 10:10:04.732439 containerd[1481]: time="2025-09-12T10:10:04.731022481Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 10:10:04.732439 containerd[1481]: time="2025-09-12T10:10:04.731057111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 10:10:04.732439 containerd[1481]: time="2025-09-12T10:10:04.731072907Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 10:10:04.732439 containerd[1481]: time="2025-09-12T10:10:04.731084236Z" level=info msg="NRI interface is disabled by configuration." Sep 12 10:10:04.732439 containerd[1481]: time="2025-09-12T10:10:04.731094525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 10:10:04.732700 containerd[1481]: time="2025-09-12T10:10:04.731462153Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 10:10:04.732700 containerd[1481]: time="2025-09-12T10:10:04.731512139Z" level=info msg="Connect containerd service" Sep 12 10:10:04.732700 containerd[1481]: time="2025-09-12T10:10:04.731559265Z" level=info msg="using legacy CRI server" Sep 12 10:10:04.732700 containerd[1481]: time="2025-09-12T10:10:04.731568063Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 10:10:04.732700 containerd[1481]: time="2025-09-12T10:10:04.731696248Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 10:10:04.734077 containerd[1481]: time="2025-09-12T10:10:04.734038997Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 10:10:04.734482 
containerd[1481]: time="2025-09-12T10:10:04.734397853Z" level=info msg="Start subscribing containerd event" Sep 12 10:10:04.734543 containerd[1481]: time="2025-09-12T10:10:04.734515814Z" level=info msg="Start recovering state" Sep 12 10:10:04.735120 containerd[1481]: time="2025-09-12T10:10:04.735070027Z" level=info msg="Start event monitor" Sep 12 10:10:04.735193 containerd[1481]: time="2025-09-12T10:10:04.735129264Z" level=info msg="Start snapshots syncer" Sep 12 10:10:04.735331 containerd[1481]: time="2025-09-12T10:10:04.735152742Z" level=info msg="Start cni network conf syncer for default" Sep 12 10:10:04.735331 containerd[1481]: time="2025-09-12T10:10:04.735321860Z" level=info msg="Start streaming server" Sep 12 10:10:04.735488 containerd[1481]: time="2025-09-12T10:10:04.735087781Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 10:10:04.735745 containerd[1481]: time="2025-09-12T10:10:04.735704967Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 10:10:04.736205 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 10:10:04.741279 containerd[1481]: time="2025-09-12T10:10:04.740626928Z" level=info msg="containerd successfully booted in 0.125735s" Sep 12 10:10:05.157747 tar[1473]: linux-amd64/LICENSE Sep 12 10:10:05.157747 tar[1473]: linux-amd64/README.md Sep 12 10:10:05.170357 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 10:10:05.861885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:10:05.865663 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 10:10:05.870061 systemd[1]: Startup finished in 1.057s (kernel) + 5.882s (initrd) + 7.547s (userspace) = 14.487s. Sep 12 10:10:05.873681 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:10:06.717246 kubelet[1576]: E0912 10:10:06.716703 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:10:06.720702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:10:06.721009 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:10:06.721565 systemd[1]: kubelet.service: Consumed 1.483s CPU time, 268.8M memory peak. Sep 12 10:10:07.619257 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 10:10:07.631649 systemd[1]: Started sshd@0-64.23.164.42:22-139.178.68.195:40152.service - OpenSSH per-connection server daemon (139.178.68.195:40152). Sep 12 10:10:07.727747 sshd[1588]: Accepted publickey for core from 139.178.68.195 port 40152 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I Sep 12 10:10:07.731523 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:07.753699 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 10:10:07.771604 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 10:10:07.775802 systemd-logind[1463]: New session 1 of user core. Sep 12 10:10:07.821128 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
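The long "Start cri plugin with config ..." dump above is containerd echoing its effective CRI configuration. Rendered back into config.toml form, the visible values would look roughly like the reconstruction below; this is a readback of the dump, not the file actually on disk:

    # /etc/containerd/config.toml (reconstructed excerpt)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The kubelet failure at 10:10:06 is likewise expected at this point: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, so until the node is bootstrapped the unit exits and gets restarted (it fails the same way again at 10:10:17 below). The smallest plausible shape of that file, with assumed content:

    # /var/lib/kubelet/config.yaml (illustrative; kubeadm generates the real one)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd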
Sep 12 10:10:07.832523 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 10:10:07.839033 (systemd)[1592]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 10:10:07.843523 systemd-logind[1463]: New session c1 of user core. Sep 12 10:10:08.086811 systemd[1592]: Queued start job for default target default.target. Sep 12 10:10:08.095287 systemd[1592]: Created slice app.slice - User Application Slice. Sep 12 10:10:08.095409 systemd[1592]: Reached target paths.target - Paths. Sep 12 10:10:08.095498 systemd[1592]: Reached target timers.target - Timers. Sep 12 10:10:08.098150 systemd[1592]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 10:10:08.122814 systemd[1592]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 10:10:08.123455 systemd[1592]: Reached target sockets.target - Sockets. Sep 12 10:10:08.123715 systemd[1592]: Reached target basic.target - Basic System. Sep 12 10:10:08.123966 systemd[1592]: Reached target default.target - Main User Target. Sep 12 10:10:08.123975 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 10:10:08.124236 systemd[1592]: Startup finished in 270ms. Sep 12 10:10:08.132373 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 10:10:08.208569 systemd[1]: Started sshd@1-64.23.164.42:22-139.178.68.195:40158.service - OpenSSH per-connection server daemon (139.178.68.195:40158). Sep 12 10:10:08.276682 sshd[1603]: Accepted publickey for core from 139.178.68.195 port 40158 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I Sep 12 10:10:08.278914 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:08.287501 systemd-logind[1463]: New session 2 of user core. Sep 12 10:10:08.296349 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 10:10:08.360880 sshd[1605]: Connection closed by 139.178.68.195 port 40158 Sep 12 10:10:08.361769 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Sep 12 10:10:08.387507 systemd[1]: sshd@1-64.23.164.42:22-139.178.68.195:40158.service: Deactivated successfully. Sep 12 10:10:08.390838 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 10:10:08.394224 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Sep 12 10:10:08.404871 systemd[1]: Started sshd@2-64.23.164.42:22-139.178.68.195:40166.service - OpenSSH per-connection server daemon (139.178.68.195:40166). Sep 12 10:10:08.407294 systemd-logind[1463]: Removed session 2. Sep 12 10:10:08.465790 sshd[1610]: Accepted publickey for core from 139.178.68.195 port 40166 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I Sep 12 10:10:08.468216 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:08.477955 systemd-logind[1463]: New session 3 of user core. Sep 12 10:10:08.485283 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 10:10:08.547217 sshd[1613]: Connection closed by 139.178.68.195 port 40166 Sep 12 10:10:08.548257 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Sep 12 10:10:08.561414 systemd[1]: sshd@2-64.23.164.42:22-139.178.68.195:40166.service: Deactivated successfully. Sep 12 10:10:08.565523 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 10:10:08.569281 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. 
Sep 12 10:10:08.577554 systemd[1]: Started sshd@3-64.23.164.42:22-139.178.68.195:40178.service - OpenSSH per-connection server daemon (139.178.68.195:40178). Sep 12 10:10:08.580452 systemd-logind[1463]: Removed session 3. Sep 12 10:10:08.636208 sshd[1618]: Accepted publickey for core from 139.178.68.195 port 40178 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I Sep 12 10:10:08.638391 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:08.646704 systemd-logind[1463]: New session 4 of user core. Sep 12 10:10:08.657384 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 10:10:08.726695 sshd[1621]: Connection closed by 139.178.68.195 port 40178 Sep 12 10:10:08.727605 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Sep 12 10:10:08.743401 systemd[1]: sshd@3-64.23.164.42:22-139.178.68.195:40178.service: Deactivated successfully. Sep 12 10:10:08.747418 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 10:10:08.750210 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Sep 12 10:10:08.756594 systemd[1]: Started sshd@4-64.23.164.42:22-139.178.68.195:40184.service - OpenSSH per-connection server daemon (139.178.68.195:40184). Sep 12 10:10:08.759535 systemd-logind[1463]: Removed session 4. Sep 12 10:10:08.841076 sshd[1626]: Accepted publickey for core from 139.178.68.195 port 40184 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I Sep 12 10:10:08.842739 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:08.853250 systemd-logind[1463]: New session 5 of user core. Sep 12 10:10:08.861310 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 10:10:08.940365 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 10:10:08.940870 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:10:08.955008 sudo[1630]: pam_unix(sudo:session): session closed for user root Sep 12 10:10:08.959898 sshd[1629]: Connection closed by 139.178.68.195 port 40184 Sep 12 10:10:08.959636 sshd-session[1626]: pam_unix(sshd:session): session closed for user core Sep 12 10:10:08.977543 systemd[1]: sshd@4-64.23.164.42:22-139.178.68.195:40184.service: Deactivated successfully. Sep 12 10:10:08.980482 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 10:10:08.982025 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Sep 12 10:10:08.992545 systemd[1]: Started sshd@5-64.23.164.42:22-139.178.68.195:40198.service - OpenSSH per-connection server daemon (139.178.68.195:40198). Sep 12 10:10:08.994163 systemd-logind[1463]: Removed session 5. Sep 12 10:10:09.046406 sshd[1635]: Accepted publickey for core from 139.178.68.195 port 40198 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I Sep 12 10:10:09.048457 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:09.056270 systemd-logind[1463]: New session 6 of user core. Sep 12 10:10:09.063546 systemd[1]: Started session-6.scope - Session 6 of User core. 
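The sudo entry at the end of this stretch records SELinux being switched to enforcing for the running system; the interactive equivalents, for reference:

    getenforce         # prints Enforcing, Permissive, or Disabled
    sudo setenforce 1  # runtime switch to enforcing (not persisted across reboots)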
Sep 12 10:10:09.126388 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 10:10:09.126760 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:10:09.132002 sudo[1640]: pam_unix(sudo:session): session closed for user root Sep 12 10:10:09.139501 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 10:10:09.139823 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:10:09.160518 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 10:10:09.205543 augenrules[1662]: No rules Sep 12 10:10:09.205799 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 10:10:09.206061 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 10:10:09.207436 sudo[1639]: pam_unix(sudo:session): session closed for user root Sep 12 10:10:09.212200 sshd[1638]: Connection closed by 139.178.68.195 port 40198 Sep 12 10:10:09.211729 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Sep 12 10:10:09.222694 systemd[1]: sshd@5-64.23.164.42:22-139.178.68.195:40198.service: Deactivated successfully. Sep 12 10:10:09.226611 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 10:10:09.228927 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Sep 12 10:10:09.235792 systemd[1]: Started sshd@6-64.23.164.42:22-139.178.68.195:40206.service - OpenSSH per-connection server daemon (139.178.68.195:40206). Sep 12 10:10:09.236959 systemd-logind[1463]: Removed session 6. Sep 12 10:10:09.285634 sshd[1670]: Accepted publickey for core from 139.178.68.195 port 40206 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I Sep 12 10:10:09.288202 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:09.296312 systemd-logind[1463]: New session 7 of user core. Sep 12 10:10:09.302290 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 10:10:09.364558 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 10:10:09.365235 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:10:09.842887 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 10:10:09.846373 (dockerd)[1690]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 10:10:10.395697 dockerd[1690]: time="2025-09-12T10:10:10.395580355Z" level=info msg="Starting up" Sep 12 10:10:10.558826 systemd[1]: var-lib-docker-metacopy\x2dcheck1493454306-merged.mount: Deactivated successfully. Sep 12 10:10:10.581471 dockerd[1690]: time="2025-09-12T10:10:10.581399938Z" level=info msg="Loading containers: start." Sep 12 10:10:10.807340 kernel: Initializing XFRM netlink socket Sep 12 10:10:10.936882 systemd-networkd[1380]: docker0: Link UP Sep 12 10:10:10.982617 dockerd[1690]: time="2025-09-12T10:10:10.982439742Z" level=info msg="Loading containers: done." 
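With 80-selinux.rules and 99-default.rules removed, augenrules assembles an empty set ("No rules") when audit-rules is restarted. Had rules been wanted instead, they would go back under /etc/audit/rules.d/; a small hypothetical example, not taken from this system:

    # /etc/audit/rules.d/10-example.rules (hypothetical)
    -D                                 # flush any loaded rules first
    -w /etc/passwd -p wa -k identity   # audit writes and attribute changes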
Sep 12 10:10:11.003728 dockerd[1690]: time="2025-09-12T10:10:11.003653902Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 10:10:11.003992 dockerd[1690]: time="2025-09-12T10:10:11.003811189Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 12 10:10:11.004115 dockerd[1690]: time="2025-09-12T10:10:11.004011671Z" level=info msg="Daemon has completed initialization" Sep 12 10:10:11.052621 dockerd[1690]: time="2025-09-12T10:10:11.051904819Z" level=info msg="API listen on /run/docker.sock" Sep 12 10:10:11.052228 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 10:10:11.969785 containerd[1481]: time="2025-09-12T10:10:11.969288456Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 10:10:12.561253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608885790.mount: Deactivated successfully. Sep 12 10:10:13.880636 containerd[1481]: time="2025-09-12T10:10:13.880560861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:13.883481 containerd[1481]: time="2025-09-12T10:10:13.883400146Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 10:10:13.886057 containerd[1481]: time="2025-09-12T10:10:13.885993275Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:13.889864 containerd[1481]: time="2025-09-12T10:10:13.889805053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:13.892070 containerd[1481]: time="2025-09-12T10:10:13.892004917Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.922663225s" Sep 12 10:10:13.892323 containerd[1481]: time="2025-09-12T10:10:13.892293606Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 10:10:13.893590 containerd[1481]: time="2025-09-12T10:10:13.893544813Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 10:10:15.538305 containerd[1481]: time="2025-09-12T10:10:15.538235529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:15.539954 containerd[1481]: time="2025-09-12T10:10:15.539571759Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 10:10:15.540660 containerd[1481]: time="2025-09-12T10:10:15.540613877Z" level=info msg="ImageCreate event 
name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:15.545674 containerd[1481]: time="2025-09-12T10:10:15.545612639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:15.547574 containerd[1481]: time="2025-09-12T10:10:15.547501272Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.653742939s" Sep 12 10:10:15.547574 containerd[1481]: time="2025-09-12T10:10:15.547561123Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 10:10:15.548596 containerd[1481]: time="2025-09-12T10:10:15.548382746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 10:10:16.775987 containerd[1481]: time="2025-09-12T10:10:16.775711216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:16.777055 containerd[1481]: time="2025-09-12T10:10:16.776950154Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 10:10:16.778982 containerd[1481]: time="2025-09-12T10:10:16.777662082Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:16.781581 containerd[1481]: time="2025-09-12T10:10:16.781495778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:16.783558 containerd[1481]: time="2025-09-12T10:10:16.783224236Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.234802597s" Sep 12 10:10:16.783558 containerd[1481]: time="2025-09-12T10:10:16.783290282Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 10:10:16.784197 containerd[1481]: time="2025-09-12T10:10:16.784172487Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 10:10:16.867137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 10:10:16.873350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:10:17.016729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 10:10:17.036204 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:10:17.109612 kubelet[1959]: E0912 10:10:17.109477 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:10:17.115773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:10:17.116237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:10:17.117081 systemd[1]: kubelet.service: Consumed 206ms CPU time, 108.3M memory peak. Sep 12 10:10:17.941152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871044475.mount: Deactivated successfully. Sep 12 10:10:18.573672 containerd[1481]: time="2025-09-12T10:10:18.573586974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:18.575500 containerd[1481]: time="2025-09-12T10:10:18.575434355Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 10:10:18.576468 containerd[1481]: time="2025-09-12T10:10:18.576407232Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:18.579155 containerd[1481]: time="2025-09-12T10:10:18.579065138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:18.580998 containerd[1481]: time="2025-09-12T10:10:18.580907751Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.796608336s" Sep 12 10:10:18.580998 containerd[1481]: time="2025-09-12T10:10:18.580994650Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 10:10:18.583452 containerd[1481]: time="2025-09-12T10:10:18.583169953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 10:10:18.586057 systemd-resolved[1333]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Sep 12 10:10:19.161422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360706011.mount: Deactivated successfully. 
Sep 12 10:10:20.190994 containerd[1481]: time="2025-09-12T10:10:20.190805755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:20.192691 containerd[1481]: time="2025-09-12T10:10:20.192589463Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 10:10:20.194971 containerd[1481]: time="2025-09-12T10:10:20.193357659Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:20.197996 containerd[1481]: time="2025-09-12T10:10:20.197919492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:20.200139 containerd[1481]: time="2025-09-12T10:10:20.200071628Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.616840049s" Sep 12 10:10:20.200385 containerd[1481]: time="2025-09-12T10:10:20.200355059Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 10:10:20.201414 containerd[1481]: time="2025-09-12T10:10:20.201354114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 10:10:20.684380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982879041.mount: Deactivated successfully. 
Sep 12 10:10:20.691875 containerd[1481]: time="2025-09-12T10:10:20.690661250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:20.692561 containerd[1481]: time="2025-09-12T10:10:20.692501636Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 10:10:20.693464 containerd[1481]: time="2025-09-12T10:10:20.693425859Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:20.696509 containerd[1481]: time="2025-09-12T10:10:20.696458665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:20.697571 containerd[1481]: time="2025-09-12T10:10:20.697536419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 496.141669ms" Sep 12 10:10:20.697709 containerd[1481]: time="2025-09-12T10:10:20.697695176Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 10:10:20.698834 containerd[1481]: time="2025-09-12T10:10:20.698800386Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 10:10:21.269882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211297787.mount: Deactivated successfully. Sep 12 10:10:21.657165 systemd-resolved[1333]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
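The "degraded feature set" notices here and above are systemd-resolved probing the upstream servers (67.207.67.2 and .3), failing to get reliable EDNS0 answers, and stepping each server down to plain UDP; it retries the higher feature level later on its own. The per-link DNS state can be inspected at runtime with:

    resolvectl status   # per-link DNS server configuration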
Sep 12 10:10:23.139671 containerd[1481]: time="2025-09-12T10:10:23.139592413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:23.141046 containerd[1481]: time="2025-09-12T10:10:23.140917135Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 10:10:23.142213 containerd[1481]: time="2025-09-12T10:10:23.142122008Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:23.152631 containerd[1481]: time="2025-09-12T10:10:23.151827963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:23.155860 containerd[1481]: time="2025-09-12T10:10:23.155785646Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.456935314s" Sep 12 10:10:23.156159 containerd[1481]: time="2025-09-12T10:10:23.156128248Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 10:10:26.558693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:10:26.559074 systemd[1]: kubelet.service: Consumed 206ms CPU time, 108.3M memory peak. Sep 12 10:10:26.566480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:10:26.617005 systemd[1]: Reload requested from client PID 2111 ('systemctl') (unit session-7.scope)... Sep 12 10:10:26.617025 systemd[1]: Reloading... Sep 12 10:10:26.795980 zram_generator::config[2156]: No configuration found. Sep 12 10:10:26.940163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:10:27.076619 systemd[1]: Reloading finished in 459 ms. Sep 12 10:10:27.137928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:10:27.145648 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:10:27.149180 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 10:10:27.149701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:10:27.149827 systemd[1]: kubelet.service: Consumed 149ms CPU time, 98M memory peak. Sep 12 10:10:27.155649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:10:27.340271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:10:27.349985 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:10:27.422971 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
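The docker.socket note in this stretch is systemd transparently rewriting a legacy /var/run path; the permanent fix it suggests is a unit edit, for example a drop-in:

    # /etc/systemd/system/docker.socket.d/override.conf
    # (created via: systemctl edit docker.socket)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

Clearing ListenStream= first is the usual systemd idiom for replacing, rather than appending to, a socket list.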
Sep 12 10:10:27.422971 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 10:10:27.422971 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:10:27.422971 kubelet[2211]: I0912 10:10:27.422418 2211 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:10:27.998042 kubelet[2211]: I0912 10:10:27.997992 2211 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 10:10:27.998971 kubelet[2211]: I0912 10:10:27.998267 2211 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:10:27.998971 kubelet[2211]: I0912 10:10:27.998557 2211 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 10:10:28.026123 kubelet[2211]: I0912 10:10:28.026072 2211 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:10:28.026737 kubelet[2211]: E0912 10:10:28.026667 2211 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.164.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.164.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:10:28.040486 kubelet[2211]: E0912 10:10:28.040419 2211 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:10:28.040486 kubelet[2211]: I0912 10:10:28.040459 2211 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:10:28.045714 kubelet[2211]: I0912 10:10:28.045658 2211 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 10:10:28.046465 kubelet[2211]: I0912 10:10:28.046403 2211 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 10:10:28.046650 kubelet[2211]: I0912 10:10:28.046602 2211 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:10:28.046853 kubelet[2211]: I0912 10:10:28.046646 2211 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-d7464eacd8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:10:28.046853 kubelet[2211]: I0912 10:10:28.046843 2211 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 10:10:28.046853 kubelet[2211]: I0912 10:10:28.046853 2211 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 10:10:28.047113 kubelet[2211]: I0912 10:10:28.047015 2211 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:10:28.049914 kubelet[2211]: I0912 10:10:28.049844 2211 kubelet.go:408] "Attempting to sync node with API server" Sep 12 10:10:28.049914 kubelet[2211]: I0912 10:10:28.049890 2211 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:10:28.049914 kubelet[2211]: I0912 10:10:28.049943 2211 kubelet.go:314] "Adding apiserver pod source" Sep 12 10:10:28.050130 kubelet[2211]: I0912 10:10:28.049967 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:10:28.054116 kubelet[2211]: W0912 10:10:28.053779 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.164.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-d7464eacd8&limit=500&resourceVersion=0": dial tcp 64.23.164.42:6443: connect: connection refused Sep 12 10:10:28.054116 kubelet[2211]: E0912 10:10:28.053875 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://64.23.164.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-d7464eacd8&limit=500&resourceVersion=0\": dial tcp 64.23.164.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:10:28.056663 kubelet[2211]: W0912 10:10:28.056343 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.164.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.164.42:6443: connect: connection refused Sep 12 10:10:28.056663 kubelet[2211]: E0912 10:10:28.056416 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.164.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.164.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:10:28.056663 kubelet[2211]: I0912 10:10:28.056515 2211 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:10:28.060264 kubelet[2211]: I0912 10:10:28.060185 2211 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 10:10:28.061586 kubelet[2211]: W0912 10:10:28.061249 2211 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 10:10:28.063232 kubelet[2211]: I0912 10:10:28.063194 2211 server.go:1274] "Started kubelet" Sep 12 10:10:28.063987 kubelet[2211]: I0912 10:10:28.063712 2211 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:10:28.065896 kubelet[2211]: I0912 10:10:28.065869 2211 server.go:449] "Adding debug handlers to kubelet server" Sep 12 10:10:28.069576 kubelet[2211]: I0912 10:10:28.068827 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:10:28.069576 kubelet[2211]: I0912 10:10:28.069189 2211 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:10:28.070750 kubelet[2211]: E0912 10:10:28.069466 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.164.42:6443/api/v1/namespaces/default/events\": dial tcp 64.23.164.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-n-d7464eacd8.186481370f5fd3b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-n-d7464eacd8,UID:ci-4230.2.2-n-d7464eacd8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-n-d7464eacd8,},FirstTimestamp:2025-09-12 10:10:28.062458809 +0000 UTC m=+0.704959119,LastTimestamp:2025-09-12 10:10:28.062458809 +0000 UTC m=+0.704959119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-n-d7464eacd8,}" Sep 12 10:10:28.072807 kubelet[2211]: I0912 10:10:28.071856 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 10:10:28.072807 kubelet[2211]: I0912 10:10:28.072064 2211 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:10:28.083446 kubelet[2211]: I0912 10:10:28.083391 2211 
volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 10:10:28.084245 kubelet[2211]: E0912 10:10:28.084211 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-d7464eacd8\" not found" Sep 12 10:10:28.088134 kubelet[2211]: E0912 10:10:28.088036 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.164.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-d7464eacd8?timeout=10s\": dial tcp 64.23.164.42:6443: connect: connection refused" interval="200ms" Sep 12 10:10:28.089359 kubelet[2211]: I0912 10:10:28.088459 2211 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:10:28.089359 kubelet[2211]: I0912 10:10:28.088532 2211 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 10:10:28.089359 kubelet[2211]: W0912 10:10:28.089037 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.164.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.164.42:6443: connect: connection refused Sep 12 10:10:28.089359 kubelet[2211]: E0912 10:10:28.089126 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.164.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.164.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:10:28.089359 kubelet[2211]: I0912 10:10:28.089334 2211 factory.go:221] Registration of the containerd container factory successfully Sep 12 10:10:28.089359 kubelet[2211]: I0912 10:10:28.089365 2211 factory.go:221] Registration of the systemd container factory successfully Sep 12 10:10:28.089627 kubelet[2211]: I0912 10:10:28.089470 2211 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:10:28.114760 kubelet[2211]: E0912 10:10:28.114726 2211 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 10:10:28.123200 kubelet[2211]: I0912 10:10:28.123147 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 10:10:28.127659 kubelet[2211]: I0912 10:10:28.127266 2211 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 10:10:28.127659 kubelet[2211]: I0912 10:10:28.127288 2211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 10:10:28.127659 kubelet[2211]: I0912 10:10:28.127327 2211 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:10:28.129472 kubelet[2211]: I0912 10:10:28.129443 2211 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 10:10:28.129604 kubelet[2211]: I0912 10:10:28.129591 2211 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 10:10:28.129681 kubelet[2211]: I0912 10:10:28.129673 2211 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 10:10:28.129793 kubelet[2211]: E0912 10:10:28.129777 2211 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 10:10:28.131544 kubelet[2211]: I0912 10:10:28.131515 2211 policy_none.go:49] "None policy: Start" Sep 12 10:10:28.132919 kubelet[2211]: I0912 10:10:28.132868 2211 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 10:10:28.132919 kubelet[2211]: I0912 10:10:28.132922 2211 state_mem.go:35] "Initializing new in-memory state store" Sep 12 10:10:28.134122 kubelet[2211]: W0912 10:10:28.134049 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.164.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.164.42:6443: connect: connection refused Sep 12 10:10:28.134733 kubelet[2211]: E0912 10:10:28.134621 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.164.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.164.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:10:28.140254 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 10:10:28.149545 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 10:10:28.154776 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 10:10:28.163645 kubelet[2211]: I0912 10:10:28.163592 2211 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 10:10:28.163924 kubelet[2211]: I0912 10:10:28.163889 2211 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 10:10:28.164058 kubelet[2211]: I0912 10:10:28.163908 2211 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 10:10:28.164732 kubelet[2211]: I0912 10:10:28.164703 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 10:10:28.168279 kubelet[2211]: E0912 10:10:28.168196 2211 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-n-d7464eacd8\" not found" Sep 12 10:10:28.242396 systemd[1]: Created slice kubepods-burstable-podf3a11bc322846b0b628ec2cb592b5560.slice - libcontainer container kubepods-burstable-podf3a11bc322846b0b628ec2cb592b5560.slice. 
Sep 12 10:10:28.266499 kubelet[2211]: I0912 10:10:28.265699 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.266499 kubelet[2211]: E0912 10:10:28.266124 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.164.42:6443/api/v1/nodes\": dial tcp 64.23.164.42:6443: connect: connection refused" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.270365 systemd[1]: Created slice kubepods-burstable-pod671c8e26bdc9cc4337eff196d2b13ae1.slice - libcontainer container kubepods-burstable-pod671c8e26bdc9cc4337eff196d2b13ae1.slice. Sep 12 10:10:28.278499 systemd[1]: Created slice kubepods-burstable-pod5dde7c6c6a75c4d8e26a2257ac0ccbd9.slice - libcontainer container kubepods-burstable-pod5dde7c6c6a75c4d8e26a2257ac0ccbd9.slice. Sep 12 10:10:28.289182 kubelet[2211]: E0912 10:10:28.289096 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.164.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-d7464eacd8?timeout=10s\": dial tcp 64.23.164.42:6443: connect: connection refused" interval="400ms" Sep 12 10:10:28.389280 kubelet[2211]: I0912 10:10:28.389192 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dde7c6c6a75c4d8e26a2257ac0ccbd9-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-d7464eacd8\" (UID: \"5dde7c6c6a75c4d8e26a2257ac0ccbd9\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.389280 kubelet[2211]: I0912 10:10:28.389286 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3a11bc322846b0b628ec2cb592b5560-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-d7464eacd8\" (UID: \"f3a11bc322846b0b628ec2cb592b5560\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.389530 kubelet[2211]: I0912 10:10:28.389316 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3a11bc322846b0b628ec2cb592b5560-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-d7464eacd8\" (UID: \"f3a11bc322846b0b628ec2cb592b5560\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.389530 kubelet[2211]: I0912 10:10:28.389340 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3a11bc322846b0b628ec2cb592b5560-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-d7464eacd8\" (UID: \"f3a11bc322846b0b628ec2cb592b5560\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.389530 kubelet[2211]: I0912 10:10:28.389365 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/671c8e26bdc9cc4337eff196d2b13ae1-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-d7464eacd8\" (UID: \"671c8e26bdc9cc4337eff196d2b13ae1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.389530 kubelet[2211]: I0912 10:10:28.389389 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/671c8e26bdc9cc4337eff196d2b13ae1-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4230.2.2-n-d7464eacd8\" (UID: \"671c8e26bdc9cc4337eff196d2b13ae1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.389530 kubelet[2211]: I0912 10:10:28.389412 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/671c8e26bdc9cc4337eff196d2b13ae1-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-d7464eacd8\" (UID: \"671c8e26bdc9cc4337eff196d2b13ae1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.389756 kubelet[2211]: I0912 10:10:28.389435 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/671c8e26bdc9cc4337eff196d2b13ae1-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-d7464eacd8\" (UID: \"671c8e26bdc9cc4337eff196d2b13ae1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.389756 kubelet[2211]: I0912 10:10:28.389463 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/671c8e26bdc9cc4337eff196d2b13ae1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-d7464eacd8\" (UID: \"671c8e26bdc9cc4337eff196d2b13ae1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.468017 kubelet[2211]: I0912 10:10:28.467976 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.468569 kubelet[2211]: E0912 10:10:28.468530 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.164.42:6443/api/v1/nodes\": dial tcp 64.23.164.42:6443: connect: connection refused" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.563816 kubelet[2211]: E0912 10:10:28.563045 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:28.565605 containerd[1481]: time="2025-09-12T10:10:28.565551145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-d7464eacd8,Uid:f3a11bc322846b0b628ec2cb592b5560,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:28.568744 systemd-resolved[1333]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Sep 12 10:10:28.575661 kubelet[2211]: E0912 10:10:28.575194 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:28.576412 containerd[1481]: time="2025-09-12T10:10:28.576112445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-d7464eacd8,Uid:671c8e26bdc9cc4337eff196d2b13ae1,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:28.582538 kubelet[2211]: E0912 10:10:28.582219 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:28.583217 containerd[1481]: time="2025-09-12T10:10:28.582885159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-d7464eacd8,Uid:5dde7c6c6a75c4d8e26a2257ac0ccbd9,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:28.690255 kubelet[2211]: E0912 10:10:28.690144 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.164.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-d7464eacd8?timeout=10s\": dial tcp 64.23.164.42:6443: connect: connection refused" interval="800ms" Sep 12 10:10:28.871587 kubelet[2211]: I0912 10:10:28.871008 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:28.871587 kubelet[2211]: E0912 10:10:28.871517 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.164.42:6443/api/v1/nodes\": dial tcp 64.23.164.42:6443: connect: connection refused" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:29.064910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865310670.mount: Deactivated successfully. 
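
The lease controller's retry interval doubles on each failure while the API server at 64.23.164.42:6443 is still refusing connections: 200ms at 10:10:28.088, 400ms at 10:10:28.289, 800ms just above, and 1.6s later on. A generic sketch of that exponential backoff; the cap and the helper are our assumptions for illustration, not kubelet internals:

    package main

    import (
        "fmt"
        "time"
    )

    // tryEnsureLease is a hypothetical stand-in for the call that keeps
    // failing with "connection refused" in the log above.
    func tryEnsureLease() error { return fmt.Errorf("connect: connection refused") }

    func main() {
        interval := 200 * time.Millisecond
        maxInterval := 7 * time.Second // assumed cap, for illustration

        for attempt := 1; attempt <= 5; attempt++ {
            if err := tryEnsureLease(); err == nil {
                return
            }
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
            time.Sleep(interval)
            // Double the interval after each failure, up to the cap.
            if interval *= 2; interval > maxInterval {
                interval = maxInterval
            }
        }
    }
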
Sep 12 10:10:29.069087 containerd[1481]: time="2025-09-12T10:10:29.069023787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:10:29.070256 containerd[1481]: time="2025-09-12T10:10:29.070177574Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:10:29.071635 containerd[1481]: time="2025-09-12T10:10:29.071583689Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 10:10:29.072131 containerd[1481]: time="2025-09-12T10:10:29.072045600Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:10:29.073500 containerd[1481]: time="2025-09-12T10:10:29.073330278Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:10:29.074392 containerd[1481]: time="2025-09-12T10:10:29.074349517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:10:29.077493 containerd[1481]: time="2025-09-12T10:10:29.076884404Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:10:29.079477 containerd[1481]: time="2025-09-12T10:10:29.079289732Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.580751ms" Sep 12 10:10:29.080547 containerd[1481]: time="2025-09-12T10:10:29.080506394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:10:29.082804 containerd[1481]: time="2025-09-12T10:10:29.082745112Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.718458ms" Sep 12 10:10:29.085187 containerd[1481]: time="2025-09-12T10:10:29.085135960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 508.931031ms" Sep 12 10:10:29.157110 kubelet[2211]: W0912 10:10:29.156234 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.164.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-d7464eacd8&limit=500&resourceVersion=0": dial tcp 64.23.164.42:6443: connect: connection refused Sep 
12 10:10:29.157110 kubelet[2211]: E0912 10:10:29.156351 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.164.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-d7464eacd8&limit=500&resourceVersion=0\": dial tcp 64.23.164.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:10:29.253030 containerd[1481]: time="2025-09-12T10:10:29.251805145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:29.253030 containerd[1481]: time="2025-09-12T10:10:29.251918844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:29.253030 containerd[1481]: time="2025-09-12T10:10:29.251969128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:29.253030 containerd[1481]: time="2025-09-12T10:10:29.252110002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:29.284080 containerd[1481]: time="2025-09-12T10:10:29.281263555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:29.284080 containerd[1481]: time="2025-09-12T10:10:29.281321978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:29.284080 containerd[1481]: time="2025-09-12T10:10:29.281337068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:29.284080 containerd[1481]: time="2025-09-12T10:10:29.281416538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:29.284080 containerd[1481]: time="2025-09-12T10:10:29.273081414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:29.284080 containerd[1481]: time="2025-09-12T10:10:29.273146796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:29.284080 containerd[1481]: time="2025-09-12T10:10:29.273159433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:29.284080 containerd[1481]: time="2025-09-12T10:10:29.273255027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:29.311451 systemd[1]: Started cri-containerd-f619eff12cbb5576da126e6dd557cb48f0e6c0ac1cf07e42c96662d2e094c8f1.scope - libcontainer container f619eff12cbb5576da126e6dd557cb48f0e6c0ac1cf07e42c96662d2e094c8f1. Sep 12 10:10:29.319525 systemd[1]: Started cri-containerd-dd16dfca555cba1c708742106c3804f11684c5e238e80dfc454f378f3f8109e7.scope - libcontainer container dd16dfca555cba1c708742106c3804f11684c5e238e80dfc454f378f3f8109e7. 
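
The reflector failures throughout this stretch are ordinary client-go LIST calls; the URL in each error shows the exact query kubelet issues, including the field selector that restricts the watch to its own Node object. A hedged client-go equivalent of the logged request (the kubeconfig path is an assumption; kubelet itself authenticates with its rotated client certificate):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location for a standalone client.
        config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors GET /api/v1/nodes?fieldSelector=metadata.name%3D...&limit=500
        // from the reflector errors; it fails with "connection refused" until
        // the kube-apiserver static pod is actually serving.
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
            FieldSelector: "metadata.name=ci-4230.2.2-n-d7464eacd8",
            Limit:         500,
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
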
Sep 12 10:10:29.344220 kubelet[2211]: W0912 10:10:29.344158 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.164.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.164.42:6443: connect: connection refused Sep 12 10:10:29.346022 kubelet[2211]: E0912 10:10:29.345039 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.164.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.164.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:10:29.352359 systemd[1]: Started cri-containerd-462241f10bfd7c8fbc948b7c5e152a4a99afd5437cd8c7316f72467838a626e2.scope - libcontainer container 462241f10bfd7c8fbc948b7c5e152a4a99afd5437cd8c7316f72467838a626e2. Sep 12 10:10:29.388473 kubelet[2211]: W0912 10:10:29.388273 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.164.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.164.42:6443: connect: connection refused Sep 12 10:10:29.388650 kubelet[2211]: E0912 10:10:29.388505 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.164.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.164.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:10:29.431602 containerd[1481]: time="2025-09-12T10:10:29.430635655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-d7464eacd8,Uid:671c8e26bdc9cc4337eff196d2b13ae1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f619eff12cbb5576da126e6dd557cb48f0e6c0ac1cf07e42c96662d2e094c8f1\"" Sep 12 10:10:29.433808 kubelet[2211]: E0912 10:10:29.433631 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:29.438679 containerd[1481]: time="2025-09-12T10:10:29.438618584Z" level=info msg="CreateContainer within sandbox \"f619eff12cbb5576da126e6dd557cb48f0e6c0ac1cf07e42c96662d2e094c8f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 10:10:29.444699 containerd[1481]: time="2025-09-12T10:10:29.444655568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-d7464eacd8,Uid:5dde7c6c6a75c4d8e26a2257ac0ccbd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd16dfca555cba1c708742106c3804f11684c5e238e80dfc454f378f3f8109e7\"" Sep 12 10:10:29.445883 kubelet[2211]: E0912 10:10:29.445838 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:29.452732 containerd[1481]: time="2025-09-12T10:10:29.452678291Z" level=info msg="CreateContainer within sandbox \"dd16dfca555cba1c708742106c3804f11684c5e238e80dfc454f378f3f8109e7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 10:10:29.458694 containerd[1481]: time="2025-09-12T10:10:29.458639095Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-d7464eacd8,Uid:f3a11bc322846b0b628ec2cb592b5560,Namespace:kube-system,Attempt:0,} returns sandbox id \"462241f10bfd7c8fbc948b7c5e152a4a99afd5437cd8c7316f72467838a626e2\"" Sep 12 10:10:29.461496 kubelet[2211]: E0912 10:10:29.461451 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:29.465430 containerd[1481]: time="2025-09-12T10:10:29.465383259Z" level=info msg="CreateContainer within sandbox \"462241f10bfd7c8fbc948b7c5e152a4a99afd5437cd8c7316f72467838a626e2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 10:10:29.469462 containerd[1481]: time="2025-09-12T10:10:29.469214897Z" level=info msg="CreateContainer within sandbox \"dd16dfca555cba1c708742106c3804f11684c5e238e80dfc454f378f3f8109e7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d89938fbfc0e379948a275f70f9af958bde8a01ca26747766f60b6b83f0273fb\"" Sep 12 10:10:29.473747 containerd[1481]: time="2025-09-12T10:10:29.473689582Z" level=info msg="StartContainer for \"d89938fbfc0e379948a275f70f9af958bde8a01ca26747766f60b6b83f0273fb\"" Sep 12 10:10:29.475756 containerd[1481]: time="2025-09-12T10:10:29.475610231Z" level=info msg="CreateContainer within sandbox \"f619eff12cbb5576da126e6dd557cb48f0e6c0ac1cf07e42c96662d2e094c8f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cbcbd65e3de152ffddcc5d24274087d5ebb35790cf9828e428dbde34f3083862\"" Sep 12 10:10:29.476963 containerd[1481]: time="2025-09-12T10:10:29.476276180Z" level=info msg="StartContainer for \"cbcbd65e3de152ffddcc5d24274087d5ebb35790cf9828e428dbde34f3083862\"" Sep 12 10:10:29.485574 containerd[1481]: time="2025-09-12T10:10:29.485509771Z" level=info msg="CreateContainer within sandbox \"462241f10bfd7c8fbc948b7c5e152a4a99afd5437cd8c7316f72467838a626e2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a3bbc93f942973e6290a12a98fed3fdd8d624a35b3e671096f3051dd844d8e92\"" Sep 12 10:10:29.486753 containerd[1481]: time="2025-09-12T10:10:29.486685753Z" level=info msg="StartContainer for \"a3bbc93f942973e6290a12a98fed3fdd8d624a35b3e671096f3051dd844d8e92\"" Sep 12 10:10:29.492000 kubelet[2211]: E0912 10:10:29.491905 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.164.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-d7464eacd8?timeout=10s\": dial tcp 64.23.164.42:6443: connect: connection refused" interval="1.6s" Sep 12 10:10:29.538562 systemd[1]: Started cri-containerd-cbcbd65e3de152ffddcc5d24274087d5ebb35790cf9828e428dbde34f3083862.scope - libcontainer container cbcbd65e3de152ffddcc5d24274087d5ebb35790cf9828e428dbde34f3083862. Sep 12 10:10:29.540834 systemd[1]: Started cri-containerd-d89938fbfc0e379948a275f70f9af958bde8a01ca26747766f60b6b83f0273fb.scope - libcontainer container d89938fbfc0e379948a275f70f9af958bde8a01ca26747766f60b6b83f0273fb. Sep 12 10:10:29.550262 systemd[1]: Started cri-containerd-a3bbc93f942973e6290a12a98fed3fdd8d624a35b3e671096f3051dd844d8e92.scope - libcontainer container a3bbc93f942973e6290a12a98fed3fdd8d624a35b3e671096f3051dd844d8e92. 
Sep 12 10:10:29.646212 containerd[1481]: time="2025-09-12T10:10:29.646044967Z" level=info msg="StartContainer for \"d89938fbfc0e379948a275f70f9af958bde8a01ca26747766f60b6b83f0273fb\" returns successfully" Sep 12 10:10:29.667307 containerd[1481]: time="2025-09-12T10:10:29.667109486Z" level=info msg="StartContainer for \"cbcbd65e3de152ffddcc5d24274087d5ebb35790cf9828e428dbde34f3083862\" returns successfully" Sep 12 10:10:29.675008 kubelet[2211]: I0912 10:10:29.674467 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:29.676168 kubelet[2211]: E0912 10:10:29.675106 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.164.42:6443/api/v1/nodes\": dial tcp 64.23.164.42:6443: connect: connection refused" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:29.682885 containerd[1481]: time="2025-09-12T10:10:29.682089900Z" level=info msg="StartContainer for \"a3bbc93f942973e6290a12a98fed3fdd8d624a35b3e671096f3051dd844d8e92\" returns successfully" Sep 12 10:10:29.704841 kubelet[2211]: W0912 10:10:29.704777 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.164.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.164.42:6443: connect: connection refused Sep 12 10:10:29.705172 kubelet[2211]: E0912 10:10:29.704857 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.164.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.164.42:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:10:30.143341 kubelet[2211]: E0912 10:10:30.141427 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:30.150207 kubelet[2211]: E0912 10:10:30.149890 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:30.158630 kubelet[2211]: E0912 10:10:30.158584 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:31.161686 kubelet[2211]: E0912 10:10:31.161640 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:31.162617 kubelet[2211]: E0912 10:10:31.162090 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:31.277241 kubelet[2211]: I0912 10:10:31.277185 2211 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:32.044844 kubelet[2211]: E0912 10:10:32.044793 2211 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-n-d7464eacd8\" not found" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:32.137975 kubelet[2211]: I0912 10:10:32.137412 2211 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.2-n-d7464eacd8" Sep 12 
10:10:33.057803 kubelet[2211]: I0912 10:10:33.057734 2211 apiserver.go:52] "Watching apiserver" Sep 12 10:10:33.089271 kubelet[2211]: I0912 10:10:33.089191 2211 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 10:10:33.269436 kubelet[2211]: W0912 10:10:33.269128 2211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 10:10:33.269436 kubelet[2211]: E0912 10:10:33.269445 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:34.170218 kubelet[2211]: E0912 10:10:34.170153 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:34.302998 systemd[1]: Reload requested from client PID 2494 ('systemctl') (unit session-7.scope)... Sep 12 10:10:34.303236 systemd[1]: Reloading... Sep 12 10:10:34.446062 zram_generator::config[2547]: No configuration found. Sep 12 10:10:34.592634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:10:34.730911 systemd[1]: Reloading finished in 426 ms. Sep 12 10:10:34.764371 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:10:34.786847 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 10:10:34.787385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:10:34.787530 systemd[1]: kubelet.service: Consumed 1.207s CPU time, 126.4M memory peak. Sep 12 10:10:34.800343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:10:34.981757 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:10:34.982119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:10:35.094661 kubelet[2589]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:10:35.094661 kubelet[2589]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 10:10:35.094661 kubelet[2589]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
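
The three deprecation warnings ending above all point at the same remedy: carry the values in the file passed to --config instead of on the command line. A hedged sketch of generating such a file from the public kubelet config types (field availability should be checked against v1.31 before relying on this; the endpoint value is an assumption, while the plugin and manifest paths appear earlier in this log):

    package main

    import (
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := kubeletv1beta1.KubeletConfiguration{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "kubelet.config.k8s.io/v1beta1",
                Kind:       "KubeletConfiguration",
            },
            // Replaces --container-runtime-endpoint and --volume-plugin-dir.
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
            VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
            // The static pod path logged at kubelet startup.
            StaticPodPath: "/etc/kubernetes/manifests",
        }
        out, err := yaml.Marshal(&cfg)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }
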
Sep 12 10:10:35.095345 kubelet[2589]: I0912 10:10:35.094725 2589 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:10:35.111172 kubelet[2589]: I0912 10:10:35.110458 2589 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 10:10:35.111172 kubelet[2589]: I0912 10:10:35.110508 2589 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:10:35.111172 kubelet[2589]: I0912 10:10:35.111137 2589 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 10:10:35.115623 kubelet[2589]: I0912 10:10:35.115572 2589 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 10:10:35.123733 kubelet[2589]: I0912 10:10:35.123676 2589 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:10:35.129390 kubelet[2589]: E0912 10:10:35.129330 2589 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:10:35.129390 kubelet[2589]: I0912 10:10:35.129385 2589 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:10:35.135122 kubelet[2589]: I0912 10:10:35.135061 2589 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 10:10:35.135513 kubelet[2589]: I0912 10:10:35.135260 2589 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 10:10:35.135612 kubelet[2589]: I0912 10:10:35.135492 2589 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:10:35.135815 kubelet[2589]: I0912 10:10:35.135539 2589 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230.2.2-n-d7464eacd8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:10:35.135992 kubelet[2589]: I0912 10:10:35.135837 2589 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 10:10:35.135992 kubelet[2589]: I0912 10:10:35.135854 2589 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 10:10:35.135992 kubelet[2589]: I0912 10:10:35.135900 2589 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:10:35.136138 kubelet[2589]: I0912 10:10:35.136123 2589 kubelet.go:408] "Attempting to sync node with API server" Sep 12 10:10:35.136181 kubelet[2589]: I0912 10:10:35.136144 2589 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:10:35.136219 kubelet[2589]: I0912 10:10:35.136186 2589 kubelet.go:314] "Adding apiserver pod source" Sep 12 10:10:35.136219 kubelet[2589]: I0912 10:10:35.136200 2589 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:10:35.142195 kubelet[2589]: I0912 10:10:35.141065 2589 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:10:35.142195 kubelet[2589]: I0912 10:10:35.141794 2589 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 10:10:35.142750 kubelet[2589]: I0912 10:10:35.142523 2589 server.go:1274] "Started kubelet" Sep 12 10:10:35.147128 kubelet[2589]: I0912 10:10:35.147065 2589 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 10:10:35.151179 kubelet[2589]: I0912 10:10:35.151111 2589 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:10:35.156971 kubelet[2589]: I0912 10:10:35.156263 2589 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:10:35.156971 kubelet[2589]: I0912 10:10:35.156662 2589 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:10:35.158114 kubelet[2589]: I0912 10:10:35.158071 2589 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:10:35.170857 kubelet[2589]: I0912 10:10:35.170741 2589 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 10:10:35.172662 kubelet[2589]: E0912 10:10:35.171843 2589 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-d7464eacd8\" not found" Sep 12 10:10:35.176048 kubelet[2589]: I0912 10:10:35.175871 2589 server.go:449] "Adding debug handlers to kubelet server" Sep 12 10:10:35.179474 kubelet[2589]: I0912 10:10:35.179409 2589 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 10:10:35.181349 kubelet[2589]: I0912 10:10:35.181291 2589 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 10:10:35.181349 kubelet[2589]: I0912 10:10:35.181346 2589 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 10:10:35.181680 kubelet[2589]: I0912 10:10:35.181384 2589 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 10:10:35.181680 kubelet[2589]: E0912 10:10:35.181456 2589 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 10:10:35.191334 kubelet[2589]: I0912 10:10:35.191264 2589 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 10:10:35.191905 kubelet[2589]: I0912 10:10:35.191887 2589 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:10:35.198480 kubelet[2589]: I0912 10:10:35.198429 2589 factory.go:221] Registration of the systemd container factory successfully Sep 12 10:10:35.200193 kubelet[2589]: I0912 10:10:35.198596 2589 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:10:35.205448 kubelet[2589]: I0912 10:10:35.203901 2589 factory.go:221] Registration of the containerd container factory successfully Sep 12 10:10:35.267964 kubelet[2589]: I0912 10:10:35.266244 2589 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 10:10:35.268177 kubelet[2589]: I0912 10:10:35.268150 2589 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 10:10:35.268275 kubelet[2589]: I0912 10:10:35.268262 2589 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:10:35.268561 kubelet[2589]: I0912 10:10:35.268532 2589 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 10:10:35.268700 kubelet[2589]: I0912 10:10:35.268668 2589 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 10:10:35.268751 kubelet[2589]: I0912 10:10:35.268744 2589 policy_none.go:49] "None policy: Start" Sep 12 10:10:35.270145 kubelet[2589]: I0912 10:10:35.270112 2589 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 10:10:35.270263 kubelet[2589]: I0912 10:10:35.270156 2589 state_mem.go:35] "Initializing new in-memory state store" Sep 12 10:10:35.270413 kubelet[2589]: I0912 10:10:35.270395 2589 state_mem.go:75] "Updated machine memory state" Sep 12 10:10:35.284053 kubelet[2589]: I0912 10:10:35.280826 2589 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 10:10:35.284053 kubelet[2589]: I0912 10:10:35.281105 2589 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 
10:10:35.284053 kubelet[2589]: I0912 10:10:35.281121 2589 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 10:10:35.284053 kubelet[2589]: I0912 10:10:35.281790 2589 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 10:10:35.306659 kubelet[2589]: W0912 10:10:35.306609 2589 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 10:10:35.308313 kubelet[2589]: W0912 10:10:35.308246 2589 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 10:10:35.311116 kubelet[2589]: W0912 10:10:35.311046 2589 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 10:10:35.313404 kubelet[2589]: E0912 10:10:35.313268 2589 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.2.2-n-d7464eacd8\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.344591 sudo[2622]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 10:10:35.345813 sudo[2622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 10:10:35.394573 kubelet[2589]: I0912 10:10:35.394516 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/671c8e26bdc9cc4337eff196d2b13ae1-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-d7464eacd8\" (UID: \"671c8e26bdc9cc4337eff196d2b13ae1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.394573 kubelet[2589]: I0912 10:10:35.394567 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/671c8e26bdc9cc4337eff196d2b13ae1-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-d7464eacd8\" (UID: \"671c8e26bdc9cc4337eff196d2b13ae1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.394573 kubelet[2589]: I0912 10:10:35.394591 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/671c8e26bdc9cc4337eff196d2b13ae1-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-d7464eacd8\" (UID: \"671c8e26bdc9cc4337eff196d2b13ae1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.394883 kubelet[2589]: I0912 10:10:35.394609 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/671c8e26bdc9cc4337eff196d2b13ae1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-d7464eacd8\" (UID: \"671c8e26bdc9cc4337eff196d2b13ae1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.394883 kubelet[2589]: I0912 10:10:35.394633 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3a11bc322846b0b628ec2cb592b5560-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-d7464eacd8\" (UID: \"f3a11bc322846b0b628ec2cb592b5560\") " 
pod="kube-system/kube-apiserver-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.394883 kubelet[2589]: I0912 10:10:35.394648 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3a11bc322846b0b628ec2cb592b5560-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-d7464eacd8\" (UID: \"f3a11bc322846b0b628ec2cb592b5560\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.394883 kubelet[2589]: I0912 10:10:35.394664 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3a11bc322846b0b628ec2cb592b5560-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-d7464eacd8\" (UID: \"f3a11bc322846b0b628ec2cb592b5560\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.394883 kubelet[2589]: I0912 10:10:35.394681 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/671c8e26bdc9cc4337eff196d2b13ae1-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-d7464eacd8\" (UID: \"671c8e26bdc9cc4337eff196d2b13ae1\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.395589 kubelet[2589]: I0912 10:10:35.394697 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dde7c6c6a75c4d8e26a2257ac0ccbd9-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-d7464eacd8\" (UID: \"5dde7c6c6a75c4d8e26a2257ac0ccbd9\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.397144 kubelet[2589]: I0912 10:10:35.397095 2589 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.418978 kubelet[2589]: I0912 10:10:35.416681 2589 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.418978 kubelet[2589]: I0912 10:10:35.416801 2589 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.2-n-d7464eacd8" Sep 12 10:10:35.609730 kubelet[2589]: E0912 10:10:35.609579 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:35.610474 kubelet[2589]: E0912 10:10:35.609741 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:35.613795 kubelet[2589]: E0912 10:10:35.613682 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:36.038554 sudo[2622]: pam_unix(sudo:session): session closed for user root Sep 12 10:10:36.137581 kubelet[2589]: I0912 10:10:36.137201 2589 apiserver.go:52] "Watching apiserver" Sep 12 10:10:36.192119 kubelet[2589]: I0912 10:10:36.192044 2589 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 10:10:36.241021 kubelet[2589]: E0912 10:10:36.240737 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Sep 12 10:10:36.244217 kubelet[2589]: E0912 10:10:36.243393 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:36.244972 kubelet[2589]: E0912 10:10:36.244694 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:36.312760 kubelet[2589]: I0912 10:10:36.311860 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-n-d7464eacd8" podStartSLOduration=1.311832316 podStartE2EDuration="1.311832316s" podCreationTimestamp="2025-09-12 10:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:10:36.30804846 +0000 UTC m=+1.317303341" watchObservedRunningTime="2025-09-12 10:10:36.311832316 +0000 UTC m=+1.321087196" Sep 12 10:10:36.327244 kubelet[2589]: I0912 10:10:36.327172 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-n-d7464eacd8" podStartSLOduration=3.3270017 podStartE2EDuration="3.3270017s" podCreationTimestamp="2025-09-12 10:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:10:36.326482857 +0000 UTC m=+1.335737736" watchObservedRunningTime="2025-09-12 10:10:36.3270017 +0000 UTC m=+1.336256584" Sep 12 10:10:36.374530 kubelet[2589]: I0912 10:10:36.374297 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-d7464eacd8" podStartSLOduration=1.374270266 podStartE2EDuration="1.374270266s" podCreationTimestamp="2025-09-12 10:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:10:36.349146757 +0000 UTC m=+1.358401637" watchObservedRunningTime="2025-09-12 10:10:36.374270266 +0000 UTC m=+1.383525145" Sep 12 10:10:37.243580 kubelet[2589]: E0912 10:10:37.243002 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:37.825109 sudo[1674]: pam_unix(sudo:session): session closed for user root Sep 12 10:10:37.827775 sshd[1673]: Connection closed by 139.178.68.195 port 40206 Sep 12 10:10:37.829115 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Sep 12 10:10:37.839344 systemd[1]: sshd@6-64.23.164.42:22-139.178.68.195:40206.service: Deactivated successfully. Sep 12 10:10:37.843266 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 10:10:37.843787 systemd[1]: session-7.scope: Consumed 5.868s CPU time, 219.3M memory peak. Sep 12 10:10:37.845882 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Sep 12 10:10:37.847301 systemd-logind[1463]: Removed session 7. 
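[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" errors above come from kubelet's resolv.conf handling: the glibc resolver honors at most three nameserver entries, so when kubelet builds a pod's resolv.conf it keeps only the first three and logs the line it actually applied. The applied line here ("67.207.67.3 67.207.67.2 67.207.67.3") suggests the droplet's /etc/resolv.conf lists more than three entries, with 67.207.67.3 repeated. A minimal sketch of that truncation, assuming a stock resolv.conf layout; this is illustrative only, not kubelet's actual implementation:

    // trimresolv.go: mimics the 3-nameserver cap kubelet enforces
    // (cf. the dns.go:153 "Nameserver limits exceeded" entries above).
    // Illustrative sketch, not kubelet's real code.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // glibc resolver limit (MAXNS)

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	var nameservers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			nameservers = append(nameservers, fields[1])
    		}
    	}
    	if len(nameservers) > maxNameservers {
    		fmt.Printf("Nameserver limits exceeded; applying: %s\n",
    			strings.Join(nameservers[:maxNameservers], " "))
    		nameservers = nameservers[:maxNameservers]
    	}
    	fmt.Println("applied:", nameservers)
    }

Note that nothing deduplicates the list before truncation, which is consistent with the duplicated 67.207.67.3 in the applied line.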
Sep 12 10:10:38.140261 kubelet[2589]: E0912 10:10:38.139544 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:38.756290 kubelet[2589]: I0912 10:10:38.756254 2589 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 10:10:38.756816 containerd[1481]: time="2025-09-12T10:10:38.756661811Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 10:10:38.757156 kubelet[2589]: I0912 10:10:38.757010 2589 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 10:10:38.844647 systemd[1]: Created slice kubepods-besteffort-podd1ef3014_f46f_480d_a04c_5190548cccc9.slice - libcontainer container kubepods-besteffort-podd1ef3014_f46f_480d_a04c_5190548cccc9.slice. Sep 12 10:10:38.868499 systemd[1]: Created slice kubepods-burstable-pode211083a_c916_4471_83c0_5d3ed42c2873.slice - libcontainer container kubepods-burstable-pode211083a_c916_4471_83c0_5d3ed42c2873.slice. Sep 12 10:10:38.916616 kubelet[2589]: I0912 10:10:38.916545 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-host-proc-sys-net\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.916616 kubelet[2589]: I0912 10:10:38.916619 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e211083a-c916-4471-83c0-5d3ed42c2873-hubble-tls\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.916877 kubelet[2589]: I0912 10:10:38.916643 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1ef3014-f46f-480d-a04c-5190548cccc9-lib-modules\") pod \"kube-proxy-ggmlh\" (UID: \"d1ef3014-f46f-480d-a04c-5190548cccc9\") " pod="kube-system/kube-proxy-ggmlh" Sep 12 10:10:38.916877 kubelet[2589]: I0912 10:10:38.916678 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9trll\" (UniqueName: \"kubernetes.io/projected/d1ef3014-f46f-480d-a04c-5190548cccc9-kube-api-access-9trll\") pod \"kube-proxy-ggmlh\" (UID: \"d1ef3014-f46f-480d-a04c-5190548cccc9\") " pod="kube-system/kube-proxy-ggmlh" Sep 12 10:10:38.916877 kubelet[2589]: I0912 10:10:38.916697 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-bpf-maps\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.916877 kubelet[2589]: I0912 10:10:38.916718 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1ef3014-f46f-480d-a04c-5190548cccc9-kube-proxy\") pod \"kube-proxy-ggmlh\" (UID: \"d1ef3014-f46f-480d-a04c-5190548cccc9\") " pod="kube-system/kube-proxy-ggmlh" Sep 12 10:10:38.916877 kubelet[2589]: I0912 10:10:38.916742 2589 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-hostproc\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.916877 kubelet[2589]: I0912 10:10:38.916763 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-host-proc-sys-kernel\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.917184 kubelet[2589]: I0912 10:10:38.916785 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-lib-modules\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.917184 kubelet[2589]: I0912 10:10:38.916809 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cni-path\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.917184 kubelet[2589]: I0912 10:10:38.916830 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-cgroup\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.917184 kubelet[2589]: I0912 10:10:38.916862 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-etc-cni-netd\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.917184 kubelet[2589]: I0912 10:10:38.916877 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e211083a-c916-4471-83c0-5d3ed42c2873-clustermesh-secrets\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.917184 kubelet[2589]: I0912 10:10:38.916894 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-run\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.917460 kubelet[2589]: I0912 10:10:38.916911 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-xtables-lock\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.917460 kubelet[2589]: I0912 10:10:38.916927 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-config-path\") pod 
\"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.917460 kubelet[2589]: I0912 10:10:38.916977 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5z7d\" (UniqueName: \"kubernetes.io/projected/e211083a-c916-4471-83c0-5d3ed42c2873-kube-api-access-p5z7d\") pod \"cilium-qp96l\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") " pod="kube-system/cilium-qp96l" Sep 12 10:10:38.917460 kubelet[2589]: I0912 10:10:38.917007 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1ef3014-f46f-480d-a04c-5190548cccc9-xtables-lock\") pod \"kube-proxy-ggmlh\" (UID: \"d1ef3014-f46f-480d-a04c-5190548cccc9\") " pod="kube-system/kube-proxy-ggmlh" Sep 12 10:10:39.041055 kubelet[2589]: E0912 10:10:39.037620 2589 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 10:10:39.041055 kubelet[2589]: E0912 10:10:39.037676 2589 projected.go:194] Error preparing data for projected volume kube-api-access-9trll for pod kube-system/kube-proxy-ggmlh: configmap "kube-root-ca.crt" not found Sep 12 10:10:39.041055 kubelet[2589]: E0912 10:10:39.037765 2589 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ef3014-f46f-480d-a04c-5190548cccc9-kube-api-access-9trll podName:d1ef3014-f46f-480d-a04c-5190548cccc9 nodeName:}" failed. No retries permitted until 2025-09-12 10:10:39.537740649 +0000 UTC m=+4.546995507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9trll" (UniqueName: "kubernetes.io/projected/d1ef3014-f46f-480d-a04c-5190548cccc9-kube-api-access-9trll") pod "kube-proxy-ggmlh" (UID: "d1ef3014-f46f-480d-a04c-5190548cccc9") : configmap "kube-root-ca.crt" not found Sep 12 10:10:39.043463 kubelet[2589]: E0912 10:10:39.043174 2589 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 10:10:39.043463 kubelet[2589]: E0912 10:10:39.043233 2589 projected.go:194] Error preparing data for projected volume kube-api-access-p5z7d for pod kube-system/cilium-qp96l: configmap "kube-root-ca.crt" not found Sep 12 10:10:39.043463 kubelet[2589]: E0912 10:10:39.043318 2589 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e211083a-c916-4471-83c0-5d3ed42c2873-kube-api-access-p5z7d podName:e211083a-c916-4471-83c0-5d3ed42c2873 nodeName:}" failed. No retries permitted until 2025-09-12 10:10:39.543275704 +0000 UTC m=+4.552530585 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5z7d" (UniqueName: "kubernetes.io/projected/e211083a-c916-4471-83c0-5d3ed42c2873-kube-api-access-p5z7d") pod "cilium-qp96l" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873") : configmap "kube-root-ca.crt" not found Sep 12 10:10:39.757578 kubelet[2589]: E0912 10:10:39.757524 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:39.758783 containerd[1481]: time="2025-09-12T10:10:39.758443867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggmlh,Uid:d1ef3014-f46f-480d-a04c-5190548cccc9,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:39.776439 kubelet[2589]: E0912 10:10:39.776329 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:39.784751 containerd[1481]: time="2025-09-12T10:10:39.784638338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qp96l,Uid:e211083a-c916-4471-83c0-5d3ed42c2873,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:39.839220 containerd[1481]: time="2025-09-12T10:10:39.838469262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:39.842707 containerd[1481]: time="2025-09-12T10:10:39.840905159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:39.847872 containerd[1481]: time="2025-09-12T10:10:39.847320180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:39.847872 containerd[1481]: time="2025-09-12T10:10:39.847649140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:39.853556 systemd[1]: Created slice kubepods-besteffort-podf79ca656_115f_4339_9dba_f6a7e6ae5ae4.slice - libcontainer container kubepods-besteffort-podf79ca656_115f_4339_9dba_f6a7e6ae5ae4.slice. Sep 12 10:10:39.871482 containerd[1481]: time="2025-09-12T10:10:39.871305792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:39.871724 containerd[1481]: time="2025-09-12T10:10:39.871492608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:39.871724 containerd[1481]: time="2025-09-12T10:10:39.871514543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:39.871724 containerd[1481]: time="2025-09-12T10:10:39.871685505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:39.886347 systemd[1]: Started cri-containerd-d1185874bff4ab7ff76f6e6ae885b7fae73931f7eef457540e1390cdde31cf9c.scope - libcontainer container d1185874bff4ab7ff76f6e6ae885b7fae73931f7eef457540e1390cdde31cf9c. Sep 12 10:10:39.918222 systemd[1]: Started cri-containerd-73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a.scope - libcontainer container 73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a. 
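[Editor's note] The MountVolume.SetUp failures above are the normal cold-start race: the kube-root-ca.crt configmap that projected service-account volumes reference is published into each namespace by the controller manager, and on a freshly bootstrapped control plane the first pods can beat it. kubelet then retries the mount with backoff, which the timestamps show directly:

    failure logged:   10:10:39.037765
    retry gated to:   10:10:39.537740   (durationBeforeRetry 500ms)

The 500ms first step is what the log shows; on repeated failures kubelet's exponential backoff would roughly double the delay each time up to a cap (on the order of two minutes in kubelets I have seen, stated here as an assumption, not something this log demonstrates). Both pending volumes (kube-api-access-9trll and kube-api-access-p5z7d) succeed on a later retry, since both pods are sandboxed at 10:10:39 below.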
Sep 12 10:10:39.925628 kubelet[2589]: I0912 10:10:39.925119 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f79ca656-115f-4339-9dba-f6a7e6ae5ae4-cilium-config-path\") pod \"cilium-operator-5d85765b45-qgzs4\" (UID: \"f79ca656-115f-4339-9dba-f6a7e6ae5ae4\") " pod="kube-system/cilium-operator-5d85765b45-qgzs4" Sep 12 10:10:39.925628 kubelet[2589]: I0912 10:10:39.925185 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhlnr\" (UniqueName: \"kubernetes.io/projected/f79ca656-115f-4339-9dba-f6a7e6ae5ae4-kube-api-access-vhlnr\") pod \"cilium-operator-5d85765b45-qgzs4\" (UID: \"f79ca656-115f-4339-9dba-f6a7e6ae5ae4\") " pod="kube-system/cilium-operator-5d85765b45-qgzs4" Sep 12 10:10:39.983909 containerd[1481]: time="2025-09-12T10:10:39.983748102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggmlh,Uid:d1ef3014-f46f-480d-a04c-5190548cccc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1185874bff4ab7ff76f6e6ae885b7fae73931f7eef457540e1390cdde31cf9c\"" Sep 12 10:10:39.990837 kubelet[2589]: E0912 10:10:39.989573 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:40.002471 containerd[1481]: time="2025-09-12T10:10:40.002082314Z" level=info msg="CreateContainer within sandbox \"d1185874bff4ab7ff76f6e6ae885b7fae73931f7eef457540e1390cdde31cf9c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 10:10:40.010212 containerd[1481]: time="2025-09-12T10:10:40.010030661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qp96l,Uid:e211083a-c916-4471-83c0-5d3ed42c2873,Namespace:kube-system,Attempt:0,} returns sandbox id \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\"" Sep 12 10:10:40.012870 kubelet[2589]: E0912 10:10:40.012502 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:40.017965 containerd[1481]: time="2025-09-12T10:10:40.017138570Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 10:10:40.063732 containerd[1481]: time="2025-09-12T10:10:40.063345304Z" level=info msg="CreateContainer within sandbox \"d1185874bff4ab7ff76f6e6ae885b7fae73931f7eef457540e1390cdde31cf9c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb83be3d4cba5e73186d2604366bc8ab89a448fa8c3c9676ae72eac2dc860f09\"" Sep 12 10:10:40.068298 containerd[1481]: time="2025-09-12T10:10:40.068244797Z" level=info msg="StartContainer for \"eb83be3d4cba5e73186d2604366bc8ab89a448fa8c3c9676ae72eac2dc860f09\"" Sep 12 10:10:40.127349 systemd[1]: Started cri-containerd-eb83be3d4cba5e73186d2604366bc8ab89a448fa8c3c9676ae72eac2dc860f09.scope - libcontainer container eb83be3d4cba5e73186d2604366bc8ab89a448fa8c3c9676ae72eac2dc860f09. 
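[Editor's note] The kube-proxy lines above trace the standard CRI call sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox, then StartContainer. This is the same gRPC surface containerd serves on its socket, so the flow can be driven directly. A hedged sketch using the CRI v1 API; the socket path, metadata, and image are illustrative, and the image is assumed to already be present on the node (a real client would pull it through the ImageService first):

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// containerd's CRI endpoint; adjust for your host.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	// 1. RunPodSandbox -> sandbox id (cf. "returns sandbox id" above).
    	sandboxCfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name: "demo", Namespace: "default", Uid: "demo-uid",
    		},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		panic(err)
    	}

    	// 2. CreateContainer inside that sandbox.
    	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "demo"},
    			Image:    &runtimeapi.ImageSpec{Image: "docker.io/library/busybox:latest"},
    			Command:  []string{"sleep", "60"},
    		},
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		panic(err)
    	}

    	// 3. StartContainer (cf. "StartContainer ... returns successfully").
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
    		panic(err)
    	}
    	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
    }

The systemd "Started cri-containerd-<id>.scope" lines interleaved above are containerd placing each created container into its own transient scope unit, matching the ids these calls return.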
Sep 12 10:10:40.160078 kubelet[2589]: E0912 10:10:40.159843 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:40.165129 containerd[1481]: time="2025-09-12T10:10:40.164750067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qgzs4,Uid:f79ca656-115f-4339-9dba-f6a7e6ae5ae4,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:40.186839 containerd[1481]: time="2025-09-12T10:10:40.186579759Z" level=info msg="StartContainer for \"eb83be3d4cba5e73186d2604366bc8ab89a448fa8c3c9676ae72eac2dc860f09\" returns successfully" Sep 12 10:10:40.215788 containerd[1481]: time="2025-09-12T10:10:40.214707325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:40.215788 containerd[1481]: time="2025-09-12T10:10:40.214791686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:40.215788 containerd[1481]: time="2025-09-12T10:10:40.214808653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:40.215788 containerd[1481]: time="2025-09-12T10:10:40.214961323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:40.248577 systemd[1]: Started cri-containerd-4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b.scope - libcontainer container 4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b. Sep 12 10:10:40.262107 kubelet[2589]: E0912 10:10:40.261447 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:40.360125 containerd[1481]: time="2025-09-12T10:10:40.360065829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qgzs4,Uid:f79ca656-115f-4339-9dba-f6a7e6ae5ae4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b\"" Sep 12 10:10:40.362000 kubelet[2589]: E0912 10:10:40.361636 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:41.037305 systemd[1]: run-containerd-runc-k8s.io-eb83be3d4cba5e73186d2604366bc8ab89a448fa8c3c9676ae72eac2dc860f09-runc.7WJ7hR.mount: Deactivated successfully. 
Sep 12 10:10:43.367391 kubelet[2589]: E0912 10:10:43.365122 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:43.474032 kubelet[2589]: I0912 10:10:43.470456 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ggmlh" podStartSLOduration=5.470421007 podStartE2EDuration="5.470421007s" podCreationTimestamp="2025-09-12 10:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:10:40.278227216 +0000 UTC m=+5.287482095" watchObservedRunningTime="2025-09-12 10:10:43.470421007 +0000 UTC m=+8.479675891" Sep 12 10:10:44.277842 kubelet[2589]: E0912 10:10:44.277735 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:45.278482 kubelet[2589]: E0912 10:10:45.278438 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:45.374448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045326442.mount: Deactivated successfully. Sep 12 10:10:45.593344 kubelet[2589]: E0912 10:10:45.592643 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:46.281633 kubelet[2589]: E0912 10:10:46.281570 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:48.177455 kubelet[2589]: E0912 10:10:48.177399 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:48.222191 containerd[1481]: time="2025-09-12T10:10:48.222094336Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:48.224374 containerd[1481]: time="2025-09-12T10:10:48.224218035Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 10:10:48.227017 containerd[1481]: time="2025-09-12T10:10:48.226491620Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:48.231988 containerd[1481]: time="2025-09-12T10:10:48.230049765Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.211981636s" Sep 12 10:10:48.231988 containerd[1481]: time="2025-09-12T10:10:48.230145319Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 10:10:48.237615 containerd[1481]: time="2025-09-12T10:10:48.237105334Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 10:10:48.243899 containerd[1481]: time="2025-09-12T10:10:48.242202682Z" level=info msg="CreateContainer within sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 10:10:48.391788 update_engine[1464]: I20250912 10:10:48.391506 1464 update_attempter.cc:509] Updating boot flags... Sep 12 10:10:48.407491 containerd[1481]: time="2025-09-12T10:10:48.406896112Z" level=info msg="CreateContainer within sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\"" Sep 12 10:10:48.413807 containerd[1481]: time="2025-09-12T10:10:48.413331405Z" level=info msg="StartContainer for \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\"" Sep 12 10:10:48.493133 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2989) Sep 12 10:10:48.637867 systemd[1]: Started cri-containerd-c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28.scope - libcontainer container c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28. Sep 12 10:10:48.691816 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2993) Sep 12 10:10:48.757742 containerd[1481]: time="2025-09-12T10:10:48.757688492Z" level=info msg="StartContainer for \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\" returns successfully" Sep 12 10:10:48.778845 systemd[1]: cri-containerd-c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28.scope: Deactivated successfully. Sep 12 10:10:48.870291 containerd[1481]: time="2025-09-12T10:10:48.848686255Z" level=info msg="shim disconnected" id=c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28 namespace=k8s.io Sep 12 10:10:48.870630 containerd[1481]: time="2025-09-12T10:10:48.870568171Z" level=warning msg="cleaning up after shim disconnected" id=c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28 namespace=k8s.io Sep 12 10:10:48.870757 containerd[1481]: time="2025-09-12T10:10:48.870734722Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:10:49.326425 kubelet[2589]: E0912 10:10:49.326385 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:49.332359 containerd[1481]: time="2025-09-12T10:10:49.331114946Z" level=info msg="CreateContainer within sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 10:10:49.349232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28-rootfs.mount: Deactivated successfully. Sep 12 10:10:49.365815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321685805.mount: Deactivated successfully. 
Sep 12 10:10:49.374172 containerd[1481]: time="2025-09-12T10:10:49.374035697Z" level=info msg="CreateContainer within sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\"" Sep 12 10:10:49.375092 containerd[1481]: time="2025-09-12T10:10:49.374887234Z" level=info msg="StartContainer for \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\"" Sep 12 10:10:49.439258 systemd[1]: Started cri-containerd-598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047.scope - libcontainer container 598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047. Sep 12 10:10:49.488322 containerd[1481]: time="2025-09-12T10:10:49.487971112Z" level=info msg="StartContainer for \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\" returns successfully" Sep 12 10:10:49.522302 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 10:10:49.522618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:10:49.522836 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:10:49.531612 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:10:49.531915 systemd[1]: cri-containerd-598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047.scope: Deactivated successfully. Sep 12 10:10:49.573766 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:10:49.641035 containerd[1481]: time="2025-09-12T10:10:49.640904190Z" level=info msg="shim disconnected" id=598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047 namespace=k8s.io Sep 12 10:10:49.641035 containerd[1481]: time="2025-09-12T10:10:49.641029111Z" level=warning msg="cleaning up after shim disconnected" id=598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047 namespace=k8s.io Sep 12 10:10:49.641035 containerd[1481]: time="2025-09-12T10:10:49.641044237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:10:50.244411 containerd[1481]: time="2025-09-12T10:10:50.244326621Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:50.245772 containerd[1481]: time="2025-09-12T10:10:50.245551656Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 10:10:50.246794 containerd[1481]: time="2025-09-12T10:10:50.246739695Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:50.249678 containerd[1481]: time="2025-09-12T10:10:50.249612448Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.012433135s" Sep 12 10:10:50.250230 containerd[1481]: time="2025-09-12T10:10:50.249888884Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 10:10:50.256125 containerd[1481]: time="2025-09-12T10:10:50.256078467Z" level=info msg="CreateContainer within sandbox \"4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 10:10:50.269443 containerd[1481]: time="2025-09-12T10:10:50.269390550Z" level=info msg="CreateContainer within sandbox \"4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\"" Sep 12 10:10:50.270305 containerd[1481]: time="2025-09-12T10:10:50.270264218Z" level=info msg="StartContainer for \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\"" Sep 12 10:10:50.326314 systemd[1]: Started cri-containerd-359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183.scope - libcontainer container 359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183. Sep 12 10:10:50.335769 kubelet[2589]: E0912 10:10:50.334750 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:50.349615 containerd[1481]: time="2025-09-12T10:10:50.349308574Z" level=info msg="CreateContainer within sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 10:10:50.356191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047-rootfs.mount: Deactivated successfully. Sep 12 10:10:50.405700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388066675.mount: Deactivated successfully. Sep 12 10:10:50.414723 containerd[1481]: time="2025-09-12T10:10:50.414675039Z" level=info msg="CreateContainer within sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\"" Sep 12 10:10:50.417606 containerd[1481]: time="2025-09-12T10:10:50.417514652Z" level=info msg="StartContainer for \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\"" Sep 12 10:10:50.427795 containerd[1481]: time="2025-09-12T10:10:50.427747648Z" level=info msg="StartContainer for \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\" returns successfully" Sep 12 10:10:50.467295 systemd[1]: Started cri-containerd-6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00.scope - libcontainer container 6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00. Sep 12 10:10:50.521599 containerd[1481]: time="2025-09-12T10:10:50.521409242Z" level=info msg="StartContainer for \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\" returns successfully" Sep 12 10:10:50.522043 systemd[1]: cri-containerd-6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00.scope: Deactivated successfully. 
Sep 12 10:10:50.575377 containerd[1481]: time="2025-09-12T10:10:50.575303921Z" level=info msg="shim disconnected" id=6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00 namespace=k8s.io Sep 12 10:10:50.575377 containerd[1481]: time="2025-09-12T10:10:50.575374115Z" level=warning msg="cleaning up after shim disconnected" id=6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00 namespace=k8s.io Sep 12 10:10:50.575377 containerd[1481]: time="2025-09-12T10:10:50.575383654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:10:50.616174 containerd[1481]: time="2025-09-12T10:10:50.616114997Z" level=warning msg="cleanup warnings time=\"2025-09-12T10:10:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 10:10:51.352486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00-rootfs.mount: Deactivated successfully. Sep 12 10:10:51.399559 kubelet[2589]: E0912 10:10:51.399504 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:51.405327 containerd[1481]: time="2025-09-12T10:10:51.405273598Z" level=info msg="CreateContainer within sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 10:10:51.409255 kubelet[2589]: E0912 10:10:51.409209 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:51.439272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205050746.mount: Deactivated successfully. Sep 12 10:10:51.443997 containerd[1481]: time="2025-09-12T10:10:51.443200731Z" level=info msg="CreateContainer within sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\"" Sep 12 10:10:51.449401 containerd[1481]: time="2025-09-12T10:10:51.449336067Z" level=info msg="StartContainer for \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\"" Sep 12 10:10:51.519502 systemd[1]: Started cri-containerd-68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a.scope - libcontainer container 68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a. Sep 12 10:10:51.609600 systemd[1]: cri-containerd-68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a.scope: Deactivated successfully. 
Sep 12 10:10:51.637283 containerd[1481]: time="2025-09-12T10:10:51.637221815Z" level=info msg="StartContainer for \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\" returns successfully" Sep 12 10:10:51.691259 containerd[1481]: time="2025-09-12T10:10:51.691182131Z" level=info msg="shim disconnected" id=68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a namespace=k8s.io Sep 12 10:10:51.691776 containerd[1481]: time="2025-09-12T10:10:51.691607414Z" level=warning msg="cleaning up after shim disconnected" id=68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a namespace=k8s.io Sep 12 10:10:51.691776 containerd[1481]: time="2025-09-12T10:10:51.691642436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:10:52.351018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a-rootfs.mount: Deactivated successfully. Sep 12 10:10:52.413386 kubelet[2589]: E0912 10:10:52.413345 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:52.414444 kubelet[2589]: E0912 10:10:52.414043 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:52.416985 containerd[1481]: time="2025-09-12T10:10:52.416704020Z" level=info msg="CreateContainer within sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 10:10:52.449093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1306528393.mount: Deactivated successfully. Sep 12 10:10:52.454970 kubelet[2589]: I0912 10:10:52.451068 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-qgzs4" podStartSLOduration=3.563059823 podStartE2EDuration="13.451046794s" podCreationTimestamp="2025-09-12 10:10:39 +0000 UTC" firstStartedPulling="2025-09-12 10:10:40.362828051 +0000 UTC m=+5.372082922" lastFinishedPulling="2025-09-12 10:10:50.25081502 +0000 UTC m=+15.260069893" observedRunningTime="2025-09-12 10:10:51.542762473 +0000 UTC m=+16.552017352" watchObservedRunningTime="2025-09-12 10:10:52.451046794 +0000 UTC m=+17.460301673" Sep 12 10:10:52.460983 containerd[1481]: time="2025-09-12T10:10:52.458134207Z" level=info msg="CreateContainer within sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\"" Sep 12 10:10:52.461336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789374759.mount: Deactivated successfully. Sep 12 10:10:52.465306 containerd[1481]: time="2025-09-12T10:10:52.463262076Z" level=info msg="StartContainer for \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\"" Sep 12 10:10:52.515275 systemd[1]: Started cri-containerd-b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4.scope - libcontainer container b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4. 
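[Editor's note] Taken together, the CreateContainer/StartContainer/"shim disconnected" cycles since 10:10:48 are a single sandbox (73537eb104de...) running cilium's init steps in sequence before the long-lived agent:

    mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent

Each init step runs to completion, so its scope deactivating and containerd cleaning up the dead shim is the expected lifecycle for a run-to-completion container, not a crash; only cilium-agent is meant to keep running. The apply-sysctl-overwrites step is also why systemd restarts systemd-sysctl.service at 10:10:49 above.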
Sep 12 10:10:52.560020 containerd[1481]: time="2025-09-12T10:10:52.559809242Z" level=info msg="StartContainer for \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\" returns successfully" Sep 12 10:10:52.823360 kubelet[2589]: I0912 10:10:52.823321 2589 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 10:10:52.878591 systemd[1]: Created slice kubepods-burstable-podf7630f01_2d67_42fe_86db_9665ee9fb9ca.slice - libcontainer container kubepods-burstable-podf7630f01_2d67_42fe_86db_9665ee9fb9ca.slice. Sep 12 10:10:52.891671 systemd[1]: Created slice kubepods-burstable-pod03a40189_98e7_435d_ad0c_79a15ce9be85.slice - libcontainer container kubepods-burstable-pod03a40189_98e7_435d_ad0c_79a15ce9be85.slice. Sep 12 10:10:53.038622 kubelet[2589]: I0912 10:10:53.038369 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03a40189-98e7-435d-ad0c-79a15ce9be85-config-volume\") pod \"coredns-7c65d6cfc9-rq42d\" (UID: \"03a40189-98e7-435d-ad0c-79a15ce9be85\") " pod="kube-system/coredns-7c65d6cfc9-rq42d" Sep 12 10:10:53.038622 kubelet[2589]: I0912 10:10:53.038478 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7z4\" (UniqueName: \"kubernetes.io/projected/f7630f01-2d67-42fe-86db-9665ee9fb9ca-kube-api-access-pv7z4\") pod \"coredns-7c65d6cfc9-27c2h\" (UID: \"f7630f01-2d67-42fe-86db-9665ee9fb9ca\") " pod="kube-system/coredns-7c65d6cfc9-27c2h" Sep 12 10:10:53.038622 kubelet[2589]: I0912 10:10:53.038502 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7630f01-2d67-42fe-86db-9665ee9fb9ca-config-volume\") pod \"coredns-7c65d6cfc9-27c2h\" (UID: \"f7630f01-2d67-42fe-86db-9665ee9fb9ca\") " pod="kube-system/coredns-7c65d6cfc9-27c2h" Sep 12 10:10:53.038622 kubelet[2589]: I0912 10:10:53.038520 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpdbl\" (UniqueName: \"kubernetes.io/projected/03a40189-98e7-435d-ad0c-79a15ce9be85-kube-api-access-jpdbl\") pod \"coredns-7c65d6cfc9-rq42d\" (UID: \"03a40189-98e7-435d-ad0c-79a15ce9be85\") " pod="kube-system/coredns-7c65d6cfc9-rq42d" Sep 12 10:10:53.189439 kubelet[2589]: E0912 10:10:53.189269 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:53.190958 containerd[1481]: time="2025-09-12T10:10:53.190652498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-27c2h,Uid:f7630f01-2d67-42fe-86db-9665ee9fb9ca,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:53.199395 kubelet[2589]: E0912 10:10:53.197658 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:53.200596 containerd[1481]: time="2025-09-12T10:10:53.200464467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rq42d,Uid:03a40189-98e7-435d-ad0c-79a15ce9be85,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:53.422505 kubelet[2589]: E0912 10:10:53.422466 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:54.425676 kubelet[2589]: E0912 10:10:54.425386 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:55.181034 systemd-networkd[1380]: cilium_host: Link UP Sep 12 10:10:55.181290 systemd-networkd[1380]: cilium_net: Link UP Sep 12 10:10:55.184380 systemd-networkd[1380]: cilium_net: Gained carrier Sep 12 10:10:55.184750 systemd-networkd[1380]: cilium_host: Gained carrier Sep 12 10:10:55.184888 systemd-networkd[1380]: cilium_net: Gained IPv6LL Sep 12 10:10:55.185112 systemd-networkd[1380]: cilium_host: Gained IPv6LL Sep 12 10:10:55.365373 systemd-networkd[1380]: cilium_vxlan: Link UP Sep 12 10:10:55.365382 systemd-networkd[1380]: cilium_vxlan: Gained carrier Sep 12 10:10:55.427263 kubelet[2589]: E0912 10:10:55.427154 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:55.800992 kernel: NET: Registered PF_ALG protocol family Sep 12 10:10:56.727624 systemd-networkd[1380]: lxc_health: Link UP Sep 12 10:10:56.736392 systemd-networkd[1380]: lxc_health: Gained carrier Sep 12 10:10:57.049288 systemd-networkd[1380]: cilium_vxlan: Gained IPv6LL Sep 12 10:10:57.356217 kernel: eth0: renamed from tmpcbaa3 Sep 12 10:10:57.354881 systemd-networkd[1380]: lxce4264b3ca9d5: Link UP Sep 12 10:10:57.366706 systemd-networkd[1380]: lxccbd2ac30e0f8: Link UP Sep 12 10:10:57.372307 kernel: eth0: renamed from tmp910d5 Sep 12 10:10:57.375089 systemd-networkd[1380]: lxce4264b3ca9d5: Gained carrier Sep 12 10:10:57.378502 systemd-networkd[1380]: lxccbd2ac30e0f8: Gained carrier Sep 12 10:10:57.779419 kubelet[2589]: E0912 10:10:57.779298 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:10:57.813571 kubelet[2589]: I0912 10:10:57.813479 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qp96l" podStartSLOduration=11.597283173 podStartE2EDuration="19.813452735s" podCreationTimestamp="2025-09-12 10:10:38 +0000 UTC" firstStartedPulling="2025-09-12 10:10:40.016336794 +0000 UTC m=+5.025591677" lastFinishedPulling="2025-09-12 10:10:48.232506361 +0000 UTC m=+13.241761239" observedRunningTime="2025-09-12 10:10:53.46418867 +0000 UTC m=+18.473443551" watchObservedRunningTime="2025-09-12 10:10:57.813452735 +0000 UTC m=+22.822707614" Sep 12 10:10:58.010282 systemd-networkd[1380]: lxc_health: Gained IPv6LL Sep 12 10:10:58.458578 systemd-networkd[1380]: lxce4264b3ca9d5: Gained IPv6LL Sep 12 10:10:58.649398 systemd-networkd[1380]: lxccbd2ac30e0f8: Gained IPv6LL Sep 12 10:10:59.447946 kubelet[2589]: I0912 10:10:59.446022 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 10:10:59.447946 kubelet[2589]: E0912 10:10:59.446806 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:11:00.443975 kubelet[2589]: E0912 10:11:00.442952 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" 
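[Editor's note] The pod_startup_latency_tracker entry above for cilium-qp96l encodes a simple relation: podStartSLOduration is the end-to-end startup time minus the image-pull window, so registry pulls do not count against the startup SLO. With the logged values:

    E2E:  observedRunningTime - podCreationTimestamp = 10:10:57.813452735 - 10:10:38 = 19.813452735 s
    pull: lastFinishedPulling - firstStartedPulling  = 10:10:48.232506361 - 10:10:40.016336794 = 8.216169567 s
    SLO:  19.813452735 - 8.216169567 ≈ 11.597283 s   (logged: 11.597283173)

The earlier static-pod entries show the degenerate case: firstStartedPulling and lastFinishedPulling sit at the zero time (0001-01-01), there is no pull window, and SLO and E2E durations are equal.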
Sep 12 10:11:03.214332 containerd[1481]: time="2025-09-12T10:11:03.212367441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:11:03.214332 containerd[1481]: time="2025-09-12T10:11:03.212526973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:11:03.214332 containerd[1481]: time="2025-09-12T10:11:03.212553438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:11:03.214332 containerd[1481]: time="2025-09-12T10:11:03.212715063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:11:03.258300 containerd[1481]: time="2025-09-12T10:11:03.257118075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:11:03.258300 containerd[1481]: time="2025-09-12T10:11:03.257485824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:11:03.267585 containerd[1481]: time="2025-09-12T10:11:03.257640593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:11:03.267585 containerd[1481]: time="2025-09-12T10:11:03.258363617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:11:03.301236 systemd[1]: Started cri-containerd-910d5abf1fb7f191065bb71f115a779a1aff5962a79f7f36a151ad3c82b9652e.scope - libcontainer container 910d5abf1fb7f191065bb71f115a779a1aff5962a79f7f36a151ad3c82b9652e. Sep 12 10:11:03.321332 systemd[1]: Started cri-containerd-cbaa3d2203181f8873867f4be617d2c227d1152a1e27f26a44b6769ce168460c.scope - libcontainer container cbaa3d2203181f8873867f4be617d2c227d1152a1e27f26a44b6769ce168460c. 
Sep 12 10:11:03.405988 containerd[1481]: time="2025-09-12T10:11:03.405096287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rq42d,Uid:03a40189-98e7-435d-ad0c-79a15ce9be85,Namespace:kube-system,Attempt:0,} returns sandbox id \"910d5abf1fb7f191065bb71f115a779a1aff5962a79f7f36a151ad3c82b9652e\"" Sep 12 10:11:03.409909 kubelet[2589]: E0912 10:11:03.409771 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:11:03.422446 containerd[1481]: time="2025-09-12T10:11:03.422390488Z" level=info msg="CreateContainer within sandbox \"910d5abf1fb7f191065bb71f115a779a1aff5962a79f7f36a151ad3c82b9652e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 10:11:03.495172 containerd[1481]: time="2025-09-12T10:11:03.493460075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-27c2h,Uid:f7630f01-2d67-42fe-86db-9665ee9fb9ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbaa3d2203181f8873867f4be617d2c227d1152a1e27f26a44b6769ce168460c\"" Sep 12 10:11:03.495355 kubelet[2589]: E0912 10:11:03.494444 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:11:03.498754 containerd[1481]: time="2025-09-12T10:11:03.497491383Z" level=info msg="CreateContainer within sandbox \"cbaa3d2203181f8873867f4be617d2c227d1152a1e27f26a44b6769ce168460c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 10:11:03.525878 containerd[1481]: time="2025-09-12T10:11:03.525808525Z" level=info msg="CreateContainer within sandbox \"910d5abf1fb7f191065bb71f115a779a1aff5962a79f7f36a151ad3c82b9652e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95c2d02b76fa86eb969e794c8e4804ac7c5aba788fba54a8aafa05e08b4d5f68\"" Sep 12 10:11:03.529290 containerd[1481]: time="2025-09-12T10:11:03.528137708Z" level=info msg="StartContainer for \"95c2d02b76fa86eb969e794c8e4804ac7c5aba788fba54a8aafa05e08b4d5f68\"" Sep 12 10:11:03.544079 containerd[1481]: time="2025-09-12T10:11:03.543968603Z" level=info msg="CreateContainer within sandbox \"cbaa3d2203181f8873867f4be617d2c227d1152a1e27f26a44b6769ce168460c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f65544d484a8c6a56840a0f65b0100ad781c611b9f469197db64fbfec82c7db8\"" Sep 12 10:11:03.547465 containerd[1481]: time="2025-09-12T10:11:03.547108048Z" level=info msg="StartContainer for \"f65544d484a8c6a56840a0f65b0100ad781c611b9f469197db64fbfec82c7db8\"" Sep 12 10:11:03.573403 systemd[1]: Started cri-containerd-95c2d02b76fa86eb969e794c8e4804ac7c5aba788fba54a8aafa05e08b4d5f68.scope - libcontainer container 95c2d02b76fa86eb969e794c8e4804ac7c5aba788fba54a8aafa05e08b4d5f68. Sep 12 10:11:03.605272 systemd[1]: Started cri-containerd-f65544d484a8c6a56840a0f65b0100ad781c611b9f469197db64fbfec82c7db8.scope - libcontainer container f65544d484a8c6a56840a0f65b0100ad781c611b9f469197db64fbfec82c7db8. 
Sep 12 10:11:03.636401 containerd[1481]: time="2025-09-12T10:11:03.636332330Z" level=info msg="StartContainer for \"95c2d02b76fa86eb969e794c8e4804ac7c5aba788fba54a8aafa05e08b4d5f68\" returns successfully" Sep 12 10:11:03.663273 containerd[1481]: time="2025-09-12T10:11:03.663068318Z" level=info msg="StartContainer for \"f65544d484a8c6a56840a0f65b0100ad781c611b9f469197db64fbfec82c7db8\" returns successfully" Sep 12 10:11:04.227848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811315027.mount: Deactivated successfully. Sep 12 10:11:04.468736 kubelet[2589]: E0912 10:11:04.468678 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:11:04.474981 kubelet[2589]: E0912 10:11:04.474406 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:11:04.489944 kubelet[2589]: I0912 10:11:04.488870 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rq42d" podStartSLOduration=25.488835667 podStartE2EDuration="25.488835667s" podCreationTimestamp="2025-09-12 10:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:11:04.486410625 +0000 UTC m=+29.495665505" watchObservedRunningTime="2025-09-12 10:11:04.488835667 +0000 UTC m=+29.498090545" Sep 12 10:11:04.551272 kubelet[2589]: I0912 10:11:04.551174 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-27c2h" podStartSLOduration=25.551143663 podStartE2EDuration="25.551143663s" podCreationTimestamp="2025-09-12 10:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:11:04.511051065 +0000 UTC m=+29.520305943" watchObservedRunningTime="2025-09-12 10:11:04.551143663 +0000 UTC m=+29.560398548" Sep 12 10:11:05.476749 kubelet[2589]: E0912 10:11:05.476679 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:11:05.478289 kubelet[2589]: E0912 10:11:05.477570 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:11:06.479380 kubelet[2589]: E0912 10:11:06.479320 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:11:06.480400 kubelet[2589]: E0912 10:11:06.480281 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 10:11:14.034574 systemd[1]: Started sshd@7-64.23.164.42:22-139.178.68.195:57478.service - OpenSSH per-connection server daemon (139.178.68.195:57478). 
Sep 12 10:11:14.156455 sshd[3988]: Accepted publickey for core from 139.178.68.195 port 57478 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:14.158810 sshd-session[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:14.167380 systemd-logind[1463]: New session 8 of user core.
Sep 12 10:11:14.174334 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 10:11:14.812320 sshd[3990]: Connection closed by 139.178.68.195 port 57478
Sep 12 10:11:14.813492 sshd-session[3988]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:14.824595 systemd[1]: sshd@7-64.23.164.42:22-139.178.68.195:57478.service: Deactivated successfully.
Sep 12 10:11:14.829744 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 10:11:14.831122 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit.
Sep 12 10:11:14.833780 systemd-logind[1463]: Removed session 8.
Sep 12 10:11:19.833559 systemd[1]: Started sshd@8-64.23.164.42:22-139.178.68.195:57488.service - OpenSSH per-connection server daemon (139.178.68.195:57488).
Sep 12 10:11:19.895033 sshd[4004]: Accepted publickey for core from 139.178.68.195 port 57488 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:19.897257 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:19.905434 systemd-logind[1463]: New session 9 of user core.
Sep 12 10:11:19.913308 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 10:11:20.066974 sshd[4006]: Connection closed by 139.178.68.195 port 57488
Sep 12 10:11:20.067847 sshd-session[4004]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:20.073839 systemd[1]: sshd@8-64.23.164.42:22-139.178.68.195:57488.service: Deactivated successfully.
Sep 12 10:11:20.077665 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 10:11:20.079245 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit.
Sep 12 10:11:20.080680 systemd-logind[1463]: Removed session 9.
Sep 12 10:11:25.088384 systemd[1]: Started sshd@9-64.23.164.42:22-139.178.68.195:45002.service - OpenSSH per-connection server daemon (139.178.68.195:45002).
Sep 12 10:11:25.145726 sshd[4019]: Accepted publickey for core from 139.178.68.195 port 45002 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:25.148119 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:25.157397 systemd-logind[1463]: New session 10 of user core.
Sep 12 10:11:25.164384 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 10:11:25.316007 sshd[4021]: Connection closed by 139.178.68.195 port 45002
Sep 12 10:11:25.316885 sshd-session[4019]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:25.322352 systemd[1]: sshd@9-64.23.164.42:22-139.178.68.195:45002.service: Deactivated successfully.
Sep 12 10:11:25.326490 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 10:11:25.330208 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit.
Sep 12 10:11:25.332200 systemd-logind[1463]: Removed session 10.
Sep 12 10:11:30.339393 systemd[1]: Started sshd@10-64.23.164.42:22-139.178.68.195:49826.service - OpenSSH per-connection server daemon (139.178.68.195:49826).
Sep 12 10:11:30.392543 sshd[4035]: Accepted publickey for core from 139.178.68.195 port 49826 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:30.394635 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:30.401103 systemd-logind[1463]: New session 11 of user core.
Sep 12 10:11:30.408310 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 10:11:30.564271 sshd[4037]: Connection closed by 139.178.68.195 port 49826
Sep 12 10:11:30.565714 sshd-session[4035]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:30.581457 systemd[1]: sshd@10-64.23.164.42:22-139.178.68.195:49826.service: Deactivated successfully.
Sep 12 10:11:30.584689 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 10:11:30.586417 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit.
Sep 12 10:11:30.595429 systemd[1]: Started sshd@11-64.23.164.42:22-139.178.68.195:49842.service - OpenSSH per-connection server daemon (139.178.68.195:49842).
Sep 12 10:11:30.597482 systemd-logind[1463]: Removed session 11.
Sep 12 10:11:30.676490 sshd[4049]: Accepted publickey for core from 139.178.68.195 port 49842 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:30.678291 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:30.688821 systemd-logind[1463]: New session 12 of user core.
Sep 12 10:11:30.699379 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 10:11:30.956399 sshd[4052]: Connection closed by 139.178.68.195 port 49842
Sep 12 10:11:30.958383 sshd-session[4049]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:30.974157 systemd[1]: sshd@11-64.23.164.42:22-139.178.68.195:49842.service: Deactivated successfully.
Sep 12 10:11:30.979614 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 10:11:30.982676 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit.
Sep 12 10:11:30.994468 systemd[1]: Started sshd@12-64.23.164.42:22-139.178.68.195:49852.service - OpenSSH per-connection server daemon (139.178.68.195:49852).
Sep 12 10:11:31.001613 systemd-logind[1463]: Removed session 12.
Sep 12 10:11:31.068018 sshd[4060]: Accepted publickey for core from 139.178.68.195 port 49852 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:31.070198 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:31.078140 systemd-logind[1463]: New session 13 of user core.
Sep 12 10:11:31.082444 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 10:11:31.245945 sshd[4063]: Connection closed by 139.178.68.195 port 49852
Sep 12 10:11:31.246707 sshd-session[4060]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:31.251101 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit.
Sep 12 10:11:31.252163 systemd[1]: sshd@12-64.23.164.42:22-139.178.68.195:49852.service: Deactivated successfully.
Sep 12 10:11:31.255594 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 10:11:31.258363 systemd-logind[1463]: Removed session 13.
Sep 12 10:11:36.270373 systemd[1]: Started sshd@13-64.23.164.42:22-139.178.68.195:49862.service - OpenSSH per-connection server daemon (139.178.68.195:49862).
Sep 12 10:11:36.339384 sshd[4079]: Accepted publickey for core from 139.178.68.195 port 49862 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:36.342140 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:36.350208 systemd-logind[1463]: New session 14 of user core.
Sep 12 10:11:36.357283 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 10:11:36.499959 sshd[4081]: Connection closed by 139.178.68.195 port 49862
Sep 12 10:11:36.500864 sshd-session[4079]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:36.506719 systemd[1]: sshd@13-64.23.164.42:22-139.178.68.195:49862.service: Deactivated successfully.
Sep 12 10:11:36.510703 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 10:11:36.511848 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit.
Sep 12 10:11:36.513385 systemd-logind[1463]: Removed session 14.
Sep 12 10:11:41.528165 systemd[1]: Started sshd@14-64.23.164.42:22-139.178.68.195:56342.service - OpenSSH per-connection server daemon (139.178.68.195:56342).
Sep 12 10:11:41.585151 sshd[4095]: Accepted publickey for core from 139.178.68.195 port 56342 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:41.587529 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:41.594597 systemd-logind[1463]: New session 15 of user core.
Sep 12 10:11:41.601129 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 10:11:41.740914 sshd[4097]: Connection closed by 139.178.68.195 port 56342
Sep 12 10:11:41.741616 sshd-session[4095]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:41.747425 systemd[1]: sshd@14-64.23.164.42:22-139.178.68.195:56342.service: Deactivated successfully.
Sep 12 10:11:41.750889 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 10:11:41.752445 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit.
Sep 12 10:11:41.753522 systemd-logind[1463]: Removed session 15.
Sep 12 10:11:46.770904 systemd[1]: Started sshd@15-64.23.164.42:22-139.178.68.195:56354.service - OpenSSH per-connection server daemon (139.178.68.195:56354).
Sep 12 10:11:46.828058 sshd[4108]: Accepted publickey for core from 139.178.68.195 port 56354 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:46.828829 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:46.837183 systemd-logind[1463]: New session 16 of user core.
Sep 12 10:11:46.842309 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 10:11:46.979202 sshd[4110]: Connection closed by 139.178.68.195 port 56354
Sep 12 10:11:46.980194 sshd-session[4108]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:46.991737 systemd[1]: sshd@15-64.23.164.42:22-139.178.68.195:56354.service: Deactivated successfully.
Sep 12 10:11:46.994510 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 10:11:46.996600 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit.
Sep 12 10:11:47.002426 systemd[1]: Started sshd@16-64.23.164.42:22-139.178.68.195:56362.service - OpenSSH per-connection server daemon (139.178.68.195:56362).
Sep 12 10:11:47.004912 systemd-logind[1463]: Removed session 16.
Sep 12 10:11:47.065441 sshd[4121]: Accepted publickey for core from 139.178.68.195 port 56362 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:47.067353 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:47.074877 systemd-logind[1463]: New session 17 of user core.
Sep 12 10:11:47.084120 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 10:11:47.186964 kubelet[2589]: E0912 10:11:47.186883 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:11:47.190832 kubelet[2589]: E0912 10:11:47.189054 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:11:47.450090 sshd[4124]: Connection closed by 139.178.68.195 port 56362
Sep 12 10:11:47.452068 sshd-session[4121]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:47.475784 systemd[1]: Started sshd@17-64.23.164.42:22-139.178.68.195:56370.service - OpenSSH per-connection server daemon (139.178.68.195:56370).
Sep 12 10:11:47.476508 systemd[1]: sshd@16-64.23.164.42:22-139.178.68.195:56362.service: Deactivated successfully.
Sep 12 10:11:47.482245 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 10:11:47.484179 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit.
Sep 12 10:11:47.488496 systemd-logind[1463]: Removed session 17.
Sep 12 10:11:47.556763 sshd[4131]: Accepted publickey for core from 139.178.68.195 port 56370 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:47.558435 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:47.565645 systemd-logind[1463]: New session 18 of user core.
Sep 12 10:11:47.575308 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 10:11:49.429719 sshd[4136]: Connection closed by 139.178.68.195 port 56370
Sep 12 10:11:49.429325 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:49.449985 systemd[1]: sshd@17-64.23.164.42:22-139.178.68.195:56370.service: Deactivated successfully.
Sep 12 10:11:49.456439 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 10:11:49.463342 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit.
Sep 12 10:11:49.474642 systemd[1]: Started sshd@18-64.23.164.42:22-139.178.68.195:56374.service - OpenSSH per-connection server daemon (139.178.68.195:56374).
Sep 12 10:11:49.482303 systemd-logind[1463]: Removed session 18.
Sep 12 10:11:49.557374 sshd[4152]: Accepted publickey for core from 139.178.68.195 port 56374 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:49.559810 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:49.569034 systemd-logind[1463]: New session 19 of user core.
Sep 12 10:11:49.573289 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 10:11:49.969375 sshd[4155]: Connection closed by 139.178.68.195 port 56374
Sep 12 10:11:49.970257 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:49.990405 systemd[1]: sshd@18-64.23.164.42:22-139.178.68.195:56374.service: Deactivated successfully.
Sep 12 10:11:49.998175 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 10:11:50.003563 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit.
Sep 12 10:11:50.015403 systemd[1]: Started sshd@19-64.23.164.42:22-139.178.68.195:58624.service - OpenSSH per-connection server daemon (139.178.68.195:58624).
Sep 12 10:11:50.017594 systemd-logind[1463]: Removed session 19.
Sep 12 10:11:50.074974 sshd[4164]: Accepted publickey for core from 139.178.68.195 port 58624 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:50.077390 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:50.086046 systemd-logind[1463]: New session 20 of user core.
Sep 12 10:11:50.091338 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 10:11:50.250843 sshd[4167]: Connection closed by 139.178.68.195 port 58624
Sep 12 10:11:50.252059 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:50.258068 systemd[1]: sshd@19-64.23.164.42:22-139.178.68.195:58624.service: Deactivated successfully.
Sep 12 10:11:50.261437 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 10:11:50.263499 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit.
Sep 12 10:11:50.264835 systemd-logind[1463]: Removed session 20.
Sep 12 10:11:55.279432 systemd[1]: Started sshd@20-64.23.164.42:22-139.178.68.195:58632.service - OpenSSH per-connection server daemon (139.178.68.195:58632).
Sep 12 10:11:55.329786 sshd[4183]: Accepted publickey for core from 139.178.68.195 port 58632 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:11:55.332305 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:11:55.339049 systemd-logind[1463]: New session 21 of user core.
Sep 12 10:11:55.346258 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 10:11:55.481177 sshd[4185]: Connection closed by 139.178.68.195 port 58632
Sep 12 10:11:55.483226 sshd-session[4183]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:55.487671 systemd[1]: sshd@20-64.23.164.42:22-139.178.68.195:58632.service: Deactivated successfully.
Sep 12 10:11:55.492492 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 10:11:55.496192 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit.
Sep 12 10:11:55.498469 systemd-logind[1463]: Removed session 21.
Sep 12 10:12:00.504492 systemd[1]: Started sshd@21-64.23.164.42:22-139.178.68.195:48800.service - OpenSSH per-connection server daemon (139.178.68.195:48800).
Sep 12 10:12:00.554666 sshd[4197]: Accepted publickey for core from 139.178.68.195 port 48800 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:12:00.556986 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:12:00.564187 systemd-logind[1463]: New session 22 of user core.
Sep 12 10:12:00.574286 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 10:12:00.715383 sshd[4199]: Connection closed by 139.178.68.195 port 48800
Sep 12 10:12:00.716030 sshd-session[4197]: pam_unix(sshd:session): session closed for user core
Sep 12 10:12:00.722858 systemd[1]: sshd@21-64.23.164.42:22-139.178.68.195:48800.service: Deactivated successfully.
Sep 12 10:12:00.727676 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 10:12:00.728860 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit.
Sep 12 10:12:00.730484 systemd-logind[1463]: Removed session 22.
Sep 12 10:12:02.183155 kubelet[2589]: E0912 10:12:02.183104 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:05.739519 systemd[1]: Started sshd@22-64.23.164.42:22-139.178.68.195:48816.service - OpenSSH per-connection server daemon (139.178.68.195:48816).
Sep 12 10:12:05.795201 sshd[4212]: Accepted publickey for core from 139.178.68.195 port 48816 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:12:05.797441 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:12:05.804363 systemd-logind[1463]: New session 23 of user core.
Sep 12 10:12:05.811286 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 10:12:05.980004 sshd[4214]: Connection closed by 139.178.68.195 port 48816
Sep 12 10:12:05.980794 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Sep 12 10:12:05.988585 systemd[1]: sshd@22-64.23.164.42:22-139.178.68.195:48816.service: Deactivated successfully.
Sep 12 10:12:05.991829 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 10:12:05.995495 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit.
Sep 12 10:12:05.996853 systemd-logind[1463]: Removed session 23.
Sep 12 10:12:07.185541 kubelet[2589]: E0912 10:12:07.185116 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:10.183202 kubelet[2589]: E0912 10:12:10.183145 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:11.001535 systemd[1]: Started sshd@23-64.23.164.42:22-139.178.68.195:57186.service - OpenSSH per-connection server daemon (139.178.68.195:57186).
Sep 12 10:12:11.070231 sshd[4228]: Accepted publickey for core from 139.178.68.195 port 57186 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:12:11.071875 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:12:11.080071 systemd-logind[1463]: New session 24 of user core.
Sep 12 10:12:11.086512 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 10:12:11.242544 sshd[4230]: Connection closed by 139.178.68.195 port 57186
Sep 12 10:12:11.243807 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
Sep 12 10:12:11.259625 systemd[1]: sshd@23-64.23.164.42:22-139.178.68.195:57186.service: Deactivated successfully.
Sep 12 10:12:11.262750 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 10:12:11.266262 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit.
Sep 12 10:12:11.273423 systemd[1]: Started sshd@24-64.23.164.42:22-139.178.68.195:57198.service - OpenSSH per-connection server daemon (139.178.68.195:57198).
Sep 12 10:12:11.275297 systemd-logind[1463]: Removed session 24.
Sep 12 10:12:11.344501 sshd[4241]: Accepted publickey for core from 139.178.68.195 port 57198 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:12:11.346832 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:12:11.354082 systemd-logind[1463]: New session 25 of user core.
Sep 12 10:12:11.367290 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 10:12:13.043512 containerd[1481]: time="2025-09-12T10:12:13.043375852Z" level=info msg="StopContainer for \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\" with timeout 30 (s)"
Sep 12 10:12:13.048551 containerd[1481]: time="2025-09-12T10:12:13.048330995Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 10:12:13.048551 containerd[1481]: time="2025-09-12T10:12:13.048406182Z" level=info msg="Stop container \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\" with signal terminated"
Sep 12 10:12:13.065638 systemd[1]: cri-containerd-359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183.scope: Deactivated successfully.
Sep 12 10:12:13.080779 containerd[1481]: time="2025-09-12T10:12:13.080515306Z" level=info msg="StopContainer for \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\" with timeout 2 (s)"
Sep 12 10:12:13.082248 containerd[1481]: time="2025-09-12T10:12:13.082140723Z" level=info msg="Stop container \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\" with signal terminated"
Sep 12 10:12:13.099130 systemd-networkd[1380]: lxc_health: Link DOWN
Sep 12 10:12:13.099139 systemd-networkd[1380]: lxc_health: Lost carrier
Sep 12 10:12:13.120818 systemd[1]: cri-containerd-b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4.scope: Deactivated successfully.
Sep 12 10:12:13.121621 systemd[1]: cri-containerd-b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4.scope: Consumed 9.716s CPU time, 190.7M memory peak, 68.1M read from disk, 13.3M written to disk.
Sep 12 10:12:13.132705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183-rootfs.mount: Deactivated successfully.
Sep 12 10:12:13.143563 containerd[1481]: time="2025-09-12T10:12:13.143204517Z" level=info msg="shim disconnected" id=359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183 namespace=k8s.io
Sep 12 10:12:13.143563 containerd[1481]: time="2025-09-12T10:12:13.143284696Z" level=warning msg="cleaning up after shim disconnected" id=359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183 namespace=k8s.io
Sep 12 10:12:13.143563 containerd[1481]: time="2025-09-12T10:12:13.143297974Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:12:13.166716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4-rootfs.mount: Deactivated successfully.
Sep 12 10:12:13.177283 containerd[1481]: time="2025-09-12T10:12:13.177202852Z" level=info msg="shim disconnected" id=b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4 namespace=k8s.io
Sep 12 10:12:13.177283 containerd[1481]: time="2025-09-12T10:12:13.177280737Z" level=warning msg="cleaning up after shim disconnected" id=b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4 namespace=k8s.io
Sep 12 10:12:13.177911 containerd[1481]: time="2025-09-12T10:12:13.177294353Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:12:13.179294 containerd[1481]: time="2025-09-12T10:12:13.179263165Z" level=info msg="StopContainer for \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\" returns successfully"
Sep 12 10:12:13.182069 containerd[1481]: time="2025-09-12T10:12:13.181868089Z" level=info msg="StopPodSandbox for \"4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b\""
Sep 12 10:12:13.183878 kubelet[2589]: E0912 10:12:13.183699 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:13.198183 containerd[1481]: time="2025-09-12T10:12:13.188137628Z" level=info msg="Container to stop \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:12:13.204158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b-shm.mount: Deactivated successfully.
Sep 12 10:12:13.225060 systemd[1]: cri-containerd-4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b.scope: Deactivated successfully.
Sep 12 10:12:13.233464 containerd[1481]: time="2025-09-12T10:12:13.233379066Z" level=info msg="StopContainer for \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\" returns successfully"
Sep 12 10:12:13.235541 containerd[1481]: time="2025-09-12T10:12:13.235332535Z" level=info msg="StopPodSandbox for \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\""
Sep 12 10:12:13.235541 containerd[1481]: time="2025-09-12T10:12:13.235403086Z" level=info msg="Container to stop \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:12:13.235541 containerd[1481]: time="2025-09-12T10:12:13.235493081Z" level=info msg="Container to stop \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:12:13.235541 containerd[1481]: time="2025-09-12T10:12:13.235506893Z" level=info msg="Container to stop \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:12:13.235541 containerd[1481]: time="2025-09-12T10:12:13.235522819Z" level=info msg="Container to stop \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:12:13.235541 containerd[1481]: time="2025-09-12T10:12:13.235536411Z" level=info msg="Container to stop \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 10:12:13.240571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a-shm.mount: Deactivated successfully.
Sep 12 10:12:13.253154 systemd[1]: cri-containerd-73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a.scope: Deactivated successfully.
Sep 12 10:12:13.288653 containerd[1481]: time="2025-09-12T10:12:13.288578348Z" level=info msg="shim disconnected" id=73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a namespace=k8s.io
Sep 12 10:12:13.289367 containerd[1481]: time="2025-09-12T10:12:13.289219127Z" level=warning msg="cleaning up after shim disconnected" id=73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a namespace=k8s.io
Sep 12 10:12:13.289367 containerd[1481]: time="2025-09-12T10:12:13.289245886Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:12:13.290655 containerd[1481]: time="2025-09-12T10:12:13.288899683Z" level=info msg="shim disconnected" id=4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b namespace=k8s.io
Sep 12 10:12:13.290655 containerd[1481]: time="2025-09-12T10:12:13.290010605Z" level=warning msg="cleaning up after shim disconnected" id=4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b namespace=k8s.io
Sep 12 10:12:13.290655 containerd[1481]: time="2025-09-12T10:12:13.290037184Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:12:13.320478 containerd[1481]: time="2025-09-12T10:12:13.320238124Z" level=warning msg="cleanup warnings time=\"2025-09-12T10:12:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 10:12:13.321585 containerd[1481]: time="2025-09-12T10:12:13.321328789Z" level=info msg="TearDown network for sandbox \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" successfully"
Sep 12 10:12:13.321585 containerd[1481]: time="2025-09-12T10:12:13.321389578Z" level=info msg="StopPodSandbox for \"73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a\" returns successfully"
Sep 12 10:12:13.334163 containerd[1481]: time="2025-09-12T10:12:13.334083457Z" level=info msg="TearDown network for sandbox \"4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b\" successfully"
Sep 12 10:12:13.334163 containerd[1481]: time="2025-09-12T10:12:13.334110979Z" level=info msg="StopPodSandbox for \"4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b\" returns successfully"
Sep 12 10:12:13.386632 kubelet[2589]: I0912 10:12:13.386482 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-host-proc-sys-kernel\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.387089 kubelet[2589]: I0912 10:12:13.386584 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 10:12:13.387089 kubelet[2589]: I0912 10:12:13.386877 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-xtables-lock\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.388033 kubelet[2589]: I0912 10:12:13.387329 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-config-path\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.391063 kubelet[2589]: I0912 10:12:13.391018 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-run\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.391063 kubelet[2589]: I0912 10:12:13.391064 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhlnr\" (UniqueName: \"kubernetes.io/projected/f79ca656-115f-4339-9dba-f6a7e6ae5ae4-kube-api-access-vhlnr\") pod \"f79ca656-115f-4339-9dba-f6a7e6ae5ae4\" (UID: \"f79ca656-115f-4339-9dba-f6a7e6ae5ae4\") "
Sep 12 10:12:13.391356 kubelet[2589]: I0912 10:12:13.391080 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-bpf-maps\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.391356 kubelet[2589]: I0912 10:12:13.391100 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-host-proc-sys-net\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.391356 kubelet[2589]: I0912 10:12:13.391123 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5z7d\" (UniqueName: \"kubernetes.io/projected/e211083a-c916-4471-83c0-5d3ed42c2873-kube-api-access-p5z7d\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.391356 kubelet[2589]: I0912 10:12:13.391151 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-cgroup\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.391356 kubelet[2589]: I0912 10:12:13.391175 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f79ca656-115f-4339-9dba-f6a7e6ae5ae4-cilium-config-path\") pod \"f79ca656-115f-4339-9dba-f6a7e6ae5ae4\" (UID: \"f79ca656-115f-4339-9dba-f6a7e6ae5ae4\") "
Sep 12 10:12:13.391356 kubelet[2589]: I0912 10:12:13.391193 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-hostproc\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.392560 kubelet[2589]: I0912 10:12:13.391210 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e211083a-c916-4471-83c0-5d3ed42c2873-hubble-tls\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.392560 kubelet[2589]: I0912 10:12:13.391225 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-lib-modules\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.392560 kubelet[2589]: I0912 10:12:13.391239 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cni-path\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.392560 kubelet[2589]: I0912 10:12:13.391252 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-etc-cni-netd\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.392560 kubelet[2589]: I0912 10:12:13.391272 2589 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e211083a-c916-4471-83c0-5d3ed42c2873-clustermesh-secrets\") pod \"e211083a-c916-4471-83c0-5d3ed42c2873\" (UID: \"e211083a-c916-4471-83c0-5d3ed42c2873\") "
Sep 12 10:12:13.392560 kubelet[2589]: I0912 10:12:13.391324 2589 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-host-proc-sys-kernel\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.398109 kubelet[2589]: I0912 10:12:13.387256 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 10:12:13.398109 kubelet[2589]: I0912 10:12:13.394047 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 10:12:13.398109 kubelet[2589]: I0912 10:12:13.396651 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-hostproc" (OuterVolumeSpecName: "hostproc") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 10:12:13.398109 kubelet[2589]: I0912 10:12:13.397256 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 10:12:13.398109 kubelet[2589]: I0912 10:12:13.397311 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cni-path" (OuterVolumeSpecName: "cni-path") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 10:12:13.398593 kubelet[2589]: I0912 10:12:13.397328 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 10:12:13.398593 kubelet[2589]: I0912 10:12:13.397350 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 10:12:13.398593 kubelet[2589]: I0912 10:12:13.397770 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 10:12:13.398593 kubelet[2589]: I0912 10:12:13.398060 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 10:12:13.402589 kubelet[2589]: I0912 10:12:13.401861 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 10:12:13.402589 kubelet[2589]: I0912 10:12:13.402401 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e211083a-c916-4471-83c0-5d3ed42c2873-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 12 10:12:13.402589 kubelet[2589]: I0912 10:12:13.402506 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e211083a-c916-4471-83c0-5d3ed42c2873-kube-api-access-p5z7d" (OuterVolumeSpecName: "kube-api-access-p5z7d") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "kube-api-access-p5z7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 10:12:13.404899 kubelet[2589]: I0912 10:12:13.404804 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79ca656-115f-4339-9dba-f6a7e6ae5ae4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f79ca656-115f-4339-9dba-f6a7e6ae5ae4" (UID: "f79ca656-115f-4339-9dba-f6a7e6ae5ae4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 10:12:13.407850 kubelet[2589]: I0912 10:12:13.407773 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e211083a-c916-4471-83c0-5d3ed42c2873-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e211083a-c916-4471-83c0-5d3ed42c2873" (UID: "e211083a-c916-4471-83c0-5d3ed42c2873"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 10:12:13.409195 kubelet[2589]: I0912 10:12:13.409108 2589 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79ca656-115f-4339-9dba-f6a7e6ae5ae4-kube-api-access-vhlnr" (OuterVolumeSpecName: "kube-api-access-vhlnr") pod "f79ca656-115f-4339-9dba-f6a7e6ae5ae4" (UID: "f79ca656-115f-4339-9dba-f6a7e6ae5ae4"). InnerVolumeSpecName "kube-api-access-vhlnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 10:12:13.492223 kubelet[2589]: I0912 10:12:13.491966 2589 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-xtables-lock\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.492223 kubelet[2589]: I0912 10:12:13.492174 2589 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-config-path\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.492800 kubelet[2589]: I0912 10:12:13.492442 2589 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-bpf-maps\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.492800 kubelet[2589]: I0912 10:12:13.492467 2589 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-run\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.492800 kubelet[2589]: I0912 10:12:13.492489 2589 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhlnr\" (UniqueName: \"kubernetes.io/projected/f79ca656-115f-4339-9dba-f6a7e6ae5ae4-kube-api-access-vhlnr\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.492800 kubelet[2589]: I0912 10:12:13.492502 2589 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-host-proc-sys-net\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.492800 kubelet[2589]: I0912 10:12:13.492518 2589 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5z7d\" (UniqueName: \"kubernetes.io/projected/e211083a-c916-4471-83c0-5d3ed42c2873-kube-api-access-p5z7d\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.492800 kubelet[2589]: I0912 10:12:13.492531 2589 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cilium-cgroup\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.492800 kubelet[2589]: I0912 10:12:13.492545 2589 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f79ca656-115f-4339-9dba-f6a7e6ae5ae4-cilium-config-path\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.492800 kubelet[2589]: I0912 10:12:13.492560 2589 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e211083a-c916-4471-83c0-5d3ed42c2873-hubble-tls\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.493208 kubelet[2589]: I0912 10:12:13.492574 2589 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-hostproc\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.493208 kubelet[2589]: I0912 10:12:13.492588 2589 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-lib-modules\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.493208 kubelet[2589]: I0912 10:12:13.492604 2589 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e211083a-c916-4471-83c0-5d3ed42c2873-clustermesh-secrets\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.493208 kubelet[2589]: I0912 10:12:13.492616 2589 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-cni-path\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.493208 kubelet[2589]: I0912 10:12:13.492629 2589 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e211083a-c916-4471-83c0-5d3ed42c2873-etc-cni-netd\") on node \"ci-4230.2.2-n-d7464eacd8\" DevicePath \"\""
Sep 12 10:12:13.666288 kubelet[2589]: I0912 10:12:13.666244 2589 scope.go:117] "RemoveContainer" containerID="b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4"
Sep 12 10:12:13.681853 systemd[1]: Removed slice kubepods-burstable-pode211083a_c916_4471_83c0_5d3ed42c2873.slice - libcontainer container kubepods-burstable-pode211083a_c916_4471_83c0_5d3ed42c2873.slice.
Sep 12 10:12:13.684345 containerd[1481]: time="2025-09-12T10:12:13.681839519Z" level=info msg="RemoveContainer for \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\""
Sep 12 10:12:13.682058 systemd[1]: kubepods-burstable-pode211083a_c916_4471_83c0_5d3ed42c2873.slice: Consumed 9.836s CPU time, 191M memory peak, 68.2M read from disk, 13.3M written to disk.
Sep 12 10:12:13.688411 containerd[1481]: time="2025-09-12T10:12:13.686717258Z" level=info msg="RemoveContainer for \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\" returns successfully"
Sep 12 10:12:13.687297 systemd[1]: Removed slice kubepods-besteffort-podf79ca656_115f_4339_9dba_f6a7e6ae5ae4.slice - libcontainer container kubepods-besteffort-podf79ca656_115f_4339_9dba_f6a7e6ae5ae4.slice.
Sep 12 10:12:13.688720 kubelet[2589]: I0912 10:12:13.687151 2589 scope.go:117] "RemoveContainer" containerID="68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a"
Sep 12 10:12:13.691589 containerd[1481]: time="2025-09-12T10:12:13.690909350Z" level=info msg="RemoveContainer for \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\""
Sep 12 10:12:13.700267 containerd[1481]: time="2025-09-12T10:12:13.700145635Z" level=info msg="RemoveContainer for \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\" returns successfully"
Sep 12 10:12:13.700879 kubelet[2589]: I0912 10:12:13.700632 2589 scope.go:117] "RemoveContainer" containerID="6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00"
Sep 12 10:12:13.704074 containerd[1481]: time="2025-09-12T10:12:13.703483591Z" level=info msg="RemoveContainer for \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\""
Sep 12 10:12:13.712496 containerd[1481]: time="2025-09-12T10:12:13.712314848Z" level=info msg="RemoveContainer for \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\" returns successfully"
Sep 12 10:12:13.713442 kubelet[2589]: I0912 10:12:13.713405 2589 scope.go:117] "RemoveContainer" containerID="598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047"
Sep 12 10:12:13.715741 containerd[1481]: time="2025-09-12T10:12:13.715597695Z" level=info msg="RemoveContainer for \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\""
Sep 12 10:12:13.719801 containerd[1481]: time="2025-09-12T10:12:13.719706015Z" level=info msg="RemoveContainer for \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\" returns successfully"
Sep 12 10:12:13.720068 kubelet[2589]: I0912 10:12:13.720039 2589 scope.go:117] "RemoveContainer" containerID="c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28"
Sep 12 10:12:13.721699 containerd[1481]: time="2025-09-12T10:12:13.721554276Z" level=info msg="RemoveContainer for \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\""
Sep 12 10:12:13.729083 containerd[1481]: time="2025-09-12T10:12:13.728123000Z" level=info msg="RemoveContainer for \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\" returns successfully"
Sep 12 10:12:13.729662 kubelet[2589]: I0912 10:12:13.729388 2589 scope.go:117] "RemoveContainer" containerID="b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4"
Sep 12 10:12:13.731043 containerd[1481]: time="2025-09-12T10:12:13.730983112Z" level=error msg="ContainerStatus for \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\": not found"
Sep 12 10:12:13.731613 kubelet[2589]: E0912 10:12:13.731545 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\": not found" containerID="b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4"
Sep 12 10:12:13.738648 kubelet[2589]: I0912 10:12:13.731617 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4"} err="failed to get container status \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b394ba4d892cea4d420d5acb6351ae621a411977fec8670e04f059765d2d92c4\": not found"
Sep 12 10:12:13.738648 kubelet[2589]: I0912 10:12:13.737999 2589 scope.go:117] "RemoveContainer" containerID="68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a"
Sep 12 10:12:13.738844 containerd[1481]: time="2025-09-12T10:12:13.738427921Z" level=error msg="ContainerStatus for \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\": not found"
Sep 12 10:12:13.738882 kubelet[2589]: E0912 10:12:13.738671 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\": not found" containerID="68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a"
Sep 12 10:12:13.738882 kubelet[2589]: I0912 10:12:13.738710 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a"} err="failed to get container status \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\": rpc error: code = NotFound desc = an error occurred when try to find container \"68ffb0f406f3be4463dc84a7ad6d61d2a113dfdbfd026f5fa9b861558a97a75a\": not found"
Sep 12 10:12:13.738882 kubelet[2589]: I0912 10:12:13.738737 2589 scope.go:117] "RemoveContainer" containerID="6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00"
Sep 12 10:12:13.740650 containerd[1481]: time="2025-09-12T10:12:13.738999817Z" level=error msg="ContainerStatus for \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\": not found"
Sep 12 10:12:13.740650 containerd[1481]: time="2025-09-12T10:12:13.739395868Z" level=error msg="ContainerStatus for \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\": not found"
Sep 12 10:12:13.740650 containerd[1481]: time="2025-09-12T10:12:13.739836769Z" level=error msg="ContainerStatus for \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\": not found"
Sep 12 10:12:13.740865 kubelet[2589]: E0912 10:12:13.739157 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\": not found" containerID="6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00"
Sep 12 10:12:13.740865 kubelet[2589]: I0912 10:12:13.739184 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00"} err="failed to get container status \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a958c244c09596ef776afbe6544ba42ced64d01087852962f212f6626583f00\": not found"
Sep 12 10:12:13.740865 kubelet[2589]: I0912 10:12:13.739208 2589 scope.go:117] "RemoveContainer" containerID="598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047"
Sep 12 10:12:13.740865 kubelet[2589]: E0912 10:12:13.739590 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\": not found" containerID="598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047"
Sep 12 10:12:13.740865 kubelet[2589]: I0912 10:12:13.739623 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047"} err="failed to get container status \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\": rpc error: code = NotFound desc = an error occurred when try to find container \"598afcc53bf7f6fa55db46ad34d2c71f16c8c8b8710125a624f5ee192fee0047\": not found"
Sep 12 10:12:13.740865 kubelet[2589]: I0912 10:12:13.739643 2589 scope.go:117] "RemoveContainer" containerID="c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28"
Sep 12 10:12:13.741218 kubelet[2589]: E0912 10:12:13.741179 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\": not found" containerID="c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28"
Sep 12 10:12:13.741661 kubelet[2589]: I0912 10:12:13.741617 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28"} err="failed to get container status \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\": rpc error: code = NotFound desc = an error occurred when try to find container \"c99f482627b882ed0a2c4b078b21ef0d295aee49b8d33a657d5302ff9dc3cd28\": not found"
Sep 12 10:12:13.741756 kubelet[2589]: I0912 10:12:13.741745 2589 scope.go:117] "RemoveContainer" containerID="359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183"
Sep 12 10:12:13.745327 containerd[1481]: time="2025-09-12T10:12:13.745277228Z" level=info msg="RemoveContainer for \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\""
Sep 12 10:12:13.753084 containerd[1481]: time="2025-09-12T10:12:13.753029414Z" level=info msg="RemoveContainer for \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\" returns successfully"
Sep 12 10:12:13.753544 kubelet[2589]: I0912 10:12:13.753505 2589 scope.go:117] "RemoveContainer" containerID="359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183"
Sep 12 10:12:13.754645 containerd[1481]: time="2025-09-12T10:12:13.754580208Z" level=error msg="ContainerStatus for \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\": not found"
Sep 12 10:12:13.754922 kubelet[2589]: E0912 10:12:13.754890 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\": not found" containerID="359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183"
Sep 12 10:12:13.755020 kubelet[2589]: I0912 10:12:13.754951 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183"} err="failed to get container status \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\": rpc error: code = NotFound desc = an error occurred when try to find container \"359015dcbe162b0d08aeebf9dfac9f98e2f5131ee6c441d51c923ab9a1c90183\": not found"
Sep 12 10:12:13.993253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fd29da8f37f753c69a84e74d556379b07d0a654be8f77e1fa3c77a8ca5caa4b-rootfs.mount: Deactivated successfully.
Sep 12 10:12:13.993476 systemd[1]: var-lib-kubelet-pods-f79ca656\x2d115f\x2d4339\x2d9dba\x2df6a7e6ae5ae4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvhlnr.mount: Deactivated successfully.
Sep 12 10:12:13.993637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73537eb104deb6f7a2afc94d744cbb80076eb222746bed926a43b56ee521352a-rootfs.mount: Deactivated successfully.
Sep 12 10:12:13.993778 systemd[1]: var-lib-kubelet-pods-e211083a\x2dc916\x2d4471\x2d83c0\x2d5d3ed42c2873-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp5z7d.mount: Deactivated successfully.
Sep 12 10:12:13.993946 systemd[1]: var-lib-kubelet-pods-e211083a\x2dc916\x2d4471\x2d83c0\x2d5d3ed42c2873-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 12 10:12:13.994089 systemd[1]: var-lib-kubelet-pods-e211083a\x2dc916\x2d4471\x2d83c0\x2d5d3ed42c2873-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 12 10:12:14.698855 sshd[4244]: Connection closed by 139.178.68.195 port 57198
Sep 12 10:12:14.700447 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
Sep 12 10:12:14.713967 systemd[1]: sshd@24-64.23.164.42:22-139.178.68.195:57198.service: Deactivated successfully.
Sep 12 10:12:14.719906 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 10:12:14.724706 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit.
Sep 12 10:12:14.733416 systemd[1]: Started sshd@25-64.23.164.42:22-139.178.68.195:57214.service - OpenSSH per-connection server daemon (139.178.68.195:57214).
Sep 12 10:12:14.740267 systemd-logind[1463]: Removed session 25.
Sep 12 10:12:14.818407 sshd[4406]: Accepted publickey for core from 139.178.68.195 port 57214 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:12:14.820586 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:12:14.828525 systemd-logind[1463]: New session 26 of user core.
Sep 12 10:12:14.837219 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 10:12:15.183431 kubelet[2589]: E0912 10:12:15.183356 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:15.186511 kubelet[2589]: I0912 10:12:15.186469 2589 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e211083a-c916-4471-83c0-5d3ed42c2873" path="/var/lib/kubelet/pods/e211083a-c916-4471-83c0-5d3ed42c2873/volumes"
Sep 12 10:12:15.188960 kubelet[2589]: I0912 10:12:15.188226 2589 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79ca656-115f-4339-9dba-f6a7e6ae5ae4" path="/var/lib/kubelet/pods/f79ca656-115f-4339-9dba-f6a7e6ae5ae4/volumes"
Sep 12 10:12:15.321964 kubelet[2589]: E0912 10:12:15.321259 2589 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 10:12:15.554074 sshd[4409]: Connection closed by 139.178.68.195 port 57214
Sep 12 10:12:15.558965 sshd-session[4406]: pam_unix(sshd:session): session closed for user core
Sep 12 10:12:15.575179 systemd[1]: sshd@25-64.23.164.42:22-139.178.68.195:57214.service: Deactivated successfully.
Sep 12 10:12:15.584053 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 10:12:15.591600 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit.
Sep 12 10:12:15.608484 systemd[1]: Started sshd@26-64.23.164.42:22-139.178.68.195:57228.service - OpenSSH per-connection server daemon (139.178.68.195:57228).
Sep 12 10:12:15.613133 systemd-logind[1463]: Removed session 26.
Sep 12 10:12:15.673619 kubelet[2589]: E0912 10:12:15.673096 2589 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f79ca656-115f-4339-9dba-f6a7e6ae5ae4" containerName="cilium-operator"
Sep 12 10:12:15.673619 kubelet[2589]: E0912 10:12:15.673156 2589 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e211083a-c916-4471-83c0-5d3ed42c2873" containerName="mount-bpf-fs"
Sep 12 10:12:15.673619 kubelet[2589]: E0912 10:12:15.673169 2589 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e211083a-c916-4471-83c0-5d3ed42c2873" containerName="clean-cilium-state"
Sep 12 10:12:15.673619 kubelet[2589]: E0912 10:12:15.673180 2589 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e211083a-c916-4471-83c0-5d3ed42c2873" containerName="cilium-agent"
Sep 12 10:12:15.673619 kubelet[2589]: E0912 10:12:15.673190 2589 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e211083a-c916-4471-83c0-5d3ed42c2873" containerName="mount-cgroup"
Sep 12 10:12:15.673619 kubelet[2589]: E0912 10:12:15.673199 2589 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e211083a-c916-4471-83c0-5d3ed42c2873" containerName="apply-sysctl-overwrites"
Sep 12 10:12:15.673619 kubelet[2589]: I0912 10:12:15.673240 2589 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79ca656-115f-4339-9dba-f6a7e6ae5ae4" containerName="cilium-operator"
Sep 12 10:12:15.673619 kubelet[2589]: I0912 10:12:15.673250 2589 memory_manager.go:354] "RemoveStaleState removing state" podUID="e211083a-c916-4471-83c0-5d3ed42c2873" containerName="cilium-agent"
Sep 12 10:12:15.689032 kubelet[2589]: W0912 10:12:15.688997 2589 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230.2.2-n-d7464eacd8" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-n-d7464eacd8' and this object
Sep 12 10:12:15.689637 kubelet[2589]: E0912 10:12:15.689590 2589 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230.2.2-n-d7464eacd8\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-n-d7464eacd8' and this object" logger="UnhandledError"
Sep 12 10:12:15.689946 kubelet[2589]: W0912 10:12:15.689914 2589 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230.2.2-n-d7464eacd8" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-n-d7464eacd8' and this object
Sep 12 10:12:15.690049 kubelet[2589]: E0912 10:12:15.690028 2589 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230.2.2-n-d7464eacd8\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-n-d7464eacd8' and this object" logger="UnhandledError"
Sep 12 10:12:15.690315 kubelet[2589]: W0912 10:12:15.690300 2589 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230.2.2-n-d7464eacd8" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-n-d7464eacd8' and this object
Sep 12 10:12:15.690428 kubelet[2589]: E0912 10:12:15.690409 2589 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230.2.2-n-d7464eacd8\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-n-d7464eacd8' and this object" logger="UnhandledError"
Sep 12 10:12:15.691716 kubelet[2589]: W0912 10:12:15.690744 2589 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4230.2.2-n-d7464eacd8" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-n-d7464eacd8' and this object
Sep 12 10:12:15.691848 kubelet[2589]: E0912 10:12:15.691828 2589 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230.2.2-n-d7464eacd8\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-n-d7464eacd8' and this object" logger="UnhandledError"
Sep 12 10:12:15.693525 systemd[1]: Created slice kubepods-burstable-podb2d5d239_487f_4d02_a7a0_25a0e2225fe0.slice - libcontainer container kubepods-burstable-podb2d5d239_487f_4d02_a7a0_25a0e2225fe0.slice.
Sep 12 10:12:15.707956 kubelet[2589]: I0912 10:12:15.707261 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-cilium-ipsec-secrets\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.707956 kubelet[2589]: I0912 10:12:15.707301 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-bpf-maps\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.707956 kubelet[2589]: I0912 10:12:15.707319 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-clustermesh-secrets\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.707956 kubelet[2589]: I0912 10:12:15.707335 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-etc-cni-netd\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.707956 kubelet[2589]: I0912 10:12:15.707349 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-xtables-lock\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.707956 kubelet[2589]: I0912 10:12:15.707362 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-cni-path\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.708259 kubelet[2589]: I0912 10:12:15.707376 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-hubble-tls\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.708259 kubelet[2589]: I0912 10:12:15.707391 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-cilium-config-path\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.708259 kubelet[2589]: I0912 10:12:15.707464 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-hostproc\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.708259 kubelet[2589]: I0912 10:12:15.707493 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-host-proc-sys-kernel\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.708259 kubelet[2589]: I0912 10:12:15.707510 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-lib-modules\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.708259 kubelet[2589]: I0912 10:12:15.707525 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-host-proc-sys-net\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.708411 kubelet[2589]: I0912 10:12:15.707540 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j77q\" (UniqueName: \"kubernetes.io/projected/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-kube-api-access-4j77q\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.708411 kubelet[2589]: I0912 10:12:15.707559 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-cilium-run\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.708411 kubelet[2589]: I0912 10:12:15.707573 2589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-cilium-cgroup\") pod \"cilium-5nhnh\" (UID: \"b2d5d239-487f-4d02-a7a0-25a0e2225fe0\") " pod="kube-system/cilium-5nhnh"
Sep 12 10:12:15.734966 sshd[4418]: Accepted publickey for core from 139.178.68.195 port 57228 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:12:15.738870 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:12:15.748572 systemd-logind[1463]: New session 27 of user core.
Sep 12 10:12:15.758282 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 10:12:15.830989 sshd[4421]: Connection closed by 139.178.68.195 port 57228
Sep 12 10:12:15.830783 sshd-session[4418]: pam_unix(sshd:session): session closed for user core
Sep 12 10:12:15.845865 systemd[1]: sshd@26-64.23.164.42:22-139.178.68.195:57228.service: Deactivated successfully.
Sep 12 10:12:15.848855 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 10:12:15.853049 systemd-logind[1463]: Session 27 logged out. Waiting for processes to exit.
Sep 12 10:12:15.861487 systemd[1]: Started sshd@27-64.23.164.42:22-139.178.68.195:57244.service - OpenSSH per-connection server daemon (139.178.68.195:57244).
Sep 12 10:12:15.864536 systemd-logind[1463]: Removed session 27.
Sep 12 10:12:15.924625 sshd[4428]: Accepted publickey for core from 139.178.68.195 port 57244 ssh2: RSA SHA256:2VqWZqk4hMH9H5AhbP/0AQtkzByPETmNCvQEl/0/v6I
Sep 12 10:12:15.926656 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:12:15.932997 systemd-logind[1463]: New session 28 of user core.
Sep 12 10:12:15.938230 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 10:12:16.809134 kubelet[2589]: E0912 10:12:16.809057 2589 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Sep 12 10:12:16.809616 kubelet[2589]: E0912 10:12:16.809202 2589 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-cilium-config-path podName:b2d5d239-487f-4d02-a7a0-25a0e2225fe0 nodeName:}" failed. No retries permitted until 2025-09-12 10:12:17.309174464 +0000 UTC m=+102.318429343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-cilium-config-path") pod "cilium-5nhnh" (UID: "b2d5d239-487f-4d02-a7a0-25a0e2225fe0") : failed to sync configmap cache: timed out waiting for the condition
Sep 12 10:12:16.809616 kubelet[2589]: E0912 10:12:16.809086 2589 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Sep 12 10:12:16.809616 kubelet[2589]: E0912 10:12:16.809264 2589 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-cilium-ipsec-secrets podName:b2d5d239-487f-4d02-a7a0-25a0e2225fe0 nodeName:}" failed. No retries permitted until 2025-09-12 10:12:17.309252358 +0000 UTC m=+102.318507218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-cilium-ipsec-secrets") pod "cilium-5nhnh" (UID: "b2d5d239-487f-4d02-a7a0-25a0e2225fe0") : failed to sync secret cache: timed out waiting for the condition
Sep 12 10:12:16.810317 kubelet[2589]: E0912 10:12:16.810200 2589 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Sep 12 10:12:16.810317 kubelet[2589]: E0912 10:12:16.810286 2589 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-clustermesh-secrets podName:b2d5d239-487f-4d02-a7a0-25a0e2225fe0 nodeName:}" failed. No retries permitted until 2025-09-12 10:12:17.310269035 +0000 UTC m=+102.319523893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-clustermesh-secrets") pod "cilium-5nhnh" (UID: "b2d5d239-487f-4d02-a7a0-25a0e2225fe0") : failed to sync secret cache: timed out waiting for the condition
Sep 12 10:12:16.810590 kubelet[2589]: E0912 10:12:16.810512 2589 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Sep 12 10:12:16.810590 kubelet[2589]: E0912 10:12:16.810536 2589 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-5nhnh: failed to sync secret cache: timed out waiting for the condition
Sep 12 10:12:16.810857 kubelet[2589]: E0912 10:12:16.810709 2589 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-hubble-tls podName:b2d5d239-487f-4d02-a7a0-25a0e2225fe0 nodeName:}" failed. No retries permitted until 2025-09-12 10:12:17.310688085 +0000 UTC m=+102.319942942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/b2d5d239-487f-4d02-a7a0-25a0e2225fe0-hubble-tls") pod "cilium-5nhnh" (UID: "b2d5d239-487f-4d02-a7a0-25a0e2225fe0") : failed to sync secret cache: timed out waiting for the condition
Sep 12 10:12:16.941265 kubelet[2589]: I0912 10:12:16.941177 2589 setters.go:600] "Node became not ready" node="ci-4230.2.2-n-d7464eacd8" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T10:12:16Z","lastTransitionTime":"2025-09-12T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 10:12:17.504127 kubelet[2589]: E0912 10:12:17.504079 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:17.505394 containerd[1481]: time="2025-09-12T10:12:17.504961690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5nhnh,Uid:b2d5d239-487f-4d02-a7a0-25a0e2225fe0,Namespace:kube-system,Attempt:0,}"
Sep 12 10:12:17.532803 containerd[1481]: time="2025-09-12T10:12:17.532577687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:12:17.532803 containerd[1481]: time="2025-09-12T10:12:17.532661145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:12:17.532803 containerd[1481]: time="2025-09-12T10:12:17.532674515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:12:17.534239 containerd[1481]: time="2025-09-12T10:12:17.534061721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:12:17.559255 systemd[1]: Started cri-containerd-11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b.scope - libcontainer container 11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b.
Sep 12 10:12:17.593131 containerd[1481]: time="2025-09-12T10:12:17.593083304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5nhnh,Uid:b2d5d239-487f-4d02-a7a0-25a0e2225fe0,Namespace:kube-system,Attempt:0,} returns sandbox id \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\""
Sep 12 10:12:17.597946 kubelet[2589]: E0912 10:12:17.597900 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:17.600425 containerd[1481]: time="2025-09-12T10:12:17.600376192Z" level=info msg="CreateContainer within sandbox \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 10:12:17.614252 containerd[1481]: time="2025-09-12T10:12:17.614190961Z" level=info msg="CreateContainer within sandbox \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"411c6f490015d82d041d18b46085216f116b98bd75e5305d87c5b006fe03bd67\""
Sep 12 10:12:17.615438 containerd[1481]: time="2025-09-12T10:12:17.615269543Z" level=info msg="StartContainer for \"411c6f490015d82d041d18b46085216f116b98bd75e5305d87c5b006fe03bd67\""
Sep 12 10:12:17.655234 systemd[1]: Started cri-containerd-411c6f490015d82d041d18b46085216f116b98bd75e5305d87c5b006fe03bd67.scope - libcontainer container 411c6f490015d82d041d18b46085216f116b98bd75e5305d87c5b006fe03bd67.
Sep 12 10:12:17.697785 containerd[1481]: time="2025-09-12T10:12:17.697645878Z" level=info msg="StartContainer for \"411c6f490015d82d041d18b46085216f116b98bd75e5305d87c5b006fe03bd67\" returns successfully"
Sep 12 10:12:17.720285 systemd[1]: cri-containerd-411c6f490015d82d041d18b46085216f116b98bd75e5305d87c5b006fe03bd67.scope: Deactivated successfully.
Sep 12 10:12:17.721453 systemd[1]: cri-containerd-411c6f490015d82d041d18b46085216f116b98bd75e5305d87c5b006fe03bd67.scope: Consumed 32ms CPU time, 9.7M memory peak, 3.2M read from disk.
Sep 12 10:12:17.763785 containerd[1481]: time="2025-09-12T10:12:17.763584017Z" level=info msg="shim disconnected" id=411c6f490015d82d041d18b46085216f116b98bd75e5305d87c5b006fe03bd67 namespace=k8s.io
Sep 12 10:12:17.763785 containerd[1481]: time="2025-09-12T10:12:17.763700405Z" level=warning msg="cleaning up after shim disconnected" id=411c6f490015d82d041d18b46085216f116b98bd75e5305d87c5b006fe03bd67 namespace=k8s.io
Sep 12 10:12:17.763785 containerd[1481]: time="2025-09-12T10:12:17.763714451Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:12:17.780770 containerd[1481]: time="2025-09-12T10:12:17.780640437Z" level=warning msg="cleanup warnings time=\"2025-09-12T10:12:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 10:12:18.697132 kubelet[2589]: E0912 10:12:18.695901 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:18.701229 containerd[1481]: time="2025-09-12T10:12:18.701181629Z" level=info msg="CreateContainer within sandbox \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 10:12:18.727790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2488628363.mount: Deactivated successfully.
Sep 12 10:12:18.731754 containerd[1481]: time="2025-09-12T10:12:18.730999346Z" level=info msg="CreateContainer within sandbox \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c\""
Sep 12 10:12:18.732163 containerd[1481]: time="2025-09-12T10:12:18.732124854Z" level=info msg="StartContainer for \"c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c\""
Sep 12 10:12:18.785306 systemd[1]: Started cri-containerd-c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c.scope - libcontainer container c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c.
Sep 12 10:12:18.824487 containerd[1481]: time="2025-09-12T10:12:18.824410168Z" level=info msg="StartContainer for \"c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c\" returns successfully"
Sep 12 10:12:18.836587 systemd[1]: cri-containerd-c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c.scope: Deactivated successfully.
Sep 12 10:12:18.837288 systemd[1]: cri-containerd-c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c.scope: Consumed 25ms CPU time, 7.4M memory peak, 2.2M read from disk.
Sep 12 10:12:18.881762 containerd[1481]: time="2025-09-12T10:12:18.881698910Z" level=info msg="shim disconnected" id=c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c namespace=k8s.io
Sep 12 10:12:18.882328 containerd[1481]: time="2025-09-12T10:12:18.882070577Z" level=warning msg="cleaning up after shim disconnected" id=c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c namespace=k8s.io
Sep 12 10:12:18.882328 containerd[1481]: time="2025-09-12T10:12:18.882088112Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:12:19.333561 systemd[1]: run-containerd-runc-k8s.io-c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c-runc.6Kr7nh.mount: Deactivated successfully.
Sep 12 10:12:19.333699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6e01d7ad46ae52fb5f57decbfeb78e1fffdf76342448eabfe0aebee1abf369c-rootfs.mount: Deactivated successfully.
Sep 12 10:12:19.701191 kubelet[2589]: E0912 10:12:19.700741 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:19.708546 containerd[1481]: time="2025-09-12T10:12:19.708479106Z" level=info msg="CreateContainer within sandbox \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 10:12:19.730313 containerd[1481]: time="2025-09-12T10:12:19.730237369Z" level=info msg="CreateContainer within sandbox \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2175060f6715bd7bb62f2ea98ee3cc2b9c2e95f6e1d83494da95be2f5d0aab5c\""
Sep 12 10:12:19.732828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459689753.mount: Deactivated successfully.
Sep 12 10:12:19.733746 containerd[1481]: time="2025-09-12T10:12:19.733266430Z" level=info msg="StartContainer for \"2175060f6715bd7bb62f2ea98ee3cc2b9c2e95f6e1d83494da95be2f5d0aab5c\""
Sep 12 10:12:19.793229 systemd[1]: Started cri-containerd-2175060f6715bd7bb62f2ea98ee3cc2b9c2e95f6e1d83494da95be2f5d0aab5c.scope - libcontainer container 2175060f6715bd7bb62f2ea98ee3cc2b9c2e95f6e1d83494da95be2f5d0aab5c.
Sep 12 10:12:19.840560 containerd[1481]: time="2025-09-12T10:12:19.840495612Z" level=info msg="StartContainer for \"2175060f6715bd7bb62f2ea98ee3cc2b9c2e95f6e1d83494da95be2f5d0aab5c\" returns successfully"
Sep 12 10:12:19.848240 systemd[1]: cri-containerd-2175060f6715bd7bb62f2ea98ee3cc2b9c2e95f6e1d83494da95be2f5d0aab5c.scope: Deactivated successfully.
Sep 12 10:12:19.879855 containerd[1481]: time="2025-09-12T10:12:19.879787726Z" level=info msg="shim disconnected" id=2175060f6715bd7bb62f2ea98ee3cc2b9c2e95f6e1d83494da95be2f5d0aab5c namespace=k8s.io
Sep 12 10:12:19.879855 containerd[1481]: time="2025-09-12T10:12:19.879843176Z" level=warning msg="cleaning up after shim disconnected" id=2175060f6715bd7bb62f2ea98ee3cc2b9c2e95f6e1d83494da95be2f5d0aab5c namespace=k8s.io
Sep 12 10:12:19.879855 containerd[1481]: time="2025-09-12T10:12:19.879851085Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:12:19.898104 containerd[1481]: time="2025-09-12T10:12:19.898022819Z" level=warning msg="cleanup warnings time=\"2025-09-12T10:12:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 10:12:20.322270 kubelet[2589]: E0912 10:12:20.322165 2589 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 10:12:20.331600 systemd[1]: run-containerd-runc-k8s.io-2175060f6715bd7bb62f2ea98ee3cc2b9c2e95f6e1d83494da95be2f5d0aab5c-runc.OtkEyd.mount: Deactivated successfully.
Sep 12 10:12:20.331731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2175060f6715bd7bb62f2ea98ee3cc2b9c2e95f6e1d83494da95be2f5d0aab5c-rootfs.mount: Deactivated successfully.
Sep 12 10:12:20.706244 kubelet[2589]: E0912 10:12:20.706187 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:20.710817 containerd[1481]: time="2025-09-12T10:12:20.710016080Z" level=info msg="CreateContainer within sandbox \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 10:12:20.730844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2720131755.mount: Deactivated successfully.
Sep 12 10:12:20.739924 containerd[1481]: time="2025-09-12T10:12:20.739855380Z" level=info msg="CreateContainer within sandbox \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4e64bd5520ddc9b351907bd3a099372d547a7ecde42cd99fe6580c06f9076e38\""
Sep 12 10:12:20.741642 containerd[1481]: time="2025-09-12T10:12:20.740746258Z" level=info msg="StartContainer for \"4e64bd5520ddc9b351907bd3a099372d547a7ecde42cd99fe6580c06f9076e38\""
Sep 12 10:12:20.791480 systemd[1]: Started cri-containerd-4e64bd5520ddc9b351907bd3a099372d547a7ecde42cd99fe6580c06f9076e38.scope - libcontainer container 4e64bd5520ddc9b351907bd3a099372d547a7ecde42cd99fe6580c06f9076e38.
Sep 12 10:12:20.826727 systemd[1]: cri-containerd-4e64bd5520ddc9b351907bd3a099372d547a7ecde42cd99fe6580c06f9076e38.scope: Deactivated successfully.
Sep 12 10:12:20.827965 containerd[1481]: time="2025-09-12T10:12:20.827497336Z" level=info msg="StartContainer for \"4e64bd5520ddc9b351907bd3a099372d547a7ecde42cd99fe6580c06f9076e38\" returns successfully"
Sep 12 10:12:20.856179 containerd[1481]: time="2025-09-12T10:12:20.856094608Z" level=info msg="shim disconnected" id=4e64bd5520ddc9b351907bd3a099372d547a7ecde42cd99fe6580c06f9076e38 namespace=k8s.io
Sep 12 10:12:20.856679 containerd[1481]: time="2025-09-12T10:12:20.856559493Z" level=warning msg="cleaning up after shim disconnected" id=4e64bd5520ddc9b351907bd3a099372d547a7ecde42cd99fe6580c06f9076e38 namespace=k8s.io
Sep 12 10:12:20.856679 containerd[1481]: time="2025-09-12T10:12:20.856618126Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:12:21.332118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e64bd5520ddc9b351907bd3a099372d547a7ecde42cd99fe6580c06f9076e38-rootfs.mount: Deactivated successfully.
Sep 12 10:12:21.713423 kubelet[2589]: E0912 10:12:21.711292 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:21.717515 containerd[1481]: time="2025-09-12T10:12:21.717457225Z" level=info msg="CreateContainer within sandbox \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 10:12:21.743081 containerd[1481]: time="2025-09-12T10:12:21.742259922Z" level=info msg="CreateContainer within sandbox \"11347f83834f56341c84679c49d1a16fb7cfed4696396191e683badea2b0b58b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ed631d658cf6aea60c27d95cbe588716071019770273cdffd2c7d0a2008ee56e\""
Sep 12 10:12:21.743397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714020127.mount: Deactivated successfully.
Sep 12 10:12:21.747070 containerd[1481]: time="2025-09-12T10:12:21.745307538Z" level=info msg="StartContainer for \"ed631d658cf6aea60c27d95cbe588716071019770273cdffd2c7d0a2008ee56e\""
Sep 12 10:12:21.805272 systemd[1]: Started cri-containerd-ed631d658cf6aea60c27d95cbe588716071019770273cdffd2c7d0a2008ee56e.scope - libcontainer container ed631d658cf6aea60c27d95cbe588716071019770273cdffd2c7d0a2008ee56e.
Sep 12 10:12:21.841775 containerd[1481]: time="2025-09-12T10:12:21.841579061Z" level=info msg="StartContainer for \"ed631d658cf6aea60c27d95cbe588716071019770273cdffd2c7d0a2008ee56e\" returns successfully"
Sep 12 10:12:22.465023 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 12 10:12:22.719574 kubelet[2589]: E0912 10:12:22.719230 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:23.720359 kubelet[2589]: E0912 10:12:23.720298 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:24.723765 kubelet[2589]: E0912 10:12:24.723335 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:25.094892 systemd[1]: run-containerd-runc-k8s.io-ed631d658cf6aea60c27d95cbe588716071019770273cdffd2c7d0a2008ee56e-runc.Cwl9pq.mount: Deactivated successfully.
Sep 12 10:12:26.334517 systemd-networkd[1380]: lxc_health: Link UP
Sep 12 10:12:26.334899 systemd-networkd[1380]: lxc_health: Gained carrier
Sep 12 10:12:27.375651 systemd[1]: run-containerd-runc-k8s.io-ed631d658cf6aea60c27d95cbe588716071019770273cdffd2c7d0a2008ee56e-runc.ZFsY0u.mount: Deactivated successfully.
Sep 12 10:12:27.506677 kubelet[2589]: E0912 10:12:27.506112 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:27.536030 kubelet[2589]: I0912 10:12:27.535499 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5nhnh" podStartSLOduration=12.535481265 podStartE2EDuration="12.535481265s" podCreationTimestamp="2025-09-12 10:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:12:22.738794109 +0000 UTC m=+107.748049000" watchObservedRunningTime="2025-09-12 10:12:27.535481265 +0000 UTC m=+112.544736143"
Sep 12 10:12:27.731393 kubelet[2589]: E0912 10:12:27.730956 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:28.057280 systemd-networkd[1380]: lxc_health: Gained IPv6LL
Sep 12 10:12:28.734107 kubelet[2589]: E0912 10:12:28.733629 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 10:12:29.603237 systemd[1]: run-containerd-runc-k8s.io-ed631d658cf6aea60c27d95cbe588716071019770273cdffd2c7d0a2008ee56e-runc.ulEJkV.mount: Deactivated successfully.
Sep 12 10:12:31.804383 systemd[1]: run-containerd-runc-k8s.io-ed631d658cf6aea60c27d95cbe588716071019770273cdffd2c7d0a2008ee56e-runc.7Rer8a.mount: Deactivated successfully.
Sep 12 10:12:31.923592 sshd[4431]: Connection closed by 139.178.68.195 port 57244
Sep 12 10:12:31.924892 sshd-session[4428]: pam_unix(sshd:session): session closed for user core
Sep 12 10:12:31.933231 systemd[1]: sshd@27-64.23.164.42:22-139.178.68.195:57244.service: Deactivated successfully.
Sep 12 10:12:31.936526 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 10:12:31.940494 systemd-logind[1463]: Session 28 logged out. Waiting for processes to exit.
Sep 12 10:12:31.942181 systemd-logind[1463]: Removed session 28.