Sep 12 18:03:37.897162 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:34:39 -00 2025 Sep 12 18:03:37.897198 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858 Sep 12 18:03:37.897208 kernel: BIOS-provided physical RAM map: Sep 12 18:03:37.897215 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 12 18:03:37.897222 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 12 18:03:37.897228 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 12 18:03:37.897236 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Sep 12 18:03:37.897247 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Sep 12 18:03:37.897256 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 12 18:03:37.897263 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 12 18:03:37.897269 kernel: NX (Execute Disable) protection: active Sep 12 18:03:37.897276 kernel: APIC: Static calls initialized Sep 12 18:03:37.897283 kernel: SMBIOS 2.8 present. Sep 12 18:03:37.897289 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Sep 12 18:03:37.897325 kernel: DMI: Memory slots populated: 1/1 Sep 12 18:03:37.897333 kernel: Hypervisor detected: KVM Sep 12 18:03:37.897344 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 18:03:37.897352 kernel: kvm-clock: using sched offset of 4806997974 cycles Sep 12 18:03:37.897360 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 18:03:37.897367 kernel: tsc: Detected 1999.999 MHz processor Sep 12 18:03:37.897375 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 18:03:37.897382 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 18:03:37.897390 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Sep 12 18:03:37.897400 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 12 18:03:37.897407 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 18:03:37.897415 kernel: ACPI: Early table checksum verification disabled Sep 12 18:03:37.897422 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Sep 12 18:03:37.897429 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 18:03:37.897436 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 18:03:37.897443 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 18:03:37.897450 kernel: ACPI: FACS 0x000000007FFE0000 000040 Sep 12 18:03:37.897457 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 18:03:37.897467 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 18:03:37.897474 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 18:03:37.897481 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 
18:03:37.897488 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Sep 12 18:03:37.897495 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Sep 12 18:03:37.897502 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Sep 12 18:03:37.897509 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Sep 12 18:03:37.897516 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Sep 12 18:03:37.897529 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Sep 12 18:03:37.897537 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Sep 12 18:03:37.897544 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 12 18:03:37.897552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 12 18:03:37.897559 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Sep 12 18:03:37.897567 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Sep 12 18:03:37.897577 kernel: Zone ranges: Sep 12 18:03:37.897584 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 18:03:37.897592 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Sep 12 18:03:37.897599 kernel: Normal empty Sep 12 18:03:37.897606 kernel: Device empty Sep 12 18:03:37.897614 kernel: Movable zone start for each node Sep 12 18:03:37.897621 kernel: Early memory node ranges Sep 12 18:03:37.897628 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 12 18:03:37.897636 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Sep 12 18:03:37.897645 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Sep 12 18:03:37.897653 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 18:03:37.897660 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 12 18:03:37.897668 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Sep 12 18:03:37.897675 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 12 18:03:37.897683 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 18:03:37.897694 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 18:03:37.897702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 12 18:03:37.897713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 18:03:37.897723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 18:03:37.897730 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 18:03:37.897741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 18:03:37.897749 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 18:03:37.897756 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 12 18:03:37.897764 kernel: TSC deadline timer available Sep 12 18:03:37.897771 kernel: CPU topo: Max. logical packages: 1 Sep 12 18:03:37.897779 kernel: CPU topo: Max. logical dies: 1 Sep 12 18:03:37.897786 kernel: CPU topo: Max. dies per package: 1 Sep 12 18:03:37.897794 kernel: CPU topo: Max. threads per core: 1 Sep 12 18:03:37.897803 kernel: CPU topo: Num. cores per package: 2 Sep 12 18:03:37.897811 kernel: CPU topo: Num. 
threads per package: 2 Sep 12 18:03:37.897818 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 12 18:03:37.897826 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 12 18:03:37.897833 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Sep 12 18:03:37.897841 kernel: Booting paravirtualized kernel on KVM Sep 12 18:03:37.897849 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 18:03:37.897864 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 12 18:03:37.897872 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 12 18:03:37.897882 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 12 18:03:37.897890 kernel: pcpu-alloc: [0] 0 1 Sep 12 18:03:37.897897 kernel: kvm-guest: PV spinlocks disabled, no host support Sep 12 18:03:37.897907 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858 Sep 12 18:03:37.897915 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 18:03:37.897923 kernel: random: crng init done Sep 12 18:03:37.897930 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 18:03:37.897938 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 18:03:37.897947 kernel: Fallback order for Node 0: 0 Sep 12 18:03:37.897955 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Sep 12 18:03:37.897962 kernel: Policy zone: DMA32 Sep 12 18:03:37.897970 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 18:03:37.897977 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 18:03:37.897985 kernel: Kernel/User page tables isolation: enabled Sep 12 18:03:37.897992 kernel: ftrace: allocating 40125 entries in 157 pages Sep 12 18:03:37.898000 kernel: ftrace: allocated 157 pages with 5 groups Sep 12 18:03:37.898007 kernel: Dynamic Preempt: voluntary Sep 12 18:03:37.898017 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 18:03:37.898026 kernel: rcu: RCU event tracing is enabled. Sep 12 18:03:37.898033 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 18:03:37.898041 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 18:03:37.898049 kernel: Rude variant of Tasks RCU enabled. Sep 12 18:03:37.898056 kernel: Tracing variant of Tasks RCU enabled. Sep 12 18:03:37.898064 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 18:03:37.898071 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 18:03:37.898079 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 18:03:37.898092 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 18:03:37.898100 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 12 18:03:37.898108 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 12 18:03:37.898115 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 18:03:37.898123 kernel: Console: colour VGA+ 80x25 Sep 12 18:03:37.898130 kernel: printk: legacy console [tty0] enabled Sep 12 18:03:37.898138 kernel: printk: legacy console [ttyS0] enabled Sep 12 18:03:37.898146 kernel: ACPI: Core revision 20240827 Sep 12 18:03:37.898154 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 12 18:03:37.898171 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 18:03:37.898179 kernel: x2apic enabled Sep 12 18:03:37.898188 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 18:03:37.898205 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 12 18:03:37.898224 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns Sep 12 18:03:37.898237 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999) Sep 12 18:03:37.898249 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 12 18:03:37.898260 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 12 18:03:37.898273 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 18:03:37.898290 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 18:03:37.898336 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 18:03:37.898346 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Sep 12 18:03:37.898355 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 18:03:37.898363 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 18:03:37.898371 kernel: MDS: Mitigation: Clear CPU buffers Sep 12 18:03:37.898380 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 18:03:37.898391 kernel: active return thunk: its_return_thunk Sep 12 18:03:37.898399 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 12 18:03:37.898407 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 18:03:37.898416 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 18:03:37.898424 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 18:03:37.898432 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 18:03:37.898441 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 12 18:03:37.898449 kernel: Freeing SMP alternatives memory: 32K Sep 12 18:03:37.898457 kernel: pid_max: default: 32768 minimum: 301 Sep 12 18:03:37.898469 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 12 18:03:37.898478 kernel: landlock: Up and running. Sep 12 18:03:37.898492 kernel: SELinux: Initializing. Sep 12 18:03:37.898505 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 18:03:37.898516 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 18:03:37.898527 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Sep 12 18:03:37.898539 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Sep 12 18:03:37.898551 kernel: signal: max sigframe size: 1776 Sep 12 18:03:37.898564 kernel: rcu: Hierarchical SRCU implementation. 
Sep 12 18:03:37.898580 kernel: rcu: Max phase no-delay instances is 400. Sep 12 18:03:37.898592 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 12 18:03:37.898605 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 12 18:03:37.898618 kernel: smp: Bringing up secondary CPUs ... Sep 12 18:03:37.898637 kernel: smpboot: x86: Booting SMP configuration: Sep 12 18:03:37.898652 kernel: .... node #0, CPUs: #1 Sep 12 18:03:37.898666 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 18:03:37.898680 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) Sep 12 18:03:37.898695 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54040K init, 2924K bss, 125148K reserved, 0K cma-reserved) Sep 12 18:03:37.898712 kernel: devtmpfs: initialized Sep 12 18:03:37.898726 kernel: x86/mm: Memory block size: 128MB Sep 12 18:03:37.898740 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 18:03:37.898754 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 18:03:37.898768 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 18:03:37.898782 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 18:03:37.898796 kernel: audit: initializing netlink subsys (disabled) Sep 12 18:03:37.898811 kernel: audit: type=2000 audit(1757700213.452:1): state=initialized audit_enabled=0 res=1 Sep 12 18:03:37.898824 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 18:03:37.898841 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 18:03:37.898855 kernel: cpuidle: using governor menu Sep 12 18:03:37.898868 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 18:03:37.898882 kernel: dca service started, version 1.12.1 Sep 12 18:03:37.898896 kernel: PCI: Using configuration type 1 for base access Sep 12 18:03:37.898910 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 12 18:03:37.898919 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 18:03:37.898927 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 18:03:37.898936 kernel: ACPI: Added _OSI(Module Device) Sep 12 18:03:37.898947 kernel: ACPI: Added _OSI(Processor Device) Sep 12 18:03:37.898955 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 18:03:37.898964 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 18:03:37.898972 kernel: ACPI: Interpreter enabled Sep 12 18:03:37.898980 kernel: ACPI: PM: (supports S0 S5) Sep 12 18:03:37.898989 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 18:03:37.898997 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 18:03:37.899005 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 18:03:37.899014 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 12 18:03:37.899024 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 18:03:37.899247 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 12 18:03:37.899532 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 12 18:03:37.899632 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 12 18:03:37.899644 kernel: acpiphp: Slot [3] registered Sep 12 18:03:37.899652 kernel: acpiphp: Slot [4] registered Sep 12 18:03:37.899661 kernel: acpiphp: Slot [5] registered Sep 12 18:03:37.899673 kernel: acpiphp: Slot [6] registered Sep 12 18:03:37.899681 kernel: acpiphp: Slot [7] registered Sep 12 18:03:37.899690 kernel: acpiphp: Slot [8] registered Sep 12 18:03:37.899698 kernel: acpiphp: Slot [9] registered Sep 12 18:03:37.899706 kernel: acpiphp: Slot [10] registered Sep 12 18:03:37.899715 kernel: acpiphp: Slot [11] registered Sep 12 18:03:37.899723 kernel: acpiphp: Slot [12] registered Sep 12 18:03:37.899731 kernel: acpiphp: Slot [13] registered Sep 12 18:03:37.899740 kernel: acpiphp: Slot [14] registered Sep 12 18:03:37.899748 kernel: acpiphp: Slot [15] registered Sep 12 18:03:37.899759 kernel: acpiphp: Slot [16] registered Sep 12 18:03:37.899767 kernel: acpiphp: Slot [17] registered Sep 12 18:03:37.899775 kernel: acpiphp: Slot [18] registered Sep 12 18:03:37.899783 kernel: acpiphp: Slot [19] registered Sep 12 18:03:37.899791 kernel: acpiphp: Slot [20] registered Sep 12 18:03:37.899799 kernel: acpiphp: Slot [21] registered Sep 12 18:03:37.899807 kernel: acpiphp: Slot [22] registered Sep 12 18:03:37.899816 kernel: acpiphp: Slot [23] registered Sep 12 18:03:37.899824 kernel: acpiphp: Slot [24] registered Sep 12 18:03:37.899834 kernel: acpiphp: Slot [25] registered Sep 12 18:03:37.899842 kernel: acpiphp: Slot [26] registered Sep 12 18:03:37.899851 kernel: acpiphp: Slot [27] registered Sep 12 18:03:37.899858 kernel: acpiphp: Slot [28] registered Sep 12 18:03:37.899867 kernel: acpiphp: Slot [29] registered Sep 12 18:03:37.899875 kernel: acpiphp: Slot [30] registered Sep 12 18:03:37.899883 kernel: acpiphp: Slot [31] registered Sep 12 18:03:37.899892 kernel: PCI host bridge to bus 0000:00 Sep 12 18:03:37.900027 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 18:03:37.900143 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 18:03:37.900262 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 18:03:37.900362 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Sep 12 18:03:37.900445 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Sep 12 18:03:37.900525 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 18:03:37.900659 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Sep 12 18:03:37.900784 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Sep 12 18:03:37.900923 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Sep 12 18:03:37.901080 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Sep 12 18:03:37.901218 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Sep 12 18:03:37.901335 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Sep 12 18:03:37.901428 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Sep 12 18:03:37.901524 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Sep 12 18:03:37.901640 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Sep 12 18:03:37.901731 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Sep 12 18:03:37.901832 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Sep 12 18:03:37.901922 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Sep 12 18:03:37.902011 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Sep 12 18:03:37.902118 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Sep 12 18:03:37.902214 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Sep 12 18:03:37.902323 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Sep 12 18:03:37.902416 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Sep 12 18:03:37.902560 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Sep 12 18:03:37.902664 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 18:03:37.902789 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 12 18:03:37.902883 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Sep 12 18:03:37.902978 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Sep 12 18:03:37.903068 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Sep 12 18:03:37.903174 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 12 18:03:37.903267 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Sep 12 18:03:37.903405 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Sep 12 18:03:37.903535 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Sep 12 18:03:37.903635 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Sep 12 18:03:37.903732 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Sep 12 18:03:37.903823 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Sep 12 18:03:37.903914 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Sep 12 18:03:37.904024 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 12 18:03:37.904116 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Sep 12 18:03:37.904207 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Sep 12 18:03:37.904374 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Sep 12 18:03:37.904511 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 12 18:03:37.904623 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Sep 12 18:03:37.904721 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Sep 12 18:03:37.904812 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Sep 12 18:03:37.904922 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Sep 12 18:03:37.905015 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Sep 12 18:03:37.905111 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Sep 12 18:03:37.905122 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 18:03:37.905131 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 18:03:37.905142 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 18:03:37.905158 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 18:03:37.905169 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 12 18:03:37.905181 kernel: iommu: Default domain type: Translated Sep 12 18:03:37.905194 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 18:03:37.905208 kernel: PCI: Using ACPI for IRQ routing Sep 12 18:03:37.905216 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 18:03:37.905232 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 12 18:03:37.905245 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Sep 12 18:03:37.905408 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Sep 12 18:03:37.905502 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Sep 12 18:03:37.905596 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 18:03:37.905607 kernel: vgaarb: loaded Sep 12 18:03:37.905616 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 12 18:03:37.905629 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 12 18:03:37.905637 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 18:03:37.905646 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 18:03:37.905655 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 18:03:37.905663 kernel: pnp: PnP ACPI init Sep 12 18:03:37.905672 kernel: pnp: PnP ACPI: found 4 devices Sep 12 18:03:37.905680 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 18:03:37.905689 kernel: NET: Registered PF_INET protocol family Sep 12 18:03:37.905698 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 18:03:37.905709 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 12 18:03:37.905717 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 18:03:37.905726 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 18:03:37.905734 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 12 18:03:37.905743 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 12 18:03:37.905752 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 18:03:37.905760 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 18:03:37.905769 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 18:03:37.905778 kernel: NET: Registered PF_XDP protocol family Sep 12 18:03:37.905869 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 18:03:37.905953 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 18:03:37.906065 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 18:03:37.906189 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 12 18:03:37.906279 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Sep 12 18:03:37.908461 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Sep 12 18:03:37.908581 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 12 18:03:37.908643 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 12 18:03:37.908742 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 26587 usecs Sep 12 18:03:37.908754 kernel: PCI: CLS 0 bytes, default 64 Sep 12 18:03:37.908763 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 18:03:37.908772 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns Sep 12 18:03:37.908781 kernel: Initialise system trusted keyrings Sep 12 18:03:37.908791 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 12 18:03:37.908800 kernel: Key type asymmetric registered Sep 12 18:03:37.908808 kernel: Asymmetric key parser 'x509' registered Sep 12 18:03:37.908820 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 18:03:37.908829 kernel: io scheduler mq-deadline registered Sep 12 18:03:37.908838 kernel: io scheduler kyber registered Sep 12 18:03:37.908847 kernel: io scheduler bfq registered Sep 12 18:03:37.908856 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 18:03:37.908864 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Sep 12 18:03:37.908873 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 12 18:03:37.908882 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 12 18:03:37.908890 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 18:03:37.908901 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 18:03:37.908910 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 18:03:37.908919 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 18:03:37.908927 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 18:03:37.909058 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 12 18:03:37.909071 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 18:03:37.909157 kernel: rtc_cmos 00:03: registered as rtc0 Sep 12 18:03:37.909243 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T18:03:37 UTC (1757700217) Sep 12 18:03:37.909501 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 12 18:03:37.909519 kernel: intel_pstate: CPU model not supported Sep 12 18:03:37.909528 kernel: NET: Registered PF_INET6 protocol family Sep 12 18:03:37.909537 kernel: Segment Routing with IPv6 Sep 12 18:03:37.909545 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 18:03:37.909555 kernel: NET: Registered PF_PACKET protocol family Sep 12 18:03:37.909563 kernel: Key type dns_resolver registered Sep 12 18:03:37.909572 kernel: IPI shorthand broadcast: enabled Sep 12 18:03:37.909581 kernel: sched_clock: Marking stable (3921005831, 146162440)->(4094426292, -27258021) Sep 12 18:03:37.909593 kernel: registered taskstats version 1 Sep 12 18:03:37.909602 kernel: Loading compiled-in X.509 certificates Sep 12 18:03:37.909610 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: f1ae8d6e9bfae84d90f4136cf098b0465b2a5bd7' Sep 
12 18:03:37.909619 kernel: Demotion targets for Node 0: null Sep 12 18:03:37.909627 kernel: Key type .fscrypt registered Sep 12 18:03:37.909636 kernel: Key type fscrypt-provisioning registered Sep 12 18:03:37.909663 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 18:03:37.909674 kernel: ima: Allocated hash algorithm: sha1 Sep 12 18:03:37.909683 kernel: ima: No architecture policies found Sep 12 18:03:37.909694 kernel: clk: Disabling unused clocks Sep 12 18:03:37.909702 kernel: Warning: unable to open an initial console. Sep 12 18:03:37.909712 kernel: Freeing unused kernel image (initmem) memory: 54040K Sep 12 18:03:37.909721 kernel: Write protecting the kernel read-only data: 24576k Sep 12 18:03:37.909729 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 12 18:03:37.909738 kernel: Run /init as init process Sep 12 18:03:37.909746 kernel: with arguments: Sep 12 18:03:37.909755 kernel: /init Sep 12 18:03:37.909764 kernel: with environment: Sep 12 18:03:37.909776 kernel: HOME=/ Sep 12 18:03:37.909784 kernel: TERM=linux Sep 12 18:03:37.909793 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 18:03:37.909803 systemd[1]: Successfully made /usr/ read-only. Sep 12 18:03:37.909816 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 18:03:37.909826 systemd[1]: Detected virtualization kvm. Sep 12 18:03:37.909835 systemd[1]: Detected architecture x86-64. Sep 12 18:03:37.909846 systemd[1]: Running in initrd. Sep 12 18:03:37.909855 systemd[1]: No hostname configured, using default hostname. Sep 12 18:03:37.909865 systemd[1]: Hostname set to . Sep 12 18:03:37.909873 systemd[1]: Initializing machine ID from VM UUID. Sep 12 18:03:37.909882 systemd[1]: Queued start job for default target initrd.target. Sep 12 18:03:37.909891 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 18:03:37.909900 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 18:03:37.909910 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 18:03:37.909922 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 18:03:37.909931 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 18:03:37.909943 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 18:03:37.909953 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 18:03:37.909965 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 18:03:37.909975 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 18:03:37.909984 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 18:03:37.909993 systemd[1]: Reached target paths.target - Path Units. Sep 12 18:03:37.910002 systemd[1]: Reached target slices.target - Slice Units. Sep 12 18:03:37.910011 systemd[1]: Reached target swap.target - Swaps. 
Sep 12 18:03:37.910020 systemd[1]: Reached target timers.target - Timer Units. Sep 12 18:03:37.910030 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 18:03:37.910039 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 18:03:37.910050 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 18:03:37.910059 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 18:03:37.910068 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 18:03:37.910077 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 18:03:37.910086 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 18:03:37.910096 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 18:03:37.910105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 18:03:37.910114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 18:03:37.910126 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 18:03:37.910135 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 18:03:37.910145 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 18:03:37.910154 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 18:03:37.910163 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 18:03:37.910172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 18:03:37.910181 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 18:03:37.910194 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 18:03:37.910236 systemd-journald[212]: Collecting audit messages is disabled. Sep 12 18:03:37.910263 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 18:03:37.910274 systemd-journald[212]: Journal started Sep 12 18:03:37.911343 systemd-journald[212]: Runtime Journal (/run/log/journal/fa1325ffb574447a8e2238bc17785c3b) is 4.9M, max 39.5M, 34.6M free. Sep 12 18:03:37.901219 systemd-modules-load[213]: Inserted module 'overlay' Sep 12 18:03:37.917338 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 18:03:37.922575 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 18:03:37.940639 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 18:03:37.942321 kernel: Bridge firewalling registered Sep 12 18:03:37.943493 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 18:03:37.944341 systemd-modules-load[213]: Inserted module 'br_netfilter' Sep 12 18:03:37.982937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 18:03:37.991480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 18:03:37.992420 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 18:03:37.998180 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 12 18:03:37.998255 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 18:03:38.003469 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 18:03:38.006427 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 18:03:38.009576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 18:03:38.021347 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 18:03:38.024987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 18:03:38.026457 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 18:03:38.033183 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 18:03:38.041626 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 18:03:38.074146 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858 Sep 12 18:03:38.081421 systemd-resolved[245]: Positive Trust Anchors: Sep 12 18:03:38.082234 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 18:03:38.082273 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 18:03:38.089168 systemd-resolved[245]: Defaulting to hostname 'linux'. Sep 12 18:03:38.092127 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 18:03:38.093483 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 18:03:38.178332 kernel: SCSI subsystem initialized Sep 12 18:03:38.190325 kernel: Loading iSCSI transport class v2.0-870. Sep 12 18:03:38.202324 kernel: iscsi: registered transport (tcp) Sep 12 18:03:38.227371 kernel: iscsi: registered transport (qla4xxx) Sep 12 18:03:38.227460 kernel: QLogic iSCSI HBA Driver Sep 12 18:03:38.249136 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 18:03:38.268910 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 18:03:38.272213 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 18:03:38.322894 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 18:03:38.325086 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 12 18:03:38.387378 kernel: raid6: avx2x4 gen() 28505 MB/s Sep 12 18:03:38.404364 kernel: raid6: avx2x2 gen() 27379 MB/s Sep 12 18:03:38.421615 kernel: raid6: avx2x1 gen() 20967 MB/s Sep 12 18:03:38.421688 kernel: raid6: using algorithm avx2x4 gen() 28505 MB/s Sep 12 18:03:38.439657 kernel: raid6: .... xor() 10054 MB/s, rmw enabled Sep 12 18:03:38.439731 kernel: raid6: using avx2x2 recovery algorithm Sep 12 18:03:38.466370 kernel: xor: automatically using best checksumming function avx Sep 12 18:03:38.647362 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 18:03:38.655987 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 18:03:38.659815 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 18:03:38.692272 systemd-udevd[461]: Using default interface naming scheme 'v255'. Sep 12 18:03:38.698193 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 18:03:38.701760 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 18:03:38.731382 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Sep 12 18:03:38.761649 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 18:03:38.763921 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 18:03:38.833034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 18:03:38.836373 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 18:03:38.919348 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Sep 12 18:03:38.923331 kernel: scsi host0: Virtio SCSI HBA Sep 12 18:03:38.934380 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Sep 12 18:03:38.946731 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 12 18:03:38.964333 kernel: ACPI: bus type USB registered Sep 12 18:03:38.966331 kernel: usbcore: registered new interface driver usbfs Sep 12 18:03:38.966399 kernel: usbcore: registered new interface driver hub Sep 12 18:03:38.967525 kernel: usbcore: registered new device driver usb Sep 12 18:03:38.974401 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 18:03:38.994074 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 18:03:38.994137 kernel: GPT:9289727 != 125829119 Sep 12 18:03:38.994149 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 18:03:38.994160 kernel: GPT:9289727 != 125829119 Sep 12 18:03:38.994171 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 18:03:38.994182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 18:03:38.993707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 18:03:38.993829 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 18:03:38.999357 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 18:03:39.007331 kernel: AES CTR mode by8 optimization enabled Sep 12 18:03:39.009335 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Sep 12 18:03:39.009559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 18:03:39.010571 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 12 18:03:39.014332 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Sep 12 18:03:39.035332 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 18:03:39.099353 kernel: libata version 3.00 loaded. Sep 12 18:03:39.104353 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 12 18:03:39.114326 kernel: scsi host1: ata_piix Sep 12 18:03:39.115774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 18:03:39.177980 kernel: scsi host2: ata_piix Sep 12 18:03:39.178285 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Sep 12 18:03:39.178342 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Sep 12 18:03:39.178361 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 12 18:03:39.178522 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 12 18:03:39.178641 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 12 18:03:39.178754 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Sep 12 18:03:39.178871 kernel: hub 1-0:1.0: USB hub found Sep 12 18:03:39.179076 kernel: hub 1-0:1.0: 2 ports detected Sep 12 18:03:39.182040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 18:03:39.203420 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 18:03:39.213058 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 18:03:39.213935 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 18:03:39.239888 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 18:03:39.242266 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 18:03:39.262092 disk-uuid[608]: Primary Header is updated. Sep 12 18:03:39.262092 disk-uuid[608]: Secondary Entries is updated. Sep 12 18:03:39.262092 disk-uuid[608]: Secondary Header is updated. Sep 12 18:03:39.267336 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 18:03:39.274476 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 18:03:39.424490 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 18:03:39.442893 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 18:03:39.444631 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 18:03:39.446000 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 18:03:39.447613 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 18:03:39.481479 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 18:03:40.279362 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 18:03:40.281046 disk-uuid[609]: The operation has completed successfully. Sep 12 18:03:40.339005 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 18:03:40.339144 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 18:03:40.371102 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 18:03:40.406215 sh[634]: Success Sep 12 18:03:40.430608 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 12 18:03:40.430693 kernel: device-mapper: uevent: version 1.0.3 Sep 12 18:03:40.431715 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 18:03:40.444481 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 12 18:03:40.493901 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 18:03:40.499475 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 18:03:40.509725 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 18:03:40.523993 kernel: BTRFS: device fsid 74707491-1b86-4926-8bdb-c533ce2a0c32 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (646) Sep 12 18:03:40.524067 kernel: BTRFS info (device dm-0): first mount of filesystem 74707491-1b86-4926-8bdb-c533ce2a0c32 Sep 12 18:03:40.525513 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 18:03:40.533728 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 18:03:40.533806 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 18:03:40.535584 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 18:03:40.537406 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 18:03:40.538941 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 18:03:40.541340 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 18:03:40.544472 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 18:03:40.576415 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (678) Sep 12 18:03:40.580414 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 18:03:40.582339 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 18:03:40.591899 kernel: BTRFS info (device vda6): turning on async discard Sep 12 18:03:40.591992 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 18:03:40.599430 kernel: BTRFS info (device vda6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 18:03:40.600614 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 18:03:40.603467 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 18:03:40.694333 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 18:03:40.698584 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 18:03:40.769680 systemd-networkd[815]: lo: Link UP Sep 12 18:03:40.769693 systemd-networkd[815]: lo: Gained carrier Sep 12 18:03:40.774865 systemd-networkd[815]: Enumeration completed Sep 12 18:03:40.775466 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 18:03:40.776094 systemd-networkd[815]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 12 18:03:40.776098 systemd-networkd[815]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Sep 12 18:03:40.777416 systemd[1]: Reached target network.target - Network. Sep 12 18:03:40.782451 systemd-networkd[815]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 18:03:40.782462 systemd-networkd[815]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 18:03:40.783397 systemd-networkd[815]: eth0: Link UP Sep 12 18:03:40.783613 systemd-networkd[815]: eth1: Link UP Sep 12 18:03:40.783815 systemd-networkd[815]: eth0: Gained carrier Sep 12 18:03:40.783832 systemd-networkd[815]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 12 18:03:40.790113 systemd-networkd[815]: eth1: Gained carrier Sep 12 18:03:40.790136 systemd-networkd[815]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 18:03:40.800132 systemd-networkd[815]: eth0: DHCPv4 address 64.23.243.150/20, gateway 64.23.240.1 acquired from 169.254.169.253 Sep 12 18:03:40.811435 systemd-networkd[815]: eth1: DHCPv4 address 10.124.0.20/20 acquired from 169.254.169.253 Sep 12 18:03:40.822995 ignition[728]: Ignition 2.21.0 Sep 12 18:03:40.823375 ignition[728]: Stage: fetch-offline Sep 12 18:03:40.823444 ignition[728]: no configs at "/usr/lib/ignition/base.d" Sep 12 18:03:40.823461 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 18:03:40.823640 ignition[728]: parsed url from cmdline: "" Sep 12 18:03:40.823649 ignition[728]: no config URL provided Sep 12 18:03:40.823661 ignition[728]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 18:03:40.823678 ignition[728]: no config at "/usr/lib/ignition/user.ign" Sep 12 18:03:40.827396 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 18:03:40.823690 ignition[728]: failed to fetch config: resource requires networking Sep 12 18:03:40.824008 ignition[728]: Ignition finished successfully Sep 12 18:03:40.831528 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 18:03:40.871987 ignition[824]: Ignition 2.21.0 Sep 12 18:03:40.872537 ignition[824]: Stage: fetch Sep 12 18:03:40.872729 ignition[824]: no configs at "/usr/lib/ignition/base.d" Sep 12 18:03:40.872739 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 18:03:40.872821 ignition[824]: parsed url from cmdline: "" Sep 12 18:03:40.872824 ignition[824]: no config URL provided Sep 12 18:03:40.872829 ignition[824]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 18:03:40.872836 ignition[824]: no config at "/usr/lib/ignition/user.ign" Sep 12 18:03:40.872875 ignition[824]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 12 18:03:40.892721 ignition[824]: GET result: OK Sep 12 18:03:40.892847 ignition[824]: parsing config with SHA512: 666fd064fca46bacdcd0e429f12d7046b5e1b37e5a31ff752539e68a125e2ae0a9c45cc72ad32bae95f30d4b16bd363508a8cd4fef2c1c3059a1917f371565a2 Sep 12 18:03:40.897507 unknown[824]: fetched base config from "system" Sep 12 18:03:40.897520 unknown[824]: fetched base config from "system" Sep 12 18:03:40.897842 ignition[824]: fetch: fetch complete Sep 12 18:03:40.897527 unknown[824]: fetched user config from "digitalocean" Sep 12 18:03:40.897847 ignition[824]: fetch: fetch passed Sep 12 18:03:40.901108 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 18:03:40.897900 ignition[824]: Ignition finished successfully Sep 12 18:03:40.904480 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 18:03:40.942745 ignition[830]: Ignition 2.21.0 Sep 12 18:03:40.942770 ignition[830]: Stage: kargs Sep 12 18:03:40.942949 ignition[830]: no configs at "/usr/lib/ignition/base.d" Sep 12 18:03:40.942959 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 18:03:40.944835 ignition[830]: kargs: kargs passed Sep 12 18:03:40.944954 ignition[830]: Ignition finished successfully Sep 12 18:03:40.947405 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 18:03:40.949848 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 18:03:40.989124 ignition[836]: Ignition 2.21.0 Sep 12 18:03:40.989146 ignition[836]: Stage: disks Sep 12 18:03:40.989448 ignition[836]: no configs at "/usr/lib/ignition/base.d" Sep 12 18:03:40.989465 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 18:03:40.993917 ignition[836]: disks: disks passed Sep 12 18:03:40.994018 ignition[836]: Ignition finished successfully Sep 12 18:03:40.995941 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 18:03:40.996757 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 18:03:40.997588 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 18:03:40.998840 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 18:03:40.999949 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 18:03:41.001122 systemd[1]: Reached target basic.target - Basic System. Sep 12 18:03:41.003707 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 18:03:41.030896 systemd-fsck[845]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 18:03:41.033825 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 18:03:41.037541 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 18:03:41.178327 kernel: EXT4-fs (vda9): mounted filesystem 26739aba-b0be-4ce3-bfbd-ca4dbcbe2426 r/w with ordered data mode. Quota mode: none. Sep 12 18:03:41.180504 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 18:03:41.181905 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 18:03:41.185036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 18:03:41.188396 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 18:03:41.196530 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Sep 12 18:03:41.201381 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 18:03:41.205416 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 18:03:41.206732 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 18:03:41.209227 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (853) Sep 12 18:03:41.211976 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 18:03:41.214130 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 18:03:41.214157 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 18:03:41.218472 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 18:03:41.223566 kernel: BTRFS info (device vda6): turning on async discard Sep 12 18:03:41.223648 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 18:03:41.227917 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 18:03:41.303171 coreos-metadata[856]: Sep 12 18:03:41.303 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 18:03:41.308045 initrd-setup-root[883]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 18:03:41.313825 coreos-metadata[855]: Sep 12 18:03:41.313 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 18:03:41.317314 coreos-metadata[856]: Sep 12 18:03:41.315 INFO Fetch successful Sep 12 18:03:41.317942 initrd-setup-root[890]: cut: /sysroot/etc/group: No such file or directory Sep 12 18:03:41.322624 coreos-metadata[856]: Sep 12 18:03:41.322 INFO wrote hostname ci-4426.1.0-8-66567323f5 to /sysroot/etc/hostname Sep 12 18:03:41.326628 coreos-metadata[855]: Sep 12 18:03:41.324 INFO Fetch successful Sep 12 18:03:41.325963 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 18:03:41.327980 initrd-setup-root[897]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 18:03:41.333906 initrd-setup-root[905]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 18:03:41.335733 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Sep 12 18:03:41.335889 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Sep 12 18:03:41.449086 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 18:03:41.450824 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 18:03:41.453496 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 18:03:41.470389 kernel: BTRFS info (device vda6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 18:03:41.491922 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 18:03:41.509353 ignition[975]: INFO : Ignition 2.21.0 Sep 12 18:03:41.510173 ignition[975]: INFO : Stage: mount Sep 12 18:03:41.511404 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 18:03:41.511404 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 18:03:41.515603 ignition[975]: INFO : mount: mount passed Sep 12 18:03:41.516606 ignition[975]: INFO : Ignition finished successfully Sep 12 18:03:41.518226 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 18:03:41.520415 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 18:03:41.522610 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 18:03:41.545997 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 18:03:41.571331 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (987) Sep 12 18:03:41.574580 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 18:03:41.574675 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 18:03:41.581142 kernel: BTRFS info (device vda6): turning on async discard Sep 12 18:03:41.581228 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 18:03:41.584074 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
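flatcar-metadata-hostname above fetches the droplet's metadata document and writes the hostname it finds into /sysroot/etc/hostname. A minimal sketch of the same idea, assuming the metadata JSON exposes a 'hostname' field; the URL and target path come from the log, the rest is illustrative rather than the agent's real code.

```python
import json
import pathlib
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # endpoint from the log
HOSTNAME_FILE = pathlib.Path("/sysroot/etc/hostname")     # target file from the log

def write_hostname(url: str = METADATA_URL, target: pathlib.Path = HOSTNAME_FILE) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        metadata = json.load(resp)
    hostname = metadata["hostname"]  # assumed field name in the v1 metadata document
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(hostname + "\n")
    return hostname

# Inside the droplet this would report something like
# "wrote hostname ci-4426.1.0-8-66567323f5 to /sysroot/etc/hostname".
```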
Sep 12 18:03:41.619486 ignition[1003]: INFO : Ignition 2.21.0 Sep 12 18:03:41.619486 ignition[1003]: INFO : Stage: files Sep 12 18:03:41.621994 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 18:03:41.621994 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 18:03:41.621994 ignition[1003]: DEBUG : files: compiled without relabeling support, skipping Sep 12 18:03:41.624935 ignition[1003]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 18:03:41.624935 ignition[1003]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 18:03:41.629157 ignition[1003]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 18:03:41.630145 ignition[1003]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 18:03:41.630145 ignition[1003]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 18:03:41.629787 unknown[1003]: wrote ssh authorized keys file for user: core Sep 12 18:03:41.632840 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 18:03:41.632840 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 18:03:41.667916 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 18:03:41.910506 systemd-networkd[815]: eth0: Gained IPv6LL Sep 12 18:03:42.102582 systemd-networkd[815]: eth1: Gained IPv6LL Sep 12 18:03:42.290645 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 18:03:42.290645 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 18:03:42.293551 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 18:03:42.534718 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 18:03:42.631638 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 18:03:42.631638 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 18:03:42.635156 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 18:03:42.635156 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 18:03:42.635156 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 18:03:42.635156 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 18:03:42.635156 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 18:03:42.635156 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 18:03:42.635156 
ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 18:03:42.642852 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 18:03:42.642852 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 18:03:42.642852 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 18:03:42.642852 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 18:03:42.642852 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 18:03:42.642852 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 12 18:03:43.100820 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 18:03:43.762535 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 18:03:43.762535 ignition[1003]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 18:03:43.765171 ignition[1003]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 18:03:43.768146 ignition[1003]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 18:03:43.768146 ignition[1003]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 18:03:43.768146 ignition[1003]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 18:03:43.770646 ignition[1003]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 18:03:43.770646 ignition[1003]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 18:03:43.770646 ignition[1003]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 18:03:43.770646 ignition[1003]: INFO : files: files passed Sep 12 18:03:43.770646 ignition[1003]: INFO : Ignition finished successfully Sep 12 18:03:43.770812 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 18:03:43.774442 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 18:03:43.777364 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 18:03:43.793231 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 18:03:43.793418 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
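Every destination in the files stage above is resolved under /sysroot, because the real root is only mounted there at this point; the stage downloads artifacts into /sysroot/opt, links the kubernetes sysext image into /etc/extensions, and preset-enables prepare-helm.service. A hedged sketch of the last two operations, with the paths copied from the log and the wants-symlink enablement (and its multi-user.target anchor) assumed rather than taken from the config:

```python
from pathlib import Path

SYSROOT = Path("/sysroot")  # the real root is only mounted here during the files stage

def write_link(link: str, target: str) -> None:
    """Mirrors the logged op(a): a symlink under /sysroot pointing at a path in the new root."""
    link_path = SYSROOT / link.lstrip("/")
    link_path.parent.mkdir(parents=True, exist_ok=True)
    link_path.symlink_to(target)

def preset_enable(unit: str, wanted_by: str = "multi-user.target") -> None:
    """Assumed effect of 'setting preset to enabled': a wants/ symlink in the new root."""
    wants_dir = SYSROOT / "etc/systemd/system" / f"{wanted_by}.wants"
    wants_dir.mkdir(parents=True, exist_ok=True)
    (wants_dir / unit).symlink_to(f"/etc/systemd/system/{unit}")

if __name__ == "__main__":
    write_link("/etc/extensions/kubernetes.raw",
               "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw")
    preset_enable("prepare-helm.service")
```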
Sep 12 18:03:43.802091 initrd-setup-root-after-ignition[1034]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 18:03:43.802091 initrd-setup-root-after-ignition[1034]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 18:03:43.804005 initrd-setup-root-after-ignition[1038]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 18:03:43.806429 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 18:03:43.807602 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 18:03:43.809742 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 18:03:43.856184 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 18:03:43.856334 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 18:03:43.858071 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 18:03:43.859264 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 18:03:43.860817 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 18:03:43.863481 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 18:03:43.891610 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 18:03:43.894173 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 18:03:43.919643 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 18:03:43.921334 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 18:03:43.922820 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 18:03:43.924049 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 18:03:43.924195 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 18:03:43.926103 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 18:03:43.926805 systemd[1]: Stopped target basic.target - Basic System. Sep 12 18:03:43.927849 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 18:03:43.929056 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 18:03:43.930096 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 18:03:43.931496 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 18:03:43.932810 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 18:03:43.934026 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 18:03:43.935151 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 18:03:43.936809 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 18:03:43.937674 systemd[1]: Stopped target swap.target - Swaps. Sep 12 18:03:43.938629 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 18:03:43.938810 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 18:03:43.940000 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 18:03:43.940773 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 18:03:43.941993 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 12 18:03:43.942335 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 18:03:43.943225 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 18:03:43.943413 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 18:03:43.945081 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 18:03:43.945228 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 18:03:43.946697 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 18:03:43.946835 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 18:03:43.947629 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 18:03:43.947755 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 18:03:43.951444 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 18:03:43.953535 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 18:03:43.954431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 18:03:43.955444 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 18:03:43.962838 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 18:03:43.963001 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 18:03:43.970544 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 18:03:43.970650 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 18:03:43.989989 ignition[1058]: INFO : Ignition 2.21.0 Sep 12 18:03:43.992026 ignition[1058]: INFO : Stage: umount Sep 12 18:03:43.992026 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 18:03:43.992026 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 18:03:43.992026 ignition[1058]: INFO : umount: umount passed Sep 12 18:03:43.992026 ignition[1058]: INFO : Ignition finished successfully Sep 12 18:03:43.996330 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 18:03:43.997215 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 18:03:43.997405 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 18:03:44.000626 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 18:03:44.000751 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 18:03:44.001792 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 18:03:44.001862 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 18:03:44.003167 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 18:03:44.003236 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 18:03:44.005142 systemd[1]: Stopped target network.target - Network. Sep 12 18:03:44.006032 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 18:03:44.006092 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 18:03:44.007080 systemd[1]: Stopped target paths.target - Path Units. Sep 12 18:03:44.007989 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 18:03:44.011599 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 18:03:44.012605 systemd[1]: Stopped target slices.target - Slice Units. 
Sep 12 18:03:44.013899 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 18:03:44.014981 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 18:03:44.015037 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 18:03:44.015937 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 18:03:44.015973 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 18:03:44.017011 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 18:03:44.017082 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 18:03:44.018131 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 18:03:44.018178 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 18:03:44.019471 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 18:03:44.020353 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 18:03:44.021787 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 18:03:44.021897 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 18:03:44.023066 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 18:03:44.023167 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 18:03:44.030705 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 18:03:44.031404 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 18:03:44.035328 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 18:03:44.035615 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 18:03:44.035780 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 18:03:44.038050 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 18:03:44.039280 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 18:03:44.040141 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 18:03:44.040232 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 18:03:44.042399 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 18:03:44.043562 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 18:03:44.043625 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 18:03:44.045663 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 18:03:44.045731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 18:03:44.048435 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 18:03:44.048489 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 18:03:44.050555 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 18:03:44.050626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 18:03:44.052092 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 18:03:44.057191 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 18:03:44.057283 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 18:03:44.067015 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 12 18:03:44.068107 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 18:03:44.070802 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 18:03:44.071710 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 18:03:44.073130 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 18:03:44.073228 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 18:03:44.075703 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 18:03:44.076403 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 18:03:44.077693 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 18:03:44.078473 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 18:03:44.079647 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 18:03:44.079694 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 18:03:44.080647 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 18:03:44.080731 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 18:03:44.083084 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 18:03:44.085565 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 18:03:44.085641 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 18:03:44.088466 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 18:03:44.088642 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 18:03:44.090070 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 18:03:44.090128 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 18:03:44.091547 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 18:03:44.091597 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 18:03:44.092711 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 18:03:44.092760 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 18:03:44.095549 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 18:03:44.095611 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 12 18:03:44.095646 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 18:03:44.095683 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 18:03:44.109945 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 18:03:44.110100 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 18:03:44.111566 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 18:03:44.113710 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 18:03:44.144970 systemd[1]: Switching root. Sep 12 18:03:44.182641 systemd-journald[212]: Journal stopped Sep 12 18:03:45.539205 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). 
Sep 12 18:03:45.539315 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 18:03:45.539336 kernel: SELinux: policy capability open_perms=1 Sep 12 18:03:45.539356 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 18:03:45.539374 kernel: SELinux: policy capability always_check_network=0 Sep 12 18:03:45.539394 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 18:03:45.539411 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 18:03:45.539423 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 18:03:45.539434 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 18:03:45.539449 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 18:03:45.539475 kernel: audit: type=1403 audit(1757700224.503:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 18:03:45.539493 systemd[1]: Successfully loaded SELinux policy in 72.021ms. Sep 12 18:03:45.539518 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.866ms. Sep 12 18:03:45.539536 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 18:03:45.539557 systemd[1]: Detected virtualization kvm. Sep 12 18:03:45.539572 systemd[1]: Detected architecture x86-64. Sep 12 18:03:45.539588 systemd[1]: Detected first boot. Sep 12 18:03:45.539603 systemd[1]: Hostname set to . Sep 12 18:03:45.539618 systemd[1]: Initializing machine ID from VM UUID. Sep 12 18:03:45.539634 zram_generator::config[1104]: No configuration found. Sep 12 18:03:45.539652 kernel: Guest personality initialized and is inactive Sep 12 18:03:45.539667 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 18:03:45.539686 kernel: Initialized host personality Sep 12 18:03:45.539705 kernel: NET: Registered PF_VSOCK protocol family Sep 12 18:03:45.539723 systemd[1]: Populated /etc with preset unit settings. Sep 12 18:03:45.539742 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 18:03:45.539761 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 18:03:45.539778 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 18:03:45.539795 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 18:03:45.539812 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 18:03:45.539829 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 18:03:45.539851 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 18:03:45.539868 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 18:03:45.539887 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 18:03:45.539904 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 18:03:45.539924 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 18:03:45.539941 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 18:03:45.539957 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
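"Initializing machine ID from VM UUID" above means the first-boot machine ID is derived from the hypervisor-provided UUID rather than generated randomly. A rough sketch of that idea, assuming the UUID is read from the DMI product_uuid node and simply reformatted into the dash-less machine-id form:

```python
from pathlib import Path

def machine_id_from_vm_uuid(
    uuid_path: str = "/sys/class/dmi/id/product_uuid",  # assumed source of the "VM UUID"
    out_path: str = "/etc/machine-id",
) -> str:
    raw = Path(uuid_path).read_text().strip()
    machine_id = raw.replace("-", "").lower()  # machine-id format: 32 lowercase hex digits, no dashes
    Path(out_path).write_text(machine_id + "\n")
    return machine_id

# Usage (as root on a KVM guest): machine_id_from_vm_uuid()
```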
Sep 12 18:03:45.539974 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 18:03:45.539990 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 18:03:45.540014 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 18:03:45.540033 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 18:03:45.540051 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 18:03:45.540069 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 18:03:45.540088 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 18:03:45.540108 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 18:03:45.540132 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 18:03:45.540149 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 18:03:45.540162 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 18:03:45.540174 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 18:03:45.540186 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 18:03:45.540197 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 18:03:45.540209 systemd[1]: Reached target slices.target - Slice Units. Sep 12 18:03:45.540221 systemd[1]: Reached target swap.target - Swaps. Sep 12 18:03:45.540232 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 18:03:45.540246 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 18:03:45.540258 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 18:03:45.540270 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 18:03:45.540281 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 18:03:45.542397 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 18:03:45.542432 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 18:03:45.542449 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 18:03:45.542462 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 18:03:45.542473 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 18:03:45.542491 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:03:45.542504 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 18:03:45.542516 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 18:03:45.542527 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 18:03:45.542543 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 18:03:45.542555 systemd[1]: Reached target machines.target - Containers. Sep 12 18:03:45.542567 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 12 18:03:45.542579 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 18:03:45.542593 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 18:03:45.542605 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 18:03:45.542617 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 18:03:45.542628 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 18:03:45.542639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 18:03:45.542651 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 18:03:45.542662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 18:03:45.542674 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 18:03:45.542687 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 18:03:45.542700 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 18:03:45.542714 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 18:03:45.542725 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 18:03:45.542739 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 18:03:45.542751 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 18:03:45.542765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 18:03:45.542777 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 18:03:45.542789 kernel: fuse: init (API version 7.41) Sep 12 18:03:45.542802 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 18:03:45.542814 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 18:03:45.542826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 18:03:45.542841 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 18:03:45.542852 systemd[1]: Stopped verity-setup.service. Sep 12 18:03:45.542865 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:03:45.542876 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 18:03:45.542888 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 18:03:45.542900 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 18:03:45.542912 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 18:03:45.542926 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 18:03:45.542939 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 18:03:45.542952 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 18:03:45.542964 kernel: ACPI: bus type drm_connector registered Sep 12 18:03:45.542975 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 12 18:03:45.542986 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 18:03:45.542999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 18:03:45.543011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 18:03:45.543022 kernel: loop: module loaded Sep 12 18:03:45.543035 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 18:03:45.543048 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 18:03:45.543060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 18:03:45.543071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 18:03:45.543083 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 18:03:45.543095 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 18:03:45.543106 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 18:03:45.543118 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 18:03:45.543130 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 18:03:45.543145 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 18:03:45.543157 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 18:03:45.543171 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 18:03:45.543183 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 18:03:45.543238 systemd-journald[1178]: Collecting audit messages is disabled. Sep 12 18:03:45.543266 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 18:03:45.543279 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 18:03:45.543326 systemd-journald[1178]: Journal started Sep 12 18:03:45.543355 systemd-journald[1178]: Runtime Journal (/run/log/journal/fa1325ffb574447a8e2238bc17785c3b) is 4.9M, max 39.5M, 34.6M free. Sep 12 18:03:45.166662 systemd[1]: Queued start job for default target multi-user.target. Sep 12 18:03:45.176148 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 18:03:45.176793 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 18:03:45.553328 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 18:03:45.553425 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 18:03:45.556169 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 18:03:45.558332 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 18:03:45.564366 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 18:03:45.571414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 18:03:45.575362 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 18:03:45.583380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 18:03:45.590424 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Sep 12 18:03:45.594370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 18:03:45.598359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 18:03:45.612335 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 18:03:45.617452 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 18:03:45.621337 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 18:03:45.625920 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 18:03:45.636892 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 18:03:45.640058 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 18:03:45.641003 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 18:03:45.674376 kernel: loop0: detected capacity change from 0 to 128016 Sep 12 18:03:45.692816 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 18:03:45.699863 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 18:03:45.706697 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 18:03:45.708081 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 18:03:45.734004 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 18:03:45.754974 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Sep 12 18:03:45.755001 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Sep 12 18:03:45.761820 kernel: loop1: detected capacity change from 0 to 229808 Sep 12 18:03:45.769061 systemd-journald[1178]: Time spent on flushing to /var/log/journal/fa1325ffb574447a8e2238bc17785c3b is 35.484ms for 1029 entries. Sep 12 18:03:45.769061 systemd-journald[1178]: System Journal (/var/log/journal/fa1325ffb574447a8e2238bc17785c3b) is 8M, max 195.6M, 187.6M free. Sep 12 18:03:45.814841 systemd-journald[1178]: Received client request to flush runtime journal. Sep 12 18:03:45.770471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 18:03:45.777283 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 18:03:45.783566 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 18:03:45.823099 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 18:03:45.824006 kernel: loop2: detected capacity change from 0 to 111000 Sep 12 18:03:45.852447 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 18:03:45.873715 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 18:03:45.888347 kernel: loop3: detected capacity change from 0 to 8 Sep 12 18:03:45.912756 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Sep 12 18:03:45.912776 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Sep 12 18:03:45.913397 kernel: loop4: detected capacity change from 0 to 128016 Sep 12 18:03:45.918803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 18:03:45.929328 kernel: loop5: detected capacity change from 0 to 229808 Sep 12 18:03:45.947270 kernel: loop6: detected capacity change from 0 to 111000 Sep 12 18:03:45.959964 kernel: loop7: detected capacity change from 0 to 8 Sep 12 18:03:45.963892 (sd-merge)[1254]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Sep 12 18:03:45.964531 (sd-merge)[1254]: Merged extensions into '/usr'. Sep 12 18:03:45.976473 systemd[1]: Reload requested from client PID 1210 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 18:03:45.976516 systemd[1]: Reloading... Sep 12 18:03:46.156336 zram_generator::config[1283]: No configuration found. Sep 12 18:03:46.361724 ldconfig[1206]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 18:03:46.539760 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 18:03:46.539949 systemd[1]: Reloading finished in 562 ms. Sep 12 18:03:46.564177 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 18:03:46.565339 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 18:03:46.575635 systemd[1]: Starting ensure-sysext.service... Sep 12 18:03:46.579554 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 18:03:46.610862 systemd[1]: Reload requested from client PID 1324 ('systemctl') (unit ensure-sysext.service)... Sep 12 18:03:46.610883 systemd[1]: Reloading... Sep 12 18:03:46.637934 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 18:03:46.637993 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 18:03:46.638280 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 18:03:46.640723 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 18:03:46.642278 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 18:03:46.644424 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Sep 12 18:03:46.644508 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Sep 12 18:03:46.650161 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 18:03:46.650177 systemd-tmpfiles[1325]: Skipping /boot Sep 12 18:03:46.669965 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 18:03:46.669980 systemd-tmpfiles[1325]: Skipping /boot Sep 12 18:03:46.725336 zram_generator::config[1352]: No configuration found. Sep 12 18:03:47.019480 systemd[1]: Reloading finished in 406 ms. Sep 12 18:03:47.032562 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 18:03:47.045739 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 18:03:47.058405 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 18:03:47.063660 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 18:03:47.067892 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 18:03:47.073741 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
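The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-digitalocean extension images onto /usr before services reload. As an illustration only (the real search logic lives in systemd), this sketch lists .raw images from a few commonly used extension directories; the directory list is an assumption, though /etc/extensions matches the kubernetes.raw link written earlier.

```python
from pathlib import Path

# Assumed sysext image locations; /etc/extensions matches the link from the files stage.
SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def discovered_extensions() -> list[str]:
    names: list[str] = []
    for d in SEARCH_DIRS:
        directory = Path(d)
        if not directory.is_dir():
            continue
        names.extend(p.stem for p in sorted(directory.glob("*.raw")))
    return sorted(set(names))

# On this droplet the merge step reported containerd-flatcar, docker-flatcar,
# kubernetes and oem-digitalocean being overlaid onto /usr.
```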
Sep 12 18:03:47.083839 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 18:03:47.088705 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 18:03:47.095252 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:03:47.095586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 18:03:47.098503 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 18:03:47.106783 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 18:03:47.114638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 18:03:47.115575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 18:03:47.115768 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 18:03:47.115910 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:03:47.128502 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 18:03:47.132636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:03:47.132881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 18:03:47.133116 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 18:03:47.133227 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 18:03:47.133370 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:03:47.139190 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:03:47.139572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 18:03:47.143747 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 18:03:47.145623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 18:03:47.145835 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 18:03:47.146055 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:03:47.149383 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Sep 12 18:03:47.168673 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 18:03:47.171440 systemd[1]: Finished ensure-sysext.service. Sep 12 18:03:47.183981 systemd-udevd[1401]: Using default interface naming scheme 'v255'. Sep 12 18:03:47.192573 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 18:03:47.195641 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 18:03:47.200774 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 18:03:47.208933 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 18:03:47.211449 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 18:03:47.212420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 18:03:47.219495 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 18:03:47.233657 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 18:03:47.236009 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 18:03:47.237428 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 18:03:47.244583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 18:03:47.246631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 18:03:47.251337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 18:03:47.266980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 18:03:47.267975 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 18:03:47.270927 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 18:03:47.319847 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 18:03:47.357128 augenrules[1463]: No rules Sep 12 18:03:47.360827 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 18:03:47.362111 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 18:03:47.433097 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Sep 12 18:03:47.434398 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 18:03:47.440600 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Sep 12 18:03:47.442001 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:03:47.442206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 18:03:47.449620 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 18:03:47.462522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 18:03:47.470946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 18:03:47.471834 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 18:03:47.471899 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 18:03:47.471949 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 18:03:47.471975 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 18:03:47.477979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 18:03:47.478273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 18:03:47.484339 kernel: ISO 9660 Extensions: RRIP_1991A Sep 12 18:03:47.488117 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Sep 12 18:03:47.509830 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 18:03:47.510776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 18:03:47.512669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 18:03:47.529273 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 18:03:47.529648 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 18:03:47.531885 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 18:03:47.546964 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 18:03:47.638096 systemd-networkd[1432]: lo: Link UP Sep 12 18:03:47.638115 systemd-networkd[1432]: lo: Gained carrier Sep 12 18:03:47.639728 systemd-networkd[1432]: Enumeration completed Sep 12 18:03:47.639912 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 18:03:47.644518 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 18:03:47.651613 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 18:03:47.657069 systemd-networkd[1432]: eth1: Configuring with /run/systemd/network/10-8e:d4:f2:1f:0e:79.network. Sep 12 18:03:47.657908 systemd-networkd[1432]: eth1: Link UP Sep 12 18:03:47.658113 systemd-networkd[1432]: eth1: Gained carrier Sep 12 18:03:47.686633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 18:03:47.695760 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 18:03:47.708940 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 18:03:47.727972 systemd-resolved[1400]: Positive Trust Anchors: Sep 12 18:03:47.728949 systemd-resolved[1400]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 18:03:47.728993 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 18:03:47.736201 systemd-resolved[1400]: Using system hostname 'ci-4426.1.0-8-66567323f5'. Sep 12 18:03:47.739930 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 18:03:47.741683 systemd[1]: Reached target network.target - Network. Sep 12 18:03:47.742242 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 18:03:47.746932 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 18:03:47.756186 systemd-networkd[1432]: eth0: Configuring with /run/systemd/network/10-9e:63:60:c4:b7:78.network. Sep 12 18:03:47.757154 systemd-networkd[1432]: eth0: Link UP Sep 12 18:03:47.758413 systemd-networkd[1432]: eth0: Gained carrier Sep 12 18:03:47.765862 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 18:03:47.766899 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 18:03:47.768636 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 18:03:47.769428 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 18:03:47.770146 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 18:03:47.772026 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 18:03:47.773504 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 18:03:47.773558 systemd[1]: Reached target paths.target - Path Units. Sep 12 18:03:47.774585 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 18:03:47.775607 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 18:03:47.777000 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 18:03:47.778356 systemd[1]: Reached target timers.target - Timer Units. Sep 12 18:03:47.780705 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 18:03:47.786350 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 18:03:47.787318 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 18:03:47.793518 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 18:03:47.794327 kernel: ACPI: button: Power Button [PWRF] Sep 12 18:03:47.795957 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 18:03:47.797409 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 18:03:47.808379 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Sep 12 18:03:47.810117 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 18:03:47.814277 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 18:03:47.818781 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 18:03:47.820548 systemd[1]: Reached target basic.target - Basic System. Sep 12 18:03:47.822507 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 18:03:47.822559 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 18:03:47.826469 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 18:03:47.851078 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 12 18:03:47.851497 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 18:03:47.858717 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 18:03:47.866665 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 18:03:47.873681 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 18:03:47.877624 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 18:03:47.882371 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 18:03:47.886777 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 18:03:47.887860 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 18:03:47.893524 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 18:03:47.897523 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 18:03:47.906947 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 18:03:47.911981 jq[1511]: false Sep 12 18:03:47.919641 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 18:03:47.925201 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 18:03:47.941964 extend-filesystems[1512]: Found /dev/vda6 Sep 12 18:03:47.944880 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 18:03:47.948849 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 18:03:47.949741 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 18:03:47.952939 extend-filesystems[1512]: Found /dev/vda9 Sep 12 18:03:47.955654 coreos-metadata[1508]: Sep 12 18:03:47.955 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 18:03:47.955741 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 18:03:47.960357 extend-filesystems[1512]: Checking size of /dev/vda9 Sep 12 18:03:47.966586 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 18:03:47.972844 coreos-metadata[1508]: Sep 12 18:03:47.969 INFO Fetch successful Sep 12 18:03:47.977555 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 18:03:47.979909 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 12 18:03:47.980289 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 18:03:47.987655 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 18:03:47.988235 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing passwd entry cache Sep 12 18:03:47.988265 oslogin_cache_refresh[1515]: Refreshing passwd entry cache Sep 12 18:03:47.993563 extend-filesystems[1512]: Resized partition /dev/vda9 Sep 12 18:03:47.993599 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 18:03:47.999329 extend-filesystems[1544]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 18:03:48.006667 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 12 18:03:48.022358 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting users, quitting Sep 12 18:03:48.022358 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 18:03:48.022358 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing group entry cache Sep 12 18:03:48.021718 oslogin_cache_refresh[1515]: Failure getting users, quitting Sep 12 18:03:48.021744 oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 18:03:48.021823 oslogin_cache_refresh[1515]: Refreshing group entry cache Sep 12 18:03:48.031554 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting groups, quitting Sep 12 18:03:48.031554 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 18:03:48.031487 oslogin_cache_refresh[1515]: Failure getting groups, quitting Sep 12 18:03:48.031509 oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 18:03:48.031957 systemd-timesyncd[1418]: Contacted time server 23.157.160.168:123 (0.flatcar.pool.ntp.org). Sep 12 18:03:48.032016 systemd-timesyncd[1418]: Initial clock synchronization to Fri 2025-09-12 18:03:47.958048 UTC. Sep 12 18:03:48.048350 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 18:03:48.048720 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 18:03:48.063651 tar[1536]: linux-amd64/LICENSE Sep 12 18:03:48.067232 tar[1536]: linux-amd64/helm Sep 12 18:03:48.109364 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 12 18:03:48.112213 jq[1533]: true Sep 12 18:03:48.132075 dbus-daemon[1509]: [system] SELinux support is enabled Sep 12 18:03:48.132406 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 18:03:48.137422 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 18:03:48.137422 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 12 18:03:48.137422 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. 
Sep 12 18:03:48.199386 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Sep 12 18:03:48.199430 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Sep 12 18:03:48.199740 kernel: Console: switching to colour dummy device 80x25 Sep 12 18:03:48.199764 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 12 18:03:48.199784 kernel: [drm] features: -context_init Sep 12 18:03:48.199841 kernel: [drm] number of scanouts: 1 Sep 12 18:03:48.199862 kernel: [drm] number of cap sets: 0 Sep 12 18:03:48.199882 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Sep 12 18:03:48.199955 extend-filesystems[1512]: Resized filesystem in /dev/vda9 Sep 12 18:03:48.209763 update_engine[1529]: I20250912 18:03:48.163223 1529 main.cc:92] Flatcar Update Engine starting Sep 12 18:03:48.209763 update_engine[1529]: I20250912 18:03:48.168354 1529 update_check_scheduler.cc:74] Next update check in 3m3s Sep 12 18:03:48.141042 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 18:03:48.143483 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 18:03:48.147879 (ntainerd)[1564]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 18:03:48.213609 jq[1566]: true Sep 12 18:03:48.165056 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 18:03:48.165121 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 18:03:48.198131 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 18:03:48.198248 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Sep 12 18:03:48.198278 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 18:03:48.198895 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 18:03:48.200173 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 18:03:48.207370 systemd[1]: Started update-engine.service - Update Engine. Sep 12 18:03:48.220086 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 18:03:48.256391 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 12 18:03:48.277495 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 18:03:48.277606 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 12 18:03:48.287265 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 18:03:48.290937 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 18:03:48.501740 bash[1597]: Updated "/home/core/.ssh/authorized_keys" Sep 12 18:03:48.504363 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 18:03:48.512954 systemd[1]: Starting sshkeys.service... Sep 12 18:03:48.646398 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Sep 12 18:03:48.652153 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 18:03:48.717539 containerd[1564]: time="2025-09-12T18:03:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 18:03:48.729552 containerd[1564]: time="2025-09-12T18:03:48.729490650Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 18:03:48.772080 containerd[1564]: time="2025-09-12T18:03:48.772012090Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.718µs" Sep 12 18:03:48.772865 containerd[1564]: time="2025-09-12T18:03:48.772224679Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 18:03:48.772865 containerd[1564]: time="2025-09-12T18:03:48.772266936Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 18:03:48.772865 containerd[1564]: time="2025-09-12T18:03:48.772537659Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 18:03:48.772865 containerd[1564]: time="2025-09-12T18:03:48.772564471Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 18:03:48.772865 containerd[1564]: time="2025-09-12T18:03:48.772612864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 18:03:48.772865 containerd[1564]: time="2025-09-12T18:03:48.772712156Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 18:03:48.772865 containerd[1564]: time="2025-09-12T18:03:48.772741524Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 18:03:48.784972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 18:03:48.792766 containerd[1564]: time="2025-09-12T18:03:48.792702161Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 18:03:48.793720 containerd[1564]: time="2025-09-12T18:03:48.793679495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 18:03:48.793892 containerd[1564]: time="2025-09-12T18:03:48.793865013Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 18:03:48.793967 containerd[1564]: time="2025-09-12T18:03:48.793952921Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 18:03:48.794196 containerd[1564]: time="2025-09-12T18:03:48.794174752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 18:03:48.794674 containerd[1564]: time="2025-09-12T18:03:48.794641279Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 18:03:48.794810 containerd[1564]: time="2025-09-12T18:03:48.794790063Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 18:03:48.794877 containerd[1564]: time="2025-09-12T18:03:48.794863102Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 18:03:48.795011 containerd[1564]: time="2025-09-12T18:03:48.794991991Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 18:03:48.795466 containerd[1564]: time="2025-09-12T18:03:48.795437851Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 18:03:48.795644 containerd[1564]: time="2025-09-12T18:03:48.795622863Z" level=info msg="metadata content store policy set" policy=shared Sep 12 18:03:48.803408 containerd[1564]: time="2025-09-12T18:03:48.803353602Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 18:03:48.803597 containerd[1564]: time="2025-09-12T18:03:48.803578476Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 18:03:48.803677 containerd[1564]: time="2025-09-12T18:03:48.803659299Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 18:03:48.803749 containerd[1564]: time="2025-09-12T18:03:48.803735359Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 18:03:48.803841 containerd[1564]: time="2025-09-12T18:03:48.803825112Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 18:03:48.803907 containerd[1564]: time="2025-09-12T18:03:48.803893648Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 18:03:48.803976 containerd[1564]: time="2025-09-12T18:03:48.803962671Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 
18:03:48.804043 containerd[1564]: time="2025-09-12T18:03:48.804030115Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 18:03:48.804108 containerd[1564]: time="2025-09-12T18:03:48.804095409Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 18:03:48.804172 containerd[1564]: time="2025-09-12T18:03:48.804159371Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 18:03:48.804241 containerd[1564]: time="2025-09-12T18:03:48.804227290Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 18:03:48.804328 containerd[1564]: time="2025-09-12T18:03:48.804313373Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 18:03:48.804647 containerd[1564]: time="2025-09-12T18:03:48.804619022Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 18:03:48.805485 containerd[1564]: time="2025-09-12T18:03:48.805447228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 18:03:48.805642 containerd[1564]: time="2025-09-12T18:03:48.805619264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 18:03:48.805723 containerd[1564]: time="2025-09-12T18:03:48.805706120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 18:03:48.805815 containerd[1564]: time="2025-09-12T18:03:48.805798153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 18:03:48.805895 containerd[1564]: time="2025-09-12T18:03:48.805880092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 18:03:48.805970 containerd[1564]: time="2025-09-12T18:03:48.805954966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 18:03:48.806034 containerd[1564]: time="2025-09-12T18:03:48.806019945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 18:03:48.806099 containerd[1564]: time="2025-09-12T18:03:48.806084654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 18:03:48.806171 containerd[1564]: time="2025-09-12T18:03:48.806155204Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 18:03:48.806239 containerd[1564]: time="2025-09-12T18:03:48.806226822Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 18:03:48.806451 containerd[1564]: time="2025-09-12T18:03:48.806421649Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 18:03:48.807376 containerd[1564]: time="2025-09-12T18:03:48.807346812Z" level=info msg="Start snapshots syncer" Sep 12 18:03:48.807549 containerd[1564]: time="2025-09-12T18:03:48.807525139Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 18:03:48.808133 containerd[1564]: time="2025-09-12T18:03:48.808067242Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 18:03:48.808523 containerd[1564]: time="2025-09-12T18:03:48.808492914Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 18:03:48.812464 containerd[1564]: time="2025-09-12T18:03:48.812388616Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 18:03:48.812906 containerd[1564]: time="2025-09-12T18:03:48.812846559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 18:03:48.813073 containerd[1564]: time="2025-09-12T18:03:48.813051130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 18:03:48.813154 containerd[1564]: time="2025-09-12T18:03:48.813137695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 18:03:48.813245 containerd[1564]: time="2025-09-12T18:03:48.813229392Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 18:03:48.813332 containerd[1564]: time="2025-09-12T18:03:48.813317721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 18:03:48.813396 containerd[1564]: time="2025-09-12T18:03:48.813384020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 18:03:48.813464 containerd[1564]: time="2025-09-12T18:03:48.813451464Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 18:03:48.813553 containerd[1564]: time="2025-09-12T18:03:48.813539272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 18:03:48.813618 containerd[1564]: 
time="2025-09-12T18:03:48.813606097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 18:03:48.813680 containerd[1564]: time="2025-09-12T18:03:48.813665847Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 18:03:48.813815 containerd[1564]: time="2025-09-12T18:03:48.813792978Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 18:03:48.813905 containerd[1564]: time="2025-09-12T18:03:48.813887740Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 18:03:48.813968 containerd[1564]: time="2025-09-12T18:03:48.813953799Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 18:03:48.814034 containerd[1564]: time="2025-09-12T18:03:48.814018548Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 18:03:48.814091 containerd[1564]: time="2025-09-12T18:03:48.814078614Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 18:03:48.814150 containerd[1564]: time="2025-09-12T18:03:48.814139190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 18:03:48.814229 containerd[1564]: time="2025-09-12T18:03:48.814212392Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 18:03:48.814334 containerd[1564]: time="2025-09-12T18:03:48.814318757Z" level=info msg="runtime interface created" Sep 12 18:03:48.814400 containerd[1564]: time="2025-09-12T18:03:48.814388764Z" level=info msg="created NRI interface" Sep 12 18:03:48.814463 containerd[1564]: time="2025-09-12T18:03:48.814450187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 18:03:48.814531 containerd[1564]: time="2025-09-12T18:03:48.814520260Z" level=info msg="Connect containerd service" Sep 12 18:03:48.814637 containerd[1564]: time="2025-09-12T18:03:48.814622252Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 18:03:48.815859 containerd[1564]: time="2025-09-12T18:03:48.815821177Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 18:03:48.914730 coreos-metadata[1600]: Sep 12 18:03:48.914 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 18:03:48.931502 coreos-metadata[1600]: Sep 12 18:03:48.931 INFO Fetch successful Sep 12 18:03:48.956251 unknown[1600]: wrote ssh authorized keys file for user: core Sep 12 18:03:49.072187 update-ssh-keys[1614]: Updated "/home/core/.ssh/authorized_keys" Sep 12 18:03:49.072787 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 18:03:49.080089 systemd[1]: Finished sshkeys.service. Sep 12 18:03:49.118635 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 18:03:49.121052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 18:03:49.143840 systemd-networkd[1432]: eth1: Gained IPv6LL Sep 12 18:03:49.156368 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 18:03:49.162039 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 18:03:49.188976 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 18:03:49.196677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:03:49.200766 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 18:03:49.210481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 18:03:49.281770 systemd-logind[1525]: New seat seat0. Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.297942691Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.298050286Z" level=info msg="Start subscribing containerd event" Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.298110277Z" level=info msg="Start recovering state" Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.298062513Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.298209010Z" level=info msg="Start event monitor" Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.298220597Z" level=info msg="Start cni network conf syncer for default" Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.298231103Z" level=info msg="Start streaming server" Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.298242366Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.298249928Z" level=info msg="runtime interface starting up..." Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.298255834Z" level=info msg="starting plugins..." Sep 12 18:03:49.299762 containerd[1564]: time="2025-09-12T18:03:49.298271569Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 18:03:49.298580 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 18:03:49.306704 containerd[1564]: time="2025-09-12T18:03:49.301119021Z" level=info msg="containerd successfully booted in 0.584192s" Sep 12 18:03:49.302718 systemd-logind[1525]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 18:03:49.302747 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 18:03:49.303200 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 18:03:49.305442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 18:03:49.305755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 18:03:49.311953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 18:03:49.341674 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 18:03:49.387803 kernel: EDAC MC: Ver: 3.0.0 Sep 12 18:03:49.408477 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 18:03:49.526473 systemd-networkd[1432]: eth0: Gained IPv6LL Sep 12 18:03:49.560766 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 18:03:49.586791 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 18:03:49.623934 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 18:03:49.631648 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 18:03:49.664811 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 18:03:49.665421 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 18:03:49.669172 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 18:03:49.701710 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 18:03:49.708683 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 18:03:49.714719 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 18:03:49.716089 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 18:03:49.901931 tar[1536]: linux-amd64/README.md Sep 12 18:03:49.920528 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 18:03:50.721162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:03:50.723599 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 18:03:50.725495 systemd[1]: Startup finished in 4.019s (kernel) + 6.822s (initrd) + 6.292s (userspace) = 17.134s. Sep 12 18:03:50.733359 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 18:03:51.429699 kubelet[1681]: E0912 18:03:51.429596 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 18:03:51.434528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 18:03:51.435069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 18:03:51.435865 systemd[1]: kubelet.service: Consumed 1.547s CPU time, 266.8M memory peak. Sep 12 18:03:51.625228 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 18:03:51.628313 systemd[1]: Started sshd@0-64.23.243.150:22-139.178.89.65:54874.service - OpenSSH per-connection server daemon (139.178.89.65:54874). Sep 12 18:03:51.746528 sshd[1693]: Accepted publickey for core from 139.178.89.65 port 54874 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:03:51.748734 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:03:51.770921 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 18:03:51.772748 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 18:03:51.775768 systemd-logind[1525]: New session 1 of user core. Sep 12 18:03:51.804864 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 18:03:51.807991 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 18:03:51.823550 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 18:03:51.826943 systemd-logind[1525]: New session c1 of user core. Sep 12 18:03:52.047078 systemd[1698]: Queued start job for default target default.target. 
Sep 12 18:03:52.059102 systemd[1698]: Created slice app.slice - User Application Slice. Sep 12 18:03:52.059158 systemd[1698]: Reached target paths.target - Paths. Sep 12 18:03:52.059242 systemd[1698]: Reached target timers.target - Timers. Sep 12 18:03:52.061484 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 18:03:52.095791 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 18:03:52.096170 systemd[1698]: Reached target sockets.target - Sockets. Sep 12 18:03:52.096420 systemd[1698]: Reached target basic.target - Basic System. Sep 12 18:03:52.096461 systemd[1698]: Reached target default.target - Main User Target. Sep 12 18:03:52.096491 systemd[1698]: Startup finished in 260ms. Sep 12 18:03:52.096522 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 18:03:52.105505 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 18:03:52.184465 systemd[1]: Started sshd@1-64.23.243.150:22-139.178.89.65:54886.service - OpenSSH per-connection server daemon (139.178.89.65:54886). Sep 12 18:03:52.265614 sshd[1709]: Accepted publickey for core from 139.178.89.65 port 54886 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:03:52.267606 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:03:52.276023 systemd-logind[1525]: New session 2 of user core. Sep 12 18:03:52.285606 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 18:03:52.351422 sshd[1712]: Connection closed by 139.178.89.65 port 54886 Sep 12 18:03:52.352080 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Sep 12 18:03:52.362999 systemd[1]: sshd@1-64.23.243.150:22-139.178.89.65:54886.service: Deactivated successfully. Sep 12 18:03:52.365167 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 18:03:52.366236 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit. Sep 12 18:03:52.369789 systemd[1]: Started sshd@2-64.23.243.150:22-139.178.89.65:54888.service - OpenSSH per-connection server daemon (139.178.89.65:54888). Sep 12 18:03:52.370986 systemd-logind[1525]: Removed session 2. Sep 12 18:03:52.444377 sshd[1718]: Accepted publickey for core from 139.178.89.65 port 54888 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:03:52.446684 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:03:52.452988 systemd-logind[1525]: New session 3 of user core. Sep 12 18:03:52.457494 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 18:03:52.513036 sshd[1721]: Connection closed by 139.178.89.65 port 54888 Sep 12 18:03:52.513786 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Sep 12 18:03:52.529751 systemd[1]: sshd@2-64.23.243.150:22-139.178.89.65:54888.service: Deactivated successfully. Sep 12 18:03:52.531994 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 18:03:52.532999 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit. Sep 12 18:03:52.537596 systemd[1]: Started sshd@3-64.23.243.150:22-139.178.89.65:54904.service - OpenSSH per-connection server daemon (139.178.89.65:54904). Sep 12 18:03:52.538777 systemd-logind[1525]: Removed session 3. 
Sep 12 18:03:52.609119 sshd[1727]: Accepted publickey for core from 139.178.89.65 port 54904 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:03:52.611817 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:03:52.619626 systemd-logind[1525]: New session 4 of user core. Sep 12 18:03:52.628707 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 18:03:52.691882 sshd[1730]: Connection closed by 139.178.89.65 port 54904 Sep 12 18:03:52.692522 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 12 18:03:52.703366 systemd[1]: sshd@3-64.23.243.150:22-139.178.89.65:54904.service: Deactivated successfully. Sep 12 18:03:52.705515 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 18:03:52.707593 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit. Sep 12 18:03:52.711805 systemd[1]: Started sshd@4-64.23.243.150:22-139.178.89.65:54908.service - OpenSSH per-connection server daemon (139.178.89.65:54908). Sep 12 18:03:52.713566 systemd-logind[1525]: Removed session 4. Sep 12 18:03:52.785217 sshd[1736]: Accepted publickey for core from 139.178.89.65 port 54908 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:03:52.787075 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:03:52.793447 systemd-logind[1525]: New session 5 of user core. Sep 12 18:03:52.801687 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 18:03:52.878478 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 18:03:52.878913 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 18:03:52.893660 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 12 18:03:52.897608 sshd[1739]: Connection closed by 139.178.89.65 port 54908 Sep 12 18:03:52.898495 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 12 18:03:52.913224 systemd[1]: sshd@4-64.23.243.150:22-139.178.89.65:54908.service: Deactivated successfully. Sep 12 18:03:52.915553 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 18:03:52.916722 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit. Sep 12 18:03:52.921018 systemd[1]: Started sshd@5-64.23.243.150:22-139.178.89.65:54924.service - OpenSSH per-connection server daemon (139.178.89.65:54924). Sep 12 18:03:52.922437 systemd-logind[1525]: Removed session 5. Sep 12 18:03:52.994635 sshd[1746]: Accepted publickey for core from 139.178.89.65 port 54924 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:03:52.996436 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:03:53.004503 systemd-logind[1525]: New session 6 of user core. Sep 12 18:03:53.011607 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 18:03:53.075532 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 18:03:53.075890 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 18:03:53.082567 sudo[1751]: pam_unix(sudo:session): session closed for user root Sep 12 18:03:53.091100 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 18:03:53.092045 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 18:03:53.105674 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 18:03:53.158519 augenrules[1773]: No rules Sep 12 18:03:53.159588 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 18:03:53.159912 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 18:03:53.161619 sudo[1750]: pam_unix(sudo:session): session closed for user root Sep 12 18:03:53.165855 sshd[1749]: Connection closed by 139.178.89.65 port 54924 Sep 12 18:03:53.166537 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Sep 12 18:03:53.178809 systemd[1]: sshd@5-64.23.243.150:22-139.178.89.65:54924.service: Deactivated successfully. Sep 12 18:03:53.181524 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 18:03:53.182718 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit. Sep 12 18:03:53.187142 systemd[1]: Started sshd@6-64.23.243.150:22-139.178.89.65:54938.service - OpenSSH per-connection server daemon (139.178.89.65:54938). Sep 12 18:03:53.188453 systemd-logind[1525]: Removed session 6. Sep 12 18:03:53.266121 sshd[1782]: Accepted publickey for core from 139.178.89.65 port 54938 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:03:53.267829 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:03:53.276229 systemd-logind[1525]: New session 7 of user core. Sep 12 18:03:53.281653 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 18:03:53.338947 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 18:03:53.339250 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 18:03:53.911247 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 18:03:53.922937 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 18:03:54.344802 dockerd[1806]: time="2025-09-12T18:03:54.344732893Z" level=info msg="Starting up" Sep 12 18:03:54.350028 dockerd[1806]: time="2025-09-12T18:03:54.349985478Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 18:03:54.372505 dockerd[1806]: time="2025-09-12T18:03:54.372409409Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 18:03:54.395263 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3437169233-merged.mount: Deactivated successfully. Sep 12 18:03:54.558698 dockerd[1806]: time="2025-09-12T18:03:54.558374400Z" level=info msg="Loading containers: start." 
Sep 12 18:03:54.573418 kernel: Initializing XFRM netlink socket Sep 12 18:03:54.920954 systemd-networkd[1432]: docker0: Link UP Sep 12 18:03:54.926860 dockerd[1806]: time="2025-09-12T18:03:54.926169783Z" level=info msg="Loading containers: done." Sep 12 18:03:54.946224 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck830851835-merged.mount: Deactivated successfully. Sep 12 18:03:54.949179 dockerd[1806]: time="2025-09-12T18:03:54.949111001Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 18:03:54.949365 dockerd[1806]: time="2025-09-12T18:03:54.949212570Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 18:03:54.949397 dockerd[1806]: time="2025-09-12T18:03:54.949382294Z" level=info msg="Initializing buildkit" Sep 12 18:03:54.983763 dockerd[1806]: time="2025-09-12T18:03:54.983703142Z" level=info msg="Completed buildkit initialization" Sep 12 18:03:54.995334 dockerd[1806]: time="2025-09-12T18:03:54.994692725Z" level=info msg="Daemon has completed initialization" Sep 12 18:03:54.995334 dockerd[1806]: time="2025-09-12T18:03:54.994817196Z" level=info msg="API listen on /run/docker.sock" Sep 12 18:03:54.995739 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 18:03:55.937683 containerd[1564]: time="2025-09-12T18:03:55.937629350Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 18:03:56.518658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134242784.mount: Deactivated successfully. Sep 12 18:03:58.017429 containerd[1564]: time="2025-09-12T18:03:58.016505835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:03:58.017956 containerd[1564]: time="2025-09-12T18:03:58.017919458Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 12 18:03:58.018094 containerd[1564]: time="2025-09-12T18:03:58.018069915Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:03:58.020761 containerd[1564]: time="2025-09-12T18:03:58.020723902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:03:58.022062 containerd[1564]: time="2025-09-12T18:03:58.022018903Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.084346755s" Sep 12 18:03:58.022062 containerd[1564]: time="2025-09-12T18:03:58.022064937Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 12 18:03:58.023105 containerd[1564]: time="2025-09-12T18:03:58.023080547Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 
18:03:59.554842 containerd[1564]: time="2025-09-12T18:03:59.554772513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:03:59.556224 containerd[1564]: time="2025-09-12T18:03:59.556177382Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 12 18:03:59.557407 containerd[1564]: time="2025-09-12T18:03:59.556790171Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:03:59.560318 containerd[1564]: time="2025-09-12T18:03:59.560260738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:03:59.561820 containerd[1564]: time="2025-09-12T18:03:59.561775604Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.538514082s" Sep 12 18:03:59.561820 containerd[1564]: time="2025-09-12T18:03:59.561822208Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 12 18:03:59.562448 containerd[1564]: time="2025-09-12T18:03:59.562413413Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 18:04:00.960748 containerd[1564]: time="2025-09-12T18:04:00.959636638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:00.960748 containerd[1564]: time="2025-09-12T18:04:00.960679998Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 12 18:04:00.961421 containerd[1564]: time="2025-09-12T18:04:00.961391656Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:00.964687 containerd[1564]: time="2025-09-12T18:04:00.964635686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:00.966194 containerd[1564]: time="2025-09-12T18:04:00.966137651Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.40361075s" Sep 12 18:04:00.966194 containerd[1564]: time="2025-09-12T18:04:00.966195152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 12 18:04:00.967568 containerd[1564]: 
time="2025-09-12T18:04:00.967523176Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 18:04:01.510527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 18:04:01.514563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:04:01.737251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:04:01.750927 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 18:04:01.848463 kubelet[2100]: E0912 18:04:01.847824 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 18:04:01.852427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 18:04:01.852603 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 18:04:01.853211 systemd[1]: kubelet.service: Consumed 237ms CPU time, 109.4M memory peak. Sep 12 18:04:02.303412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount387465198.mount: Deactivated successfully. Sep 12 18:04:03.064164 containerd[1564]: time="2025-09-12T18:04:03.064075283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:03.065916 containerd[1564]: time="2025-09-12T18:04:03.065650280Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 12 18:04:03.066779 containerd[1564]: time="2025-09-12T18:04:03.066727557Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:03.068821 containerd[1564]: time="2025-09-12T18:04:03.068770094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:03.069709 containerd[1564]: time="2025-09-12T18:04:03.069671792Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.102107291s" Sep 12 18:04:03.069849 containerd[1564]: time="2025-09-12T18:04:03.069826981Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 12 18:04:03.070546 containerd[1564]: time="2025-09-12T18:04:03.070417379Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 18:04:03.072512 systemd-resolved[1400]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Sep 12 18:04:03.571171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811207229.mount: Deactivated successfully. 
Sep 12 18:04:04.717776 containerd[1564]: time="2025-09-12T18:04:04.717696898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:04.719237 containerd[1564]: time="2025-09-12T18:04:04.719124757Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 12 18:04:04.720635 containerd[1564]: time="2025-09-12T18:04:04.719976766Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:04.723357 containerd[1564]: time="2025-09-12T18:04:04.723278831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:04.724510 containerd[1564]: time="2025-09-12T18:04:04.724465907Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.653811391s" Sep 12 18:04:04.724510 containerd[1564]: time="2025-09-12T18:04:04.724510926Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 12 18:04:04.725016 containerd[1564]: time="2025-09-12T18:04:04.724992260Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 18:04:05.205626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906503733.mount: Deactivated successfully. 
Sep 12 18:04:05.212439 containerd[1564]: time="2025-09-12T18:04:05.212371617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 18:04:05.213986 containerd[1564]: time="2025-09-12T18:04:05.213945071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 18:04:05.214725 containerd[1564]: time="2025-09-12T18:04:05.214682553Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 18:04:05.218066 containerd[1564]: time="2025-09-12T18:04:05.218017649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 18:04:05.219585 containerd[1564]: time="2025-09-12T18:04:05.219529934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 494.507025ms" Sep 12 18:04:05.219585 containerd[1564]: time="2025-09-12T18:04:05.219569378Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 18:04:05.220225 containerd[1564]: time="2025-09-12T18:04:05.220183900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 18:04:05.761187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250588301.mount: Deactivated successfully. Sep 12 18:04:06.167545 systemd-resolved[1400]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Sep 12 18:04:07.882193 containerd[1564]: time="2025-09-12T18:04:07.882123430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:07.883199 containerd[1564]: time="2025-09-12T18:04:07.883157737Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 12 18:04:07.885338 containerd[1564]: time="2025-09-12T18:04:07.884323888Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:07.888117 containerd[1564]: time="2025-09-12T18:04:07.887176097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:07.888688 containerd[1564]: time="2025-09-12T18:04:07.888656874Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.668435602s" Sep 12 18:04:07.888761 containerd[1564]: time="2025-09-12T18:04:07.888693507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 18:04:12.010641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 18:04:12.015621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:04:12.085885 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 18:04:12.086008 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 18:04:12.086455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:04:12.097837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:04:12.135435 systemd[1]: Reload requested from client PID 2258 ('systemctl') (unit session-7.scope)... Sep 12 18:04:12.135641 systemd[1]: Reloading... Sep 12 18:04:12.304512 zram_generator::config[2304]: No configuration found. Sep 12 18:04:12.549032 systemd[1]: Reloading finished in 412 ms. Sep 12 18:04:12.607498 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:04:12.611205 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 18:04:12.611480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:04:12.611542 systemd[1]: kubelet.service: Consumed 132ms CPU time, 98.3M memory peak. Sep 12 18:04:12.614396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:04:12.792479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:04:12.805535 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 18:04:12.875893 kubelet[2357]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 18:04:12.876386 kubelet[2357]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 18:04:12.876446 kubelet[2357]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 18:04:12.877850 kubelet[2357]: I0912 18:04:12.877786 2357 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 18:04:12.994407 kubelet[2357]: I0912 18:04:12.994349 2357 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 18:04:12.994407 kubelet[2357]: I0912 18:04:12.994385 2357 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 18:04:12.995894 kubelet[2357]: I0912 18:04:12.995862 2357 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 18:04:13.035883 kubelet[2357]: I0912 18:04:13.035375 2357 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 18:04:13.038614 kubelet[2357]: E0912 18:04:13.038556 2357 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.23.243.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.243.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 18:04:13.057000 kubelet[2357]: I0912 18:04:13.056830 2357 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 18:04:13.069459 kubelet[2357]: I0912 18:04:13.069409 2357 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 18:04:13.069739 kubelet[2357]: I0912 18:04:13.069695 2357 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 18:04:13.073466 kubelet[2357]: I0912 18:04:13.069731 2357 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-8-66567323f5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 18:04:13.073466 kubelet[2357]: I0912 18:04:13.073459 2357 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 18:04:13.073808 kubelet[2357]: I0912 18:04:13.073493 2357 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 18:04:13.073808 kubelet[2357]: I0912 18:04:13.073763 2357 state_mem.go:36] "Initialized new in-memory state store" Sep 12 18:04:13.077566 kubelet[2357]: I0912 18:04:13.076898 2357 kubelet.go:480] "Attempting to sync node with API server" Sep 12 18:04:13.080070 kubelet[2357]: I0912 18:04:13.079719 2357 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 18:04:13.080070 kubelet[2357]: I0912 18:04:13.079790 2357 kubelet.go:386] "Adding apiserver pod source" Sep 12 18:04:13.080070 kubelet[2357]: I0912 18:04:13.079807 2357 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 18:04:13.091660 kubelet[2357]: E0912 18:04:13.090538 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.243.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-8-66567323f5&limit=500&resourceVersion=0\": dial tcp 64.23.243.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 18:04:13.091660 kubelet[2357]: E0912 18:04:13.091226 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.243.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.243.150:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 18:04:13.092233 kubelet[2357]: I0912 18:04:13.092189 2357 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 18:04:13.093538 kubelet[2357]: I0912 18:04:13.093512 2357 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 18:04:13.094629 kubelet[2357]: W0912 18:04:13.094606 2357 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 18:04:13.099514 kubelet[2357]: I0912 18:04:13.099472 2357 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 18:04:13.099667 kubelet[2357]: I0912 18:04:13.099557 2357 server.go:1289] "Started kubelet" Sep 12 18:04:13.103335 kubelet[2357]: I0912 18:04:13.102416 2357 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 18:04:13.107993 kubelet[2357]: E0912 18:04:13.106369 2357 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.243.150:6443/api/v1/namespaces/default/events\": dial tcp 64.23.243.150:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.1.0-8-66567323f5.18649b11479c2c8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.1.0-8-66567323f5,UID:ci-4426.1.0-8-66567323f5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.1.0-8-66567323f5,},FirstTimestamp:2025-09-12 18:04:13.099502734 +0000 UTC m=+0.287373460,LastTimestamp:2025-09-12 18:04:13.099502734 +0000 UTC m=+0.287373460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.1.0-8-66567323f5,}" Sep 12 18:04:13.109623 kubelet[2357]: I0912 18:04:13.109543 2357 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 18:04:13.111704 kubelet[2357]: I0912 18:04:13.111654 2357 server.go:317] "Adding debug handlers to kubelet server" Sep 12 18:04:13.116348 kubelet[2357]: I0912 18:04:13.115467 2357 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 18:04:13.116348 kubelet[2357]: E0912 18:04:13.115883 2357 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-8-66567323f5\" not found" Sep 12 18:04:13.118333 kubelet[2357]: I0912 18:04:13.117413 2357 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 18:04:13.118333 kubelet[2357]: I0912 18:04:13.117650 2357 reconciler.go:26] "Reconciler: start to sync state" Sep 12 18:04:13.121717 kubelet[2357]: E0912 18:04:13.121488 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.243.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.243.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 18:04:13.121717 kubelet[2357]: E0912 18:04:13.121620 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.243.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-8-66567323f5?timeout=10s\": dial tcp 64.23.243.150:6443: connect: connection refused" interval="200ms" Sep 12 
18:04:13.121717 kubelet[2357]: I0912 18:04:13.121677 2357 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 18:04:13.121962 kubelet[2357]: I0912 18:04:13.121820 2357 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 18:04:13.122157 kubelet[2357]: I0912 18:04:13.122134 2357 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 18:04:13.129208 kubelet[2357]: I0912 18:04:13.129041 2357 factory.go:223] Registration of the systemd container factory successfully Sep 12 18:04:13.129208 kubelet[2357]: I0912 18:04:13.129212 2357 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 18:04:13.133918 kubelet[2357]: E0912 18:04:13.133853 2357 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 18:04:13.134362 kubelet[2357]: I0912 18:04:13.133933 2357 factory.go:223] Registration of the containerd container factory successfully Sep 12 18:04:13.163682 kubelet[2357]: I0912 18:04:13.163528 2357 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 18:04:13.167849 kubelet[2357]: I0912 18:04:13.167624 2357 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 18:04:13.167849 kubelet[2357]: I0912 18:04:13.167699 2357 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 18:04:13.167849 kubelet[2357]: I0912 18:04:13.167751 2357 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 18:04:13.167849 kubelet[2357]: I0912 18:04:13.167765 2357 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 18:04:13.169635 kubelet[2357]: E0912 18:04:13.167854 2357 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 18:04:13.171974 kubelet[2357]: E0912 18:04:13.171917 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.243.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.243.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 18:04:13.181778 kubelet[2357]: I0912 18:04:13.181726 2357 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 18:04:13.181778 kubelet[2357]: I0912 18:04:13.181763 2357 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 18:04:13.182074 kubelet[2357]: I0912 18:04:13.181800 2357 state_mem.go:36] "Initialized new in-memory state store" Sep 12 18:04:13.238060 kubelet[2357]: E0912 18:04:13.236573 2357 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-8-66567323f5\" not found" Sep 12 18:04:13.268727 kubelet[2357]: E0912 18:04:13.268661 2357 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 18:04:13.279157 kubelet[2357]: I0912 18:04:13.279097 2357 policy_none.go:49] "None policy: Start" Sep 12 18:04:13.279157 kubelet[2357]: I0912 18:04:13.279153 2357 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 18:04:13.279157 kubelet[2357]: I0912 18:04:13.279179 2357 state_mem.go:35] "Initializing new in-memory state store" Sep 12 18:04:13.289005 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 18:04:13.305804 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 18:04:13.310985 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 18:04:13.322333 kubelet[2357]: E0912 18:04:13.322266 2357 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 18:04:13.322784 kubelet[2357]: I0912 18:04:13.322592 2357 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 18:04:13.322784 kubelet[2357]: I0912 18:04:13.322617 2357 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 18:04:13.322784 kubelet[2357]: E0912 18:04:13.322692 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.243.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-8-66567323f5?timeout=10s\": dial tcp 64.23.243.150:6443: connect: connection refused" interval="400ms" Sep 12 18:04:13.323548 kubelet[2357]: I0912 18:04:13.323496 2357 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 18:04:13.328883 kubelet[2357]: E0912 18:04:13.328799 2357 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 18:04:13.330155 kubelet[2357]: E0912 18:04:13.329192 2357 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.1.0-8-66567323f5\" not found" Sep 12 18:04:13.423955 kubelet[2357]: I0912 18:04:13.423918 2357 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.424426 kubelet[2357]: E0912 18:04:13.424377 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.243.150:6443/api/v1/nodes\": dial tcp 64.23.243.150:6443: connect: connection refused" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.485332 systemd[1]: Created slice kubepods-burstable-podd2638b239053c1927248d9c0a31bcad2.slice - libcontainer container kubepods-burstable-podd2638b239053c1927248d9c0a31bcad2.slice. Sep 12 18:04:13.498854 kubelet[2357]: E0912 18:04:13.498517 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-8-66567323f5\" not found" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.502896 systemd[1]: Created slice kubepods-burstable-pod004786e4beed33d364c8ffe63914ef2d.slice - libcontainer container kubepods-burstable-pod004786e4beed33d364c8ffe63914ef2d.slice. Sep 12 18:04:13.514560 kubelet[2357]: E0912 18:04:13.514518 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-8-66567323f5\" not found" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.519276 systemd[1]: Created slice kubepods-burstable-poda89cb0e1bf2708af8b24745dd09dbc75.slice - libcontainer container kubepods-burstable-poda89cb0e1bf2708af8b24745dd09dbc75.slice. Sep 12 18:04:13.521859 kubelet[2357]: E0912 18:04:13.521648 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-8-66567323f5\" not found" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.538330 kubelet[2357]: I0912 18:04:13.538211 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/004786e4beed33d364c8ffe63914ef2d-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" (UID: \"004786e4beed33d364c8ffe63914ef2d\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.538330 kubelet[2357]: I0912 18:04:13.538279 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a89cb0e1bf2708af8b24745dd09dbc75-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-8-66567323f5\" (UID: \"a89cb0e1bf2708af8b24745dd09dbc75\") " pod="kube-system/kube-scheduler-ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.538330 kubelet[2357]: I0912 18:04:13.538345 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2638b239053c1927248d9c0a31bcad2-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-8-66567323f5\" (UID: \"d2638b239053c1927248d9c0a31bcad2\") " pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.538549 kubelet[2357]: I0912 18:04:13.538375 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2638b239053c1927248d9c0a31bcad2-k8s-certs\") pod 
\"kube-apiserver-ci-4426.1.0-8-66567323f5\" (UID: \"d2638b239053c1927248d9c0a31bcad2\") " pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.538549 kubelet[2357]: I0912 18:04:13.538401 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/004786e4beed33d364c8ffe63914ef2d-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" (UID: \"004786e4beed33d364c8ffe63914ef2d\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.538549 kubelet[2357]: I0912 18:04:13.538427 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/004786e4beed33d364c8ffe63914ef2d-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" (UID: \"004786e4beed33d364c8ffe63914ef2d\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.538549 kubelet[2357]: I0912 18:04:13.538453 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/004786e4beed33d364c8ffe63914ef2d-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" (UID: \"004786e4beed33d364c8ffe63914ef2d\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.538549 kubelet[2357]: I0912 18:04:13.538481 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/004786e4beed33d364c8ffe63914ef2d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" (UID: \"004786e4beed33d364c8ffe63914ef2d\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.538698 kubelet[2357]: I0912 18:04:13.538514 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2638b239053c1927248d9c0a31bcad2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-8-66567323f5\" (UID: \"d2638b239053c1927248d9c0a31bcad2\") " pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.626850 kubelet[2357]: I0912 18:04:13.626694 2357 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.627355 kubelet[2357]: E0912 18:04:13.627153 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.243.150:6443/api/v1/nodes\": dial tcp 64.23.243.150:6443: connect: connection refused" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:13.723941 kubelet[2357]: E0912 18:04:13.723875 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.243.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-8-66567323f5?timeout=10s\": dial tcp 64.23.243.150:6443: connect: connection refused" interval="800ms" Sep 12 18:04:13.799633 kubelet[2357]: E0912 18:04:13.799587 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:13.800533 containerd[1564]: time="2025-09-12T18:04:13.800470403Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-8-66567323f5,Uid:d2638b239053c1927248d9c0a31bcad2,Namespace:kube-system,Attempt:0,}" Sep 12 18:04:13.816209 kubelet[2357]: E0912 18:04:13.815832 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:13.822141 kubelet[2357]: E0912 18:04:13.822081 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:13.824625 containerd[1564]: time="2025-09-12T18:04:13.824541067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-8-66567323f5,Uid:004786e4beed33d364c8ffe63914ef2d,Namespace:kube-system,Attempt:0,}" Sep 12 18:04:13.827829 containerd[1564]: time="2025-09-12T18:04:13.826658990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-8-66567323f5,Uid:a89cb0e1bf2708af8b24745dd09dbc75,Namespace:kube-system,Attempt:0,}" Sep 12 18:04:13.922068 containerd[1564]: time="2025-09-12T18:04:13.921881523Z" level=info msg="connecting to shim d4fd15807d942f5c3100fb92975533195d9182f34854e5c9e5f83d3760737c7c" address="unix:///run/containerd/s/15fc1f16168647e82a2f16aa69430ec4afbf4ac5a19cfadcd622fcb97265d7e6" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:04:13.932841 containerd[1564]: time="2025-09-12T18:04:13.932619270Z" level=info msg="connecting to shim b09b21b4f312b49e514af9a821b0a5162eecfe80e994923cce9f5aa0e4655529" address="unix:///run/containerd/s/ae4cab7d9be620f61fa41d30d7d7ad20288d94c6b20a2125d7f5865cd7248572" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:04:13.942578 containerd[1564]: time="2025-09-12T18:04:13.942517872Z" level=info msg="connecting to shim 7db0e34f5d6d10a820d7d6360c4e460a97e6127c6d09e8ddeed9ef7b3f195402" address="unix:///run/containerd/s/3fbccbfb3dc5698f4a460d51eb10da8ac7493735a391f77e11c3debc88f15727" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:04:14.018750 kubelet[2357]: E0912 18:04:14.018506 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.243.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.243.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 18:04:14.029014 kubelet[2357]: I0912 18:04:14.028629 2357 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:14.029207 kubelet[2357]: E0912 18:04:14.029089 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.243.150:6443/api/v1/nodes\": dial tcp 64.23.243.150:6443: connect: connection refused" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:14.076690 systemd[1]: Started cri-containerd-7db0e34f5d6d10a820d7d6360c4e460a97e6127c6d09e8ddeed9ef7b3f195402.scope - libcontainer container 7db0e34f5d6d10a820d7d6360c4e460a97e6127c6d09e8ddeed9ef7b3f195402. Sep 12 18:04:14.080277 systemd[1]: Started cri-containerd-b09b21b4f312b49e514af9a821b0a5162eecfe80e994923cce9f5aa0e4655529.scope - libcontainer container b09b21b4f312b49e514af9a821b0a5162eecfe80e994923cce9f5aa0e4655529. 
Sep 12 18:04:14.083789 systemd[1]: Started cri-containerd-d4fd15807d942f5c3100fb92975533195d9182f34854e5c9e5f83d3760737c7c.scope - libcontainer container d4fd15807d942f5c3100fb92975533195d9182f34854e5c9e5f83d3760737c7c. Sep 12 18:04:14.095831 kubelet[2357]: E0912 18:04:14.095529 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.243.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-8-66567323f5&limit=500&resourceVersion=0\": dial tcp 64.23.243.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 18:04:14.196424 containerd[1564]: time="2025-09-12T18:04:14.196258798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-8-66567323f5,Uid:d2638b239053c1927248d9c0a31bcad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4fd15807d942f5c3100fb92975533195d9182f34854e5c9e5f83d3760737c7c\"" Sep 12 18:04:14.200520 kubelet[2357]: E0912 18:04:14.200472 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:14.211738 containerd[1564]: time="2025-09-12T18:04:14.211402176Z" level=info msg="CreateContainer within sandbox \"d4fd15807d942f5c3100fb92975533195d9182f34854e5c9e5f83d3760737c7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 18:04:14.213487 containerd[1564]: time="2025-09-12T18:04:14.213361214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-8-66567323f5,Uid:004786e4beed33d364c8ffe63914ef2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b09b21b4f312b49e514af9a821b0a5162eecfe80e994923cce9f5aa0e4655529\"" Sep 12 18:04:14.215013 kubelet[2357]: E0912 18:04:14.214982 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:14.224489 containerd[1564]: time="2025-09-12T18:04:14.224412597Z" level=info msg="CreateContainer within sandbox \"b09b21b4f312b49e514af9a821b0a5162eecfe80e994923cce9f5aa0e4655529\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 18:04:14.229925 containerd[1564]: time="2025-09-12T18:04:14.229866210Z" level=info msg="Container d86e7d840add46eddea42b5a216ceaefeacead17f5ded6a2f8d00ec9d7569ba4: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:14.249094 containerd[1564]: time="2025-09-12T18:04:14.248765566Z" level=info msg="Container 7ce6973c33e6954675b904f2d7564ee784fa77adff0a0be2fe1be71bc558a4a6: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:14.264827 containerd[1564]: time="2025-09-12T18:04:14.264754624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-8-66567323f5,Uid:a89cb0e1bf2708af8b24745dd09dbc75,Namespace:kube-system,Attempt:0,} returns sandbox id \"7db0e34f5d6d10a820d7d6360c4e460a97e6127c6d09e8ddeed9ef7b3f195402\"" Sep 12 18:04:14.265874 kubelet[2357]: E0912 18:04:14.265847 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:14.270878 containerd[1564]: time="2025-09-12T18:04:14.270838765Z" level=info msg="CreateContainer within sandbox 
\"b09b21b4f312b49e514af9a821b0a5162eecfe80e994923cce9f5aa0e4655529\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7ce6973c33e6954675b904f2d7564ee784fa77adff0a0be2fe1be71bc558a4a6\"" Sep 12 18:04:14.271455 containerd[1564]: time="2025-09-12T18:04:14.270986806Z" level=info msg="CreateContainer within sandbox \"d4fd15807d942f5c3100fb92975533195d9182f34854e5c9e5f83d3760737c7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d86e7d840add46eddea42b5a216ceaefeacead17f5ded6a2f8d00ec9d7569ba4\"" Sep 12 18:04:14.271810 containerd[1564]: time="2025-09-12T18:04:14.271778196Z" level=info msg="StartContainer for \"7ce6973c33e6954675b904f2d7564ee784fa77adff0a0be2fe1be71bc558a4a6\"" Sep 12 18:04:14.272257 containerd[1564]: time="2025-09-12T18:04:14.272218416Z" level=info msg="CreateContainer within sandbox \"7db0e34f5d6d10a820d7d6360c4e460a97e6127c6d09e8ddeed9ef7b3f195402\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 18:04:14.277351 containerd[1564]: time="2025-09-12T18:04:14.277261753Z" level=info msg="connecting to shim 7ce6973c33e6954675b904f2d7564ee784fa77adff0a0be2fe1be71bc558a4a6" address="unix:///run/containerd/s/ae4cab7d9be620f61fa41d30d7d7ad20288d94c6b20a2125d7f5865cd7248572" protocol=ttrpc version=3 Sep 12 18:04:14.279367 containerd[1564]: time="2025-09-12T18:04:14.279317829Z" level=info msg="StartContainer for \"d86e7d840add46eddea42b5a216ceaefeacead17f5ded6a2f8d00ec9d7569ba4\"" Sep 12 18:04:14.281429 containerd[1564]: time="2025-09-12T18:04:14.281343925Z" level=info msg="connecting to shim d86e7d840add46eddea42b5a216ceaefeacead17f5ded6a2f8d00ec9d7569ba4" address="unix:///run/containerd/s/15fc1f16168647e82a2f16aa69430ec4afbf4ac5a19cfadcd622fcb97265d7e6" protocol=ttrpc version=3 Sep 12 18:04:14.282947 containerd[1564]: time="2025-09-12T18:04:14.282901178Z" level=info msg="Container 25050976d7ab2aae4eb919c769f40b89fdf936ddc24be97db441fb8d209c2eb4: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:14.308642 systemd[1]: Started cri-containerd-7ce6973c33e6954675b904f2d7564ee784fa77adff0a0be2fe1be71bc558a4a6.scope - libcontainer container 7ce6973c33e6954675b904f2d7564ee784fa77adff0a0be2fe1be71bc558a4a6. Sep 12 18:04:14.313663 systemd[1]: Started cri-containerd-d86e7d840add46eddea42b5a216ceaefeacead17f5ded6a2f8d00ec9d7569ba4.scope - libcontainer container d86e7d840add46eddea42b5a216ceaefeacead17f5ded6a2f8d00ec9d7569ba4. Sep 12 18:04:14.316205 containerd[1564]: time="2025-09-12T18:04:14.315885505Z" level=info msg="CreateContainer within sandbox \"7db0e34f5d6d10a820d7d6360c4e460a97e6127c6d09e8ddeed9ef7b3f195402\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"25050976d7ab2aae4eb919c769f40b89fdf936ddc24be97db441fb8d209c2eb4\"" Sep 12 18:04:14.317505 containerd[1564]: time="2025-09-12T18:04:14.317474561Z" level=info msg="StartContainer for \"25050976d7ab2aae4eb919c769f40b89fdf936ddc24be97db441fb8d209c2eb4\"" Sep 12 18:04:14.324514 containerd[1564]: time="2025-09-12T18:04:14.324145123Z" level=info msg="connecting to shim 25050976d7ab2aae4eb919c769f40b89fdf936ddc24be97db441fb8d209c2eb4" address="unix:///run/containerd/s/3fbccbfb3dc5698f4a460d51eb10da8ac7493735a391f77e11c3debc88f15727" protocol=ttrpc version=3 Sep 12 18:04:14.372546 systemd[1]: Started cri-containerd-25050976d7ab2aae4eb919c769f40b89fdf936ddc24be97db441fb8d209c2eb4.scope - libcontainer container 25050976d7ab2aae4eb919c769f40b89fdf936ddc24be97db441fb8d209c2eb4. 
Sep 12 18:04:14.428968 containerd[1564]: time="2025-09-12T18:04:14.428928697Z" level=info msg="StartContainer for \"7ce6973c33e6954675b904f2d7564ee784fa77adff0a0be2fe1be71bc558a4a6\" returns successfully" Sep 12 18:04:14.449272 containerd[1564]: time="2025-09-12T18:04:14.449154364Z" level=info msg="StartContainer for \"d86e7d840add46eddea42b5a216ceaefeacead17f5ded6a2f8d00ec9d7569ba4\" returns successfully" Sep 12 18:04:14.506651 kubelet[2357]: E0912 18:04:14.506582 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.243.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.243.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 18:04:14.516996 containerd[1564]: time="2025-09-12T18:04:14.516957225Z" level=info msg="StartContainer for \"25050976d7ab2aae4eb919c769f40b89fdf936ddc24be97db441fb8d209c2eb4\" returns successfully" Sep 12 18:04:14.525280 kubelet[2357]: E0912 18:04:14.525224 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.243.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-8-66567323f5?timeout=10s\": dial tcp 64.23.243.150:6443: connect: connection refused" interval="1.6s" Sep 12 18:04:14.563337 kubelet[2357]: E0912 18:04:14.563028 2357 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.243.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.243.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 18:04:14.832416 kubelet[2357]: I0912 18:04:14.832009 2357 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:15.187039 kubelet[2357]: E0912 18:04:15.186920 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-8-66567323f5\" not found" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:15.188787 kubelet[2357]: E0912 18:04:15.188669 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:15.192710 kubelet[2357]: E0912 18:04:15.192470 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-8-66567323f5\" not found" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:15.192710 kubelet[2357]: E0912 18:04:15.192613 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:15.198782 kubelet[2357]: E0912 18:04:15.198746 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-8-66567323f5\" not found" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:15.199170 kubelet[2357]: E0912 18:04:15.199099 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:16.202955 kubelet[2357]: E0912 18:04:16.202895 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4426.1.0-8-66567323f5\" not found" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:16.204223 kubelet[2357]: E0912 18:04:16.203782 2357 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-8-66567323f5\" not found" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:16.204223 kubelet[2357]: E0912 18:04:16.204043 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:16.204223 kubelet[2357]: E0912 18:04:16.204115 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:16.932499 kubelet[2357]: I0912 18:04:16.931576 2357 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:17.017448 kubelet[2357]: I0912 18:04:17.017401 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-8-66567323f5" Sep 12 18:04:17.033264 kubelet[2357]: E0912 18:04:17.032891 2357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.1.0-8-66567323f5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4426.1.0-8-66567323f5" Sep 12 18:04:17.033264 kubelet[2357]: I0912 18:04:17.032943 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:17.036712 kubelet[2357]: E0912 18:04:17.036663 2357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.1.0-8-66567323f5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:17.037258 kubelet[2357]: I0912 18:04:17.036982 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:17.039920 kubelet[2357]: E0912 18:04:17.039869 2357 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:17.094318 kubelet[2357]: I0912 18:04:17.093973 2357 apiserver.go:52] "Watching apiserver" Sep 12 18:04:17.117719 kubelet[2357]: I0912 18:04:17.117672 2357 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 18:04:19.093477 kubelet[2357]: I0912 18:04:19.093432 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:19.102118 kubelet[2357]: I0912 18:04:19.101422 2357 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 18:04:19.102118 kubelet[2357]: E0912 18:04:19.101906 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:19.209479 kubelet[2357]: E0912 18:04:19.209438 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:19.265748 systemd[1]: Reload requested from client PID 2644 ('systemctl') (unit session-7.scope)... Sep 12 18:04:19.265772 systemd[1]: Reloading... Sep 12 18:04:19.383341 zram_generator::config[2687]: No configuration found. Sep 12 18:04:19.403868 kubelet[2357]: I0912 18:04:19.403822 2357 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:19.415106 kubelet[2357]: I0912 18:04:19.415031 2357 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 18:04:19.416996 kubelet[2357]: E0912 18:04:19.416944 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:19.677631 systemd[1]: Reloading finished in 411 ms. Sep 12 18:04:19.713969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:04:19.729679 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 18:04:19.730011 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:04:19.730086 systemd[1]: kubelet.service: Consumed 838ms CPU time, 126.2M memory peak. Sep 12 18:04:19.732414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 18:04:19.902542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 18:04:19.917504 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 18:04:19.988358 kubelet[2738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 18:04:19.988358 kubelet[2738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 18:04:19.988358 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 18:04:19.988969 kubelet[2738]: I0912 18:04:19.988278 2738 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 18:04:20.000092 kubelet[2738]: I0912 18:04:19.999996 2738 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 18:04:20.000092 kubelet[2738]: I0912 18:04:20.000051 2738 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 18:04:20.000606 kubelet[2738]: I0912 18:04:20.000574 2738 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 18:04:20.002837 kubelet[2738]: I0912 18:04:20.002788 2738 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 18:04:20.010207 kubelet[2738]: I0912 18:04:20.009936 2738 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 18:04:20.019644 kubelet[2738]: I0912 18:04:20.019550 2738 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 18:04:20.023761 kubelet[2738]: I0912 18:04:20.023714 2738 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 18:04:20.024110 kubelet[2738]: I0912 18:04:20.024074 2738 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 18:04:20.024386 kubelet[2738]: I0912 18:04:20.024116 2738 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-8-66567323f5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 18:04:20.024485 kubelet[2738]: I0912 18:04:20.024401 2738 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 18:04:20.024485 kubelet[2738]: I0912 18:04:20.024417 2738 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 18:04:20.024536 kubelet[2738]: I0912 18:04:20.024489 2738 state_mem.go:36] "Initialized new in-memory state store" Sep 12 18:04:20.024781 kubelet[2738]: 
I0912 18:04:20.024762 2738 kubelet.go:480] "Attempting to sync node with API server" Sep 12 18:04:20.024829 kubelet[2738]: I0912 18:04:20.024792 2738 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 18:04:20.024829 kubelet[2738]: I0912 18:04:20.024825 2738 kubelet.go:386] "Adding apiserver pod source" Sep 12 18:04:20.024873 kubelet[2738]: I0912 18:04:20.024848 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 18:04:20.028866 kubelet[2738]: I0912 18:04:20.028826 2738 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 18:04:20.031275 kubelet[2738]: I0912 18:04:20.029777 2738 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 18:04:20.034753 kubelet[2738]: I0912 18:04:20.033704 2738 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 18:04:20.034753 kubelet[2738]: I0912 18:04:20.033773 2738 server.go:1289] "Started kubelet" Sep 12 18:04:20.036203 kubelet[2738]: I0912 18:04:20.036158 2738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 18:04:20.047513 kubelet[2738]: I0912 18:04:20.047459 2738 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 18:04:20.051579 kubelet[2738]: I0912 18:04:20.051401 2738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 18:04:20.052312 kubelet[2738]: I0912 18:04:20.051794 2738 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 18:04:20.054196 kubelet[2738]: I0912 18:04:20.054165 2738 server.go:317] "Adding debug handlers to kubelet server" Sep 12 18:04:20.055980 kubelet[2738]: I0912 18:04:20.055629 2738 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 18:04:20.055980 kubelet[2738]: E0912 18:04:20.055945 2738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-8-66567323f5\" not found" Sep 12 18:04:20.056288 kubelet[2738]: I0912 18:04:20.056258 2738 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 18:04:20.056466 kubelet[2738]: I0912 18:04:20.056452 2738 reconciler.go:26] "Reconciler: start to sync state" Sep 12 18:04:20.064647 kubelet[2738]: I0912 18:04:20.063804 2738 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 18:04:20.065005 kubelet[2738]: E0912 18:04:20.064981 2738 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 18:04:20.065511 kubelet[2738]: I0912 18:04:20.065405 2738 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 18:04:20.073053 kubelet[2738]: I0912 18:04:20.072690 2738 factory.go:223] Registration of the containerd container factory successfully Sep 12 18:04:20.073053 kubelet[2738]: I0912 18:04:20.072715 2738 factory.go:223] Registration of the systemd container factory successfully Sep 12 18:04:20.104274 kubelet[2738]: I0912 18:04:20.104062 2738 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 12 18:04:20.106667 kubelet[2738]: I0912 18:04:20.106624 2738 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 18:04:20.107377 kubelet[2738]: I0912 18:04:20.106849 2738 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 18:04:20.107377 kubelet[2738]: I0912 18:04:20.106891 2738 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 18:04:20.107377 kubelet[2738]: I0912 18:04:20.106905 2738 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 18:04:20.107377 kubelet[2738]: E0912 18:04:20.106968 2738 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 18:04:20.148350 kubelet[2738]: I0912 18:04:20.148321 2738 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 18:04:20.148554 kubelet[2738]: I0912 18:04:20.148537 2738 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 18:04:20.148632 kubelet[2738]: I0912 18:04:20.148625 2738 state_mem.go:36] "Initialized new in-memory state store" Sep 12 18:04:20.148827 kubelet[2738]: I0912 18:04:20.148801 2738 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 18:04:20.148920 kubelet[2738]: I0912 18:04:20.148884 2738 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 18:04:20.149029 kubelet[2738]: I0912 18:04:20.148975 2738 policy_none.go:49] "None policy: Start" Sep 12 18:04:20.149101 kubelet[2738]: I0912 18:04:20.149093 2738 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 18:04:20.149144 kubelet[2738]: I0912 18:04:20.149138 2738 state_mem.go:35] "Initializing new in-memory state store" Sep 12 18:04:20.149381 kubelet[2738]: I0912 18:04:20.149364 2738 state_mem.go:75] "Updated machine memory state" Sep 12 18:04:20.156125 kubelet[2738]: E0912 18:04:20.156091 2738 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 18:04:20.156719 kubelet[2738]: I0912 18:04:20.156350 2738 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 18:04:20.156719 kubelet[2738]: I0912 18:04:20.156374 2738 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 18:04:20.156719 kubelet[2738]: I0912 18:04:20.156666 2738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 18:04:20.164179 kubelet[2738]: E0912 18:04:20.164078 2738 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 18:04:20.208447 kubelet[2738]: I0912 18:04:20.208412 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.209271 kubelet[2738]: I0912 18:04:20.208969 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.209467 kubelet[2738]: I0912 18:04:20.209456 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.214540 kubelet[2738]: I0912 18:04:20.214480 2738 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 18:04:20.216217 kubelet[2738]: I0912 18:04:20.216177 2738 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 18:04:20.216448 kubelet[2738]: E0912 18:04:20.216265 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.1.0-8-66567323f5\" already exists" pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.216448 kubelet[2738]: I0912 18:04:20.216338 2738 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 18:04:20.216594 kubelet[2738]: E0912 18:04:20.216537 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" already exists" pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.265749 kubelet[2738]: I0912 18:04:20.265704 2738 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.276275 kubelet[2738]: I0912 18:04:20.275877 2738 kubelet_node_status.go:124] "Node was previously registered" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.276275 kubelet[2738]: I0912 18:04:20.275967 2738 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.285800 sudo[2776]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 18:04:20.286282 sudo[2776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 18:04:20.357763 kubelet[2738]: I0912 18:04:20.357617 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2638b239053c1927248d9c0a31bcad2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-8-66567323f5\" (UID: \"d2638b239053c1927248d9c0a31bcad2\") " pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.358334 kubelet[2738]: I0912 18:04:20.358184 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/004786e4beed33d364c8ffe63914ef2d-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" (UID: \"004786e4beed33d364c8ffe63914ef2d\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.358644 kubelet[2738]: I0912 18:04:20.358574 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/004786e4beed33d364c8ffe63914ef2d-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" (UID: \"004786e4beed33d364c8ffe63914ef2d\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.358933 kubelet[2738]: I0912 18:04:20.358869 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/004786e4beed33d364c8ffe63914ef2d-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" (UID: \"004786e4beed33d364c8ffe63914ef2d\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.359268 kubelet[2738]: I0912 18:04:20.359195 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/004786e4beed33d364c8ffe63914ef2d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" (UID: \"004786e4beed33d364c8ffe63914ef2d\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.359563 kubelet[2738]: I0912 18:04:20.359480 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2638b239053c1927248d9c0a31bcad2-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-8-66567323f5\" (UID: \"d2638b239053c1927248d9c0a31bcad2\") " pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.359808 kubelet[2738]: I0912 18:04:20.359759 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2638b239053c1927248d9c0a31bcad2-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-8-66567323f5\" (UID: \"d2638b239053c1927248d9c0a31bcad2\") " pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.360108 kubelet[2738]: I0912 18:04:20.360052 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/004786e4beed33d364c8ffe63914ef2d-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-8-66567323f5\" (UID: \"004786e4beed33d364c8ffe63914ef2d\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.360278 kubelet[2738]: I0912 18:04:20.360091 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a89cb0e1bf2708af8b24745dd09dbc75-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-8-66567323f5\" (UID: \"a89cb0e1bf2708af8b24745dd09dbc75\") " pod="kube-system/kube-scheduler-ci-4426.1.0-8-66567323f5" Sep 12 18:04:20.517804 kubelet[2738]: E0912 18:04:20.516271 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:20.517804 kubelet[2738]: E0912 18:04:20.517631 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:20.518593 kubelet[2738]: E0912 18:04:20.518561 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Sep 12 18:04:20.861623 sudo[2776]: pam_unix(sudo:session): session closed for user root Sep 12 18:04:21.027006 kubelet[2738]: I0912 18:04:21.026363 2738 apiserver.go:52] "Watching apiserver" Sep 12 18:04:21.056467 kubelet[2738]: I0912 18:04:21.056389 2738 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 18:04:21.132121 kubelet[2738]: E0912 18:04:21.132003 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:21.132919 kubelet[2738]: I0912 18:04:21.132792 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:21.138333 kubelet[2738]: E0912 18:04:21.135436 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:21.143239 kubelet[2738]: I0912 18:04:21.143206 2738 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 18:04:21.143409 kubelet[2738]: E0912 18:04:21.143273 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.1.0-8-66567323f5\" already exists" pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" Sep 12 18:04:21.143491 kubelet[2738]: E0912 18:04:21.143473 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:21.178700 kubelet[2738]: I0912 18:04:21.178632 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426.1.0-8-66567323f5" podStartSLOduration=1.178611319 podStartE2EDuration="1.178611319s" podCreationTimestamp="2025-09-12 18:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:04:21.166205875 +0000 UTC m=+1.240157207" watchObservedRunningTime="2025-09-12 18:04:21.178611319 +0000 UTC m=+1.252562648" Sep 12 18:04:21.194561 kubelet[2738]: I0912 18:04:21.194497 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.1.0-8-66567323f5" podStartSLOduration=2.194478466 podStartE2EDuration="2.194478466s" podCreationTimestamp="2025-09-12 18:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:04:21.193139796 +0000 UTC m=+1.267091127" watchObservedRunningTime="2025-09-12 18:04:21.194478466 +0000 UTC m=+1.268429795" Sep 12 18:04:21.194766 kubelet[2738]: I0912 18:04:21.194582 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.1.0-8-66567323f5" podStartSLOduration=2.194578005 podStartE2EDuration="2.194578005s" podCreationTimestamp="2025-09-12 18:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:04:21.179331255 +0000 UTC m=+1.253282580" watchObservedRunningTime="2025-09-12 18:04:21.194578005 +0000 UTC m=+1.268529334" Sep 12 18:04:22.138778 kubelet[2738]: E0912 
18:04:22.137632 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:22.141059 kubelet[2738]: E0912 18:04:22.140905 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:22.209053 kubelet[2738]: E0912 18:04:22.208985 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:22.398352 sudo[1786]: pam_unix(sudo:session): session closed for user root Sep 12 18:04:22.401342 sshd[1785]: Connection closed by 139.178.89.65 port 54938 Sep 12 18:04:22.402978 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Sep 12 18:04:22.409372 systemd[1]: sshd@6-64.23.243.150:22-139.178.89.65:54938.service: Deactivated successfully. Sep 12 18:04:22.413795 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 18:04:22.414237 systemd[1]: session-7.scope: Consumed 6.248s CPU time, 223.8M memory peak. Sep 12 18:04:22.416506 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit. Sep 12 18:04:22.418824 systemd-logind[1525]: Removed session 7. Sep 12 18:04:23.138761 kubelet[2738]: E0912 18:04:23.138722 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:23.140263 kubelet[2738]: E0912 18:04:23.140234 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:24.140398 kubelet[2738]: E0912 18:04:24.140260 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:25.757056 kubelet[2738]: I0912 18:04:25.756980 2738 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 18:04:25.758914 containerd[1564]: time="2025-09-12T18:04:25.758195493Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 18:04:25.759290 kubelet[2738]: I0912 18:04:25.758547 2738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 18:04:26.430022 systemd[1]: Created slice kubepods-besteffort-pod8fa34b1c_4b69_442c_93b2_c21022140c7c.slice - libcontainer container kubepods-besteffort-pod8fa34b1c_4b69_442c_93b2_c21022140c7c.slice. Sep 12 18:04:26.444264 systemd[1]: Created slice kubepods-burstable-pod174403b6_1adf_4677_8e24_d8a86b2ea600.slice - libcontainer container kubepods-burstable-pod174403b6_1adf_4677_8e24_d8a86b2ea600.slice. 
Sep 12 18:04:26.497144 kubelet[2738]: I0912 18:04:26.497092 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-hostproc\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.497379 kubelet[2738]: I0912 18:04:26.497360 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/174403b6-1adf-4677-8e24-d8a86b2ea600-clustermesh-secrets\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.497584 kubelet[2738]: I0912 18:04:26.497564 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/174403b6-1adf-4677-8e24-d8a86b2ea600-hubble-tls\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.497920 kubelet[2738]: I0912 18:04:26.497808 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fa34b1c-4b69-442c-93b2-c21022140c7c-lib-modules\") pod \"kube-proxy-jq458\" (UID: \"8fa34b1c-4b69-442c-93b2-c21022140c7c\") " pod="kube-system/kube-proxy-jq458" Sep 12 18:04:26.497920 kubelet[2738]: I0912 18:04:26.497869 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-bpf-maps\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.497920 kubelet[2738]: I0912 18:04:26.497887 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-host-proc-sys-net\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.498045 kubelet[2738]: I0912 18:04:26.498032 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj9z7\" (UniqueName: \"kubernetes.io/projected/174403b6-1adf-4677-8e24-d8a86b2ea600-kube-api-access-dj9z7\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.498124 kubelet[2738]: I0912 18:04:26.498114 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bn4z\" (UniqueName: \"kubernetes.io/projected/8fa34b1c-4b69-442c-93b2-c21022140c7c-kube-api-access-5bn4z\") pod \"kube-proxy-jq458\" (UID: \"8fa34b1c-4b69-442c-93b2-c21022140c7c\") " pod="kube-system/kube-proxy-jq458" Sep 12 18:04:26.498210 kubelet[2738]: I0912 18:04:26.498185 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-cgroup\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.498316 kubelet[2738]: I0912 18:04:26.498269 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cni-path\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.498366 kubelet[2738]: I0912 18:04:26.498286 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-etc-cni-netd\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.498441 kubelet[2738]: I0912 18:04:26.498422 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-xtables-lock\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.498519 kubelet[2738]: I0912 18:04:26.498505 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-lib-modules\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.498869 kubelet[2738]: I0912 18:04:26.498584 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-config-path\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.498869 kubelet[2738]: I0912 18:04:26.498610 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-host-proc-sys-kernel\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.498869 kubelet[2738]: I0912 18:04:26.498637 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8fa34b1c-4b69-442c-93b2-c21022140c7c-kube-proxy\") pod \"kube-proxy-jq458\" (UID: \"8fa34b1c-4b69-442c-93b2-c21022140c7c\") " pod="kube-system/kube-proxy-jq458" Sep 12 18:04:26.498869 kubelet[2738]: I0912 18:04:26.498661 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fa34b1c-4b69-442c-93b2-c21022140c7c-xtables-lock\") pod \"kube-proxy-jq458\" (UID: \"8fa34b1c-4b69-442c-93b2-c21022140c7c\") " pod="kube-system/kube-proxy-jq458" Sep 12 18:04:26.498869 kubelet[2738]: I0912 18:04:26.498685 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-run\") pod \"cilium-bn4mq\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " pod="kube-system/cilium-bn4mq" Sep 12 18:04:26.742049 kubelet[2738]: E0912 18:04:26.740859 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:26.742181 containerd[1564]: time="2025-09-12T18:04:26.741614881Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jq458,Uid:8fa34b1c-4b69-442c-93b2-c21022140c7c,Namespace:kube-system,Attempt:0,}" Sep 12 18:04:26.749898 kubelet[2738]: E0912 18:04:26.749487 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:26.750512 containerd[1564]: time="2025-09-12T18:04:26.750480609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bn4mq,Uid:174403b6-1adf-4677-8e24-d8a86b2ea600,Namespace:kube-system,Attempt:0,}" Sep 12 18:04:26.789447 containerd[1564]: time="2025-09-12T18:04:26.789193668Z" level=info msg="connecting to shim d55f1124f38e49d7d939ff23f00d14893d7d6c694393c11aa4b9106ac17649c5" address="unix:///run/containerd/s/dc44f2243a52b73ff7b7d8dbfc6123a85d46ce3a00fe9fa0222ace56ea7e9723" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:04:26.793840 containerd[1564]: time="2025-09-12T18:04:26.793490747Z" level=info msg="connecting to shim ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449" address="unix:///run/containerd/s/c7065281710374d628fc0d2cf86ccd7bfc3c8ac34a0ee5fc20ec10a11e1cf076" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:04:26.864590 systemd[1]: Started cri-containerd-ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449.scope - libcontainer container ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449. Sep 12 18:04:26.888566 systemd[1]: Started cri-containerd-d55f1124f38e49d7d939ff23f00d14893d7d6c694393c11aa4b9106ac17649c5.scope - libcontainer container d55f1124f38e49d7d939ff23f00d14893d7d6c694393c11aa4b9106ac17649c5. Sep 12 18:04:26.899275 systemd[1]: Created slice kubepods-besteffort-pod2b4852fe_9f34_432c_9856_cf54b82389e0.slice - libcontainer container kubepods-besteffort-pod2b4852fe_9f34_432c_9856_cf54b82389e0.slice. 
Sep 12 18:04:26.903730 kubelet[2738]: I0912 18:04:26.903445 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj4gn\" (UniqueName: \"kubernetes.io/projected/2b4852fe-9f34-432c-9856-cf54b82389e0-kube-api-access-jj4gn\") pod \"cilium-operator-6c4d7847fc-lxgtk\" (UID: \"2b4852fe-9f34-432c-9856-cf54b82389e0\") " pod="kube-system/cilium-operator-6c4d7847fc-lxgtk" Sep 12 18:04:26.904597 kubelet[2738]: I0912 18:04:26.904500 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b4852fe-9f34-432c-9856-cf54b82389e0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lxgtk\" (UID: \"2b4852fe-9f34-432c-9856-cf54b82389e0\") " pod="kube-system/cilium-operator-6c4d7847fc-lxgtk" Sep 12 18:04:26.960027 containerd[1564]: time="2025-09-12T18:04:26.959935098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bn4mq,Uid:174403b6-1adf-4677-8e24-d8a86b2ea600,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\"" Sep 12 18:04:26.962077 kubelet[2738]: E0912 18:04:26.961808 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:26.966352 containerd[1564]: time="2025-09-12T18:04:26.966287270Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 18:04:26.973748 systemd-resolved[1400]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Sep 12 18:04:26.974915 containerd[1564]: time="2025-09-12T18:04:26.974479208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jq458,Uid:8fa34b1c-4b69-442c-93b2-c21022140c7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d55f1124f38e49d7d939ff23f00d14893d7d6c694393c11aa4b9106ac17649c5\"" Sep 12 18:04:26.977610 kubelet[2738]: E0912 18:04:26.977579 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:26.985235 containerd[1564]: time="2025-09-12T18:04:26.985170322Z" level=info msg="CreateContainer within sandbox \"d55f1124f38e49d7d939ff23f00d14893d7d6c694393c11aa4b9106ac17649c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 18:04:26.999858 containerd[1564]: time="2025-09-12T18:04:26.998904640Z" level=info msg="Container 6e04956e93f82b39616fc3309774e00507e13172c96e39a74c7b81ce67a57652: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:27.014231 containerd[1564]: time="2025-09-12T18:04:27.014147491Z" level=info msg="CreateContainer within sandbox \"d55f1124f38e49d7d939ff23f00d14893d7d6c694393c11aa4b9106ac17649c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6e04956e93f82b39616fc3309774e00507e13172c96e39a74c7b81ce67a57652\"" Sep 12 18:04:27.018323 containerd[1564]: time="2025-09-12T18:04:27.017563389Z" level=info msg="StartContainer for \"6e04956e93f82b39616fc3309774e00507e13172c96e39a74c7b81ce67a57652\"" Sep 12 18:04:27.019447 containerd[1564]: time="2025-09-12T18:04:27.019413129Z" level=info msg="connecting to shim 6e04956e93f82b39616fc3309774e00507e13172c96e39a74c7b81ce67a57652" 
address="unix:///run/containerd/s/dc44f2243a52b73ff7b7d8dbfc6123a85d46ce3a00fe9fa0222ace56ea7e9723" protocol=ttrpc version=3 Sep 12 18:04:27.045565 systemd[1]: Started cri-containerd-6e04956e93f82b39616fc3309774e00507e13172c96e39a74c7b81ce67a57652.scope - libcontainer container 6e04956e93f82b39616fc3309774e00507e13172c96e39a74c7b81ce67a57652. Sep 12 18:04:27.100613 containerd[1564]: time="2025-09-12T18:04:27.100497840Z" level=info msg="StartContainer for \"6e04956e93f82b39616fc3309774e00507e13172c96e39a74c7b81ce67a57652\" returns successfully" Sep 12 18:04:27.153029 kubelet[2738]: E0912 18:04:27.152961 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:27.169337 kubelet[2738]: I0912 18:04:27.168200 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jq458" podStartSLOduration=1.168178579 podStartE2EDuration="1.168178579s" podCreationTimestamp="2025-09-12 18:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:04:27.167792045 +0000 UTC m=+7.241743384" watchObservedRunningTime="2025-09-12 18:04:27.168178579 +0000 UTC m=+7.242129898" Sep 12 18:04:27.204323 kubelet[2738]: E0912 18:04:27.203611 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:27.205096 containerd[1564]: time="2025-09-12T18:04:27.205063480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lxgtk,Uid:2b4852fe-9f34-432c-9856-cf54b82389e0,Namespace:kube-system,Attempt:0,}" Sep 12 18:04:27.241890 containerd[1564]: time="2025-09-12T18:04:27.241835232Z" level=info msg="connecting to shim 2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec" address="unix:///run/containerd/s/6c3138afdddf8667d8da43e90f6263e285b363971f2f2603d68be457fa318ca1" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:04:27.291549 systemd[1]: Started cri-containerd-2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec.scope - libcontainer container 2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec. 
Sep 12 18:04:27.375859 containerd[1564]: time="2025-09-12T18:04:27.375807025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lxgtk,Uid:2b4852fe-9f34-432c-9856-cf54b82389e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec\"" Sep 12 18:04:27.377235 kubelet[2738]: E0912 18:04:27.377203 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:31.140680 kubelet[2738]: E0912 18:04:31.140638 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:31.170935 kubelet[2738]: E0912 18:04:31.170190 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:32.216781 kubelet[2738]: E0912 18:04:32.216735 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:32.589686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086905637.mount: Deactivated successfully. Sep 12 18:04:33.164133 kubelet[2738]: E0912 18:04:33.163830 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:33.595975 update_engine[1529]: I20250912 18:04:33.595889 1529 update_attempter.cc:509] Updating boot flags... 
Sep 12 18:04:35.467975 containerd[1564]: time="2025-09-12T18:04:35.467045905Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 18:04:35.502685 containerd[1564]: time="2025-09-12T18:04:35.502481459Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:35.505139 containerd[1564]: time="2025-09-12T18:04:35.504634239Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.538006599s" Sep 12 18:04:35.505139 containerd[1564]: time="2025-09-12T18:04:35.504698647Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 18:04:35.505837 containerd[1564]: time="2025-09-12T18:04:35.505775831Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:35.508059 containerd[1564]: time="2025-09-12T18:04:35.508022514Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 18:04:35.513234 containerd[1564]: time="2025-09-12T18:04:35.512948508Z" level=info msg="CreateContainer within sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 18:04:35.533727 containerd[1564]: time="2025-09-12T18:04:35.533676164Z" level=info msg="Container c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:35.566916 containerd[1564]: time="2025-09-12T18:04:35.566776271Z" level=info msg="CreateContainer within sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\"" Sep 12 18:04:35.567714 containerd[1564]: time="2025-09-12T18:04:35.567668856Z" level=info msg="StartContainer for \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\"" Sep 12 18:04:35.571633 containerd[1564]: time="2025-09-12T18:04:35.571497882Z" level=info msg="connecting to shim c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20" address="unix:///run/containerd/s/c7065281710374d628fc0d2cf86ccd7bfc3c8ac34a0ee5fc20ec10a11e1cf076" protocol=ttrpc version=3 Sep 12 18:04:35.601446 systemd[1]: Started cri-containerd-c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20.scope - libcontainer container c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20. 
Sep 12 18:04:35.649871 containerd[1564]: time="2025-09-12T18:04:35.649833809Z" level=info msg="StartContainer for \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\" returns successfully" Sep 12 18:04:35.662096 systemd[1]: cri-containerd-c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20.scope: Deactivated successfully. Sep 12 18:04:35.680326 containerd[1564]: time="2025-09-12T18:04:35.680244665Z" level=info msg="received exit event container_id:\"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\" id:\"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\" pid:3178 exited_at:{seconds:1757700275 nanos:666109056}" Sep 12 18:04:35.693881 containerd[1564]: time="2025-09-12T18:04:35.693733585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\" id:\"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\" pid:3178 exited_at:{seconds:1757700275 nanos:666109056}" Sep 12 18:04:35.723544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20-rootfs.mount: Deactivated successfully. Sep 12 18:04:36.194967 kubelet[2738]: E0912 18:04:36.194931 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:36.205900 containerd[1564]: time="2025-09-12T18:04:36.205798322Z" level=info msg="CreateContainer within sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 18:04:36.214029 containerd[1564]: time="2025-09-12T18:04:36.213982981Z" level=info msg="Container 247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:36.223547 containerd[1564]: time="2025-09-12T18:04:36.223497337Z" level=info msg="CreateContainer within sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\"" Sep 12 18:04:36.225981 containerd[1564]: time="2025-09-12T18:04:36.225924746Z" level=info msg="StartContainer for \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\"" Sep 12 18:04:36.229649 containerd[1564]: time="2025-09-12T18:04:36.229592072Z" level=info msg="connecting to shim 247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e" address="unix:///run/containerd/s/c7065281710374d628fc0d2cf86ccd7bfc3c8ac34a0ee5fc20ec10a11e1cf076" protocol=ttrpc version=3 Sep 12 18:04:36.261571 systemd[1]: Started cri-containerd-247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e.scope - libcontainer container 247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e. Sep 12 18:04:36.320810 containerd[1564]: time="2025-09-12T18:04:36.320677002Z" level=info msg="StartContainer for \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\" returns successfully" Sep 12 18:04:36.326216 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 18:04:36.326833 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 18:04:36.327471 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Sep 12 18:04:36.330694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 18:04:36.334390 systemd[1]: cri-containerd-247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e.scope: Deactivated successfully. Sep 12 18:04:36.335983 containerd[1564]: time="2025-09-12T18:04:36.335739960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\" id:\"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\" pid:3222 exited_at:{seconds:1757700276 nanos:334032786}" Sep 12 18:04:36.337086 containerd[1564]: time="2025-09-12T18:04:36.336443940Z" level=info msg="received exit event container_id:\"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\" id:\"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\" pid:3222 exited_at:{seconds:1757700276 nanos:334032786}" Sep 12 18:04:36.358988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 18:04:36.892259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459981170.mount: Deactivated successfully. Sep 12 18:04:37.201288 kubelet[2738]: E0912 18:04:37.199724 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:37.205748 containerd[1564]: time="2025-09-12T18:04:37.205154919Z" level=info msg="CreateContainer within sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 18:04:37.226064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688351263.mount: Deactivated successfully. Sep 12 18:04:37.231342 containerd[1564]: time="2025-09-12T18:04:37.230975140Z" level=info msg="Container 06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:37.243385 containerd[1564]: time="2025-09-12T18:04:37.243343107Z" level=info msg="CreateContainer within sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\"" Sep 12 18:04:37.245679 containerd[1564]: time="2025-09-12T18:04:37.245645872Z" level=info msg="StartContainer for \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\"" Sep 12 18:04:37.247808 containerd[1564]: time="2025-09-12T18:04:37.247769816Z" level=info msg="connecting to shim 06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565" address="unix:///run/containerd/s/c7065281710374d628fc0d2cf86ccd7bfc3c8ac34a0ee5fc20ec10a11e1cf076" protocol=ttrpc version=3 Sep 12 18:04:37.302586 systemd[1]: Started cri-containerd-06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565.scope - libcontainer container 06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565. Sep 12 18:04:37.387038 systemd[1]: cri-containerd-06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565.scope: Deactivated successfully. 
Sep 12 18:04:37.389660 containerd[1564]: time="2025-09-12T18:04:37.389468493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\" id:\"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\" pid:3280 exited_at:{seconds:1757700277 nanos:389055250}" Sep 12 18:04:37.389660 containerd[1564]: time="2025-09-12T18:04:37.389551778Z" level=info msg="received exit event container_id:\"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\" id:\"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\" pid:3280 exited_at:{seconds:1757700277 nanos:389055250}" Sep 12 18:04:37.401142 containerd[1564]: time="2025-09-12T18:04:37.401093558Z" level=info msg="StartContainer for \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\" returns successfully" Sep 12 18:04:37.534416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394738066.mount: Deactivated successfully. Sep 12 18:04:37.739842 containerd[1564]: time="2025-09-12T18:04:37.739774143Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:37.740555 containerd[1564]: time="2025-09-12T18:04:37.740522585Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 18:04:37.741398 containerd[1564]: time="2025-09-12T18:04:37.741162150Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 18:04:37.742460 containerd[1564]: time="2025-09-12T18:04:37.742429591Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.233875631s" Sep 12 18:04:37.742671 containerd[1564]: time="2025-09-12T18:04:37.742640531Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 18:04:37.749550 containerd[1564]: time="2025-09-12T18:04:37.749490643Z" level=info msg="CreateContainer within sandbox \"2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 18:04:37.771580 containerd[1564]: time="2025-09-12T18:04:37.770975702Z" level=info msg="Container 3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:37.781239 containerd[1564]: time="2025-09-12T18:04:37.781120758Z" level=info msg="CreateContainer within sandbox \"2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\"" Sep 12 18:04:37.783492 containerd[1564]: time="2025-09-12T18:04:37.783454387Z" level=info msg="StartContainer for 
\"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\"" Sep 12 18:04:37.786062 containerd[1564]: time="2025-09-12T18:04:37.785843764Z" level=info msg="connecting to shim 3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f" address="unix:///run/containerd/s/6c3138afdddf8667d8da43e90f6263e285b363971f2f2603d68be457fa318ca1" protocol=ttrpc version=3 Sep 12 18:04:37.818677 systemd[1]: Started cri-containerd-3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f.scope - libcontainer container 3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f. Sep 12 18:04:37.860109 containerd[1564]: time="2025-09-12T18:04:37.860036058Z" level=info msg="StartContainer for \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" returns successfully" Sep 12 18:04:38.214088 kubelet[2738]: E0912 18:04:38.213932 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:38.226855 kubelet[2738]: E0912 18:04:38.226782 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:38.228355 containerd[1564]: time="2025-09-12T18:04:38.227229454Z" level=info msg="CreateContainer within sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 18:04:38.247239 containerd[1564]: time="2025-09-12T18:04:38.244068029Z" level=info msg="Container cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:38.251640 containerd[1564]: time="2025-09-12T18:04:38.251130050Z" level=info msg="CreateContainer within sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\"" Sep 12 18:04:38.256118 containerd[1564]: time="2025-09-12T18:04:38.256075781Z" level=info msg="StartContainer for \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\"" Sep 12 18:04:38.257215 containerd[1564]: time="2025-09-12T18:04:38.257158305Z" level=info msg="connecting to shim cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005" address="unix:///run/containerd/s/c7065281710374d628fc0d2cf86ccd7bfc3c8ac34a0ee5fc20ec10a11e1cf076" protocol=ttrpc version=3 Sep 12 18:04:38.293170 kubelet[2738]: I0912 18:04:38.292842 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lxgtk" podStartSLOduration=1.9291197850000001 podStartE2EDuration="12.292813004s" podCreationTimestamp="2025-09-12 18:04:26 +0000 UTC" firstStartedPulling="2025-09-12 18:04:27.379882366 +0000 UTC m=+7.453833671" lastFinishedPulling="2025-09-12 18:04:37.743575586 +0000 UTC m=+17.817526890" observedRunningTime="2025-09-12 18:04:38.289620589 +0000 UTC m=+18.363571937" watchObservedRunningTime="2025-09-12 18:04:38.292813004 +0000 UTC m=+18.366764345" Sep 12 18:04:38.297768 systemd[1]: Started cri-containerd-cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005.scope - libcontainer container cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005. 
Sep 12 18:04:38.395380 systemd[1]: cri-containerd-cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005.scope: Deactivated successfully. Sep 12 18:04:38.398718 containerd[1564]: time="2025-09-12T18:04:38.398225890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\" id:\"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\" pid:3354 exited_at:{seconds:1757700278 nanos:395611746}" Sep 12 18:04:38.398933 containerd[1564]: time="2025-09-12T18:04:38.398908437Z" level=info msg="received exit event container_id:\"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\" id:\"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\" pid:3354 exited_at:{seconds:1757700278 nanos:395611746}" Sep 12 18:04:38.426953 containerd[1564]: time="2025-09-12T18:04:38.426908708Z" level=info msg="StartContainer for \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\" returns successfully" Sep 12 18:04:39.229586 kubelet[2738]: E0912 18:04:39.229540 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:39.231565 kubelet[2738]: E0912 18:04:39.230594 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:39.242101 containerd[1564]: time="2025-09-12T18:04:39.242020246Z" level=info msg="CreateContainer within sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 18:04:39.259318 containerd[1564]: time="2025-09-12T18:04:39.259219060Z" level=info msg="Container e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:39.263645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3293612270.mount: Deactivated successfully. Sep 12 18:04:39.275987 containerd[1564]: time="2025-09-12T18:04:39.275865026Z" level=info msg="CreateContainer within sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\"" Sep 12 18:04:39.277057 containerd[1564]: time="2025-09-12T18:04:39.276854773Z" level=info msg="StartContainer for \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\"" Sep 12 18:04:39.279198 containerd[1564]: time="2025-09-12T18:04:39.279130131Z" level=info msg="connecting to shim e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593" address="unix:///run/containerd/s/c7065281710374d628fc0d2cf86ccd7bfc3c8ac34a0ee5fc20ec10a11e1cf076" protocol=ttrpc version=3 Sep 12 18:04:39.320702 systemd[1]: Started cri-containerd-e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593.scope - libcontainer container e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593. 
Sep 12 18:04:39.386226 containerd[1564]: time="2025-09-12T18:04:39.386108439Z" level=info msg="StartContainer for \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" returns successfully" Sep 12 18:04:39.524176 containerd[1564]: time="2025-09-12T18:04:39.521844298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" id:\"d2b25577b26debe5d7979206d5f9b1bbcd1fe8377af5416cd111e388fd28871d\" pid:3420 exited_at:{seconds:1757700279 nanos:520483184}" Sep 12 18:04:39.583724 kubelet[2738]: I0912 18:04:39.583688 2738 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 18:04:39.661795 systemd[1]: Created slice kubepods-burstable-podc6cb1a82_fe1d_46cc_9240_a6e971eb2016.slice - libcontainer container kubepods-burstable-podc6cb1a82_fe1d_46cc_9240_a6e971eb2016.slice. Sep 12 18:04:39.671711 systemd[1]: Created slice kubepods-burstable-pod12aea24c_f044_47eb_b97f_46fe8b338d5b.slice - libcontainer container kubepods-burstable-pod12aea24c_f044_47eb_b97f_46fe8b338d5b.slice. Sep 12 18:04:39.811659 kubelet[2738]: I0912 18:04:39.811494 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d8bl\" (UniqueName: \"kubernetes.io/projected/12aea24c-f044-47eb-b97f-46fe8b338d5b-kube-api-access-7d8bl\") pod \"coredns-674b8bbfcf-z2cfm\" (UID: \"12aea24c-f044-47eb-b97f-46fe8b338d5b\") " pod="kube-system/coredns-674b8bbfcf-z2cfm" Sep 12 18:04:39.811659 kubelet[2738]: I0912 18:04:39.811540 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6cb1a82-fe1d-46cc-9240-a6e971eb2016-config-volume\") pod \"coredns-674b8bbfcf-8vvl9\" (UID: \"c6cb1a82-fe1d-46cc-9240-a6e971eb2016\") " pod="kube-system/coredns-674b8bbfcf-8vvl9" Sep 12 18:04:39.811659 kubelet[2738]: I0912 18:04:39.811579 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx2n9\" (UniqueName: \"kubernetes.io/projected/c6cb1a82-fe1d-46cc-9240-a6e971eb2016-kube-api-access-fx2n9\") pod \"coredns-674b8bbfcf-8vvl9\" (UID: \"c6cb1a82-fe1d-46cc-9240-a6e971eb2016\") " pod="kube-system/coredns-674b8bbfcf-8vvl9" Sep 12 18:04:39.811659 kubelet[2738]: I0912 18:04:39.811610 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12aea24c-f044-47eb-b97f-46fe8b338d5b-config-volume\") pod \"coredns-674b8bbfcf-z2cfm\" (UID: \"12aea24c-f044-47eb-b97f-46fe8b338d5b\") " pod="kube-system/coredns-674b8bbfcf-z2cfm" Sep 12 18:04:39.967764 kubelet[2738]: E0912 18:04:39.967677 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:39.968808 containerd[1564]: time="2025-09-12T18:04:39.968749232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8vvl9,Uid:c6cb1a82-fe1d-46cc-9240-a6e971eb2016,Namespace:kube-system,Attempt:0,}" Sep 12 18:04:39.981057 kubelet[2738]: E0912 18:04:39.980998 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:39.992070 containerd[1564]: time="2025-09-12T18:04:39.991983000Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z2cfm,Uid:12aea24c-f044-47eb-b97f-46fe8b338d5b,Namespace:kube-system,Attempt:0,}" Sep 12 18:04:40.261112 kubelet[2738]: E0912 18:04:40.261052 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:40.297387 kubelet[2738]: I0912 18:04:40.297233 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bn4mq" podStartSLOduration=5.754096843 podStartE2EDuration="14.297207051s" podCreationTimestamp="2025-09-12 18:04:26 +0000 UTC" firstStartedPulling="2025-09-12 18:04:26.964620795 +0000 UTC m=+7.038572130" lastFinishedPulling="2025-09-12 18:04:35.507731021 +0000 UTC m=+15.581682338" observedRunningTime="2025-09-12 18:04:40.294453978 +0000 UTC m=+20.368405306" watchObservedRunningTime="2025-09-12 18:04:40.297207051 +0000 UTC m=+20.371158389" Sep 12 18:04:41.264661 kubelet[2738]: E0912 18:04:41.264545 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:42.131845 systemd-networkd[1432]: cilium_host: Link UP Sep 12 18:04:42.132053 systemd-networkd[1432]: cilium_net: Link UP Sep 12 18:04:42.132274 systemd-networkd[1432]: cilium_net: Gained carrier Sep 12 18:04:42.135370 systemd-networkd[1432]: cilium_host: Gained carrier Sep 12 18:04:42.254686 systemd-networkd[1432]: cilium_net: Gained IPv6LL Sep 12 18:04:42.268738 kubelet[2738]: E0912 18:04:42.268646 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:42.313915 systemd-networkd[1432]: cilium_vxlan: Link UP Sep 12 18:04:42.313926 systemd-networkd[1432]: cilium_vxlan: Gained carrier Sep 12 18:04:42.738407 kernel: NET: Registered PF_ALG protocol family Sep 12 18:04:42.902556 systemd-networkd[1432]: cilium_host: Gained IPv6LL Sep 12 18:04:43.704686 systemd-networkd[1432]: lxc_health: Link UP Sep 12 18:04:43.715496 systemd-networkd[1432]: lxc_health: Gained carrier Sep 12 18:04:43.990528 systemd-networkd[1432]: cilium_vxlan: Gained IPv6LL Sep 12 18:04:44.029623 kernel: eth0: renamed from tmpc11db Sep 12 18:04:44.031989 systemd-networkd[1432]: lxcbf328dfc8b0a: Link UP Sep 12 18:04:44.035139 systemd-networkd[1432]: lxcbf328dfc8b0a: Gained carrier Sep 12 18:04:44.093320 systemd-networkd[1432]: lxc8f4bd030a96f: Link UP Sep 12 18:04:44.099466 kernel: eth0: renamed from tmp15c30 Sep 12 18:04:44.101015 systemd-networkd[1432]: lxc8f4bd030a96f: Gained carrier Sep 12 18:04:44.752105 kubelet[2738]: E0912 18:04:44.752063 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:45.334941 systemd-networkd[1432]: lxcbf328dfc8b0a: Gained IPv6LL Sep 12 18:04:45.526982 systemd-networkd[1432]: lxc_health: Gained IPv6LL Sep 12 18:04:45.911416 systemd-networkd[1432]: lxc8f4bd030a96f: Gained IPv6LL Sep 12 18:04:47.571815 kubelet[2738]: I0912 18:04:47.570558 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 18:04:47.571815 kubelet[2738]: E0912 18:04:47.571088 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:48.283242 kubelet[2738]: E0912 18:04:48.283183 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:49.632074 containerd[1564]: time="2025-09-12T18:04:49.631441715Z" level=info msg="connecting to shim 15c30beca33d79a4c6647bd0b97c14287084c13b328dadce587044271042e9f3" address="unix:///run/containerd/s/0c57ae245d1a91f0ee9bca6799ceba0f94009ea885c0a1b45e2b09b83288ce36" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:04:49.692433 containerd[1564]: time="2025-09-12T18:04:49.692377849Z" level=info msg="connecting to shim c11dbea8bbbbf3961cb1dcd9eff042945507b9d64fcd5f733cdbb529ab68093c" address="unix:///run/containerd/s/5b778cdca623f36c2d2165299792b8a20bce39e3b243b5120b545407ecae0416" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:04:49.715351 systemd[1]: Started cri-containerd-15c30beca33d79a4c6647bd0b97c14287084c13b328dadce587044271042e9f3.scope - libcontainer container 15c30beca33d79a4c6647bd0b97c14287084c13b328dadce587044271042e9f3. Sep 12 18:04:49.774584 systemd[1]: Started cri-containerd-c11dbea8bbbbf3961cb1dcd9eff042945507b9d64fcd5f733cdbb529ab68093c.scope - libcontainer container c11dbea8bbbbf3961cb1dcd9eff042945507b9d64fcd5f733cdbb529ab68093c. Sep 12 18:04:49.825252 containerd[1564]: time="2025-09-12T18:04:49.825210246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z2cfm,Uid:12aea24c-f044-47eb-b97f-46fe8b338d5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"15c30beca33d79a4c6647bd0b97c14287084c13b328dadce587044271042e9f3\"" Sep 12 18:04:49.827086 kubelet[2738]: E0912 18:04:49.827027 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:49.836520 containerd[1564]: time="2025-09-12T18:04:49.836230031Z" level=info msg="CreateContainer within sandbox \"15c30beca33d79a4c6647bd0b97c14287084c13b328dadce587044271042e9f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 18:04:49.868912 containerd[1564]: time="2025-09-12T18:04:49.868771353Z" level=info msg="Container f976bd7531fdaf747f2d85284fef398f425f9c312aa1430624a4d2eb27b4fca8: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:49.872050 containerd[1564]: time="2025-09-12T18:04:49.871948125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8vvl9,Uid:c6cb1a82-fe1d-46cc-9240-a6e971eb2016,Namespace:kube-system,Attempt:0,} returns sandbox id \"c11dbea8bbbbf3961cb1dcd9eff042945507b9d64fcd5f733cdbb529ab68093c\"" Sep 12 18:04:49.873387 kubelet[2738]: E0912 18:04:49.873352 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:49.883451 containerd[1564]: time="2025-09-12T18:04:49.882697037Z" level=info msg="CreateContainer within sandbox \"15c30beca33d79a4c6647bd0b97c14287084c13b328dadce587044271042e9f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f976bd7531fdaf747f2d85284fef398f425f9c312aa1430624a4d2eb27b4fca8\"" Sep 12 18:04:49.885367 containerd[1564]: time="2025-09-12T18:04:49.885086958Z" level=info msg="CreateContainer within sandbox 
\"c11dbea8bbbbf3961cb1dcd9eff042945507b9d64fcd5f733cdbb529ab68093c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 18:04:49.886613 containerd[1564]: time="2025-09-12T18:04:49.886566986Z" level=info msg="StartContainer for \"f976bd7531fdaf747f2d85284fef398f425f9c312aa1430624a4d2eb27b4fca8\"" Sep 12 18:04:49.888618 containerd[1564]: time="2025-09-12T18:04:49.888457252Z" level=info msg="connecting to shim f976bd7531fdaf747f2d85284fef398f425f9c312aa1430624a4d2eb27b4fca8" address="unix:///run/containerd/s/0c57ae245d1a91f0ee9bca6799ceba0f94009ea885c0a1b45e2b09b83288ce36" protocol=ttrpc version=3 Sep 12 18:04:49.897345 containerd[1564]: time="2025-09-12T18:04:49.897081594Z" level=info msg="Container bba12ea45b6c861649ac457830904c24f80f2d50de6a8cf0d52dbc08fe54905c: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:04:49.912605 containerd[1564]: time="2025-09-12T18:04:49.912284432Z" level=info msg="CreateContainer within sandbox \"c11dbea8bbbbf3961cb1dcd9eff042945507b9d64fcd5f733cdbb529ab68093c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bba12ea45b6c861649ac457830904c24f80f2d50de6a8cf0d52dbc08fe54905c\"" Sep 12 18:04:49.917015 containerd[1564]: time="2025-09-12T18:04:49.916968307Z" level=info msg="StartContainer for \"bba12ea45b6c861649ac457830904c24f80f2d50de6a8cf0d52dbc08fe54905c\"" Sep 12 18:04:49.918567 containerd[1564]: time="2025-09-12T18:04:49.918504208Z" level=info msg="connecting to shim bba12ea45b6c861649ac457830904c24f80f2d50de6a8cf0d52dbc08fe54905c" address="unix:///run/containerd/s/5b778cdca623f36c2d2165299792b8a20bce39e3b243b5120b545407ecae0416" protocol=ttrpc version=3 Sep 12 18:04:49.919539 systemd[1]: Started cri-containerd-f976bd7531fdaf747f2d85284fef398f425f9c312aa1430624a4d2eb27b4fca8.scope - libcontainer container f976bd7531fdaf747f2d85284fef398f425f9c312aa1430624a4d2eb27b4fca8. Sep 12 18:04:49.947542 systemd[1]: Started cri-containerd-bba12ea45b6c861649ac457830904c24f80f2d50de6a8cf0d52dbc08fe54905c.scope - libcontainer container bba12ea45b6c861649ac457830904c24f80f2d50de6a8cf0d52dbc08fe54905c. 
Sep 12 18:04:49.989069 containerd[1564]: time="2025-09-12T18:04:49.988951424Z" level=info msg="StartContainer for \"f976bd7531fdaf747f2d85284fef398f425f9c312aa1430624a4d2eb27b4fca8\" returns successfully" Sep 12 18:04:50.011264 containerd[1564]: time="2025-09-12T18:04:50.011210289Z" level=info msg="StartContainer for \"bba12ea45b6c861649ac457830904c24f80f2d50de6a8cf0d52dbc08fe54905c\" returns successfully" Sep 12 18:04:50.292192 kubelet[2738]: E0912 18:04:50.292104 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:50.298949 kubelet[2738]: E0912 18:04:50.298752 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:50.318456 kubelet[2738]: I0912 18:04:50.317120 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8vvl9" podStartSLOduration=24.317099058 podStartE2EDuration="24.317099058s" podCreationTimestamp="2025-09-12 18:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:04:50.315685464 +0000 UTC m=+30.389636796" watchObservedRunningTime="2025-09-12 18:04:50.317099058 +0000 UTC m=+30.391050389" Sep 12 18:04:50.337600 kubelet[2738]: I0912 18:04:50.337493 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z2cfm" podStartSLOduration=24.337462938 podStartE2EDuration="24.337462938s" podCreationTimestamp="2025-09-12 18:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:04:50.335211773 +0000 UTC m=+30.409163115" watchObservedRunningTime="2025-09-12 18:04:50.337462938 +0000 UTC m=+30.411414279" Sep 12 18:04:50.610609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109354953.mount: Deactivated successfully. Sep 12 18:04:51.300599 kubelet[2738]: E0912 18:04:51.300544 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:04:52.302254 kubelet[2738]: E0912 18:04:52.302171 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:00.292097 kubelet[2738]: E0912 18:05:00.292048 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:00.325443 kubelet[2738]: E0912 18:05:00.324060 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:03.068451 systemd[1]: Started sshd@7-64.23.243.150:22-139.178.89.65:50336.service - OpenSSH per-connection server daemon (139.178.89.65:50336). 
Sep 12 18:05:03.217859 sshd[4068]: Accepted publickey for core from 139.178.89.65 port 50336 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:03.221450 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:03.229247 systemd-logind[1525]: New session 8 of user core. Sep 12 18:05:03.243579 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 18:05:04.042984 sshd[4071]: Connection closed by 139.178.89.65 port 50336 Sep 12 18:05:04.044125 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:04.053283 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit. Sep 12 18:05:04.053941 systemd[1]: sshd@7-64.23.243.150:22-139.178.89.65:50336.service: Deactivated successfully. Sep 12 18:05:04.058796 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 18:05:04.062952 systemd-logind[1525]: Removed session 8. Sep 12 18:05:09.063962 systemd[1]: Started sshd@8-64.23.243.150:22-139.178.89.65:50340.service - OpenSSH per-connection server daemon (139.178.89.65:50340). Sep 12 18:05:09.157330 sshd[4084]: Accepted publickey for core from 139.178.89.65 port 50340 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:09.159285 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:09.166510 systemd-logind[1525]: New session 9 of user core. Sep 12 18:05:09.174636 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 18:05:09.326765 sshd[4087]: Connection closed by 139.178.89.65 port 50340 Sep 12 18:05:09.327614 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:09.334156 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit. Sep 12 18:05:09.334245 systemd[1]: sshd@8-64.23.243.150:22-139.178.89.65:50340.service: Deactivated successfully. Sep 12 18:05:09.337822 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 18:05:09.340692 systemd-logind[1525]: Removed session 9. Sep 12 18:05:14.343793 systemd[1]: Started sshd@9-64.23.243.150:22-139.178.89.65:60400.service - OpenSSH per-connection server daemon (139.178.89.65:60400). Sep 12 18:05:14.460392 sshd[4100]: Accepted publickey for core from 139.178.89.65 port 60400 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:14.465828 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:14.478681 systemd-logind[1525]: New session 10 of user core. Sep 12 18:05:14.486630 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 18:05:14.642586 sshd[4103]: Connection closed by 139.178.89.65 port 60400 Sep 12 18:05:14.643576 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:14.648212 systemd[1]: sshd@9-64.23.243.150:22-139.178.89.65:60400.service: Deactivated successfully. Sep 12 18:05:14.651029 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 18:05:14.652974 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit. Sep 12 18:05:14.654771 systemd-logind[1525]: Removed session 10. Sep 12 18:05:19.661606 systemd[1]: Started sshd@10-64.23.243.150:22-139.178.89.65:60408.service - OpenSSH per-connection server daemon (139.178.89.65:60408). 
Sep 12 18:05:19.741590 sshd[4116]: Accepted publickey for core from 139.178.89.65 port 60408 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:19.743774 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:19.752210 systemd-logind[1525]: New session 11 of user core. Sep 12 18:05:19.756632 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 18:05:19.897323 sshd[4119]: Connection closed by 139.178.89.65 port 60408 Sep 12 18:05:19.899770 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:19.913872 systemd[1]: sshd@10-64.23.243.150:22-139.178.89.65:60408.service: Deactivated successfully. Sep 12 18:05:19.918540 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 18:05:19.920571 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit. Sep 12 18:05:19.926255 systemd[1]: Started sshd@11-64.23.243.150:22-139.178.89.65:40018.service - OpenSSH per-connection server daemon (139.178.89.65:40018). Sep 12 18:05:19.929135 systemd-logind[1525]: Removed session 11. Sep 12 18:05:20.000899 sshd[4132]: Accepted publickey for core from 139.178.89.65 port 40018 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:20.003269 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:20.009842 systemd-logind[1525]: New session 12 of user core. Sep 12 18:05:20.020714 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 18:05:20.274490 sshd[4135]: Connection closed by 139.178.89.65 port 40018 Sep 12 18:05:20.276712 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:20.288767 systemd[1]: sshd@11-64.23.243.150:22-139.178.89.65:40018.service: Deactivated successfully. Sep 12 18:05:20.294486 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 18:05:20.297405 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit. Sep 12 18:05:20.305690 systemd[1]: Started sshd@12-64.23.243.150:22-139.178.89.65:40028.service - OpenSSH per-connection server daemon (139.178.89.65:40028). Sep 12 18:05:20.311740 systemd-logind[1525]: Removed session 12. Sep 12 18:05:20.394160 sshd[4147]: Accepted publickey for core from 139.178.89.65 port 40028 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:20.396117 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:20.408944 systemd-logind[1525]: New session 13 of user core. Sep 12 18:05:20.414883 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 18:05:20.582419 sshd[4150]: Connection closed by 139.178.89.65 port 40028 Sep 12 18:05:20.583038 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:20.588161 systemd[1]: sshd@12-64.23.243.150:22-139.178.89.65:40028.service: Deactivated successfully. Sep 12 18:05:20.590484 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 18:05:20.591577 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit. Sep 12 18:05:20.594146 systemd-logind[1525]: Removed session 13. Sep 12 18:05:25.598027 systemd[1]: Started sshd@13-64.23.243.150:22-139.178.89.65:40044.service - OpenSSH per-connection server daemon (139.178.89.65:40044). 
Sep 12 18:05:25.674886 sshd[4162]: Accepted publickey for core from 139.178.89.65 port 40044 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:25.676636 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:25.683944 systemd-logind[1525]: New session 14 of user core. Sep 12 18:05:25.692645 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 18:05:25.833992 sshd[4165]: Connection closed by 139.178.89.65 port 40044 Sep 12 18:05:25.834891 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:25.839491 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit. Sep 12 18:05:25.839674 systemd[1]: sshd@13-64.23.243.150:22-139.178.89.65:40044.service: Deactivated successfully. Sep 12 18:05:25.842495 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 18:05:25.845689 systemd-logind[1525]: Removed session 14. Sep 12 18:05:30.848896 systemd[1]: Started sshd@14-64.23.243.150:22-139.178.89.65:58918.service - OpenSSH per-connection server daemon (139.178.89.65:58918). Sep 12 18:05:30.932396 sshd[4179]: Accepted publickey for core from 139.178.89.65 port 58918 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:30.934240 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:30.940965 systemd-logind[1525]: New session 15 of user core. Sep 12 18:05:30.948642 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 18:05:31.090727 sshd[4182]: Connection closed by 139.178.89.65 port 58918 Sep 12 18:05:31.090166 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:31.094353 systemd[1]: sshd@14-64.23.243.150:22-139.178.89.65:58918.service: Deactivated successfully. Sep 12 18:05:31.096981 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 18:05:31.099822 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit. Sep 12 18:05:31.101854 systemd-logind[1525]: Removed session 15. Sep 12 18:05:33.109624 kubelet[2738]: E0912 18:05:33.109461 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:36.111879 systemd[1]: Started sshd@15-64.23.243.150:22-139.178.89.65:58922.service - OpenSSH per-connection server daemon (139.178.89.65:58922). Sep 12 18:05:36.196165 sshd[4194]: Accepted publickey for core from 139.178.89.65 port 58922 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:36.197747 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:36.204383 systemd-logind[1525]: New session 16 of user core. Sep 12 18:05:36.212583 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 18:05:36.348054 sshd[4197]: Connection closed by 139.178.89.65 port 58922 Sep 12 18:05:36.349638 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:36.357492 systemd[1]: sshd@15-64.23.243.150:22-139.178.89.65:58922.service: Deactivated successfully. Sep 12 18:05:36.360765 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 18:05:36.361952 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit. 
Sep 12 18:05:36.366854 systemd[1]: Started sshd@16-64.23.243.150:22-139.178.89.65:58936.service - OpenSSH per-connection server daemon (139.178.89.65:58936). Sep 12 18:05:36.369384 systemd-logind[1525]: Removed session 16. Sep 12 18:05:36.436985 sshd[4209]: Accepted publickey for core from 139.178.89.65 port 58936 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:36.438754 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:36.444623 systemd-logind[1525]: New session 17 of user core. Sep 12 18:05:36.451595 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 18:05:36.827329 sshd[4212]: Connection closed by 139.178.89.65 port 58936 Sep 12 18:05:36.828384 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:36.839717 systemd[1]: sshd@16-64.23.243.150:22-139.178.89.65:58936.service: Deactivated successfully. Sep 12 18:05:36.841792 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 18:05:36.842964 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit. Sep 12 18:05:36.847602 systemd[1]: Started sshd@17-64.23.243.150:22-139.178.89.65:58944.service - OpenSSH per-connection server daemon (139.178.89.65:58944). Sep 12 18:05:36.850619 systemd-logind[1525]: Removed session 17. Sep 12 18:05:36.944410 sshd[4221]: Accepted publickey for core from 139.178.89.65 port 58944 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:36.945877 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:36.951971 systemd-logind[1525]: New session 18 of user core. Sep 12 18:05:36.960579 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 18:05:37.714205 sshd[4224]: Connection closed by 139.178.89.65 port 58944 Sep 12 18:05:37.714581 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:37.731677 systemd[1]: sshd@17-64.23.243.150:22-139.178.89.65:58944.service: Deactivated successfully. Sep 12 18:05:37.736238 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 18:05:37.739519 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit. Sep 12 18:05:37.748739 systemd[1]: Started sshd@18-64.23.243.150:22-139.178.89.65:58950.service - OpenSSH per-connection server daemon (139.178.89.65:58950). Sep 12 18:05:37.752369 systemd-logind[1525]: Removed session 18. Sep 12 18:05:37.842601 sshd[4241]: Accepted publickey for core from 139.178.89.65 port 58950 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:37.844219 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:37.850470 systemd-logind[1525]: New session 19 of user core. Sep 12 18:05:37.855549 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 18:05:38.151659 sshd[4244]: Connection closed by 139.178.89.65 port 58950 Sep 12 18:05:38.152550 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:38.166139 systemd[1]: sshd@18-64.23.243.150:22-139.178.89.65:58950.service: Deactivated successfully. Sep 12 18:05:38.169043 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 18:05:38.172456 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit. Sep 12 18:05:38.177778 systemd[1]: Started sshd@19-64.23.243.150:22-139.178.89.65:58964.service - OpenSSH per-connection server daemon (139.178.89.65:58964). 
Sep 12 18:05:38.181543 systemd-logind[1525]: Removed session 19. Sep 12 18:05:38.239140 sshd[4254]: Accepted publickey for core from 139.178.89.65 port 58964 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:38.241010 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:38.246498 systemd-logind[1525]: New session 20 of user core. Sep 12 18:05:38.256645 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 18:05:38.395244 sshd[4257]: Connection closed by 139.178.89.65 port 58964 Sep 12 18:05:38.395969 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:38.400367 systemd[1]: sshd@19-64.23.243.150:22-139.178.89.65:58964.service: Deactivated successfully. Sep 12 18:05:38.403871 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 18:05:38.405432 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit. Sep 12 18:05:38.409118 systemd-logind[1525]: Removed session 20. Sep 12 18:05:40.108699 kubelet[2738]: E0912 18:05:40.107989 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:41.108240 kubelet[2738]: E0912 18:05:41.108173 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:43.418598 systemd[1]: Started sshd@20-64.23.243.150:22-139.178.89.65:53052.service - OpenSSH per-connection server daemon (139.178.89.65:53052). Sep 12 18:05:43.509852 sshd[4269]: Accepted publickey for core from 139.178.89.65 port 53052 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:43.512194 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:43.523702 systemd-logind[1525]: New session 21 of user core. Sep 12 18:05:43.529550 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 18:05:43.673030 sshd[4274]: Connection closed by 139.178.89.65 port 53052 Sep 12 18:05:43.674095 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:43.680253 systemd[1]: sshd@20-64.23.243.150:22-139.178.89.65:53052.service: Deactivated successfully. Sep 12 18:05:43.684224 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 18:05:43.687496 systemd-logind[1525]: Session 21 logged out. Waiting for processes to exit. Sep 12 18:05:43.689727 systemd-logind[1525]: Removed session 21. Sep 12 18:05:48.692904 systemd[1]: Started sshd@21-64.23.243.150:22-139.178.89.65:53056.service - OpenSSH per-connection server daemon (139.178.89.65:53056). Sep 12 18:05:48.780684 sshd[4285]: Accepted publickey for core from 139.178.89.65 port 53056 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:48.782558 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:48.791068 systemd-logind[1525]: New session 22 of user core. Sep 12 18:05:48.797655 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 18:05:48.940235 sshd[4288]: Connection closed by 139.178.89.65 port 53056 Sep 12 18:05:48.941166 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:48.946287 systemd[1]: sshd@21-64.23.243.150:22-139.178.89.65:53056.service: Deactivated successfully. 
Sep 12 18:05:48.948658 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 18:05:48.951287 systemd-logind[1525]: Session 22 logged out. Waiting for processes to exit. Sep 12 18:05:48.952871 systemd-logind[1525]: Removed session 22. Sep 12 18:05:49.108417 kubelet[2738]: E0912 18:05:49.108285 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:53.959824 systemd[1]: Started sshd@22-64.23.243.150:22-139.178.89.65:57990.service - OpenSSH per-connection server daemon (139.178.89.65:57990). Sep 12 18:05:54.034499 sshd[4300]: Accepted publickey for core from 139.178.89.65 port 57990 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:54.036991 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:54.044776 systemd-logind[1525]: New session 23 of user core. Sep 12 18:05:54.048647 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 18:05:54.200766 sshd[4303]: Connection closed by 139.178.89.65 port 57990 Sep 12 18:05:54.201485 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:54.211534 systemd[1]: sshd@22-64.23.243.150:22-139.178.89.65:57990.service: Deactivated successfully. Sep 12 18:05:54.214266 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 18:05:54.215508 systemd-logind[1525]: Session 23 logged out. Waiting for processes to exit. Sep 12 18:05:54.220166 systemd[1]: Started sshd@23-64.23.243.150:22-139.178.89.65:57992.service - OpenSSH per-connection server daemon (139.178.89.65:57992). Sep 12 18:05:54.222039 systemd-logind[1525]: Removed session 23. Sep 12 18:05:54.292284 sshd[4315]: Accepted publickey for core from 139.178.89.65 port 57992 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:54.294247 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:54.301114 systemd-logind[1525]: New session 24 of user core. Sep 12 18:05:54.309606 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 18:05:56.075976 containerd[1564]: time="2025-09-12T18:05:56.075929953Z" level=info msg="StopContainer for \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" with timeout 30 (s)" Sep 12 18:05:56.078322 containerd[1564]: time="2025-09-12T18:05:56.077461154Z" level=info msg="Stop container \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" with signal terminated" Sep 12 18:05:56.106231 systemd[1]: cri-containerd-3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f.scope: Deactivated successfully. 
Sep 12 18:05:56.111545 containerd[1564]: time="2025-09-12T18:05:56.111493829Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 18:05:56.115252 containerd[1564]: time="2025-09-12T18:05:56.115111666Z" level=info msg="received exit event container_id:\"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" id:\"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" pid:3324 exited_at:{seconds:1757700356 nanos:114106884}" Sep 12 18:05:56.115546 containerd[1564]: time="2025-09-12T18:05:56.115494433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" id:\"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" pid:3324 exited_at:{seconds:1757700356 nanos:114106884}" Sep 12 18:05:56.120624 containerd[1564]: time="2025-09-12T18:05:56.120570934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" id:\"db262fb0319066de39bcba44987941bf37e6a9f71b6558cbee25bdd84fe3a788\" pid:4337 exited_at:{seconds:1757700356 nanos:120078024}" Sep 12 18:05:56.125631 containerd[1564]: time="2025-09-12T18:05:56.125576016Z" level=info msg="StopContainer for \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" with timeout 2 (s)" Sep 12 18:05:56.126402 containerd[1564]: time="2025-09-12T18:05:56.126340015Z" level=info msg="Stop container \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" with signal terminated" Sep 12 18:05:56.142561 systemd-networkd[1432]: lxc_health: Link DOWN Sep 12 18:05:56.142572 systemd-networkd[1432]: lxc_health: Lost carrier Sep 12 18:05:56.176599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f-rootfs.mount: Deactivated successfully. Sep 12 18:05:56.179482 containerd[1564]: time="2025-09-12T18:05:56.177960079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" id:\"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" pid:3391 exited_at:{seconds:1757700356 nanos:177583509}" Sep 12 18:05:56.179482 containerd[1564]: time="2025-09-12T18:05:56.178235359Z" level=info msg="received exit event container_id:\"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" id:\"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" pid:3391 exited_at:{seconds:1757700356 nanos:177583509}" Sep 12 18:05:56.178896 systemd[1]: cri-containerd-e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593.scope: Deactivated successfully. Sep 12 18:05:56.180011 systemd[1]: cri-containerd-e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593.scope: Consumed 9.315s CPU time, 191M memory peak, 69.4M read from disk, 13.3M written to disk. 
Sep 12 18:05:56.189503 containerd[1564]: time="2025-09-12T18:05:56.189437584Z" level=info msg="StopContainer for \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" returns successfully" Sep 12 18:05:56.191601 containerd[1564]: time="2025-09-12T18:05:56.191477175Z" level=info msg="StopPodSandbox for \"2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec\"" Sep 12 18:05:56.192137 containerd[1564]: time="2025-09-12T18:05:56.191633926Z" level=info msg="Container to stop \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:05:56.207754 systemd[1]: cri-containerd-2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec.scope: Deactivated successfully. Sep 12 18:05:56.214554 containerd[1564]: time="2025-09-12T18:05:56.214507623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec\" id:\"2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec\" pid:2981 exit_status:137 exited_at:{seconds:1757700356 nanos:214223888}" Sep 12 18:05:56.230754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593-rootfs.mount: Deactivated successfully. Sep 12 18:05:56.245900 containerd[1564]: time="2025-09-12T18:05:56.245828446Z" level=info msg="StopContainer for \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" returns successfully" Sep 12 18:05:56.246658 containerd[1564]: time="2025-09-12T18:05:56.246600988Z" level=info msg="StopPodSandbox for \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\"" Sep 12 18:05:56.246738 containerd[1564]: time="2025-09-12T18:05:56.246692336Z" level=info msg="Container to stop \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:05:56.246857 containerd[1564]: time="2025-09-12T18:05:56.246760797Z" level=info msg="Container to stop \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:05:56.246857 containerd[1564]: time="2025-09-12T18:05:56.246779504Z" level=info msg="Container to stop \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:05:56.246857 containerd[1564]: time="2025-09-12T18:05:56.246788893Z" level=info msg="Container to stop \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:05:56.246857 containerd[1564]: time="2025-09-12T18:05:56.246798916Z" level=info msg="Container to stop \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 18:05:56.254147 systemd[1]: cri-containerd-ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449.scope: Deactivated successfully. Sep 12 18:05:56.265590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec-rootfs.mount: Deactivated successfully. 
Sep 12 18:05:56.270331 containerd[1564]: time="2025-09-12T18:05:56.270192860Z" level=info msg="shim disconnected" id=2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec namespace=k8s.io Sep 12 18:05:56.270331 containerd[1564]: time="2025-09-12T18:05:56.270232042Z" level=warning msg="cleaning up after shim disconnected" id=2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec namespace=k8s.io Sep 12 18:05:56.277967 containerd[1564]: time="2025-09-12T18:05:56.270267855Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:05:56.292290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449-rootfs.mount: Deactivated successfully. Sep 12 18:05:56.298032 containerd[1564]: time="2025-09-12T18:05:56.297772648Z" level=info msg="shim disconnected" id=ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449 namespace=k8s.io Sep 12 18:05:56.298032 containerd[1564]: time="2025-09-12T18:05:56.298030151Z" level=warning msg="cleaning up after shim disconnected" id=ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449 namespace=k8s.io Sep 12 18:05:56.298442 containerd[1564]: time="2025-09-12T18:05:56.298040374Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 18:05:56.306134 containerd[1564]: time="2025-09-12T18:05:56.305974371Z" level=info msg="received exit event sandbox_id:\"2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec\" exit_status:137 exited_at:{seconds:1757700356 nanos:214223888}" Sep 12 18:05:56.308727 containerd[1564]: time="2025-09-12T18:05:56.308592748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" id:\"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" pid:2882 exit_status:137 exited_at:{seconds:1757700356 nanos:259608544}" Sep 12 18:05:56.308943 containerd[1564]: time="2025-09-12T18:05:56.308767744Z" level=info msg="received exit event sandbox_id:\"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" exit_status:137 exited_at:{seconds:1757700356 nanos:259608544}" Sep 12 18:05:56.309164 containerd[1564]: time="2025-09-12T18:05:56.309134629Z" level=info msg="TearDown network for sandbox \"2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec\" successfully" Sep 12 18:05:56.309224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec-shm.mount: Deactivated successfully. 
Sep 12 18:05:56.310736 containerd[1564]: time="2025-09-12T18:05:56.310521313Z" level=info msg="StopPodSandbox for \"2196e2031e9fe4050cfd11f25d1bd3c976b0097510953044e9896152d42914ec\" returns successfully" Sep 12 18:05:56.311043 containerd[1564]: time="2025-09-12T18:05:56.310506580Z" level=info msg="TearDown network for sandbox \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" successfully" Sep 12 18:05:56.311275 containerd[1564]: time="2025-09-12T18:05:56.311146091Z" level=info msg="StopPodSandbox for \"ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449\" returns successfully" Sep 12 18:05:56.405368 kubelet[2738]: I0912 18:05:56.405142 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-host-proc-sys-kernel\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.405368 kubelet[2738]: I0912 18:05:56.405285 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 18:05:56.407959 kubelet[2738]: I0912 18:05:56.406526 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-hostproc" (OuterVolumeSpecName: "hostproc") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 18:05:56.407959 kubelet[2738]: I0912 18:05:56.406620 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-hostproc\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.407959 kubelet[2738]: I0912 18:05:56.406673 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/174403b6-1adf-4677-8e24-d8a86b2ea600-clustermesh-secrets\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.407959 kubelet[2738]: I0912 18:05:56.406697 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-lib-modules\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.407959 kubelet[2738]: I0912 18:05:56.406726 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/174403b6-1adf-4677-8e24-d8a86b2ea600-hubble-tls\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.407959 kubelet[2738]: I0912 18:05:56.406755 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj9z7\" (UniqueName: \"kubernetes.io/projected/174403b6-1adf-4677-8e24-d8a86b2ea600-kube-api-access-dj9z7\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.408131 kubelet[2738]: I0912 18:05:56.406777 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cni-path\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.408131 kubelet[2738]: I0912 18:05:56.406805 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b4852fe-9f34-432c-9856-cf54b82389e0-cilium-config-path\") pod \"2b4852fe-9f34-432c-9856-cf54b82389e0\" (UID: \"2b4852fe-9f34-432c-9856-cf54b82389e0\") " Sep 12 18:05:56.408131 kubelet[2738]: I0912 18:05:56.406830 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-host-proc-sys-net\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.408131 kubelet[2738]: I0912 18:05:56.406853 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-run\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.408131 kubelet[2738]: I0912 18:05:56.406876 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-xtables-lock\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 
12 18:05:56.408131 kubelet[2738]: I0912 18:05:56.406904 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj4gn\" (UniqueName: \"kubernetes.io/projected/2b4852fe-9f34-432c-9856-cf54b82389e0-kube-api-access-jj4gn\") pod \"2b4852fe-9f34-432c-9856-cf54b82389e0\" (UID: \"2b4852fe-9f34-432c-9856-cf54b82389e0\") " Sep 12 18:05:56.408275 kubelet[2738]: I0912 18:05:56.406929 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-bpf-maps\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.408275 kubelet[2738]: I0912 18:05:56.406950 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-cgroup\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.408275 kubelet[2738]: I0912 18:05:56.406990 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-etc-cni-netd\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.408275 kubelet[2738]: I0912 18:05:56.407010 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-config-path\") pod \"174403b6-1adf-4677-8e24-d8a86b2ea600\" (UID: \"174403b6-1adf-4677-8e24-d8a86b2ea600\") " Sep 12 18:05:56.408275 kubelet[2738]: I0912 18:05:56.407058 2738 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-hostproc\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.408275 kubelet[2738]: I0912 18:05:56.407069 2738 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-host-proc-sys-kernel\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.408442 kubelet[2738]: I0912 18:05:56.407508 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 18:05:56.410845 kubelet[2738]: I0912 18:05:56.410772 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 18:05:56.411317 kubelet[2738]: I0912 18:05:56.410823 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cni-path" (OuterVolumeSpecName: "cni-path") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 18:05:56.412273 kubelet[2738]: I0912 18:05:56.412187 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 18:05:56.412273 kubelet[2738]: I0912 18:05:56.412232 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 18:05:56.412273 kubelet[2738]: I0912 18:05:56.412250 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 18:05:56.417431 kubelet[2738]: I0912 18:05:56.416706 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 18:05:56.417431 kubelet[2738]: I0912 18:05:56.416758 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 18:05:56.419120 kubelet[2738]: I0912 18:05:56.419075 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 18:05:56.423906 kubelet[2738]: I0912 18:05:56.423853 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b4852fe-9f34-432c-9856-cf54b82389e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2b4852fe-9f34-432c-9856-cf54b82389e0" (UID: "2b4852fe-9f34-432c-9856-cf54b82389e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 18:05:56.424584 kubelet[2738]: I0912 18:05:56.424509 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174403b6-1adf-4677-8e24-d8a86b2ea600-kube-api-access-dj9z7" (OuterVolumeSpecName: "kube-api-access-dj9z7") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "kube-api-access-dj9z7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 18:05:56.424762 kubelet[2738]: I0912 18:05:56.424736 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174403b6-1adf-4677-8e24-d8a86b2ea600-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 18:05:56.424976 kubelet[2738]: I0912 18:05:56.424953 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b4852fe-9f34-432c-9856-cf54b82389e0-kube-api-access-jj4gn" (OuterVolumeSpecName: "kube-api-access-jj4gn") pod "2b4852fe-9f34-432c-9856-cf54b82389e0" (UID: "2b4852fe-9f34-432c-9856-cf54b82389e0"). InnerVolumeSpecName "kube-api-access-jj4gn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 18:05:56.425063 kubelet[2738]: I0912 18:05:56.424980 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174403b6-1adf-4677-8e24-d8a86b2ea600-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "174403b6-1adf-4677-8e24-d8a86b2ea600" (UID: "174403b6-1adf-4677-8e24-d8a86b2ea600"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 18:05:56.477225 kubelet[2738]: I0912 18:05:56.477163 2738 scope.go:117] "RemoveContainer" containerID="3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f" Sep 12 18:05:56.480962 containerd[1564]: time="2025-09-12T18:05:56.480885504Z" level=info msg="RemoveContainer for \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\"" Sep 12 18:05:56.490569 systemd[1]: Removed slice kubepods-besteffort-pod2b4852fe_9f34_432c_9856_cf54b82389e0.slice - libcontainer container kubepods-besteffort-pod2b4852fe_9f34_432c_9856_cf54b82389e0.slice. 
Sep 12 18:05:56.491716 containerd[1564]: time="2025-09-12T18:05:56.491636623Z" level=info msg="RemoveContainer for \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" returns successfully" Sep 12 18:05:56.492656 kubelet[2738]: I0912 18:05:56.492590 2738 scope.go:117] "RemoveContainer" containerID="3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f" Sep 12 18:05:56.498831 containerd[1564]: time="2025-09-12T18:05:56.494213047Z" level=error msg="ContainerStatus for \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\": not found" Sep 12 18:05:56.499112 kubelet[2738]: E0912 18:05:56.498826 2738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\": not found" containerID="3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f" Sep 12 18:05:56.499112 kubelet[2738]: I0912 18:05:56.498861 2738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f"} err="failed to get container status \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3759f410e941d3f3ab1aac89a11c1695bc25957f1d964cd0f748c15bed483f8f\": not found" Sep 12 18:05:56.499112 kubelet[2738]: I0912 18:05:56.498894 2738 scope.go:117] "RemoveContainer" containerID="e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593" Sep 12 18:05:56.504269 systemd[1]: Removed slice kubepods-burstable-pod174403b6_1adf_4677_8e24_d8a86b2ea600.slice - libcontainer container kubepods-burstable-pod174403b6_1adf_4677_8e24_d8a86b2ea600.slice. Sep 12 18:05:56.504459 systemd[1]: kubepods-burstable-pod174403b6_1adf_4677_8e24_d8a86b2ea600.slice: Consumed 9.431s CPU time, 191.3M memory peak, 69.4M read from disk, 13.3M written to disk. 
Sep 12 18:05:56.507361 containerd[1564]: time="2025-09-12T18:05:56.507264496Z" level=info msg="RemoveContainer for \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\"" Sep 12 18:05:56.507728 kubelet[2738]: I0912 18:05:56.507709 2738 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-host-proc-sys-net\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.507925 kubelet[2738]: I0912 18:05:56.507763 2738 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-run\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.507925 kubelet[2738]: I0912 18:05:56.507774 2738 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-xtables-lock\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.507925 kubelet[2738]: I0912 18:05:56.507783 2738 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jj4gn\" (UniqueName: \"kubernetes.io/projected/2b4852fe-9f34-432c-9856-cf54b82389e0-kube-api-access-jj4gn\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.507925 kubelet[2738]: I0912 18:05:56.507792 2738 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-bpf-maps\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.507925 kubelet[2738]: I0912 18:05:56.507801 2738 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-cgroup\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.507925 kubelet[2738]: I0912 18:05:56.507810 2738 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-etc-cni-netd\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.507925 kubelet[2738]: I0912 18:05:56.507817 2738 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/174403b6-1adf-4677-8e24-d8a86b2ea600-cilium-config-path\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.507925 kubelet[2738]: I0912 18:05:56.507826 2738 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/174403b6-1adf-4677-8e24-d8a86b2ea600-clustermesh-secrets\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.508138 kubelet[2738]: I0912 18:05:56.507838 2738 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-lib-modules\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.508138 kubelet[2738]: I0912 18:05:56.507846 2738 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/174403b6-1adf-4677-8e24-d8a86b2ea600-hubble-tls\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.508138 kubelet[2738]: I0912 18:05:56.507855 2738 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dj9z7\" (UniqueName: 
\"kubernetes.io/projected/174403b6-1adf-4677-8e24-d8a86b2ea600-kube-api-access-dj9z7\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.508138 kubelet[2738]: I0912 18:05:56.507863 2738 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/174403b6-1adf-4677-8e24-d8a86b2ea600-cni-path\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.508138 kubelet[2738]: I0912 18:05:56.507872 2738 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b4852fe-9f34-432c-9856-cf54b82389e0-cilium-config-path\") on node \"ci-4426.1.0-8-66567323f5\" DevicePath \"\"" Sep 12 18:05:56.514461 containerd[1564]: time="2025-09-12T18:05:56.513845089Z" level=info msg="RemoveContainer for \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" returns successfully" Sep 12 18:05:56.514853 kubelet[2738]: I0912 18:05:56.514823 2738 scope.go:117] "RemoveContainer" containerID="cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005" Sep 12 18:05:56.517505 containerd[1564]: time="2025-09-12T18:05:56.517469377Z" level=info msg="RemoveContainer for \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\"" Sep 12 18:05:56.522529 containerd[1564]: time="2025-09-12T18:05:56.522441503Z" level=info msg="RemoveContainer for \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\" returns successfully" Sep 12 18:05:56.522979 kubelet[2738]: I0912 18:05:56.522938 2738 scope.go:117] "RemoveContainer" containerID="06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565" Sep 12 18:05:56.529977 containerd[1564]: time="2025-09-12T18:05:56.529678866Z" level=info msg="RemoveContainer for \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\"" Sep 12 18:05:56.538511 containerd[1564]: time="2025-09-12T18:05:56.538423952Z" level=info msg="RemoveContainer for \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\" returns successfully" Sep 12 18:05:56.539227 kubelet[2738]: I0912 18:05:56.539069 2738 scope.go:117] "RemoveContainer" containerID="247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e" Sep 12 18:05:56.542850 containerd[1564]: time="2025-09-12T18:05:56.542814052Z" level=info msg="RemoveContainer for \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\"" Sep 12 18:05:56.546842 containerd[1564]: time="2025-09-12T18:05:56.546545523Z" level=info msg="RemoveContainer for \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\" returns successfully" Sep 12 18:05:56.551314 kubelet[2738]: I0912 18:05:56.550600 2738 scope.go:117] "RemoveContainer" containerID="c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20" Sep 12 18:05:56.561057 containerd[1564]: time="2025-09-12T18:05:56.560998086Z" level=info msg="RemoveContainer for \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\"" Sep 12 18:05:56.567327 containerd[1564]: time="2025-09-12T18:05:56.565820378Z" level=info msg="RemoveContainer for \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\" returns successfully" Sep 12 18:05:56.568194 kubelet[2738]: I0912 18:05:56.568154 2738 scope.go:117] "RemoveContainer" containerID="e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593" Sep 12 18:05:56.568644 containerd[1564]: time="2025-09-12T18:05:56.568556991Z" level=error msg="ContainerStatus for \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\": not found" Sep 12 18:05:56.569674 kubelet[2738]: E0912 18:05:56.569605 2738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\": not found" containerID="e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593" Sep 12 18:05:56.570235 kubelet[2738]: I0912 18:05:56.569648 2738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593"} err="failed to get container status \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\": rpc error: code = NotFound desc = an error occurred when try to find container \"e112dd686156651de80475fb4ad85aeb21c1db7d1a71ead2fca92254c27bb593\": not found" Sep 12 18:05:56.570235 kubelet[2738]: I0912 18:05:56.569815 2738 scope.go:117] "RemoveContainer" containerID="cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005" Sep 12 18:05:56.570971 containerd[1564]: time="2025-09-12T18:05:56.570900855Z" level=error msg="ContainerStatus for \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\": not found" Sep 12 18:05:56.571290 kubelet[2738]: E0912 18:05:56.571170 2738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\": not found" containerID="cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005" Sep 12 18:05:56.571290 kubelet[2738]: I0912 18:05:56.571211 2738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005"} err="failed to get container status \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf8dcdda39a1f8c63e5d71b6bc6afa7605824c049ecc82ef9cb17589d03f7005\": not found" Sep 12 18:05:56.571290 kubelet[2738]: I0912 18:05:56.571239 2738 scope.go:117] "RemoveContainer" containerID="06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565" Sep 12 18:05:56.571773 containerd[1564]: time="2025-09-12T18:05:56.571734186Z" level=error msg="ContainerStatus for \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\": not found" Sep 12 18:05:56.572076 kubelet[2738]: E0912 18:05:56.572002 2738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\": not found" containerID="06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565" Sep 12 18:05:56.572142 kubelet[2738]: I0912 18:05:56.572074 2738 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565"} err="failed to get container status \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\": rpc error: code = NotFound desc = an error occurred when try to find container \"06eb42dcf25eef70a1e21554751172f18e0967236496711cbf06086830190565\": not found" Sep 12 18:05:56.572142 kubelet[2738]: I0912 18:05:56.572123 2738 scope.go:117] "RemoveContainer" containerID="247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e" Sep 12 18:05:56.572516 containerd[1564]: time="2025-09-12T18:05:56.572471918Z" level=error msg="ContainerStatus for \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\": not found" Sep 12 18:05:56.572716 kubelet[2738]: E0912 18:05:56.572672 2738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\": not found" containerID="247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e" Sep 12 18:05:56.572830 kubelet[2738]: I0912 18:05:56.572722 2738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e"} err="failed to get container status \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\": rpc error: code = NotFound desc = an error occurred when try to find container \"247efbe3a3a8efd96ae1a3c342e366db9f0ad4fa4c484b706f53c78dfbdb307e\": not found" Sep 12 18:05:56.572830 kubelet[2738]: I0912 18:05:56.572743 2738 scope.go:117] "RemoveContainer" containerID="c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20" Sep 12 18:05:56.573092 containerd[1564]: time="2025-09-12T18:05:56.573020590Z" level=error msg="ContainerStatus for \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\": not found" Sep 12 18:05:56.573282 kubelet[2738]: E0912 18:05:56.573226 2738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\": not found" containerID="c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20" Sep 12 18:05:56.573385 kubelet[2738]: I0912 18:05:56.573289 2738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20"} err="failed to get container status \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6be6f1886c73dcc3e2e0aee7a15e03cc874bf81db8c258107329e657a067d20\": not found" Sep 12 18:05:57.108330 kubelet[2738]: E0912 18:05:57.108199 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:57.175749 systemd[1]: 
var-lib-kubelet-pods-2b4852fe\x2d9f34\x2d432c\x2d9856\x2dcf54b82389e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djj4gn.mount: Deactivated successfully. Sep 12 18:05:57.176432 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef035504ee16d4469143a24ca8411bcefdc743966359999b4c627027ebbef449-shm.mount: Deactivated successfully. Sep 12 18:05:57.176547 systemd[1]: var-lib-kubelet-pods-174403b6\x2d1adf\x2d4677\x2d8e24\x2dd8a86b2ea600-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddj9z7.mount: Deactivated successfully. Sep 12 18:05:57.176654 systemd[1]: var-lib-kubelet-pods-174403b6\x2d1adf\x2d4677\x2d8e24\x2dd8a86b2ea600-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 18:05:57.176760 systemd[1]: var-lib-kubelet-pods-174403b6\x2d1adf\x2d4677\x2d8e24\x2dd8a86b2ea600-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 18:05:57.995853 sshd[4318]: Connection closed by 139.178.89.65 port 57992 Sep 12 18:05:57.996148 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:58.012538 systemd[1]: sshd@23-64.23.243.150:22-139.178.89.65:57992.service: Deactivated successfully. Sep 12 18:05:58.015173 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 18:05:58.015891 systemd[1]: session-24.scope: Consumed 1.033s CPU time, 26.3M memory peak. Sep 12 18:05:58.017983 systemd-logind[1525]: Session 24 logged out. Waiting for processes to exit. Sep 12 18:05:58.021154 systemd[1]: Started sshd@24-64.23.243.150:22-139.178.89.65:57994.service - OpenSSH per-connection server daemon (139.178.89.65:57994). Sep 12 18:05:58.023489 systemd-logind[1525]: Removed session 24. Sep 12 18:05:58.110710 sshd[4474]: Accepted publickey for core from 139.178.89.65 port 57994 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:58.112794 kubelet[2738]: I0912 18:05:58.112733 2738 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174403b6-1adf-4677-8e24-d8a86b2ea600" path="/var/lib/kubelet/pods/174403b6-1adf-4677-8e24-d8a86b2ea600/volumes" Sep 12 18:05:58.115083 kubelet[2738]: I0912 18:05:58.114208 2738 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b4852fe-9f34-432c-9856-cf54b82389e0" path="/var/lib/kubelet/pods/2b4852fe-9f34-432c-9856-cf54b82389e0/volumes" Sep 12 18:05:58.114979 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:58.122796 systemd-logind[1525]: New session 25 of user core. Sep 12 18:05:58.126605 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 18:05:58.725756 sshd[4477]: Connection closed by 139.178.89.65 port 57994 Sep 12 18:05:58.726896 sshd-session[4474]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:58.738579 systemd[1]: sshd@24-64.23.243.150:22-139.178.89.65:57994.service: Deactivated successfully. Sep 12 18:05:58.741413 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 18:05:58.748523 systemd-logind[1525]: Session 25 logged out. Waiting for processes to exit. Sep 12 18:05:58.752476 systemd[1]: Started sshd@25-64.23.243.150:22-139.178.89.65:58000.service - OpenSSH per-connection server daemon (139.178.89.65:58000). Sep 12 18:05:58.756809 systemd-logind[1525]: Removed session 25. 
Sep 12 18:05:58.811183 systemd[1]: Created slice kubepods-burstable-pod7db6264b_e0c1_49e2_b197_620cfcf4c9d8.slice - libcontainer container kubepods-burstable-pod7db6264b_e0c1_49e2_b197_620cfcf4c9d8.slice. Sep 12 18:05:58.824153 kubelet[2738]: I0912 18:05:58.823617 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-cilium-run\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.824153 kubelet[2738]: I0912 18:05:58.823657 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-hostproc\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.824153 kubelet[2738]: I0912 18:05:58.823697 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-etc-cni-netd\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.824153 kubelet[2738]: I0912 18:05:58.823726 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-clustermesh-secrets\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.824153 kubelet[2738]: I0912 18:05:58.823747 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnhlb\" (UniqueName: \"kubernetes.io/projected/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-kube-api-access-mnhlb\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.824153 kubelet[2738]: I0912 18:05:58.823768 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-lib-modules\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.825322 kubelet[2738]: I0912 18:05:58.823783 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-cilium-ipsec-secrets\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.825322 kubelet[2738]: I0912 18:05:58.823799 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-xtables-lock\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.825322 kubelet[2738]: I0912 18:05:58.823814 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-cilium-cgroup\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " 
pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.825322 kubelet[2738]: I0912 18:05:58.823828 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-host-proc-sys-net\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.825322 kubelet[2738]: I0912 18:05:58.823845 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-host-proc-sys-kernel\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.825322 kubelet[2738]: I0912 18:05:58.823863 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-hubble-tls\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.825469 kubelet[2738]: I0912 18:05:58.823878 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-bpf-maps\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.825469 kubelet[2738]: I0912 18:05:58.823892 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-cni-path\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.825469 kubelet[2738]: I0912 18:05:58.823911 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7db6264b-e0c1-49e2-b197-620cfcf4c9d8-cilium-config-path\") pod \"cilium-wdrk7\" (UID: \"7db6264b-e0c1-49e2-b197-620cfcf4c9d8\") " pod="kube-system/cilium-wdrk7" Sep 12 18:05:58.838523 sshd[4487]: Accepted publickey for core from 139.178.89.65 port 58000 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:58.840083 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:58.849015 systemd-logind[1525]: New session 26 of user core. Sep 12 18:05:58.853052 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 18:05:58.914781 sshd[4490]: Connection closed by 139.178.89.65 port 58000 Sep 12 18:05:58.917521 sshd-session[4487]: pam_unix(sshd:session): session closed for user core Sep 12 18:05:58.928497 systemd[1]: sshd@25-64.23.243.150:22-139.178.89.65:58000.service: Deactivated successfully. Sep 12 18:05:58.934993 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 18:05:58.939075 systemd-logind[1525]: Session 26 logged out. Waiting for processes to exit. Sep 12 18:05:58.949684 systemd[1]: Started sshd@26-64.23.243.150:22-139.178.89.65:58002.service - OpenSSH per-connection server daemon (139.178.89.65:58002). Sep 12 18:05:58.965679 systemd-logind[1525]: Removed session 26. 
Sep 12 18:05:59.021598 sshd[4501]: Accepted publickey for core from 139.178.89.65 port 58002 ssh2: RSA SHA256:rgM4CCKqcUK6ImSFkPmxEROhKavbkgyEegeKnVmOeSQ Sep 12 18:05:59.023503 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:05:59.030524 systemd-logind[1525]: New session 27 of user core. Sep 12 18:05:59.036589 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 18:05:59.118710 kubelet[2738]: E0912 18:05:59.118655 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:59.119809 containerd[1564]: time="2025-09-12T18:05:59.119757058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdrk7,Uid:7db6264b-e0c1-49e2-b197-620cfcf4c9d8,Namespace:kube-system,Attempt:0,}" Sep 12 18:05:59.160237 containerd[1564]: time="2025-09-12T18:05:59.160137701Z" level=info msg="connecting to shim 0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408" address="unix:///run/containerd/s/2e1641f0b40772cdbb74235551636134dfe9160902bdd2872555f9a520013a6e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 18:05:59.212791 systemd[1]: Started cri-containerd-0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408.scope - libcontainer container 0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408. Sep 12 18:05:59.261255 containerd[1564]: time="2025-09-12T18:05:59.261207534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdrk7,Uid:7db6264b-e0c1-49e2-b197-620cfcf4c9d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\"" Sep 12 18:05:59.262232 kubelet[2738]: E0912 18:05:59.262202 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:59.272751 containerd[1564]: time="2025-09-12T18:05:59.272146650Z" level=info msg="CreateContainer within sandbox \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 18:05:59.281389 containerd[1564]: time="2025-09-12T18:05:59.280819954Z" level=info msg="Container ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:05:59.287708 containerd[1564]: time="2025-09-12T18:05:59.287665436Z" level=info msg="CreateContainer within sandbox \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653\"" Sep 12 18:05:59.289374 containerd[1564]: time="2025-09-12T18:05:59.288468222Z" level=info msg="StartContainer for \"ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653\"" Sep 12 18:05:59.291319 containerd[1564]: time="2025-09-12T18:05:59.291107529Z" level=info msg="connecting to shim ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653" address="unix:///run/containerd/s/2e1641f0b40772cdbb74235551636134dfe9160902bdd2872555f9a520013a6e" protocol=ttrpc version=3 Sep 12 18:05:59.316531 systemd[1]: Started cri-containerd-ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653.scope - libcontainer container ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653. 
Sep 12 18:05:59.354092 containerd[1564]: time="2025-09-12T18:05:59.354033068Z" level=info msg="StartContainer for \"ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653\" returns successfully" Sep 12 18:05:59.367898 systemd[1]: cri-containerd-ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653.scope: Deactivated successfully. Sep 12 18:05:59.368505 systemd[1]: cri-containerd-ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653.scope: Consumed 25ms CPU time, 9.5M memory peak, 3.1M read from disk. Sep 12 18:05:59.370005 containerd[1564]: time="2025-09-12T18:05:59.369956429Z" level=info msg="received exit event container_id:\"ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653\" id:\"ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653\" pid:4570 exited_at:{seconds:1757700359 nanos:369569106}" Sep 12 18:05:59.370785 containerd[1564]: time="2025-09-12T18:05:59.370755009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653\" id:\"ec44705c03c946598297d33eb21689a9998295833a2cc948fc09f47a87e40653\" pid:4570 exited_at:{seconds:1757700359 nanos:369569106}" Sep 12 18:05:59.506854 kubelet[2738]: E0912 18:05:59.506699 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:05:59.515699 containerd[1564]: time="2025-09-12T18:05:59.515640126Z" level=info msg="CreateContainer within sandbox \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 18:05:59.524996 containerd[1564]: time="2025-09-12T18:05:59.524869470Z" level=info msg="Container 43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:05:59.538698 containerd[1564]: time="2025-09-12T18:05:59.538572862Z" level=info msg="CreateContainer within sandbox \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f\"" Sep 12 18:05:59.540092 containerd[1564]: time="2025-09-12T18:05:59.539965403Z" level=info msg="StartContainer for \"43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f\"" Sep 12 18:05:59.542861 containerd[1564]: time="2025-09-12T18:05:59.542649230Z" level=info msg="connecting to shim 43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f" address="unix:///run/containerd/s/2e1641f0b40772cdbb74235551636134dfe9160902bdd2872555f9a520013a6e" protocol=ttrpc version=3 Sep 12 18:05:59.565588 systemd[1]: Started cri-containerd-43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f.scope - libcontainer container 43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f. Sep 12 18:05:59.603169 containerd[1564]: time="2025-09-12T18:05:59.603131127Z" level=info msg="StartContainer for \"43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f\" returns successfully" Sep 12 18:05:59.613613 systemd[1]: cri-containerd-43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f.scope: Deactivated successfully. Sep 12 18:05:59.614058 systemd[1]: cri-containerd-43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f.scope: Consumed 23ms CPU time, 7.5M memory peak, 2.2M read from disk. 
Sep 12 18:05:59.615434 containerd[1564]: time="2025-09-12T18:05:59.614750426Z" level=info msg="received exit event container_id:\"43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f\" id:\"43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f\" pid:4617 exited_at:{seconds:1757700359 nanos:613727869}" Sep 12 18:05:59.615434 containerd[1564]: time="2025-09-12T18:05:59.615034623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f\" id:\"43e875347111bc7d502aad2045f1fbf2da32c5cbab0cc211290b916f492cbd1f\" pid:4617 exited_at:{seconds:1757700359 nanos:613727869}" Sep 12 18:06:00.197290 kubelet[2738]: E0912 18:06:00.197228 2738 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 18:06:00.517060 kubelet[2738]: E0912 18:06:00.516559 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:06:00.531026 containerd[1564]: time="2025-09-12T18:06:00.530963320Z" level=info msg="CreateContainer within sandbox \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 18:06:00.564977 containerd[1564]: time="2025-09-12T18:06:00.564925919Z" level=info msg="Container baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:06:00.583416 containerd[1564]: time="2025-09-12T18:06:00.583343859Z" level=info msg="CreateContainer within sandbox \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8\"" Sep 12 18:06:00.584651 containerd[1564]: time="2025-09-12T18:06:00.584609287Z" level=info msg="StartContainer for \"baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8\"" Sep 12 18:06:00.588788 containerd[1564]: time="2025-09-12T18:06:00.588708714Z" level=info msg="connecting to shim baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8" address="unix:///run/containerd/s/2e1641f0b40772cdbb74235551636134dfe9160902bdd2872555f9a520013a6e" protocol=ttrpc version=3 Sep 12 18:06:00.618858 systemd[1]: Started cri-containerd-baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8.scope - libcontainer container baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8. Sep 12 18:06:00.677551 containerd[1564]: time="2025-09-12T18:06:00.677448292Z" level=info msg="StartContainer for \"baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8\" returns successfully" Sep 12 18:06:00.682638 systemd[1]: cri-containerd-baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8.scope: Deactivated successfully. 
Sep 12 18:06:00.686021 containerd[1564]: time="2025-09-12T18:06:00.685080583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8\" id:\"baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8\" pid:4660 exited_at:{seconds:1757700360 nanos:684650688}" Sep 12 18:06:00.686021 containerd[1564]: time="2025-09-12T18:06:00.685475283Z" level=info msg="received exit event container_id:\"baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8\" id:\"baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8\" pid:4660 exited_at:{seconds:1757700360 nanos:684650688}" Sep 12 18:06:00.716151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baf207e2d5f917b4888be19c1ea2285ce32126744eedc7e56272617d42705ab8-rootfs.mount: Deactivated successfully. Sep 12 18:06:01.523126 kubelet[2738]: E0912 18:06:01.523066 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:06:01.533736 containerd[1564]: time="2025-09-12T18:06:01.533617726Z" level=info msg="CreateContainer within sandbox \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 18:06:01.555018 containerd[1564]: time="2025-09-12T18:06:01.554954524Z" level=info msg="Container f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:06:01.573433 containerd[1564]: time="2025-09-12T18:06:01.572876880Z" level=info msg="CreateContainer within sandbox \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e\"" Sep 12 18:06:01.574762 containerd[1564]: time="2025-09-12T18:06:01.574714005Z" level=info msg="StartContainer for \"f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e\"" Sep 12 18:06:01.576280 containerd[1564]: time="2025-09-12T18:06:01.576233410Z" level=info msg="connecting to shim f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e" address="unix:///run/containerd/s/2e1641f0b40772cdbb74235551636134dfe9160902bdd2872555f9a520013a6e" protocol=ttrpc version=3 Sep 12 18:06:01.616661 systemd[1]: Started cri-containerd-f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e.scope - libcontainer container f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e. Sep 12 18:06:01.653917 systemd[1]: cri-containerd-f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e.scope: Deactivated successfully. 
Sep 12 18:06:01.656882 containerd[1564]: time="2025-09-12T18:06:01.656655988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e\" id:\"f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e\" pid:4700 exited_at:{seconds:1757700361 nanos:655860928}" Sep 12 18:06:01.657276 containerd[1564]: time="2025-09-12T18:06:01.657190028Z" level=info msg="received exit event container_id:\"f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e\" id:\"f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e\" pid:4700 exited_at:{seconds:1757700361 nanos:655860928}" Sep 12 18:06:01.669173 containerd[1564]: time="2025-09-12T18:06:01.669126284Z" level=info msg="StartContainer for \"f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e\" returns successfully" Sep 12 18:06:01.690402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3179dd7d83e629bfcd2109094833afdeabf571bfb0d85b3ff1b8f069235034e-rootfs.mount: Deactivated successfully. Sep 12 18:06:02.531366 kubelet[2738]: E0912 18:06:02.530325 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:06:02.541127 containerd[1564]: time="2025-09-12T18:06:02.540471126Z" level=info msg="CreateContainer within sandbox \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 18:06:02.561934 containerd[1564]: time="2025-09-12T18:06:02.561860016Z" level=info msg="Container 6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0: CDI devices from CRI Config.CDIDevices: []" Sep 12 18:06:02.571120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606039400.mount: Deactivated successfully. Sep 12 18:06:02.585640 containerd[1564]: time="2025-09-12T18:06:02.585553046Z" level=info msg="CreateContainer within sandbox \"0af05f4e32866acdcc81467d23d7a6637f5c70e6b735531e438f5da9aeb8a408\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0\"" Sep 12 18:06:02.587455 containerd[1564]: time="2025-09-12T18:06:02.587091820Z" level=info msg="StartContainer for \"6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0\"" Sep 12 18:06:02.588855 containerd[1564]: time="2025-09-12T18:06:02.588794313Z" level=info msg="connecting to shim 6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0" address="unix:///run/containerd/s/2e1641f0b40772cdbb74235551636134dfe9160902bdd2872555f9a520013a6e" protocol=ttrpc version=3 Sep 12 18:06:02.623639 systemd[1]: Started cri-containerd-6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0.scope - libcontainer container 6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0. 
Sep 12 18:06:02.679372 containerd[1564]: time="2025-09-12T18:06:02.679282677Z" level=info msg="StartContainer for \"6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0\" returns successfully" Sep 12 18:06:02.763233 containerd[1564]: time="2025-09-12T18:06:02.763184807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0\" id:\"eca1e24301d8f13f8431ec4f501c3e964159338a610a9d1e788cd1f7b3552146\" pid:4767 exited_at:{seconds:1757700362 nanos:761442643}" Sep 12 18:06:02.858233 kubelet[2738]: I0912 18:06:02.857717 2738 setters.go:618] "Node became not ready" node="ci-4426.1.0-8-66567323f5" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T18:06:02Z","lastTransitionTime":"2025-09-12T18:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 18:06:03.174374 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 12 18:06:03.539316 kubelet[2738]: E0912 18:06:03.539249 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:06:05.121632 kubelet[2738]: E0912 18:06:05.121508 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:06:05.592940 containerd[1564]: time="2025-09-12T18:06:05.592702945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0\" id:\"da47798186f54e3263b267b46ebe170a950f5c879d833e3d43d336a677b2d7c1\" pid:4985 exit_status:1 exited_at:{seconds:1757700365 nanos:591967374}" Sep 12 18:06:06.588604 systemd-networkd[1432]: lxc_health: Link UP Sep 12 18:06:06.588991 systemd-networkd[1432]: lxc_health: Gained carrier Sep 12 18:06:07.120952 kubelet[2738]: E0912 18:06:07.120900 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:06:07.169190 kubelet[2738]: I0912 18:06:07.168926 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wdrk7" podStartSLOduration=9.168909544 podStartE2EDuration="9.168909544s" podCreationTimestamp="2025-09-12 18:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 18:06:03.560240821 +0000 UTC m=+103.634192153" watchObservedRunningTime="2025-09-12 18:06:07.168909544 +0000 UTC m=+107.242860872" Sep 12 18:06:07.552106 kubelet[2738]: E0912 18:06:07.551598 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:06:07.702961 systemd-networkd[1432]: lxc_health: Gained IPv6LL Sep 12 18:06:07.807061 containerd[1564]: time="2025-09-12T18:06:07.806613751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0\" id:\"7238b6cddafb795b2a59acbc1872751f5f7c5d11d9c7cb7e01bb8c2bd458d9c7\" pid:5288 exited_at:{seconds:1757700367 
nanos:805001788}" Sep 12 18:06:08.553969 kubelet[2738]: E0912 18:06:08.553888 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:06:10.039164 containerd[1564]: time="2025-09-12T18:06:10.039000119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0\" id:\"ec6620758bdd37289b23467714981b117b0919f9a44122ffb0a8abbea9c3df8e\" pid:5317 exited_at:{seconds:1757700370 nanos:38688163}" Sep 12 18:06:11.108449 kubelet[2738]: E0912 18:06:11.108393 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 18:06:12.201879 containerd[1564]: time="2025-09-12T18:06:12.201826196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0\" id:\"e47b1cd476cd90f84c15ddd6e7724e858ad2e5998691be7caecfb22c9977acde\" pid:5352 exited_at:{seconds:1757700372 nanos:201323723}" Sep 12 18:06:14.359402 containerd[1564]: time="2025-09-12T18:06:14.359345396Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d3a7352549cb2c97a5f7962726f72d2aaa051b7418b27d5bf46b7e9e3c46de0\" id:\"51ef4e06945096f4268869338e8005c76f73eae1df988dbcba712a5c639dbcf8\" pid:5383 exited_at:{seconds:1757700374 nanos:358945396}" Sep 12 18:06:14.367422 sshd[4504]: Connection closed by 139.178.89.65 port 58002 Sep 12 18:06:14.368542 sshd-session[4501]: pam_unix(sshd:session): session closed for user core Sep 12 18:06:14.384520 systemd[1]: sshd@26-64.23.243.150:22-139.178.89.65:58002.service: Deactivated successfully. Sep 12 18:06:14.387237 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 18:06:14.389381 systemd-logind[1525]: Session 27 logged out. Waiting for processes to exit. Sep 12 18:06:14.391945 systemd-logind[1525]: Removed session 27. Sep 12 18:06:15.108686 kubelet[2738]: E0912 18:06:15.108644 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"