Sep 13 00:05:54.085232 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:05:54.085282 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:54.085303 kernel: BIOS-provided physical RAM map:
Sep 13 00:05:54.085316 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:05:54.085327 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:05:54.085340 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:05:54.085355 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 13 00:05:54.085367 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 13 00:05:54.085379 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:05:54.085414 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:05:54.085428 kernel: NX (Execute Disable) protection: active
Sep 13 00:05:54.085440 kernel: APIC: Static calls initialized
Sep 13 00:05:54.085461 kernel: SMBIOS 2.8 present.
Sep 13 00:05:54.085474 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 13 00:05:54.085490 kernel: Hypervisor detected: KVM
Sep 13 00:05:54.085508 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:05:54.085527 kernel: kvm-clock: using sched offset of 3302730437 cycles
Sep 13 00:05:54.085541 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:05:54.085554 kernel: tsc: Detected 2000.000 MHz processor
Sep 13 00:05:54.085567 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:05:54.085579 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:05:54.085593 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 13 00:05:54.085631 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 00:05:54.085647 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:05:54.085667 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:05:54.085679 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 13 00:05:54.085692 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:54.085705 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:54.085718 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:54.085731 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 13 00:05:54.085744 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:54.085756 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:54.085776 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:54.087850 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:05:54.087873 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 13 00:05:54.087886 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 13 00:05:54.087900 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 13 00:05:54.087913 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 13 00:05:54.087925 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 13 00:05:54.087939 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 13 00:05:54.087967 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 13 00:05:54.087981 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:05:54.087994 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:05:54.088008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:05:54.088022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 00:05:54.088046 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 13 00:05:54.088060 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 13 00:05:54.088078 kernel: Zone ranges:
Sep 13 00:05:54.088091 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:05:54.088105 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 13 00:05:54.088119 kernel: Normal empty
Sep 13 00:05:54.088132 kernel: Movable zone start for each node
Sep 13 00:05:54.088145 kernel: Early memory node ranges
Sep 13 00:05:54.088158 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:05:54.088171 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 13 00:05:54.088185 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 13 00:05:54.088204 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:05:54.088218 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:05:54.088236 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 13 00:05:54.088250 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:05:54.088264 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:05:54.088278 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:05:54.088291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:05:54.088305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:05:54.088318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:05:54.088335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:05:54.088349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:05:54.088363 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:05:54.088376 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:05:54.088390 kernel: TSC deadline timer available
Sep 13 00:05:54.088404 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:05:54.088417 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:05:54.088431 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 13 00:05:54.088451 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:05:54.088465 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:05:54.088483 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:05:54.088496 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:05:54.088510 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:05:54.088523 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:05:54.088536 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 13 00:05:54.088552 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:54.088566 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:05:54.088584 kernel: random: crng init done
Sep 13 00:05:54.088598 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:05:54.088611 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:05:54.088625 kernel: Fallback order for Node 0: 0
Sep 13 00:05:54.088638 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 13 00:05:54.088652 kernel: Policy zone: DMA32
Sep 13 00:05:54.088665 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:05:54.088679 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 13 00:05:54.088693 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:05:54.088709 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:05:54.088721 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:05:54.088733 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:05:54.088746 kernel: Dynamic Preempt: voluntary
Sep 13 00:05:54.088761 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:05:54.088777 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:05:54.088817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:05:54.088832 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:05:54.088846 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:05:54.088859 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:05:54.088878 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:05:54.088892 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:05:54.088906 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:05:54.088919 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:05:54.088938 kernel: Console: colour VGA+ 80x25
Sep 13 00:05:54.088953 kernel: printk: console [tty0] enabled
Sep 13 00:05:54.088966 kernel: printk: console [ttyS0] enabled
Sep 13 00:05:54.088979 kernel: ACPI: Core revision 20230628
Sep 13 00:05:54.088994 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:05:54.089012 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:05:54.089025 kernel: x2apic enabled
Sep 13 00:05:54.089039 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:05:54.089052 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:05:54.089066 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Sep 13 00:05:54.089080 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Sep 13 00:05:54.089094 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 00:05:54.089108 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 00:05:54.089140 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:05:54.089154 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:05:54.089169 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:05:54.089188 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 13 00:05:54.089202 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:05:54.089217 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:05:54.089231 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 00:05:54.089244 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:05:54.089259 kernel: active return thunk: its_return_thunk
Sep 13 00:05:54.089283 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:05:54.089298 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:05:54.089312 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:05:54.089327 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:05:54.089341 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:05:54.089357 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:05:54.089372 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:05:54.089386 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:05:54.089405 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:05:54.089420 kernel: landlock: Up and running.
Sep 13 00:05:54.089434 kernel: SELinux: Initializing.
Sep 13 00:05:54.089449 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:05:54.089463 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:05:54.089478 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 13 00:05:54.089493 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:54.089507 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:54.089522 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:54.089542 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 13 00:05:54.089556 kernel: signal: max sigframe size: 1776
Sep 13 00:05:54.089571 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:05:54.089586 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:05:54.089602 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:05:54.089644 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:05:54.089659 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:05:54.089673 kernel: .... node #0, CPUs: #1
Sep 13 00:05:54.089694 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:05:54.089716 kernel: smpboot: Max logical packages: 1
Sep 13 00:05:54.089730 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Sep 13 00:05:54.089745 kernel: devtmpfs: initialized
Sep 13 00:05:54.089759 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:05:54.089774 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:05:54.089788 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:05:54.091880 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:05:54.091898 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:05:54.091915 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:05:54.091944 kernel: audit: type=2000 audit(1757721952.110:1): state=initialized audit_enabled=0 res=1
Sep 13 00:05:54.091960 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:05:54.091976 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:05:54.091991 kernel: cpuidle: using governor menu
Sep 13 00:05:54.092007 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:05:54.092023 kernel: dca service started, version 1.12.1
Sep 13 00:05:54.092040 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:05:54.092056 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:05:54.092073 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:05:54.092092 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:05:54.092108 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:05:54.092124 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:05:54.092138 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:05:54.092154 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:05:54.092170 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:05:54.092185 kernel: ACPI: Interpreter enabled
Sep 13 00:05:54.092200 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:05:54.092214 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:05:54.092233 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:05:54.092250 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:05:54.092265 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:05:54.092281 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:05:54.092645 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:05:54.092868 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:05:54.093030 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:05:54.093058 kernel: acpiphp: Slot [3] registered
Sep 13 00:05:54.093073 kernel: acpiphp: Slot [4] registered
Sep 13 00:05:54.093086 kernel: acpiphp: Slot [5] registered
Sep 13 00:05:54.093098 kernel: acpiphp: Slot [6] registered
Sep 13 00:05:54.093111 kernel: acpiphp: Slot [7] registered
Sep 13 00:05:54.093123 kernel: acpiphp: Slot [8] registered
Sep 13 00:05:54.093135 kernel: acpiphp: Slot [9] registered
Sep 13 00:05:54.093147 kernel: acpiphp: Slot [10] registered
Sep 13 00:05:54.093160 kernel: acpiphp: Slot [11] registered
Sep 13 00:05:54.093179 kernel: acpiphp: Slot [12] registered
Sep 13 00:05:54.093193 kernel: acpiphp: Slot [13] registered
Sep 13 00:05:54.093207 kernel: acpiphp: Slot [14] registered
Sep 13 00:05:54.093221 kernel: acpiphp: Slot [15] registered
Sep 13 00:05:54.093235 kernel: acpiphp: Slot [16] registered
Sep 13 00:05:54.093250 kernel: acpiphp: Slot [17] registered
Sep 13 00:05:54.093264 kernel: acpiphp: Slot [18] registered
Sep 13 00:05:54.093278 kernel: acpiphp: Slot [19] registered
Sep 13 00:05:54.093292 kernel: acpiphp: Slot [20] registered
Sep 13 00:05:54.093306 kernel: acpiphp: Slot [21] registered
Sep 13 00:05:54.093324 kernel: acpiphp: Slot [22] registered
Sep 13 00:05:54.093339 kernel: acpiphp: Slot [23] registered
Sep 13 00:05:54.093352 kernel: acpiphp: Slot [24] registered
Sep 13 00:05:54.093360 kernel: acpiphp: Slot [25] registered
Sep 13 00:05:54.093368 kernel: acpiphp: Slot [26] registered
Sep 13 00:05:54.093376 kernel: acpiphp: Slot [27] registered
Sep 13 00:05:54.093385 kernel: acpiphp: Slot [28] registered
Sep 13 00:05:54.093393 kernel: acpiphp: Slot [29] registered
Sep 13 00:05:54.093401 kernel: acpiphp: Slot [30] registered
Sep 13 00:05:54.093412 kernel: acpiphp: Slot [31] registered
Sep 13 00:05:54.093421 kernel: PCI host bridge to bus 0000:00
Sep 13 00:05:54.093581 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:05:54.093709 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:05:54.095914 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:05:54.096059 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:05:54.096152 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 13 00:05:54.096238 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:05:54.096388 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:05:54.096504 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:05:54.096654 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 13 00:05:54.096764 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 13 00:05:54.096923 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 13 00:05:54.097019 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 13 00:05:54.097120 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 13 00:05:54.097214 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 13 00:05:54.097325 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 13 00:05:54.097422 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 13 00:05:54.097531 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 13 00:05:54.097650 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 13 00:05:54.097753 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 13 00:05:54.099052 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 13 00:05:54.099181 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 13 00:05:54.099372 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 13 00:05:54.099497 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 13 00:05:54.099594 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 13 00:05:54.099710 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:05:54.100906 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:05:54.101047 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 13 00:05:54.101145 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 13 00:05:54.101240 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 13 00:05:54.101344 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:05:54.101441 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 13 00:05:54.101535 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 13 00:05:54.101639 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 13 00:05:54.101755 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 13 00:05:54.103957 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 13 00:05:54.104083 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 13 00:05:54.104202 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 13 00:05:54.104321 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:05:54.104419 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:05:54.104523 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 13 00:05:54.104619 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 13 00:05:54.104760 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:05:54.104892 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 13 00:05:54.104988 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 13 00:05:54.105082 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 13 00:05:54.105219 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 13 00:05:54.105331 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 13 00:05:54.105427 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 13 00:05:54.105438 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:05:54.105447 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:05:54.105456 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:05:54.105464 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:05:54.105472 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:05:54.105484 kernel: iommu: Default domain type: Translated
Sep 13 00:05:54.105493 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:05:54.105501 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:05:54.105510 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:05:54.105518 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:05:54.105526 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 13 00:05:54.105625 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 13 00:05:54.105723 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 13 00:05:54.106939 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:05:54.106958 kernel: vgaarb: loaded
Sep 13 00:05:54.106968 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:05:54.106982 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:05:54.106993 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:05:54.107005 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:05:54.107019 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:05:54.107032 kernel: pnp: PnP ACPI init
Sep 13 00:05:54.107045 kernel: pnp: PnP ACPI: found 4 devices
Sep 13 00:05:54.107068 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:05:54.107078 kernel: NET: Registered PF_INET protocol family
Sep 13 00:05:54.107087 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:05:54.107096 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:05:54.107104 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:05:54.107113 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:05:54.107121 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:05:54.107129 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:05:54.107138 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:05:54.107150 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:05:54.107158 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:05:54.107167 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:05:54.107332 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:05:54.107452 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:05:54.107551 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:05:54.107639 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:05:54.107723 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 13 00:05:54.107863 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 13 00:05:54.107967 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:05:54.107980 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:05:54.108079 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 42586 usecs
Sep 13 00:05:54.108090 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:05:54.108098 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:05:54.108107 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Sep 13 00:05:54.108116 kernel: Initialise system trusted keyrings
Sep 13 00:05:54.108128 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:05:54.108137 kernel: Key type asymmetric registered
Sep 13 00:05:54.108146 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:05:54.108154 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:05:54.108163 kernel: io scheduler mq-deadline registered
Sep 13 00:05:54.108171 kernel: io scheduler kyber registered
Sep 13 00:05:54.108180 kernel: io scheduler bfq registered
Sep 13 00:05:54.108188 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:05:54.108197 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 13 00:05:54.108206 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 13 00:05:54.108222 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 13 00:05:54.108235 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:05:54.108248 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:05:54.108260 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:05:54.108274 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:05:54.108284 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:05:54.108293 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:05:54.108425 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 13 00:05:54.108523 kernel: rtc_cmos 00:03: registered as rtc0
Sep 13 00:05:54.108613 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:05:53 UTC (1757721953)
Sep 13 00:05:54.108701 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 13 00:05:54.108711 kernel: intel_pstate: CPU model not supported
Sep 13 00:05:54.108720 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:05:54.108729 kernel: Segment Routing with IPv6
Sep 13 00:05:54.108737 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:05:54.108745 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:05:54.108757 kernel: Key type dns_resolver registered
Sep 13 00:05:54.108766 kernel: IPI shorthand broadcast: enabled
Sep 13 00:05:54.108780 kernel: sched_clock: Marking stable (1269005503, 161187751)->(1585043691, -154850437)
Sep 13 00:05:54.111847 kernel: registered taskstats version 1
Sep 13 00:05:54.111874 kernel: Loading compiled-in X.509 certificates
Sep 13 00:05:54.111892 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:05:54.111901 kernel: Key type .fscrypt registered
Sep 13 00:05:54.111909 kernel: Key type fscrypt-provisioning registered
Sep 13 00:05:54.111918 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:05:54.111937 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:05:54.111945 kernel: ima: No architecture policies found
Sep 13 00:05:54.111954 kernel: clk: Disabling unused clocks
Sep 13 00:05:54.111962 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:05:54.111971 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:05:54.112000 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:05:54.112011 kernel: Run /init as init process
Sep 13 00:05:54.112020 kernel: with arguments:
Sep 13 00:05:54.112029 kernel: /init
Sep 13 00:05:54.112040 kernel: with environment:
Sep 13 00:05:54.112048 kernel: HOME=/
Sep 13 00:05:54.112060 kernel: TERM=linux
Sep 13 00:05:54.112070 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:05:54.112084 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:05:54.112096 systemd[1]: Detected virtualization kvm.
Sep 13 00:05:54.112108 systemd[1]: Detected architecture x86-64.
Sep 13 00:05:54.112123 systemd[1]: Running in initrd.
Sep 13 00:05:54.112141 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:05:54.112150 systemd[1]: Hostname set to .
Sep 13 00:05:54.112159 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:05:54.112168 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:05:54.112178 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:05:54.112195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:05:54.112212 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:05:54.112221 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:05:54.112233 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:05:54.112243 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:05:54.112254 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:05:54.112267 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:05:54.112277 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:05:54.112286 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:05:54.112298 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:05:54.112307 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:05:54.112317 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:05:54.112329 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:05:54.112342 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:05:54.112352 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:05:54.112364 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:05:54.112374 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:05:54.112383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:05:54.112393 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:05:54.112402 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:05:54.112413 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:05:54.112429 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:05:54.112442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:05:54.112454 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:05:54.112463 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:05:54.112473 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:05:54.112488 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:05:54.112504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:54.112518 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:05:54.112532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:05:54.112546 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:05:54.112623 systemd-journald[185]: Collecting audit messages is disabled.
Sep 13 00:05:54.112656 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:05:54.112672 systemd-journald[185]: Journal started
Sep 13 00:05:54.112706 systemd-journald[185]: Runtime Journal (/run/log/journal/f4ce823e75fa4dadbb0c55b101f72923) is 4.9M, max 39.3M, 34.4M free.
Sep 13 00:05:54.083776 systemd-modules-load[186]: Inserted module 'overlay'
Sep 13 00:05:54.146754 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:05:54.146827 kernel: Bridge firewalling registered
Sep 13 00:05:54.118959 systemd-modules-load[186]: Inserted module 'br_netfilter'
Sep 13 00:05:54.156867 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:05:54.157743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:05:54.164405 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:54.165416 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:05:54.175215 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:05:54.179100 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:05:54.180709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:05:54.190468 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:05:54.208162 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:05:54.219293 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:05:54.221546 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:05:54.222708 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:05:54.230166 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:05:54.234066 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:05:54.257382 dracut-cmdline[220]: dracut-dracut-053
Sep 13 00:05:54.264186 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:54.285577 systemd-resolved[221]: Positive Trust Anchors:
Sep 13 00:05:54.285599 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:05:54.285634 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:05:54.288993 systemd-resolved[221]: Defaulting to hostname 'linux'.
Sep 13 00:05:54.290599 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:05:54.295756 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:05:54.400860 kernel: SCSI subsystem initialized
Sep 13 00:05:54.417870 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:05:54.436873 kernel: iscsi: registered transport (tcp)
Sep 13 00:05:54.470900 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:05:54.471072 kernel: QLogic iSCSI HBA Driver
Sep 13 00:05:54.530155 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:05:54.537186 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:05:54.574388 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:05:54.574478 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:05:54.574496 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:05:54.625868 kernel: raid6: avx2x4 gen() 28669 MB/s
Sep 13 00:05:54.640865 kernel: raid6: avx2x2 gen() 26192 MB/s
Sep 13 00:05:54.659387 kernel: raid6: avx2x1 gen() 13989 MB/s
Sep 13 00:05:54.659485 kernel: raid6: using algorithm avx2x4 gen() 28669 MB/s
Sep 13 00:05:54.677936 kernel: raid6: .... xor() 7506 MB/s, rmw enabled
Sep 13 00:05:54.678058 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:05:54.705865 kernel: xor: automatically using best checksumming function avx
Sep 13 00:05:54.881057 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:05:54.896135 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:05:54.906085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:05:54.921367 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Sep 13 00:05:54.926638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:05:54.935675 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:05:54.956763 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Sep 13 00:05:54.995885 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:05:55.002113 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:05:55.080752 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:05:55.093169 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:05:55.124408 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:05:55.128516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:05:55.130930 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:05:55.132696 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:05:55.140136 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:05:55.179371 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:05:55.194843 kernel: scsi host0: Virtio SCSI HBA
Sep 13 00:05:55.198838 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 13 00:05:55.216934 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 13 00:05:55.247102 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:05:55.247213 kernel: GPT:9289727 != 125829119
Sep 13 00:05:55.247233 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:05:55.247265 kernel: GPT:9289727 != 125829119
Sep 13 00:05:55.247282 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:05:55.247321 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:05:55.247340 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 13 00:05:55.247648 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Sep 13 00:05:55.248221 kernel: libata version 3.00 loaded.
Sep 13 00:05:55.257823 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 13 00:05:55.274842 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:05:55.276874 kernel: scsi host1: ata_piix
Sep 13 00:05:55.277170 kernel: scsi host2: ata_piix
Sep 13 00:05:55.278539 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 13 00:05:55.278592 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 13 00:05:55.292852 kernel: ACPI: bus type USB registered
Sep 13 00:05:55.294828 kernel: usbcore: registered new interface driver usbfs
Sep 13 00:05:55.309895 kernel: usbcore: registered new interface driver hub
Sep 13 00:05:55.319895 kernel: usbcore: registered new device driver usb
Sep 13 00:05:55.327902 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (464)
Sep 13 00:05:55.335940 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (461)
Sep 13 00:05:55.349841 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:05:55.357437 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:05:55.363950 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:05:55.369594 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:05:55.370551 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:05:55.383404 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:05:55.384659 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:05:55.384872 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:05:55.385663 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:05:55.401265 disk-uuid[495]: Primary Header is updated.
Sep 13 00:05:55.401265 disk-uuid[495]: Secondary Entries is updated.
Sep 13 00:05:55.401265 disk-uuid[495]: Secondary Header is updated.
Sep 13 00:05:55.386325 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:05:55.386473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:55.389110 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:55.406656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:05:55.393691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:55.413846 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:05:55.420854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:05:55.510210 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:55.516075 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:05:55.551860 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:05:55.563908 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:05:55.581277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:05:55.611350 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 13 00:05:55.617035 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 13 00:05:55.617304 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 13 00:05:55.625084 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 13 00:05:55.654834 kernel: hub 1-0:1.0: USB hub found
Sep 13 00:05:55.658881 kernel: hub 1-0:1.0: 2 ports detected
Sep 13 00:05:56.424880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:05:56.425905 disk-uuid[497]: The operation has completed successfully.
Sep 13 00:05:56.483712 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:05:56.483867 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:05:56.491051 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:05:56.494899 sh[568]: Success
Sep 13 00:05:56.513874 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:05:56.574058 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:05:56.582963 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:05:56.585851 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:05:56.615852 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:05:56.615941 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:56.616875 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:05:56.618927 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:05:56.621020 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:05:56.628773 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:05:56.630144 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:05:56.636107 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:05:56.641164 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:05:56.653075 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:56.653161 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:56.653196 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:05:56.656829 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:05:56.669680 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:05:56.672052 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:56.680265 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:05:56.690099 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:05:56.848136 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:05:56.861183 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:05:56.861390 ignition[645]: Ignition 2.19.0
Sep 13 00:05:56.861398 ignition[645]: Stage: fetch-offline
Sep 13 00:05:56.861457 ignition[645]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:56.861468 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:05:56.861620 ignition[645]: parsed url from cmdline: ""
Sep 13 00:05:56.861625 ignition[645]: no config URL provided
Sep 13 00:05:56.861631 ignition[645]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:05:56.861640 ignition[645]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:05:56.861646 ignition[645]: failed to fetch config: resource requires networking
Sep 13 00:05:56.861919 ignition[645]: Ignition finished successfully
Sep 13 00:05:56.875514 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:05:56.908174 systemd-networkd[757]: lo: Link UP
Sep 13 00:05:56.908191 systemd-networkd[757]: lo: Gained carrier
Sep 13 00:05:56.912511 systemd-networkd[757]: Enumeration completed
Sep 13 00:05:56.913433 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 13 00:05:56.913448 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 13 00:05:56.913525 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:05:56.914290 systemd[1]: Reached target network.target - Network.
Sep 13 00:05:56.915495 systemd-networkd[757]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:56.915499 systemd-networkd[757]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:05:56.916699 systemd-networkd[757]: eth0: Link UP
Sep 13 00:05:56.916704 systemd-networkd[757]: eth0: Gained carrier
Sep 13 00:05:56.916713 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 13 00:05:56.922170 systemd-networkd[757]: eth1: Link UP
Sep 13 00:05:56.922175 systemd-networkd[757]: eth1: Gained carrier
Sep 13 00:05:56.922188 systemd-networkd[757]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:56.924096 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:05:56.940956 systemd-networkd[757]: eth0: DHCPv4 address 161.35.231.245/20, gateway 161.35.224.1 acquired from 169.254.169.253
Sep 13 00:05:56.946042 systemd-networkd[757]: eth1: DHCPv4 address 10.124.0.25/20 acquired from 169.254.169.253
Sep 13 00:05:56.967820 ignition[760]: Ignition 2.19.0
Sep 13 00:05:56.967832 ignition[760]: Stage: fetch
Sep 13 00:05:56.968097 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:56.968111 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:05:56.968234 ignition[760]: parsed url from cmdline: ""
Sep 13 00:05:56.968239 ignition[760]: no config URL provided
Sep 13 00:05:56.968245 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:05:56.968254 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:05:56.968277 ignition[760]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 13 00:05:56.985496 ignition[760]: GET result: OK
Sep 13 00:05:56.985617 ignition[760]: parsing config with SHA512: 789f71ce46b465711523044f7e77eaa1b1404050beab0d3323c7bf41feae83dfd3de19b322771dfe746feb67883fbac567db744d353148ee4973421fd5b2f350
Sep 13 00:05:56.995677 unknown[760]: fetched base config from "system"
Sep 13 00:05:56.995696 unknown[760]: fetched base config from "system"
Sep 13 00:05:56.996413 ignition[760]: fetch: fetch complete
Sep 13 00:05:56.995706 unknown[760]: fetched user config from "digitalocean"
Sep 13 00:05:56.996421 ignition[760]: fetch: fetch passed
Sep 13 00:05:56.999685 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:05:56.996500 ignition[760]: Ignition finished successfully
Sep 13 00:05:57.008061 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:05:57.029597 ignition[767]: Ignition 2.19.0
Sep 13 00:05:57.029610 ignition[767]: Stage: kargs
Sep 13 00:05:57.029935 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:57.029952 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:05:57.033372 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:05:57.031593 ignition[767]: kargs: kargs passed
Sep 13 00:05:57.031672 ignition[767]: Ignition finished successfully
Sep 13 00:05:57.042127 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:05:57.066357 ignition[773]: Ignition 2.19.0
Sep 13 00:05:57.066369 ignition[773]: Stage: disks
Sep 13 00:05:57.066986 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:57.067008 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:05:57.069042 ignition[773]: disks: disks passed
Sep 13 00:05:57.073441 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:05:57.069144 ignition[773]: Ignition finished successfully
Sep 13 00:05:57.076719 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:05:57.078249 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:05:57.079713 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:05:57.081185 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:05:57.082561 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:05:57.095315 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:05:57.115372 systemd-fsck[782]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:05:57.120299 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:05:57.126941 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:05:57.263772 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:05:57.263178 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:05:57.265398 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:05:57.275017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:05:57.278209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:05:57.283053 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Sep 13 00:05:57.286829 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (790)
Sep 13 00:05:57.292099 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:05:57.298053 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:57.298089 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:57.298106 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:05:57.300031 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:05:57.300081 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:05:57.306237 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:05:57.309972 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:05:57.315109 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:05:57.323781 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:05:57.415101 coreos-metadata[793]: Sep 13 00:05:57.413 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:05:57.420206 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:05:57.422220 coreos-metadata[792]: Sep 13 00:05:57.421 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:05:57.427163 coreos-metadata[793]: Sep 13 00:05:57.427 INFO Fetch successful
Sep 13 00:05:57.430648 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:05:57.435607 coreos-metadata[793]: Sep 13 00:05:57.435 INFO wrote hostname ci-4081.3.5-n-5a30d8cd2b to /sysroot/etc/hostname
Sep 13 00:05:57.437822 coreos-metadata[792]: Sep 13 00:05:57.437 INFO Fetch successful
Sep 13 00:05:57.438484 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:05:57.443597 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:05:57.449445 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep 13 00:05:57.450760 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Sep 13 00:05:57.456332 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:05:57.574236 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:05:57.580062 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:05:57.582029 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:05:57.598843 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:57.615369 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:05:57.632414 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:05:57.634962 ignition[910]: INFO : Ignition 2.19.0
Sep 13 00:05:57.634962 ignition[910]: INFO : Stage: mount
Sep 13 00:05:57.634962 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:57.634962 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:05:57.638776 ignition[910]: INFO : mount: mount passed
Sep 13 00:05:57.638776 ignition[910]: INFO : Ignition finished successfully
Sep 13 00:05:57.639764 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:05:57.655608 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:05:57.664502 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:05:57.687307 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (922)
Sep 13 00:05:57.687390 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:57.688917 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:57.689826 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:05:57.693846 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:05:57.696348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:05:57.722848 ignition[939]: INFO : Ignition 2.19.0
Sep 13 00:05:57.722848 ignition[939]: INFO : Stage: files
Sep 13 00:05:57.722848 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:57.722848 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:05:57.726722 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:05:57.727785 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:05:57.728986 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:05:57.732862 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:05:57.734007 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:05:57.735002 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:05:57.734478 unknown[939]: wrote ssh authorized keys file for user: core
Sep 13 00:05:57.736635 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:05:57.736635 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:05:57.920592 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:05:58.030595 systemd-networkd[757]: eth0: Gained IPv6LL
Sep 13 00:05:58.057201 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:05:58.057201 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:05:58.057201 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 00:05:58.360186 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:05:58.606002 systemd-networkd[757]: eth1: Gained IPv6LL
Sep 13 00:05:58.655304 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:05:58.655304 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:05:58.657867 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:05:58.978964 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:06:00.276948 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:06:00.276948 ignition[939]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:06:00.282850 ignition[939]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:06:00.282850 ignition[939]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:06:00.282850 ignition[939]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:06:00.282850 ignition[939]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:06:00.282850 ignition[939]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:06:00.293766 ignition[939]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:06:00.293766 ignition[939]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:06:00.293766 ignition[939]: INFO : files: files passed
Sep 13 00:06:00.293766 ignition[939]: INFO : Ignition finished successfully
Sep 13 00:06:00.289595 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:06:00.301276 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:06:00.305038 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:06:00.311748 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:06:00.319495 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:06:00.336030 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:06:00.336030 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:06:00.338239 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:06:00.339017 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:06:00.340712 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:06:00.350126 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:06:00.385029 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:06:00.385909 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:06:00.387553 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:06:00.388714 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:06:00.390214 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:06:00.403117 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:06:00.423114 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:06:00.434165 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:06:00.452137 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:06:00.453174 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:06:00.455257 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:06:00.456766 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:06:00.457010 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:06:00.458749 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:06:00.459735 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:06:00.461228 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:06:00.462649 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:06:00.463927 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:06:00.465303 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:06:00.467034 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:06:00.468483 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:06:00.469904 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:06:00.471347 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:06:00.472665 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:06:00.472863 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:06:00.474387 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:06:00.475344 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:06:00.476583 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:06:00.476812 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:06:00.478105 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:06:00.478238 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:06:00.480307 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:06:00.480592 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:06:00.481995 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:06:00.482112 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:06:00.483529 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:06:00.483674 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:06:00.492166 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:06:00.494177 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:06:00.495294 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:06:00.497880 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:06:00.499318 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:06:00.499436 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:06:00.512416 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:06:00.512586 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:06:00.532903 ignition[991]: INFO : Ignition 2.19.0
Sep 13 00:06:00.534927 ignition[991]: INFO : Stage: umount
Sep 13 00:06:00.534927 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:00.534927 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:06:00.547267 ignition[991]: INFO : umount: umount passed
Sep 13 00:06:00.547267 ignition[991]: INFO : Ignition finished successfully
Sep 13 00:06:00.540663 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:06:00.547383 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:06:00.547546 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:06:00.551058 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:06:00.551249 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:06:00.554403 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:06:00.554505 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:06:00.555839 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:06:00.555911 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:06:00.557318 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:06:00.557387 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:06:00.558868 systemd[1]: Stopped target network.target - Network.
Sep 13 00:06:00.560191 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:06:00.560278 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:06:00.561776 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:06:00.563374 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:06:00.567125 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:06:00.568197 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:06:00.569592 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:06:00.570869 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:06:00.570951 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:06:00.572139 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:06:00.572200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:06:00.573378 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:06:00.573456 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:06:00.574609 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:06:00.574675 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:06:00.575975 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:06:00.576039 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:06:00.577584 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:06:00.579180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:06:00.583307 systemd-networkd[757]: eth1: DHCPv6 lease lost
Sep 13 00:06:00.586998 systemd-networkd[757]: eth0: DHCPv6 lease lost
Sep 13 00:06:00.590618 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:06:00.590789 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:06:00.593581 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:06:00.593716 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:06:00.596653 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:06:00.596745 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:06:00.604985 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:06:00.606088 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:06:00.606174 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:06:00.608043 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:06:00.608109 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:00.608655 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:06:00.608717 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:06:00.611020 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:06:00.611104 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:06:00.613093 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:06:00.634495 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:06:00.635572 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:06:00.638159 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:06:00.638275 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:06:00.639589 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:06:00.639687 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:06:00.640729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:06:00.640820 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:06:00.642000 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:06:00.642065 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:06:00.644173 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:06:00.644231 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:06:00.645621 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:06:00.645693 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:00.654053 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:06:00.654704 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:06:00.654779 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:06:00.657895 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 00:06:00.657969 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:06:00.660287 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:06:00.660354 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:06:00.661917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:00.661984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:00.666027 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:06:00.666146 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:06:00.668394 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:06:00.679662 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:06:00.690187 systemd[1]: Switching root.
Sep 13 00:06:00.789753 systemd-journald[185]: Journal stopped
Sep 13 00:06:02.600030 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:06:02.600171 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:06:02.600196 kernel: SELinux: policy capability open_perms=1
Sep 13 00:06:02.600216 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:06:02.600234 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:06:02.600246 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:06:02.600259 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:06:02.600270 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:06:02.600282 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:06:02.600304 kernel: audit: type=1403 audit(1757721961.089:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:06:02.600317 systemd[1]: Successfully loaded SELinux policy in 50.622ms.
Sep 13 00:06:02.600362 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.612ms.
Sep 13 00:06:02.600377 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:06:02.600390 systemd[1]: Detected virtualization kvm.
Sep 13 00:06:02.600403 systemd[1]: Detected architecture x86-64.
Sep 13 00:06:02.600414 systemd[1]: Detected first boot.
Sep 13 00:06:02.600426 systemd[1]: Hostname set to .
Sep 13 00:06:02.600439 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:06:02.600451 zram_generator::config[1035]: No configuration found.
Sep 13 00:06:02.600470 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:06:02.600482 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:06:02.600498 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:06:02.600510 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:06:02.600524 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:06:02.600543 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:06:02.600566 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:06:02.600584 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:06:02.600604 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:06:02.600634 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:06:02.600662 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:06:02.600683 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:06:02.600703 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:06:02.600725 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:06:02.600746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:06:02.600765 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:06:02.600778 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:06:02.600811 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:06:02.601940 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:06:02.601966 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:06:02.601979 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:06:02.601993 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:06:02.602023 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:06:02.602059 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:06:02.602081 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:06:02.602108 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:06:02.602134 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:06:02.602155 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:06:02.602176 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:06:02.602189 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:06:02.602201 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:06:02.602215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:06:02.602227 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:06:02.602244 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:06:02.602256 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:06:02.602269 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:06:02.602282 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:06:02.602295 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:06:02.602306 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:06:02.602319 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:06:02.602331 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:06:02.602348 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:06:02.602361 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:06:02.602375 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:06:02.602388 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:06:02.602400 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:06:02.602411 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:06:02.602423 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:06:02.602435 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:06:02.602447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:06:02.602463 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:06:02.602474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:06:02.602487 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:06:02.602498 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:06:02.602510 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:06:02.602522 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:06:02.602533 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:06:02.602545 kernel: fuse: init (API version 7.39)
Sep 13 00:06:02.602561 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:06:02.602573 kernel: loop: module loaded
Sep 13 00:06:02.602585 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:06:02.602597 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:06:02.602609 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:06:02.602620 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:06:02.602635 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:06:02.602646 systemd[1]: Stopped verity-setup.service.
Sep 13 00:06:02.602658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:06:02.602673 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:06:02.602686 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:06:02.602697 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:06:02.602710 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:06:02.602723 kernel: ACPI: bus type drm_connector registered
Sep 13 00:06:02.602738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:06:02.602750 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:06:02.602761 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:06:02.602776 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:06:02.602789 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:06:02.607954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:06:02.607994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:06:02.608013 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:06:02.608031 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:06:02.608050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:06:02.608071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:06:02.608088 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:06:02.608107 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:06:02.608125 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:06:02.608143 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:06:02.608157 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:06:02.608177 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:06:02.608258 systemd-journald[1108]: Collecting audit messages is disabled.
Sep 13 00:06:02.608295 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:06:02.608309 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:06:02.608321 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:06:02.608335 systemd-journald[1108]: Journal started
Sep 13 00:06:02.608367 systemd-journald[1108]: Runtime Journal (/run/log/journal/f4ce823e75fa4dadbb0c55b101f72923) is 4.9M, max 39.3M, 34.4M free.
Sep 13 00:06:02.000675 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:06:02.031324 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:06:02.034649 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:06:02.624935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:06:02.625070 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:06:02.629180 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:06:02.631742 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:06:02.634735 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:06:02.637603 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:06:02.647476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:06:02.689960 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:02.694387 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:06:02.703071 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:06:02.704155 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:06:02.704203 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:06:02.706431 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:06:02.715990 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:06:02.733294 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:06:02.735357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:06:02.737346 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
Sep 13 00:06:02.737377 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
Sep 13 00:06:02.748013 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:06:02.754233 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:06:02.755374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:06:02.764251 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:06:02.775109 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:06:02.790313 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:06:02.800144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:06:02.802281 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:06:02.819257 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:06:02.841841 kernel: loop0: detected capacity change from 0 to 8
Sep 13 00:06:02.852630 systemd-journald[1108]: Time spent on flushing to /var/log/journal/f4ce823e75fa4dadbb0c55b101f72923 is 99.565ms for 1001 entries.
Sep 13 00:06:02.852630 systemd-journald[1108]: System Journal (/var/log/journal/f4ce823e75fa4dadbb0c55b101f72923) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:06:02.993122 systemd-journald[1108]: Received client request to flush runtime journal.
Sep 13 00:06:02.993963 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:06:02.994425 kernel: loop1: detected capacity change from 0 to 140768
Sep 13 00:06:02.994470 kernel: loop2: detected capacity change from 0 to 142488
Sep 13 00:06:02.864420 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:06:02.869441 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:06:02.883323 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:06:02.932149 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:06:02.982903 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:06:02.987636 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:06:03.009918 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:06:03.018525 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:06:03.033318 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:06:03.066348 kernel: loop3: detected capacity change from 0 to 221472
Sep 13 00:06:03.140717 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Sep 13 00:06:03.140753 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Sep 13 00:06:03.172862 kernel: loop4: detected capacity change from 0 to 8
Sep 13 00:06:03.168186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:06:03.182172 kernel: loop5: detected capacity change from 0 to 140768
Sep 13 00:06:03.215490 kernel: loop6: detected capacity change from 0 to 142488
Sep 13 00:06:03.244252 kernel: loop7: detected capacity change from 0 to 221472
Sep 13 00:06:03.263143 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 13 00:06:03.264198 (sd-merge)[1182]: Merged extensions into '/usr'.
Sep 13 00:06:03.274696 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:06:03.274853 systemd[1]: Reloading...
Sep 13 00:06:03.579651 zram_generator::config[1209]: No configuration found.
Sep 13 00:06:03.655845 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:06:03.780371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:06:03.845383 systemd[1]: Reloading finished in 566 ms.
Sep 13 00:06:03.882768 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:06:03.888106 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:06:03.900489 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:06:03.912130 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:06:03.930818 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:06:03.930842 systemd[1]: Reloading...
Sep 13 00:06:03.995960 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:06:03.998789 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:06:04.000215 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:06:04.000578 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Sep 13 00:06:04.000645 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Sep 13 00:06:04.010041 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:06:04.010861 systemd-tmpfiles[1253]: Skipping /boot
Sep 13 00:06:04.049523 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:06:04.051161 systemd-tmpfiles[1253]: Skipping /boot
Sep 13 00:06:04.119892 zram_generator::config[1277]: No configuration found.
Sep 13 00:06:04.348693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:06:04.400966 systemd[1]: Reloading finished in 469 ms.
Sep 13 00:06:04.423355 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:06:04.425181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:06:04.447173 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:06:04.461419 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:06:04.468292 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:06:04.482040 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:06:04.486927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:06:04.502253 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:06:04.513255 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:06:04.513556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:06:04.517523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:06:04.522071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:06:04.530668 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:06:04.531937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:06:04.532150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:06:04.537659 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:06:04.537964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:06:04.538210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:06:04.548243 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:06:04.549139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:06:04.554440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:06:04.554858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:06:04.560364 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:06:04.562553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:06:04.562907 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:06:04.565647 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:06:04.577274 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:06:04.591873 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:06:04.610310 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:06:04.612614 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:06:04.644903 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:06:04.648599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:06:04.651173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:06:04.653757 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:06:04.654712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:06:04.660464 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:06:04.661918 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:06:04.663433 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:06:04.665102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:06:04.676248 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:06:04.678042 augenrules[1356]: No rules
Sep 13 00:06:04.676424 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:06:04.676480 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:06:04.683522 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:06:04.688108 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:06:04.690548 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:06:04.694561 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Sep 13 00:06:04.742043 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:06:04.755172 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:06:04.816286 systemd-resolved[1330]: Positive Trust Anchors:
Sep 13 00:06:04.816312 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:06:04.816363 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:06:04.830770 systemd-resolved[1330]: Using system hostname 'ci-4081.3.5-n-5a30d8cd2b'.
Sep 13 00:06:04.833279 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:06:04.843217 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:06:04.879718 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:06:04.881139 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:06:04.929653 systemd-networkd[1377]: lo: Link UP
Sep 13 00:06:04.931331 systemd-networkd[1377]: lo: Gained carrier
Sep 13 00:06:04.934745 systemd-networkd[1377]: Enumeration completed
Sep 13 00:06:04.935672 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:06:04.937049 systemd[1]: Reached target network.target - Network.
Sep 13 00:06:04.951146 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:06:04.993659 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:06:05.023808 systemd-networkd[1377]: eth1: Configuring with /run/systemd/network/10-7a:4c:81:2a:6f:65.network.
Sep 13 00:06:05.024858 systemd-networkd[1377]: eth1: Link UP
Sep 13 00:06:05.024870 systemd-networkd[1377]: eth1: Gained carrier
Sep 13 00:06:05.030247 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Sep 13 00:06:05.053881 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1375)
Sep 13 00:06:05.051840 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep 13 00:06:05.052933 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:06:05.053140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:06:05.059177 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:06:05.064136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:06:05.075226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:06:05.076508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:06:05.076561 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:06:05.076580 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:06:05.108868 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:06:05.109147 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:06:05.111042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:06:05.111896 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:06:05.113089 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 13 00:06:05.118658 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep 13 00:06:05.121123 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:06:05.121594 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:06:05.123652 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:06:05.130421 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:06:05.131692 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:06:05.132999 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:06:05.156251 systemd-networkd[1377]: eth0: Configuring with /run/systemd/network/10-72:1b:87:c5:c8:d2.network.
Sep 13 00:06:05.159567 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Sep 13 00:06:05.161062 systemd-networkd[1377]: eth0: Link UP
Sep 13 00:06:05.161073 systemd-networkd[1377]: eth0: Gained carrier
Sep 13 00:06:05.164485 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Sep 13 00:06:05.166229 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Sep 13 00:06:05.189851 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 13 00:06:05.231892 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 13 00:06:05.304268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:05.404272 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:06:05.415244 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:06:05.478950 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:06:05.487875 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:06:05.488016 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 13 00:06:05.488090 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 13 00:06:05.555845 kernel: Console: switching to colour dummy device 80x25
Sep 13 00:06:05.557119 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 13 00:06:05.557198 kernel: [drm] features: -context_init
Sep 13 00:06:05.560421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:05.561743 kernel: [drm] number of scanouts: 1
Sep 13 00:06:05.561851 kernel: [drm] number of cap sets: 0
Sep 13 00:06:05.565937 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep 13 00:06:05.575964 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 13 00:06:05.578974 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:06:05.586366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:05.586532 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:05.589331 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 13 00:06:05.590429 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:05.605447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:05.616174 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:05.616431 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:05.626147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:05.634837 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:06:05.662870 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:06:05.669329 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:06:05.691694 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:06:05.693964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:05.727782 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:06:05.729030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:06:05.731252 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:06:05.732366 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:06:05.732532 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:06:05.733321 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:06:05.733865 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:06:05.734151 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:06:05.734268 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:06:05.734333 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:06:05.734646 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:06:05.736954 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:06:05.741213 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:06:05.759391 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:06:05.774160 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:06:05.775605 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:06:05.778621 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:06:05.779853 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:06:05.780450 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:06:05.781376 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:06:05.781411 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:06:05.789073 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:06:05.796032 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:06:05.805170 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:06:05.809945 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:06:05.813909 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:06:05.816258 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:06:05.825288 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:06:05.840030 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:06:05.855909 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:06:05.865090 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:06:05.870154 coreos-metadata[1442]: Sep 13 00:06:05.868 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:06:05.876282 jq[1444]: false
Sep 13 00:06:05.884046 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:06:05.891097 coreos-metadata[1442]: Sep 13 00:06:05.881 INFO Fetch successful
Sep 13 00:06:05.886006 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found loop4
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found loop5
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found loop6
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found loop7
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found vda
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found vda1
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found vda2
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found vda3
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found usr
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found vda4
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found vda6
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found vda7
Sep 13 00:06:05.901296 extend-filesystems[1447]: Found vda9
Sep 13 00:06:05.901296 extend-filesystems[1447]: Checking size of /dev/vda9
Sep 13 00:06:05.886708 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:06:05.898054 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:06:05.911069 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:06:05.918540 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:06:05.942413 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:06:05.942720 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:06:05.943397 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:06:05.944909 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:06:05.970039 extend-filesystems[1447]: Resized partition /dev/vda9
Sep 13 00:06:05.977504 dbus-daemon[1443]: [system] SELinux support is enabled
Sep 13 00:06:05.972744 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:06:05.974585 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:06:05.982775 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:06:06.000218 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:06:06.032130 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 13 00:06:06.025272 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:06:06.025325 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:06:06.029553 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:06:06.029752 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Sep 13 00:06:06.029912 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:06:06.048928 update_engine[1460]: I20250913 00:06:06.034538 1460 main.cc:92] Flatcar Update Engine starting
Sep 13 00:06:06.066289 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1383)
Sep 13 00:06:06.069925 update_engine[1460]: I20250913 00:06:06.058393 1460 update_check_scheduler.cc:74] Next update check in 9m32s
Sep 13 00:06:06.070083 jq[1463]: true
Sep 13 00:06:06.091288 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:06:06.095474 tar[1468]: linux-amd64/helm
Sep 13 00:06:06.101203 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:06:06.109200 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:06:06.142839 jq[1484]: true
Sep 13 00:06:06.197730 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 13 00:06:06.199454 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 00:06:06.284892 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 13 00:06:06.306227 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:06:06.306227 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 13 00:06:06.306227 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 13 00:06:06.320042 extend-filesystems[1447]: Resized filesystem in /dev/vda9
Sep 13 00:06:06.320042 extend-filesystems[1447]: Found vdb
Sep 13 00:06:06.312763 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:06:06.313021 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:06:06.351585 systemd-networkd[1377]: eth0: Gained IPv6LL
Sep 13 00:06:06.352825 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Sep 13 00:06:06.364457 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 13 00:06:06.365423 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 00:06:06.401987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:06:06.409637 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 13 00:06:06.414244 systemd-networkd[1377]: eth1: Gained IPv6LL
Sep 13 00:06:06.415039 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Sep 13 00:06:06.468775 systemd-logind[1459]: New seat seat0.
Sep 13 00:06:06.471355 bash[1509]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:06:06.476415 systemd-logind[1459]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:06:06.476444 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:06:06.476763 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:06:06.478706 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:06:06.492067 systemd[1]: Starting sshkeys.service...
Sep 13 00:06:06.538505 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 13 00:06:06.556129 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 13 00:06:06.585766 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:06:06.622950 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 13 00:06:06.637610 coreos-metadata[1524]: Sep 13 00:06:06.637 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:06:06.641306 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:06:06.654978 coreos-metadata[1524]: Sep 13 00:06:06.652 INFO Fetch successful
Sep 13 00:06:06.668554 unknown[1524]: wrote ssh authorized keys file for user: core
Sep 13 00:06:06.721396 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:06:06.727097 update-ssh-keys[1539]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:06:06.738875 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:06:06.741422 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 13 00:06:06.759440 systemd[1]: Finished sshkeys.service.
Sep 13 00:06:06.791523 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:06:06.791929 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:06:06.802391 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:06:06.831950 containerd[1481]: time="2025-09-13T00:06:06.831283428Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 13 00:06:06.857660 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:06:06.872428 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:06:06.885140 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 00:06:06.890157 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:06:06.914812 containerd[1481]: time="2025-09-13T00:06:06.914342244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:06.917917 containerd[1481]: time="2025-09-13T00:06:06.917658289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:06.917917 containerd[1481]: time="2025-09-13T00:06:06.917720765Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:06:06.917917 containerd[1481]: time="2025-09-13T00:06:06.917752383Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:06:06.918632 containerd[1481]: time="2025-09-13T00:06:06.918382356Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 13 00:06:06.918632 containerd[1481]: time="2025-09-13T00:06:06.918423235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:06.918632 containerd[1481]: time="2025-09-13T00:06:06.918509595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:06.918632 containerd[1481]: time="2025-09-13T00:06:06.918536582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:06.919658 containerd[1481]: time="2025-09-13T00:06:06.919060627Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:06.919658 containerd[1481]: time="2025-09-13T00:06:06.919092914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:06.919658 containerd[1481]: time="2025-09-13T00:06:06.919117886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:06.919658 containerd[1481]: time="2025-09-13T00:06:06.919138908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:06.919658 containerd[1481]: time="2025-09-13T00:06:06.919295129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:06.919658 containerd[1481]: time="2025-09-13T00:06:06.919607143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:06.920232 containerd[1481]: time="2025-09-13T00:06:06.920196900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:06.920341 containerd[1481]: time="2025-09-13T00:06:06.920322441Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:06:06.920596 containerd[1481]: time="2025-09-13T00:06:06.920570729Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:06:06.920913 containerd[1481]: time="2025-09-13T00:06:06.920730407Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:06:06.934086 containerd[1481]: time="2025-09-13T00:06:06.933293760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:06:06.934086 containerd[1481]: time="2025-09-13T00:06:06.933537592Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:06:06.934086 containerd[1481]: time="2025-09-13T00:06:06.933572859Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 13 00:06:06.934086 containerd[1481]: time="2025-09-13T00:06:06.933597771Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 13 00:06:06.934086 containerd[1481]: time="2025-09-13T00:06:06.933668809Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:06:06.934086 containerd[1481]: time="2025-09-13T00:06:06.933984623Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.935729943Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.935979040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936004994Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936025646Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936047087Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936066938Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936089009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936118176Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936138659Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936154100Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936167936Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936182124Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936205635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.936614 containerd[1481]: time="2025-09-13T00:06:06.936219661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936232677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936277515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936295803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936310424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936323098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936338243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936363490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936385744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936403731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936422104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936435983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936451510Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936477468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936517045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:06:06.937140 containerd[1481]: time="2025-09-13T00:06:06.936532957Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:06:06.940843 containerd[1481]: time="2025-09-13T00:06:06.939301931Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:06:06.940843 containerd[1481]: time="2025-09-13T00:06:06.939512971Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:06:06.940843 containerd[1481]: time="2025-09-13T00:06:06.939535150Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:06:06.940843 containerd[1481]: time="2025-09-13T00:06:06.939558703Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:06:06.940843 containerd[1481]: time="2025-09-13T00:06:06.939580310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:06:06.940843 containerd[1481]: time="2025-09-13T00:06:06.939640531Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:06:06.940843 containerd[1481]: time="2025-09-13T00:06:06.939667604Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:06:06.940843 containerd[1481]: time="2025-09-13T00:06:06.939685760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:06:06.941256 containerd[1481]: time="2025-09-13T00:06:06.940221776Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:06:06.941256 containerd[1481]: time="2025-09-13T00:06:06.940341485Z" level=info msg="Connect containerd service" Sep 13 00:06:06.941256 containerd[1481]: time="2025-09-13T00:06:06.940405416Z" level=info msg="using legacy CRI server" Sep 13 00:06:06.941256 containerd[1481]: time="2025-09-13T00:06:06.940418545Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:06:06.941256 containerd[1481]: time="2025-09-13T00:06:06.940587189Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:06:06.945454 containerd[1481]: time="2025-09-13T00:06:06.944435628Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:06:06.945454 containerd[1481]: time="2025-09-13T00:06:06.945013807Z" level=info msg="Start subscribing containerd event" Sep 13 00:06:06.945454 containerd[1481]: time="2025-09-13T00:06:06.945106943Z" level=info msg="Start recovering state" Sep 13 00:06:06.945454 containerd[1481]: time="2025-09-13T00:06:06.945202494Z" level=info msg="Start event monitor" Sep 13 00:06:06.945454 containerd[1481]: time="2025-09-13T00:06:06.945217463Z" level=info msg="Start 
snapshots syncer" Sep 13 00:06:06.945454 containerd[1481]: time="2025-09-13T00:06:06.945229050Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:06:06.945454 containerd[1481]: time="2025-09-13T00:06:06.945239403Z" level=info msg="Start streaming server" Sep 13 00:06:06.949443 containerd[1481]: time="2025-09-13T00:06:06.946579115Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:06:06.949443 containerd[1481]: time="2025-09-13T00:06:06.946692299Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:06:06.949443 containerd[1481]: time="2025-09-13T00:06:06.946780424Z" level=info msg="containerd successfully booted in 0.120042s" Sep 13 00:06:06.946972 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:06:07.333406 tar[1468]: linux-amd64/LICENSE Sep 13 00:06:07.334339 tar[1468]: linux-amd64/README.md Sep 13 00:06:07.350262 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:06:07.840367 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:06:07.849907 systemd[1]: Started sshd@0-161.35.231.245:22-139.178.68.195:50828.service - OpenSSH per-connection server daemon (139.178.68.195:50828). Sep 13 00:06:07.944317 sshd[1561]: Accepted publickey for core from 139.178.68.195 port 50828 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:06:07.946308 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:07.963233 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:06:07.972264 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:06:07.979755 systemd-logind[1459]: New session 1 of user core. Sep 13 00:06:08.013924 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Sep 13 00:06:08.028518 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 13 00:06:08.044836 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:08.084073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:06:08.093067 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 13 00:06:08.108455 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:06:08.230102 systemd[1565]: Queued start job for default target default.target.
Sep 13 00:06:08.235620 systemd[1565]: Created slice app.slice - User Application Slice.
Sep 13 00:06:08.235675 systemd[1565]: Reached target paths.target - Paths.
Sep 13 00:06:08.235698 systemd[1565]: Reached target timers.target - Timers.
Sep 13 00:06:08.237982 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 13 00:06:08.263105 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 13 00:06:08.264271 systemd[1565]: Reached target sockets.target - Sockets.
Sep 13 00:06:08.264307 systemd[1565]: Reached target basic.target - Basic System.
Sep 13 00:06:08.264393 systemd[1565]: Reached target default.target - Main User Target.
Sep 13 00:06:08.264440 systemd[1565]: Startup finished in 204ms.
Sep 13 00:06:08.265907 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 13 00:06:08.280193 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 13 00:06:08.284496 systemd[1]: Startup finished in 1.457s (kernel) + 7.318s (initrd) + 7.244s (userspace) = 16.020s.
Sep 13 00:06:08.381261 systemd[1]: Started sshd@1-161.35.231.245:22-139.178.68.195:50844.service - OpenSSH per-connection server daemon (139.178.68.195:50844).
Sep 13 00:06:08.470937 sshd[1590]: Accepted publickey for core from 139.178.68.195 port 50844 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:06:08.473023 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:08.482010 systemd-logind[1459]: New session 2 of user core.
Sep 13 00:06:08.491142 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 13 00:06:08.564834 sshd[1590]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:08.575969 systemd[1]: sshd@1-161.35.231.245:22-139.178.68.195:50844.service: Deactivated successfully.
Sep 13 00:06:08.579503 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:06:08.582062 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:06:08.594183 systemd[1]: Started sshd@2-161.35.231.245:22-139.178.68.195:50852.service - OpenSSH per-connection server daemon (139.178.68.195:50852).
Sep 13 00:06:08.599976 systemd-logind[1459]: Removed session 2.
Sep 13 00:06:08.706812 sshd[1597]: Accepted publickey for core from 139.178.68.195 port 50852 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:06:08.709675 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:08.718925 systemd-logind[1459]: New session 3 of user core.
Sep 13 00:06:08.724505 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 13 00:06:08.786981 sshd[1597]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:08.795922 systemd[1]: sshd@2-161.35.231.245:22-139.178.68.195:50852.service: Deactivated successfully.
Sep 13 00:06:08.799398 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:06:08.800708 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:06:08.809160 systemd[1]: Started sshd@3-161.35.231.245:22-139.178.68.195:50860.service - OpenSSH per-connection server daemon (139.178.68.195:50860).
Sep 13 00:06:08.816265 systemd-logind[1459]: Removed session 3.
Sep 13 00:06:08.864955 sshd[1605]: Accepted publickey for core from 139.178.68.195 port 50860 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:06:08.867848 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:08.875655 systemd-logind[1459]: New session 4 of user core.
Sep 13 00:06:08.881123 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 13 00:06:08.954921 sshd[1605]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:08.967292 systemd[1]: sshd@3-161.35.231.245:22-139.178.68.195:50860.service: Deactivated successfully.
Sep 13 00:06:08.972379 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:06:08.973908 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:06:08.986316 systemd[1]: Started sshd@4-161.35.231.245:22-139.178.68.195:50864.service - OpenSSH per-connection server daemon (139.178.68.195:50864).
Sep 13 00:06:08.987952 systemd-logind[1459]: Removed session 4.
Sep 13 00:06:09.000462 kubelet[1572]: E0913 00:06:09.000352 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:06:09.002719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:06:09.002925 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:06:09.003552 systemd[1]: kubelet.service: Consumed 1.665s CPU time.
Sep 13 00:06:09.044535 sshd[1612]: Accepted publickey for core from 139.178.68.195 port 50864 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:06:09.046428 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:09.054135 systemd-logind[1459]: New session 5 of user core.
Sep 13 00:06:09.063112 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 13 00:06:09.133976 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 00:06:09.134299 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:06:09.151420 sudo[1616]: pam_unix(sudo:session): session closed for user root
Sep 13 00:06:09.155744 sshd[1612]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:09.173359 systemd[1]: sshd@4-161.35.231.245:22-139.178.68.195:50864.service: Deactivated successfully.
Sep 13 00:06:09.176847 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:06:09.178013 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:06:09.187450 systemd[1]: Started sshd@5-161.35.231.245:22-139.178.68.195:50866.service - OpenSSH per-connection server daemon (139.178.68.195:50866).
Sep 13 00:06:09.190080 systemd-logind[1459]: Removed session 5.
Sep 13 00:06:09.239086 sshd[1621]: Accepted publickey for core from 139.178.68.195 port 50866 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:06:09.241397 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:09.249302 systemd-logind[1459]: New session 6 of user core.
Sep 13 00:06:09.261282 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 13 00:06:09.328484 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 00:06:09.329007 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:06:09.334580 sudo[1625]: pam_unix(sudo:session): session closed for user root
Sep 13 00:06:09.342695 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 13 00:06:09.343378 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:06:09.364214 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 13 00:06:09.378692 auditctl[1628]: No rules
Sep 13 00:06:09.379374 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:06:09.379707 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 13 00:06:09.387537 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:06:09.434629 augenrules[1646]: No rules
Sep 13 00:06:09.436619 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:06:09.438304 sudo[1624]: pam_unix(sudo:session): session closed for user root
Sep 13 00:06:09.442724 sshd[1621]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:09.454914 systemd[1]: sshd@5-161.35.231.245:22-139.178.68.195:50866.service: Deactivated successfully.
Sep 13 00:06:09.457984 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:06:09.461534 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:06:09.467359 systemd[1]: Started sshd@6-161.35.231.245:22-139.178.68.195:50870.service - OpenSSH per-connection server daemon (139.178.68.195:50870).
Sep 13 00:06:09.469375 systemd-logind[1459]: Removed session 6.
Sep 13 00:06:09.524919 sshd[1654]: Accepted publickey for core from 139.178.68.195 port 50870 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:06:09.526388 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:06:09.534240 systemd-logind[1459]: New session 7 of user core.
Sep 13 00:06:09.546185 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 13 00:06:09.607860 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:06:09.608185 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:06:10.416295 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 13 00:06:10.416420 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 13 00:06:11.042648 dockerd[1673]: time="2025-09-13T00:06:11.042519205Z" level=info msg="Starting up"
Sep 13 00:06:11.443081 dockerd[1673]: time="2025-09-13T00:06:11.442874154Z" level=info msg="Loading containers: start."
Sep 13 00:06:11.612010 kernel: Initializing XFRM netlink socket
Sep 13 00:06:11.654108 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Sep 13 00:06:11.664857 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Sep 13 00:06:11.732210 systemd-networkd[1377]: docker0: Link UP
Sep 13 00:06:11.733456 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Sep 13 00:06:11.769513 dockerd[1673]: time="2025-09-13T00:06:11.769333080Z" level=info msg="Loading containers: done."
Sep 13 00:06:11.796275 dockerd[1673]: time="2025-09-13T00:06:11.796188104Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:06:11.796704 dockerd[1673]: time="2025-09-13T00:06:11.796424736Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 13 00:06:11.796704 dockerd[1673]: time="2025-09-13T00:06:11.796621337Z" level=info msg="Daemon has completed initialization"
Sep 13 00:06:11.862668 dockerd[1673]: time="2025-09-13T00:06:11.862557715Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:06:11.863748 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 13 00:06:12.976708 containerd[1481]: time="2025-09-13T00:06:12.976620270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:06:13.660658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount199176134.mount: Deactivated successfully.
Sep 13 00:06:15.248365 containerd[1481]: time="2025-09-13T00:06:15.248269428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:15.250000 containerd[1481]: time="2025-09-13T00:06:15.249841841Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124"
Sep 13 00:06:15.252747 containerd[1481]: time="2025-09-13T00:06:15.251057463Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:15.256191 containerd[1481]: time="2025-09-13T00:06:15.256085837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:15.257592 containerd[1481]: time="2025-09-13T00:06:15.257108034Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.280419758s"
Sep 13 00:06:15.257592 containerd[1481]: time="2025-09-13T00:06:15.257157988Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 13 00:06:15.257792 containerd[1481]: time="2025-09-13T00:06:15.257718426Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:06:17.046508 containerd[1481]: time="2025-09-13T00:06:17.046416094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:17.048767 containerd[1481]: time="2025-09-13T00:06:17.048684071Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632"
Sep 13 00:06:17.049833 containerd[1481]: time="2025-09-13T00:06:17.049451311Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:17.058834 containerd[1481]: time="2025-09-13T00:06:17.056289814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:17.060507 containerd[1481]: time="2025-09-13T00:06:17.060438633Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.802685626s"
Sep 13 00:06:17.060728 containerd[1481]: time="2025-09-13T00:06:17.060701920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 13 00:06:17.061699 containerd[1481]: time="2025-09-13T00:06:17.061669934Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:06:18.491103 containerd[1481]: time="2025-09-13T00:06:18.490983458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:18.493845 containerd[1481]: time="2025-09-13T00:06:18.493120643Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698"
Sep 13 00:06:18.493845 containerd[1481]: time="2025-09-13T00:06:18.493737256Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:18.500394 containerd[1481]: time="2025-09-13T00:06:18.500328837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:18.502663 containerd[1481]: time="2025-09-13T00:06:18.502592989Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.440740213s"
Sep 13 00:06:18.503054 containerd[1481]: time="2025-09-13T00:06:18.502905948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 13 00:06:18.503614 containerd[1481]: time="2025-09-13T00:06:18.503581324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:06:19.253647 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:06:19.265208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:06:19.509160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:06:19.518787 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:06:19.638866 kubelet[1896]: E0913 00:06:19.638572 1896 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:06:19.646612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:06:19.647248 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:06:20.108706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560235019.mount: Deactivated successfully.
Sep 13 00:06:20.915857 containerd[1481]: time="2025-09-13T00:06:20.915638009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:20.917371 containerd[1481]: time="2025-09-13T00:06:20.916973558Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252"
Sep 13 00:06:20.919855 containerd[1481]: time="2025-09-13T00:06:20.918495020Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:20.922583 containerd[1481]: time="2025-09-13T00:06:20.922505934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:20.923228 containerd[1481]: time="2025-09-13T00:06:20.923110104Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.41938172s"
Sep 13 00:06:20.923228 containerd[1481]: time="2025-09-13T00:06:20.923187529Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 13 00:06:20.924235 containerd[1481]: time="2025-09-13T00:06:20.924203955Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:06:20.926010 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Sep 13 00:06:21.449610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3833177757.mount: Deactivated successfully.
Sep 13 00:06:22.890642 containerd[1481]: time="2025-09-13T00:06:22.889175631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:22.890642 containerd[1481]: time="2025-09-13T00:06:22.890300879Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 13 00:06:22.890642 containerd[1481]: time="2025-09-13T00:06:22.890541879Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:22.895290 containerd[1481]: time="2025-09-13T00:06:22.895232313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:22.896477 containerd[1481]: time="2025-09-13T00:06:22.896431782Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.972190572s"
Sep 13 00:06:22.896555 containerd[1481]: time="2025-09-13T00:06:22.896484893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:06:22.897270 containerd[1481]: time="2025-09-13T00:06:22.897213255Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:06:23.346192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2868777532.mount: Deactivated successfully.
Sep 13 00:06:23.357685 containerd[1481]: time="2025-09-13T00:06:23.357550917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:23.358780 containerd[1481]: time="2025-09-13T00:06:23.358595364Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 13 00:06:23.360199 containerd[1481]: time="2025-09-13T00:06:23.359611079Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:23.362880 containerd[1481]: time="2025-09-13T00:06:23.362829738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:23.364288 containerd[1481]: time="2025-09-13T00:06:23.364221324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 466.968051ms"
Sep 13 00:06:23.364570 containerd[1481]: time="2025-09-13T00:06:23.364538021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:06:23.365479 containerd[1481]: time="2025-09-13T00:06:23.365442970Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:06:24.015066 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Sep 13 00:06:24.018161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1511090480.mount: Deactivated successfully.
Sep 13 00:06:26.258847 containerd[1481]: time="2025-09-13T00:06:26.257101601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:26.259946 containerd[1481]: time="2025-09-13T00:06:26.259861297Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 13 00:06:26.260463 containerd[1481]: time="2025-09-13T00:06:26.260418327Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:26.266533 containerd[1481]: time="2025-09-13T00:06:26.266441292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:06:26.269868 containerd[1481]: time="2025-09-13T00:06:26.268097366Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id
\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.902465486s" Sep 13 00:06:26.269868 containerd[1481]: time="2025-09-13T00:06:26.268181785Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:06:29.045264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:29.052269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:29.098855 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit session-7.scope)... Sep 13 00:06:29.098877 systemd[1]: Reloading... Sep 13 00:06:29.259830 zram_generator::config[2088]: No configuration found. Sep 13 00:06:29.425814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:29.511596 systemd[1]: Reloading finished in 412 ms. Sep 13 00:06:29.577335 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:06:29.577633 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:06:29.578130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:29.593689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:29.767872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:06:29.782477 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:06:29.848525 kubelet[2139]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:06:29.848525 kubelet[2139]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:06:29.848525 kubelet[2139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:06:29.848525 kubelet[2139]: I0913 00:06:29.848342 2139 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:06:30.273198 kubelet[2139]: I0913 00:06:30.272931 2139 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:06:30.273648 kubelet[2139]: I0913 00:06:30.273621 2139 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:06:30.274117 kubelet[2139]: I0913 00:06:30.274086 2139 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:06:30.313586 kubelet[2139]: E0913 00:06:30.313521 2139 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://161.35.231.245:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 161.35.231.245:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:30.319977 kubelet[2139]: 
I0913 00:06:30.319846 2139 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:06:30.333078 kubelet[2139]: E0913 00:06:30.332838 2139 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:06:30.333078 kubelet[2139]: I0913 00:06:30.332895 2139 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:06:30.338399 kubelet[2139]: I0913 00:06:30.338357 2139 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:06:30.338676 kubelet[2139]: I0913 00:06:30.338662 2139 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:06:30.338932 kubelet[2139]: I0913 00:06:30.338897 2139 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:06:30.339295 kubelet[2139]: I0913 00:06:30.338991 2139 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.5-n-5a30d8cd2b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:06:30.340748 kubelet[2139]: I0913 00:06:30.340418 2139 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:06:30.340748 kubelet[2139]: I0913 00:06:30.340452 2139 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:06:30.340748 kubelet[2139]: I0913 00:06:30.340592 2139 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:30.343533 kubelet[2139]: I0913 00:06:30.343248 2139 
kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:06:30.343533 kubelet[2139]: I0913 00:06:30.343301 2139 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:06:30.343533 kubelet[2139]: I0913 00:06:30.343340 2139 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:06:30.343533 kubelet[2139]: I0913 00:06:30.343360 2139 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:06:30.349669 kubelet[2139]: W0913 00:06:30.349526 2139 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://161.35.231.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-5a30d8cd2b&limit=500&resourceVersion=0": dial tcp 161.35.231.245:6443: connect: connection refused Sep 13 00:06:30.349669 kubelet[2139]: E0913 00:06:30.349627 2139 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://161.35.231.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-5a30d8cd2b&limit=500&resourceVersion=0\": dial tcp 161.35.231.245:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:30.350833 kubelet[2139]: I0913 00:06:30.349754 2139 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:06:30.356853 kubelet[2139]: I0913 00:06:30.355464 2139 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:06:30.356853 kubelet[2139]: W0913 00:06:30.355572 2139 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 00:06:30.361490 kubelet[2139]: I0913 00:06:30.361442 2139 server.go:1274] "Started kubelet" Sep 13 00:06:30.363967 kubelet[2139]: W0913 00:06:30.363911 2139 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://161.35.231.245:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 161.35.231.245:6443: connect: connection refused Sep 13 00:06:30.364193 kubelet[2139]: E0913 00:06:30.364165 2139 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://161.35.231.245:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 161.35.231.245:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:30.364434 kubelet[2139]: I0913 00:06:30.364391 2139 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:06:30.365377 kubelet[2139]: I0913 00:06:30.365326 2139 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:06:30.365774 kubelet[2139]: I0913 00:06:30.365748 2139 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:06:30.367875 kubelet[2139]: E0913 00:06:30.365997 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://161.35.231.245:6443/api/v1/namespaces/default/events\": dial tcp 161.35.231.245:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-5a30d8cd2b.1864aed66161b598 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-5a30d8cd2b,UID:ci-4081.3.5-n-5a30d8cd2b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-5a30d8cd2b,},FirstTimestamp:2025-09-13 
00:06:30.361363864 +0000 UTC m=+0.572430191,LastTimestamp:2025-09-13 00:06:30.361363864 +0000 UTC m=+0.572430191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-5a30d8cd2b,}" Sep 13 00:06:30.367875 kubelet[2139]: I0913 00:06:30.367858 2139 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:06:30.373683 kubelet[2139]: I0913 00:06:30.373650 2139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:06:30.377419 kubelet[2139]: I0913 00:06:30.377381 2139 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:06:30.380638 kubelet[2139]: E0913 00:06:30.380589 2139 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-5a30d8cd2b\" not found" Sep 13 00:06:30.380770 kubelet[2139]: I0913 00:06:30.380655 2139 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:06:30.382238 kubelet[2139]: E0913 00:06:30.382170 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.231.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-5a30d8cd2b?timeout=10s\": dial tcp 161.35.231.245:6443: connect: connection refused" interval="200ms" Sep 13 00:06:30.382666 kubelet[2139]: I0913 00:06:30.382627 2139 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:06:30.384719 kubelet[2139]: E0913 00:06:30.383604 2139 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:06:30.384953 kubelet[2139]: I0913 00:06:30.384937 2139 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:06:30.385486 kubelet[2139]: W0913 00:06:30.385423 2139 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://161.35.231.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.231.245:6443: connect: connection refused Sep 13 00:06:30.385548 kubelet[2139]: E0913 00:06:30.385518 2139 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://161.35.231.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 161.35.231.245:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:30.386414 kubelet[2139]: I0913 00:06:30.386334 2139 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:06:30.386414 kubelet[2139]: I0913 00:06:30.386378 2139 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:06:30.386616 kubelet[2139]: I0913 00:06:30.386508 2139 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:06:30.407315 kubelet[2139]: I0913 00:06:30.406926 2139 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:06:30.407315 kubelet[2139]: I0913 00:06:30.406951 2139 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:06:30.407315 kubelet[2139]: I0913 00:06:30.406980 2139 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:30.412018 kubelet[2139]: I0913 00:06:30.411763 2139 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 00:06:30.415451 kubelet[2139]: I0913 00:06:30.415276 2139 policy_none.go:49] "None policy: Start" Sep 13 00:06:30.417843 kubelet[2139]: I0913 00:06:30.416955 2139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:06:30.417843 kubelet[2139]: I0913 00:06:30.416985 2139 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:06:30.417843 kubelet[2139]: I0913 00:06:30.417005 2139 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:06:30.417843 kubelet[2139]: E0913 00:06:30.417054 2139 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:06:30.418567 kubelet[2139]: I0913 00:06:30.416844 2139 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:06:30.418567 kubelet[2139]: I0913 00:06:30.418562 2139 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:06:30.422848 kubelet[2139]: W0913 00:06:30.422736 2139 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://161.35.231.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.231.245:6443: connect: connection refused Sep 13 00:06:30.422848 kubelet[2139]: E0913 00:06:30.422852 2139 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://161.35.231.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 161.35.231.245:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:30.430963 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:06:30.447538 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 13 00:06:30.454999 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:06:30.470743 kubelet[2139]: I0913 00:06:30.468905 2139 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:06:30.470743 kubelet[2139]: I0913 00:06:30.469217 2139 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:06:30.470743 kubelet[2139]: I0913 00:06:30.469234 2139 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:06:30.470743 kubelet[2139]: I0913 00:06:30.469941 2139 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:06:30.473492 kubelet[2139]: E0913 00:06:30.473201 2139 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-5a30d8cd2b\" not found" Sep 13 00:06:30.529588 systemd[1]: Created slice kubepods-burstable-pod07d3dca68aad72dbd8d14080836af5b7.slice - libcontainer container kubepods-burstable-pod07d3dca68aad72dbd8d14080836af5b7.slice. Sep 13 00:06:30.553571 systemd[1]: Created slice kubepods-burstable-pod5c9e51cffd269d2e349751aa5f4bfdac.slice - libcontainer container kubepods-burstable-pod5c9e51cffd269d2e349751aa5f4bfdac.slice. Sep 13 00:06:30.558970 systemd[1]: Created slice kubepods-burstable-podc1bf7d9f7b935872e54d8cc9db7a06e4.slice - libcontainer container kubepods-burstable-podc1bf7d9f7b935872e54d8cc9db7a06e4.slice. 
Sep 13 00:06:30.571676 kubelet[2139]: I0913 00:06:30.571642 2139 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.572484 kubelet[2139]: E0913 00:06:30.572451 2139 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://161.35.231.245:6443/api/v1/nodes\": dial tcp 161.35.231.245:6443: connect: connection refused" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.583734 kubelet[2139]: E0913 00:06:30.583672 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.231.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-5a30d8cd2b?timeout=10s\": dial tcp 161.35.231.245:6443: connect: connection refused" interval="400ms" Sep 13 00:06:30.586033 kubelet[2139]: I0913 00:06:30.585936 2139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1bf7d9f7b935872e54d8cc9db7a06e4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"c1bf7d9f7b935872e54d8cc9db7a06e4\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.586033 kubelet[2139]: I0913 00:06:30.586024 2139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07d3dca68aad72dbd8d14080836af5b7-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"07d3dca68aad72dbd8d14080836af5b7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.586336 kubelet[2139]: I0913 00:06:30.586094 2139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07d3dca68aad72dbd8d14080836af5b7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"07d3dca68aad72dbd8d14080836af5b7\") " 
pod="kube-system/kube-apiserver-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.586336 kubelet[2139]: I0913 00:06:30.586116 2139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c9e51cffd269d2e349751aa5f4bfdac-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"5c9e51cffd269d2e349751aa5f4bfdac\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.586336 kubelet[2139]: I0913 00:06:30.586134 2139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c9e51cffd269d2e349751aa5f4bfdac-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"5c9e51cffd269d2e349751aa5f4bfdac\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.586336 kubelet[2139]: I0913 00:06:30.586149 2139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c9e51cffd269d2e349751aa5f4bfdac-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"5c9e51cffd269d2e349751aa5f4bfdac\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.586336 kubelet[2139]: I0913 00:06:30.586165 2139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07d3dca68aad72dbd8d14080836af5b7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"07d3dca68aad72dbd8d14080836af5b7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.586488 kubelet[2139]: I0913 00:06:30.586180 2139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/5c9e51cffd269d2e349751aa5f4bfdac-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"5c9e51cffd269d2e349751aa5f4bfdac\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.586488 kubelet[2139]: I0913 00:06:30.586217 2139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5c9e51cffd269d2e349751aa5f4bfdac-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"5c9e51cffd269d2e349751aa5f4bfdac\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.774512 kubelet[2139]: I0913 00:06:30.774124 2139 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.774705 kubelet[2139]: E0913 00:06:30.774609 2139 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://161.35.231.245:6443/api/v1/nodes\": dial tcp 161.35.231.245:6443: connect: connection refused" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:30.850134 kubelet[2139]: E0913 00:06:30.849991 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:30.852552 containerd[1481]: time="2025-09-13T00:06:30.852495336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-5a30d8cd2b,Uid:07d3dca68aad72dbd8d14080836af5b7,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:30.855284 systemd-resolved[1330]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Sep 13 00:06:30.857708 kubelet[2139]: E0913 00:06:30.857025 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://161.35.231.245:6443/api/v1/namespaces/default/events\": dial tcp 161.35.231.245:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-5a30d8cd2b.1864aed66161b598 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-5a30d8cd2b,UID:ci-4081.3.5-n-5a30d8cd2b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-5a30d8cd2b,},FirstTimestamp:2025-09-13 00:06:30.361363864 +0000 UTC m=+0.572430191,LastTimestamp:2025-09-13 00:06:30.361363864 +0000 UTC m=+0.572430191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-5a30d8cd2b,}" Sep 13 00:06:30.857708 kubelet[2139]: E0913 00:06:30.857379 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:30.862992 kubelet[2139]: E0913 00:06:30.862952 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:30.865478 containerd[1481]: time="2025-09-13T00:06:30.865145435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b,Uid:5c9e51cffd269d2e349751aa5f4bfdac,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:30.865879 containerd[1481]: time="2025-09-13T00:06:30.865816608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-5a30d8cd2b,Uid:c1bf7d9f7b935872e54d8cc9db7a06e4,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:30.984921 
kubelet[2139]: E0913 00:06:30.984849 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.231.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-5a30d8cd2b?timeout=10s\": dial tcp 161.35.231.245:6443: connect: connection refused" interval="800ms" Sep 13 00:06:31.177349 kubelet[2139]: I0913 00:06:31.176639 2139 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:31.177349 kubelet[2139]: E0913 00:06:31.177184 2139 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://161.35.231.245:6443/api/v1/nodes\": dial tcp 161.35.231.245:6443: connect: connection refused" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:31.399371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104846979.mount: Deactivated successfully. Sep 13 00:06:31.405827 containerd[1481]: time="2025-09-13T00:06:31.405718529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:31.407705 containerd[1481]: time="2025-09-13T00:06:31.407607250Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:06:31.409950 containerd[1481]: time="2025-09-13T00:06:31.409892607Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:31.412430 containerd[1481]: time="2025-09-13T00:06:31.412376432Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:31.413350 containerd[1481]: time="2025-09-13T00:06:31.413271153Z" level=info 
msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:06:31.413456 containerd[1481]: time="2025-09-13T00:06:31.413368105Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:31.414112 containerd[1481]: time="2025-09-13T00:06:31.414042647Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:06:31.415064 containerd[1481]: time="2025-09-13T00:06:31.415016558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:31.416137 containerd[1481]: time="2025-09-13T00:06:31.416102235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 563.491197ms" Sep 13 00:06:31.425842 containerd[1481]: time="2025-09-13T00:06:31.425605917Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.668915ms" Sep 13 00:06:31.430947 containerd[1481]: time="2025-09-13T00:06:31.430446632Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.190009ms" Sep 13 00:06:31.540084 kubelet[2139]: W0913 00:06:31.538934 2139 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://161.35.231.245:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 161.35.231.245:6443: connect: connection refused Sep 13 00:06:31.540612 kubelet[2139]: E0913 00:06:31.540530 2139 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://161.35.231.245:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 161.35.231.245:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:31.609758 containerd[1481]: time="2025-09-13T00:06:31.609377893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:31.609758 containerd[1481]: time="2025-09-13T00:06:31.609439696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:31.609758 containerd[1481]: time="2025-09-13T00:06:31.609464724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:31.609758 containerd[1481]: time="2025-09-13T00:06:31.609582128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:31.609758 containerd[1481]: time="2025-09-13T00:06:31.608529756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:31.609758 containerd[1481]: time="2025-09-13T00:06:31.609394736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:31.609758 containerd[1481]: time="2025-09-13T00:06:31.609413747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:31.609758 containerd[1481]: time="2025-09-13T00:06:31.609677850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:31.623078 containerd[1481]: time="2025-09-13T00:06:31.622076218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:31.623078 containerd[1481]: time="2025-09-13T00:06:31.622149326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:31.623078 containerd[1481]: time="2025-09-13T00:06:31.622162210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:31.623078 containerd[1481]: time="2025-09-13T00:06:31.622366038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:31.638122 systemd[1]: Started cri-containerd-94b6965b06b68b439951d313bdae7aa545ced3d78b8ed37f887c34827baa0f05.scope - libcontainer container 94b6965b06b68b439951d313bdae7aa545ced3d78b8ed37f887c34827baa0f05. Sep 13 00:06:31.653061 systemd[1]: Started cri-containerd-6693188a528525f532f8d3114a02d5ed05f38dc281c5c09ed257bd1a412eac95.scope - libcontainer container 6693188a528525f532f8d3114a02d5ed05f38dc281c5c09ed257bd1a412eac95. 
Sep 13 00:06:31.675628 kubelet[2139]: W0913 00:06:31.673013 2139 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://161.35.231.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-5a30d8cd2b&limit=500&resourceVersion=0": dial tcp 161.35.231.245:6443: connect: connection refused Sep 13 00:06:31.675628 kubelet[2139]: E0913 00:06:31.673536 2139 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://161.35.231.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-5a30d8cd2b&limit=500&resourceVersion=0\": dial tcp 161.35.231.245:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:31.677059 systemd[1]: Started cri-containerd-b39604382c6dbc678031eb0bce06bc18800efcc4d9915bf893d1100b19f35512.scope - libcontainer container b39604382c6dbc678031eb0bce06bc18800efcc4d9915bf893d1100b19f35512. Sep 13 00:06:31.705148 kubelet[2139]: W0913 00:06:31.703775 2139 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://161.35.231.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 161.35.231.245:6443: connect: connection refused Sep 13 00:06:31.705148 kubelet[2139]: E0913 00:06:31.704012 2139 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://161.35.231.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 161.35.231.245:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:31.745329 containerd[1481]: time="2025-09-13T00:06:31.745259224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b,Uid:5c9e51cffd269d2e349751aa5f4bfdac,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"94b6965b06b68b439951d313bdae7aa545ced3d78b8ed37f887c34827baa0f05\"" Sep 13 00:06:31.751648 kubelet[2139]: E0913 00:06:31.751562 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:31.769510 containerd[1481]: time="2025-09-13T00:06:31.769327836Z" level=info msg="CreateContainer within sandbox \"94b6965b06b68b439951d313bdae7aa545ced3d78b8ed37f887c34827baa0f05\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:06:31.785374 kubelet[2139]: E0913 00:06:31.785303 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://161.35.231.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-5a30d8cd2b?timeout=10s\": dial tcp 161.35.231.245:6443: connect: connection refused" interval="1.6s" Sep 13 00:06:31.795455 containerd[1481]: time="2025-09-13T00:06:31.795355271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-5a30d8cd2b,Uid:07d3dca68aad72dbd8d14080836af5b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"6693188a528525f532f8d3114a02d5ed05f38dc281c5c09ed257bd1a412eac95\"" Sep 13 00:06:31.797682 kubelet[2139]: E0913 00:06:31.797322 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:31.802042 containerd[1481]: time="2025-09-13T00:06:31.801722815Z" level=info msg="CreateContainer within sandbox \"6693188a528525f532f8d3114a02d5ed05f38dc281c5c09ed257bd1a412eac95\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:06:31.809008 containerd[1481]: time="2025-09-13T00:06:31.808719992Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-5a30d8cd2b,Uid:c1bf7d9f7b935872e54d8cc9db7a06e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b39604382c6dbc678031eb0bce06bc18800efcc4d9915bf893d1100b19f35512\"" Sep 13 00:06:31.809908 containerd[1481]: time="2025-09-13T00:06:31.809683821Z" level=info msg="CreateContainer within sandbox \"94b6965b06b68b439951d313bdae7aa545ced3d78b8ed37f887c34827baa0f05\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f00d9f16a8915c0a36e62bdea85177294f90d69486627fc45ca036d1a03823d2\"" Sep 13 00:06:31.810421 kubelet[2139]: E0913 00:06:31.810153 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:31.811765 containerd[1481]: time="2025-09-13T00:06:31.811707270Z" level=info msg="StartContainer for \"f00d9f16a8915c0a36e62bdea85177294f90d69486627fc45ca036d1a03823d2\"" Sep 13 00:06:31.812724 containerd[1481]: time="2025-09-13T00:06:31.812647174Z" level=info msg="CreateContainer within sandbox \"b39604382c6dbc678031eb0bce06bc18800efcc4d9915bf893d1100b19f35512\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:06:31.834032 containerd[1481]: time="2025-09-13T00:06:31.833983105Z" level=info msg="CreateContainer within sandbox \"6693188a528525f532f8d3114a02d5ed05f38dc281c5c09ed257bd1a412eac95\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c6a1a3f8ce69b731d201193fe1c14399bd7e05c62fd79fa4094674cf5f3cf61c\"" Sep 13 00:06:31.835836 containerd[1481]: time="2025-09-13T00:06:31.834708151Z" level=info msg="StartContainer for \"c6a1a3f8ce69b731d201193fe1c14399bd7e05c62fd79fa4094674cf5f3cf61c\"" Sep 13 00:06:31.840092 containerd[1481]: time="2025-09-13T00:06:31.840026476Z" level=info msg="CreateContainer within sandbox \"b39604382c6dbc678031eb0bce06bc18800efcc4d9915bf893d1100b19f35512\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"deeecdef8c0982acad6fc2489433f79a5be3d48764648f3843d7a7be94a5312f\"" Sep 13 00:06:31.841787 containerd[1481]: time="2025-09-13T00:06:31.841747232Z" level=info msg="StartContainer for \"deeecdef8c0982acad6fc2489433f79a5be3d48764648f3843d7a7be94a5312f\"" Sep 13 00:06:31.859137 systemd[1]: Started cri-containerd-f00d9f16a8915c0a36e62bdea85177294f90d69486627fc45ca036d1a03823d2.scope - libcontainer container f00d9f16a8915c0a36e62bdea85177294f90d69486627fc45ca036d1a03823d2. Sep 13 00:06:31.903229 systemd[1]: Started cri-containerd-deeecdef8c0982acad6fc2489433f79a5be3d48764648f3843d7a7be94a5312f.scope - libcontainer container deeecdef8c0982acad6fc2489433f79a5be3d48764648f3843d7a7be94a5312f. Sep 13 00:06:31.913613 systemd[1]: Started cri-containerd-c6a1a3f8ce69b731d201193fe1c14399bd7e05c62fd79fa4094674cf5f3cf61c.scope - libcontainer container c6a1a3f8ce69b731d201193fe1c14399bd7e05c62fd79fa4094674cf5f3cf61c. Sep 13 00:06:31.980999 kubelet[2139]: I0913 00:06:31.980143 2139 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:31.980999 kubelet[2139]: E0913 00:06:31.980789 2139 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://161.35.231.245:6443/api/v1/nodes\": dial tcp 161.35.231.245:6443: connect: connection refused" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:31.994902 containerd[1481]: time="2025-09-13T00:06:31.994249934Z" level=info msg="StartContainer for \"f00d9f16a8915c0a36e62bdea85177294f90d69486627fc45ca036d1a03823d2\" returns successfully" Sep 13 00:06:32.006481 kubelet[2139]: W0913 00:06:32.005101 2139 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://161.35.231.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 161.35.231.245:6443: connect: connection refused Sep 13 00:06:32.006481 kubelet[2139]: E0913 
00:06:32.006172 2139 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://161.35.231.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 161.35.231.245:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:06:32.018085 containerd[1481]: time="2025-09-13T00:06:32.017378017Z" level=info msg="StartContainer for \"deeecdef8c0982acad6fc2489433f79a5be3d48764648f3843d7a7be94a5312f\" returns successfully" Sep 13 00:06:32.033860 containerd[1481]: time="2025-09-13T00:06:32.033742766Z" level=info msg="StartContainer for \"c6a1a3f8ce69b731d201193fe1c14399bd7e05c62fd79fa4094674cf5f3cf61c\" returns successfully" Sep 13 00:06:32.437217 kubelet[2139]: E0913 00:06:32.437026 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:32.442859 kubelet[2139]: E0913 00:06:32.440872 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:32.448895 kubelet[2139]: E0913 00:06:32.448850 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:33.452672 kubelet[2139]: E0913 00:06:33.452470 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:33.583761 kubelet[2139]: I0913 00:06:33.583348 2139 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:34.560148 kubelet[2139]: E0913 00:06:34.560076 2139 nodelease.go:49] 
"Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-n-5a30d8cd2b\" not found" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:34.656112 kubelet[2139]: I0913 00:06:34.654893 2139 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:35.366333 kubelet[2139]: I0913 00:06:35.365970 2139 apiserver.go:52] "Watching apiserver" Sep 13 00:06:35.383632 kubelet[2139]: I0913 00:06:35.383577 2139 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:06:35.810911 kubelet[2139]: W0913 00:06:35.809234 2139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:06:35.810911 kubelet[2139]: E0913 00:06:35.809586 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:36.458224 kubelet[2139]: E0913 00:06:36.458113 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:36.738734 systemd[1]: Reloading requested from client PID 2410 ('systemctl') (unit session-7.scope)... Sep 13 00:06:36.738753 systemd[1]: Reloading... Sep 13 00:06:36.879976 zram_generator::config[2449]: No configuration found. Sep 13 00:06:37.091372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:37.186218 systemd[1]: Reloading finished in 446 ms. Sep 13 00:06:37.234912 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 13 00:06:37.250486 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:06:37.250819 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:37.250885 systemd[1]: kubelet.service: Consumed 1.047s CPU time, 128.0M memory peak, 0B memory swap peak. Sep 13 00:06:37.259411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:37.466094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:37.468001 (kubelet)[2500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:06:37.557419 kubelet[2500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:06:37.557419 kubelet[2500]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:06:37.557419 kubelet[2500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:06:37.557419 kubelet[2500]: I0913 00:06:37.556946 2500 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:06:37.571341 kubelet[2500]: I0913 00:06:37.571226 2500 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:06:37.571341 kubelet[2500]: I0913 00:06:37.571334 2500 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:06:37.572044 kubelet[2500]: I0913 00:06:37.572009 2500 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:06:37.576703 kubelet[2500]: I0913 00:06:37.576133 2500 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:06:37.582349 kubelet[2500]: I0913 00:06:37.582287 2500 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:06:37.588289 kubelet[2500]: E0913 00:06:37.588194 2500 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:06:37.588289 kubelet[2500]: I0913 00:06:37.588227 2500 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:06:37.592919 kubelet[2500]: I0913 00:06:37.592857 2500 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:06:37.593208 kubelet[2500]: I0913 00:06:37.593056 2500 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:06:37.593269 kubelet[2500]: I0913 00:06:37.593218 2500 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:06:37.593635 kubelet[2500]: I0913 00:06:37.593271 2500 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-5a30d8cd2b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:06:37.593635 kubelet[2500]: I0913 00:06:37.593620 2500 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:06:37.593635 kubelet[2500]: I0913 00:06:37.593639 2500 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:06:37.593967 kubelet[2500]: I0913 00:06:37.593684 2500 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:37.593967 kubelet[2500]: I0913 00:06:37.593893 2500 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:06:37.593967 kubelet[2500]: I0913 00:06:37.593916 2500 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:06:37.593967 kubelet[2500]: I0913 00:06:37.593960 2500 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:06:37.596349 kubelet[2500]: I0913 00:06:37.593976 2500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:06:37.596997 kubelet[2500]: I0913 00:06:37.596968 2500 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:06:37.597778 kubelet[2500]: I0913 00:06:37.597752 2500 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:06:37.598573 kubelet[2500]: I0913 00:06:37.598551 2500 server.go:1274] "Started kubelet" Sep 13 00:06:37.610400 kubelet[2500]: I0913 00:06:37.610361 2500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:06:37.619352 kubelet[2500]: I0913 00:06:37.619255 2500 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:06:37.631514 kubelet[2500]: I0913 00:06:37.631458 2500 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:06:37.635470 kubelet[2500]: I0913 00:06:37.635383 2500 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:06:37.635694 kubelet[2500]: I0913 00:06:37.635668 2500 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:06:37.638090 kubelet[2500]: I0913 00:06:37.638007 2500 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:06:37.643286 kubelet[2500]: I0913 00:06:37.643231 2500 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:06:37.645717 kubelet[2500]: E0913 00:06:37.643584 2500 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-5a30d8cd2b\" not found" Sep 13 00:06:37.653708 kubelet[2500]: I0913 00:06:37.653672 2500 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:06:37.657371 kubelet[2500]: I0913 00:06:37.657335 2500 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:06:37.663638 kubelet[2500]: I0913 00:06:37.663602 2500 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:06:37.664218 kubelet[2500]: I0913 00:06:37.664054 2500 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:06:37.670219 kubelet[2500]: E0913 00:06:37.670169 2500 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:06:37.674366 kubelet[2500]: I0913 00:06:37.674179 2500 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:06:37.677224 kubelet[2500]: I0913 00:06:37.677054 2500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:06:37.679626 kubelet[2500]: I0913 00:06:37.679247 2500 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:06:37.679626 kubelet[2500]: I0913 00:06:37.679280 2500 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:06:37.679626 kubelet[2500]: I0913 00:06:37.679300 2500 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:06:37.679626 kubelet[2500]: E0913 00:06:37.679348 2500 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:06:37.750750 kubelet[2500]: I0913 00:06:37.750607 2500 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:06:37.751921 kubelet[2500]: I0913 00:06:37.751893 2500 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:06:37.752887 kubelet[2500]: I0913 00:06:37.752030 2500 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:37.752887 kubelet[2500]: I0913 00:06:37.752284 2500 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:06:37.752887 kubelet[2500]: I0913 00:06:37.752299 2500 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:06:37.752887 kubelet[2500]: I0913 00:06:37.752331 2500 policy_none.go:49] "None policy: Start" Sep 13 00:06:37.755990 kubelet[2500]: I0913 00:06:37.755915 2500 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:06:37.756150 kubelet[2500]: I0913 00:06:37.756139 2500 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:06:37.756524 kubelet[2500]: I0913 00:06:37.756503 2500 state_mem.go:75] "Updated machine memory state" Sep 13 00:06:37.764971 sudo[2532]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:06:37.765503 sudo[2532]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 00:06:37.770116 kubelet[2500]: I0913 00:06:37.768874 2500 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not 
found" Sep 13 00:06:37.770116 kubelet[2500]: I0913 00:06:37.769145 2500 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:06:37.770116 kubelet[2500]: I0913 00:06:37.769164 2500 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:06:37.775475 kubelet[2500]: I0913 00:06:37.775010 2500 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:06:37.803282 kubelet[2500]: W0913 00:06:37.803209 2500 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:06:37.803894 kubelet[2500]: W0913 00:06:37.803865 2500 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:06:37.808412 kubelet[2500]: W0913 00:06:37.808360 2500 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:06:37.808593 kubelet[2500]: E0913 00:06:37.808485 2500 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.861830 kubelet[2500]: I0913 00:06:37.860436 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c9e51cffd269d2e349751aa5f4bfdac-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"5c9e51cffd269d2e349751aa5f4bfdac\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.861830 kubelet[2500]: I0913 00:06:37.860500 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5c9e51cffd269d2e349751aa5f4bfdac-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"5c9e51cffd269d2e349751aa5f4bfdac\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.861830 kubelet[2500]: I0913 00:06:37.860543 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c9e51cffd269d2e349751aa5f4bfdac-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"5c9e51cffd269d2e349751aa5f4bfdac\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.861830 kubelet[2500]: I0913 00:06:37.860574 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1bf7d9f7b935872e54d8cc9db7a06e4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"c1bf7d9f7b935872e54d8cc9db7a06e4\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.861830 kubelet[2500]: I0913 00:06:37.860605 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07d3dca68aad72dbd8d14080836af5b7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"07d3dca68aad72dbd8d14080836af5b7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.862210 kubelet[2500]: I0913 00:06:37.860633 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c9e51cffd269d2e349751aa5f4bfdac-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"5c9e51cffd269d2e349751aa5f4bfdac\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.862210 kubelet[2500]: I0913 00:06:37.860664 2500 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c9e51cffd269d2e349751aa5f4bfdac-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"5c9e51cffd269d2e349751aa5f4bfdac\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.862210 kubelet[2500]: I0913 00:06:37.860693 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07d3dca68aad72dbd8d14080836af5b7-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"07d3dca68aad72dbd8d14080836af5b7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.862210 kubelet[2500]: I0913 00:06:37.860723 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07d3dca68aad72dbd8d14080836af5b7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-5a30d8cd2b\" (UID: \"07d3dca68aad72dbd8d14080836af5b7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.899133 kubelet[2500]: I0913 00:06:37.899049 2500 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.910863 kubelet[2500]: I0913 00:06:37.910634 2500 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:37.910863 kubelet[2500]: I0913 00:06:37.910718 2500 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:38.105065 kubelet[2500]: E0913 00:06:38.103888 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:38.105325 
kubelet[2500]: E0913 00:06:38.105274 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:38.110051 kubelet[2500]: E0913 00:06:38.110002 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:38.575183 sudo[2532]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:38.595408 kubelet[2500]: I0913 00:06:38.595355 2500 apiserver.go:52] "Watching apiserver" Sep 13 00:06:38.654323 kubelet[2500]: I0913 00:06:38.654238 2500 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:06:38.710529 kubelet[2500]: E0913 00:06:38.710295 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:38.711709 kubelet[2500]: E0913 00:06:38.711568 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:38.739766 kubelet[2500]: W0913 00:06:38.739023 2500 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:06:38.739766 kubelet[2500]: E0913 00:06:38.739155 2500 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.5-n-5a30d8cd2b\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-5a30d8cd2b" Sep 13 00:06:38.739766 kubelet[2500]: E0913 00:06:38.739389 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:38.811201 kubelet[2500]: I0913 00:06:38.810814 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-5a30d8cd2b" podStartSLOduration=3.810768555 podStartE2EDuration="3.810768555s" podCreationTimestamp="2025-09-13 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:38.806946896 +0000 UTC m=+1.331614178" watchObservedRunningTime="2025-09-13 00:06:38.810768555 +0000 UTC m=+1.335435842" Sep 13 00:06:38.813147 kubelet[2500]: I0913 00:06:38.812946 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-5a30d8cd2b" podStartSLOduration=1.81292247 podStartE2EDuration="1.81292247s" podCreationTimestamp="2025-09-13 00:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:38.769408269 +0000 UTC m=+1.294075542" watchObservedRunningTime="2025-09-13 00:06:38.81292247 +0000 UTC m=+1.337589741" Sep 13 00:06:38.874018 kubelet[2500]: I0913 00:06:38.873836 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-5a30d8cd2b" podStartSLOduration=1.8737899919999998 podStartE2EDuration="1.873789992s" podCreationTimestamp="2025-09-13 00:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:38.839724047 +0000 UTC m=+1.364391324" watchObservedRunningTime="2025-09-13 00:06:38.873789992 +0000 UTC m=+1.398457269" Sep 13 00:06:39.714190 kubelet[2500]: E0913 00:06:39.714099 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:39.714624 kubelet[2500]: E0913 00:06:39.714557 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:39.717259 kubelet[2500]: E0913 00:06:39.717218 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:40.443229 sudo[1657]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:40.448515 sshd[1654]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:40.454469 systemd[1]: sshd@6-161.35.231.245:22-139.178.68.195:50870.service: Deactivated successfully. Sep 13 00:06:40.460177 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:06:40.460964 systemd[1]: session-7.scope: Consumed 5.796s CPU time, 146.3M memory peak, 0B memory swap peak. Sep 13 00:06:40.462255 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:06:40.464336 systemd-logind[1459]: Removed session 7. Sep 13 00:06:40.716876 kubelet[2500]: E0913 00:06:40.716697 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:41.982928 systemd-timesyncd[1351]: Contacted time server 64.142.54.13:123 (2.flatcar.pool.ntp.org). Sep 13 00:06:41.983069 systemd-timesyncd[1351]: Initial clock synchronization to Sat 2025-09-13 00:06:42.016420 UTC. 
Sep 13 00:06:43.365466 kubelet[2500]: I0913 00:06:43.364890 2500 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:06:43.366341 containerd[1481]: time="2025-09-13T00:06:43.365739533Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:06:43.370277 kubelet[2500]: I0913 00:06:43.369691 2500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:06:44.405607 systemd[1]: Created slice kubepods-besteffort-podc11c94f5_ed32_4ed3_805d_3245f41c022e.slice - libcontainer container kubepods-besteffort-podc11c94f5_ed32_4ed3_805d_3245f41c022e.slice. Sep 13 00:06:44.468511 systemd[1]: Created slice kubepods-burstable-podc4238dba_cdff_40ec_8482_ace0be595e12.slice - libcontainer container kubepods-burstable-podc4238dba_cdff_40ec_8482_ace0be595e12.slice. Sep 13 00:06:44.516076 systemd[1]: Created slice kubepods-besteffort-pod0c295a41_7622_4638_9fe9_5dd2d8754a2a.slice - libcontainer container kubepods-besteffort-pod0c295a41_7622_4638_9fe9_5dd2d8754a2a.slice. 
Sep 13 00:06:44.518545 kubelet[2500]: I0913 00:06:44.518504 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c11c94f5-ed32-4ed3-805d-3245f41c022e-kube-proxy\") pod \"kube-proxy-pzdd2\" (UID: \"c11c94f5-ed32-4ed3-805d-3245f41c022e\") " pod="kube-system/kube-proxy-pzdd2" Sep 13 00:06:44.519073 kubelet[2500]: I0913 00:06:44.518556 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c11c94f5-ed32-4ed3-805d-3245f41c022e-lib-modules\") pod \"kube-proxy-pzdd2\" (UID: \"c11c94f5-ed32-4ed3-805d-3245f41c022e\") " pod="kube-system/kube-proxy-pzdd2" Sep 13 00:06:44.519073 kubelet[2500]: I0913 00:06:44.518583 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jnf9\" (UniqueName: \"kubernetes.io/projected/c11c94f5-ed32-4ed3-805d-3245f41c022e-kube-api-access-7jnf9\") pod \"kube-proxy-pzdd2\" (UID: \"c11c94f5-ed32-4ed3-805d-3245f41c022e\") " pod="kube-system/kube-proxy-pzdd2" Sep 13 00:06:44.519073 kubelet[2500]: I0913 00:06:44.518646 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c11c94f5-ed32-4ed3-805d-3245f41c022e-xtables-lock\") pod \"kube-proxy-pzdd2\" (UID: \"c11c94f5-ed32-4ed3-805d-3245f41c022e\") " pod="kube-system/kube-proxy-pzdd2" Sep 13 00:06:44.619536 kubelet[2500]: I0913 00:06:44.618877 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-hostproc\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.619536 kubelet[2500]: I0913 00:06:44.619129 2500 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-etc-cni-netd\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.619536 kubelet[2500]: I0913 00:06:44.619158 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-config-path\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.619536 kubelet[2500]: I0913 00:06:44.619220 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-bpf-maps\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.619536 kubelet[2500]: I0913 00:06:44.619282 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-host-proc-sys-net\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.619536 kubelet[2500]: I0913 00:06:44.619318 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-host-proc-sys-kernel\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.619945 kubelet[2500]: I0913 00:06:44.619367 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0c295a41-7622-4638-9fe9-5dd2d8754a2a-cilium-config-path\") pod \"cilium-operator-5d85765b45-nddhb\" (UID: \"0c295a41-7622-4638-9fe9-5dd2d8754a2a\") " pod="kube-system/cilium-operator-5d85765b45-nddhb" Sep 13 00:06:44.619945 kubelet[2500]: I0913 00:06:44.619398 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-cgroup\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.619945 kubelet[2500]: I0913 00:06:44.619465 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4238dba-cdff-40ec-8482-ace0be595e12-hubble-tls\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.619945 kubelet[2500]: I0913 00:06:44.619499 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcgrb\" (UniqueName: \"kubernetes.io/projected/c4238dba-cdff-40ec-8482-ace0be595e12-kube-api-access-pcgrb\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.620732 kubelet[2500]: I0913 00:06:44.620189 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-857jd\" (UniqueName: \"kubernetes.io/projected/0c295a41-7622-4638-9fe9-5dd2d8754a2a-kube-api-access-857jd\") pod \"cilium-operator-5d85765b45-nddhb\" (UID: \"0c295a41-7622-4638-9fe9-5dd2d8754a2a\") " pod="kube-system/cilium-operator-5d85765b45-nddhb" Sep 13 00:06:44.620732 kubelet[2500]: I0913 00:06:44.620254 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/c4238dba-cdff-40ec-8482-ace0be595e12-clustermesh-secrets\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.620732 kubelet[2500]: I0913 00:06:44.620348 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cni-path\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.620732 kubelet[2500]: I0913 00:06:44.620525 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-lib-modules\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.620732 kubelet[2500]: I0913 00:06:44.620581 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-xtables-lock\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.621025 kubelet[2500]: I0913 00:06:44.620633 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-run\") pod \"cilium-knq8s\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " pod="kube-system/cilium-knq8s" Sep 13 00:06:44.718222 kubelet[2500]: E0913 00:06:44.718002 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:44.722224 containerd[1481]: time="2025-09-13T00:06:44.721731308Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzdd2,Uid:c11c94f5-ed32-4ed3-805d-3245f41c022e,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:44.805198 containerd[1481]: time="2025-09-13T00:06:44.804475519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:44.805198 containerd[1481]: time="2025-09-13T00:06:44.804574997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:44.805198 containerd[1481]: time="2025-09-13T00:06:44.804592634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:44.805198 containerd[1481]: time="2025-09-13T00:06:44.804701000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:44.820920 kubelet[2500]: E0913 00:06:44.820860 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:44.823617 containerd[1481]: time="2025-09-13T00:06:44.821524039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nddhb,Uid:0c295a41-7622-4638-9fe9-5dd2d8754a2a,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:44.831327 systemd[1]: Started cri-containerd-ed30ac1bc6060f471b1a8dc9c94ef15d9cc6ae853d5f5c9f3857839b92e741ca.scope - libcontainer container ed30ac1bc6060f471b1a8dc9c94ef15d9cc6ae853d5f5c9f3857839b92e741ca. Sep 13 00:06:44.875162 containerd[1481]: time="2025-09-13T00:06:44.874687397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:44.875660 containerd[1481]: time="2025-09-13T00:06:44.875615527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzdd2,Uid:c11c94f5-ed32-4ed3-805d-3245f41c022e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed30ac1bc6060f471b1a8dc9c94ef15d9cc6ae853d5f5c9f3857839b92e741ca\"" Sep 13 00:06:44.877786 containerd[1481]: time="2025-09-13T00:06:44.877678239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:44.877786 containerd[1481]: time="2025-09-13T00:06:44.877737833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:44.879176 kubelet[2500]: E0913 00:06:44.878422 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:44.879344 containerd[1481]: time="2025-09-13T00:06:44.879076593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:44.885198 containerd[1481]: time="2025-09-13T00:06:44.884788026Z" level=info msg="CreateContainer within sandbox \"ed30ac1bc6060f471b1a8dc9c94ef15d9cc6ae853d5f5c9f3857839b92e741ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:06:44.912129 containerd[1481]: time="2025-09-13T00:06:44.912000949Z" level=info msg="CreateContainer within sandbox \"ed30ac1bc6060f471b1a8dc9c94ef15d9cc6ae853d5f5c9f3857839b92e741ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a64fa5fea9ad7dd07f57b78bf7b9870d6f60f2b0d2286748816b76ea7e463861\"" Sep 13 00:06:44.916770 containerd[1481]: time="2025-09-13T00:06:44.913349033Z" level=info msg="StartContainer for \"a64fa5fea9ad7dd07f57b78bf7b9870d6f60f2b0d2286748816b76ea7e463861\"" Sep 13 00:06:44.914975 systemd[1]: Started cri-containerd-2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044.scope - libcontainer container 2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044. Sep 13 00:06:44.990240 systemd[1]: Started cri-containerd-a64fa5fea9ad7dd07f57b78bf7b9870d6f60f2b0d2286748816b76ea7e463861.scope - libcontainer container a64fa5fea9ad7dd07f57b78bf7b9870d6f60f2b0d2286748816b76ea7e463861. 
Sep 13 00:06:45.006710 containerd[1481]: time="2025-09-13T00:06:45.006255238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nddhb,Uid:0c295a41-7622-4638-9fe9-5dd2d8754a2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\"" Sep 13 00:06:45.008149 kubelet[2500]: E0913 00:06:45.008120 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:45.013243 containerd[1481]: time="2025-09-13T00:06:45.013071515Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:06:45.046840 containerd[1481]: time="2025-09-13T00:06:45.046706100Z" level=info msg="StartContainer for \"a64fa5fea9ad7dd07f57b78bf7b9870d6f60f2b0d2286748816b76ea7e463861\" returns successfully" Sep 13 00:06:45.077584 kubelet[2500]: E0913 00:06:45.077404 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:45.079206 containerd[1481]: time="2025-09-13T00:06:45.079105262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-knq8s,Uid:c4238dba-cdff-40ec-8482-ace0be595e12,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:45.124072 containerd[1481]: time="2025-09-13T00:06:45.123412096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:45.124072 containerd[1481]: time="2025-09-13T00:06:45.123530363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:45.124072 containerd[1481]: time="2025-09-13T00:06:45.123559385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:45.124072 containerd[1481]: time="2025-09-13T00:06:45.123715358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:45.169651 systemd[1]: Started cri-containerd-026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956.scope - libcontainer container 026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956. Sep 13 00:06:45.219838 containerd[1481]: time="2025-09-13T00:06:45.219259782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-knq8s,Uid:c4238dba-cdff-40ec-8482-ace0be595e12,Namespace:kube-system,Attempt:0,} returns sandbox id \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\"" Sep 13 00:06:45.220969 kubelet[2500]: E0913 00:06:45.220921 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:45.766757 kubelet[2500]: E0913 00:06:45.766656 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:45.784840 kubelet[2500]: I0913 00:06:45.784163 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pzdd2" podStartSLOduration=1.784136581 podStartE2EDuration="1.784136581s" podCreationTimestamp="2025-09-13 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:45.781896522 +0000 UTC m=+8.306563795" 
watchObservedRunningTime="2025-09-13 00:06:45.784136581 +0000 UTC m=+8.308803859" Sep 13 00:06:46.683824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268303881.mount: Deactivated successfully. Sep 13 00:06:47.441573 containerd[1481]: time="2025-09-13T00:06:47.440259545Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:47.443395 containerd[1481]: time="2025-09-13T00:06:47.443318700Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 13 00:06:47.444321 containerd[1481]: time="2025-09-13T00:06:47.444280238Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:47.447310 containerd[1481]: time="2025-09-13T00:06:47.447243194Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.434057427s" Sep 13 00:06:47.447526 containerd[1481]: time="2025-09-13T00:06:47.447497469Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:06:47.449236 containerd[1481]: time="2025-09-13T00:06:47.449191818Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:06:47.461467 containerd[1481]: time="2025-09-13T00:06:47.461398441Z" level=info msg="CreateContainer within sandbox \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:06:47.483527 containerd[1481]: time="2025-09-13T00:06:47.483466182Z" level=info msg="CreateContainer within sandbox \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\"" Sep 13 00:06:47.485879 containerd[1481]: time="2025-09-13T00:06:47.485165525Z" level=info msg="StartContainer for \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\"" Sep 13 00:06:47.539299 systemd[1]: Started cri-containerd-1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5.scope - libcontainer container 1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5. 
Sep 13 00:06:47.611638 containerd[1481]: time="2025-09-13T00:06:47.610198556Z" level=info msg="StartContainer for \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\" returns successfully" Sep 13 00:06:47.772736 kubelet[2500]: E0913 00:06:47.772690 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:48.918845 kubelet[2500]: E0913 00:06:48.916829 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:49.003090 kubelet[2500]: E0913 00:06:49.001760 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:49.095941 kubelet[2500]: I0913 00:06:49.095687 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-nddhb" podStartSLOduration=2.657678781 podStartE2EDuration="5.095660682s" podCreationTimestamp="2025-09-13 00:06:44 +0000 UTC" firstStartedPulling="2025-09-13 00:06:45.011031144 +0000 UTC m=+7.535698390" lastFinishedPulling="2025-09-13 00:06:47.449013037 +0000 UTC m=+9.973680291" observedRunningTime="2025-09-13 00:06:47.80920879 +0000 UTC m=+10.333876063" watchObservedRunningTime="2025-09-13 00:06:49.095660682 +0000 UTC m=+11.620327958" Sep 13 00:06:49.267156 kubelet[2500]: E0913 00:06:49.267113 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:49.563041 kubelet[2500]: E0913 00:06:49.549710 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:51.265189 update_engine[1460]: I20250913 00:06:51.264946 1460 update_attempter.cc:509] Updating boot flags... Sep 13 00:06:51.328818 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2927) Sep 13 00:06:53.072479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3348967714.mount: Deactivated successfully. Sep 13 00:06:55.808913 containerd[1481]: time="2025-09-13T00:06:55.808817786Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:55.810405 containerd[1481]: time="2025-09-13T00:06:55.810342734Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 13 00:06:55.814118 containerd[1481]: time="2025-09-13T00:06:55.814077287Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:55.819170 containerd[1481]: time="2025-09-13T00:06:55.819123571Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.36967369s" Sep 13 00:06:55.820525 containerd[1481]: time="2025-09-13T00:06:55.820478174Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:06:55.830105 containerd[1481]: time="2025-09-13T00:06:55.829956733Z" level=info msg="CreateContainer within sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:06:55.921150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063799582.mount: Deactivated successfully. Sep 13 00:06:55.927358 containerd[1481]: time="2025-09-13T00:06:55.927291572Z" level=info msg="CreateContainer within sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\"" Sep 13 00:06:55.928270 containerd[1481]: time="2025-09-13T00:06:55.928211458Z" level=info msg="StartContainer for \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\"" Sep 13 00:06:56.059084 systemd[1]: Started cri-containerd-0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152.scope - libcontainer container 0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152. Sep 13 00:06:56.108428 containerd[1481]: time="2025-09-13T00:06:56.108360345Z" level=info msg="StartContainer for \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\" returns successfully" Sep 13 00:06:56.127641 systemd[1]: cri-containerd-0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152.scope: Deactivated successfully. 
Sep 13 00:06:56.290418 containerd[1481]: time="2025-09-13T00:06:56.263967195Z" level=info msg="shim disconnected" id=0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152 namespace=k8s.io Sep 13 00:06:56.290418 containerd[1481]: time="2025-09-13T00:06:56.290387811Z" level=warning msg="cleaning up after shim disconnected" id=0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152 namespace=k8s.io Sep 13 00:06:56.290418 containerd[1481]: time="2025-09-13T00:06:56.290412398Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:56.915810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152-rootfs.mount: Deactivated successfully. Sep 13 00:06:56.928692 kubelet[2500]: E0913 00:06:56.928638 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:06:56.935398 containerd[1481]: time="2025-09-13T00:06:56.935331023Z" level=info msg="CreateContainer within sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:06:56.965139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59512004.mount: Deactivated successfully. 
Sep 13 00:06:56.970667 containerd[1481]: time="2025-09-13T00:06:56.970523233Z" level=info msg="CreateContainer within sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\""
Sep 13 00:06:56.974655 containerd[1481]: time="2025-09-13T00:06:56.974588923Z" level=info msg="StartContainer for \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\""
Sep 13 00:06:57.015047 systemd[1]: Started cri-containerd-50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb.scope - libcontainer container 50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb.
Sep 13 00:06:57.053146 containerd[1481]: time="2025-09-13T00:06:57.053086986Z" level=info msg="StartContainer for \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\" returns successfully"
Sep 13 00:06:57.073422 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:06:57.074687 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:57.075089 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:06:57.082282 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:06:57.082588 systemd[1]: cri-containerd-50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb.scope: Deactivated successfully.
Sep 13 00:06:57.119288 containerd[1481]: time="2025-09-13T00:06:57.119215064Z" level=info msg="shim disconnected" id=50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb namespace=k8s.io
Sep 13 00:06:57.119505 containerd[1481]: time="2025-09-13T00:06:57.119292075Z" level=warning msg="cleaning up after shim disconnected" id=50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb namespace=k8s.io
Sep 13 00:06:57.119505 containerd[1481]: time="2025-09-13T00:06:57.119307060Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:06:57.140300 containerd[1481]: time="2025-09-13T00:06:57.140234630Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:06:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 13 00:06:57.146924 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:57.918466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb-rootfs.mount: Deactivated successfully.
Sep 13 00:06:57.934730 kubelet[2500]: E0913 00:06:57.932779 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:06:57.963846 containerd[1481]: time="2025-09-13T00:06:57.962357011Z" level=info msg="CreateContainer within sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:06:57.999262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367744025.mount: Deactivated successfully.
Sep 13 00:06:58.007461 containerd[1481]: time="2025-09-13T00:06:58.007291671Z" level=info msg="CreateContainer within sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\""
Sep 13 00:06:58.009404 containerd[1481]: time="2025-09-13T00:06:58.009073526Z" level=info msg="StartContainer for \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\""
Sep 13 00:06:58.054086 systemd[1]: Started cri-containerd-c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c.scope - libcontainer container c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c.
Sep 13 00:06:58.110347 containerd[1481]: time="2025-09-13T00:06:58.110261785Z" level=info msg="StartContainer for \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\" returns successfully"
Sep 13 00:06:58.118146 systemd[1]: cri-containerd-c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c.scope: Deactivated successfully.
Sep 13 00:06:58.160419 containerd[1481]: time="2025-09-13T00:06:58.160277580Z" level=info msg="shim disconnected" id=c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c namespace=k8s.io
Sep 13 00:06:58.160419 containerd[1481]: time="2025-09-13T00:06:58.160378110Z" level=warning msg="cleaning up after shim disconnected" id=c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c namespace=k8s.io
Sep 13 00:06:58.160419 containerd[1481]: time="2025-09-13T00:06:58.160414373Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:06:58.917336 systemd[1]: run-containerd-runc-k8s.io-c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c-runc.ipuKu9.mount: Deactivated successfully.
Sep 13 00:06:58.917523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c-rootfs.mount: Deactivated successfully.
Sep 13 00:06:58.942486 kubelet[2500]: E0913 00:06:58.942428 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:06:58.950461 containerd[1481]: time="2025-09-13T00:06:58.949214702Z" level=info msg="CreateContainer within sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:06:58.980786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121299493.mount: Deactivated successfully.
Sep 13 00:06:58.987664 containerd[1481]: time="2025-09-13T00:06:58.987233391Z" level=info msg="CreateContainer within sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\""
Sep 13 00:06:58.992893 containerd[1481]: time="2025-09-13T00:06:58.991958438Z" level=info msg="StartContainer for \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\""
Sep 13 00:06:59.043196 systemd[1]: Started cri-containerd-1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482.scope - libcontainer container 1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482.
Sep 13 00:06:59.089265 systemd[1]: cri-containerd-1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482.scope: Deactivated successfully.
Sep 13 00:06:59.094313 containerd[1481]: time="2025-09-13T00:06:59.094251346Z" level=info msg="StartContainer for \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\" returns successfully"
Sep 13 00:06:59.126982 containerd[1481]: time="2025-09-13T00:06:59.126607462Z" level=info msg="shim disconnected" id=1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482 namespace=k8s.io
Sep 13 00:06:59.126982 containerd[1481]: time="2025-09-13T00:06:59.126687407Z" level=warning msg="cleaning up after shim disconnected" id=1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482 namespace=k8s.io
Sep 13 00:06:59.126982 containerd[1481]: time="2025-09-13T00:06:59.126702156Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:06:59.918781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482-rootfs.mount: Deactivated successfully.
Sep 13 00:06:59.947222 kubelet[2500]: E0913 00:06:59.946593 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:06:59.949058 containerd[1481]: time="2025-09-13T00:06:59.949021704Z" level=info msg="CreateContainer within sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:06:59.982421 containerd[1481]: time="2025-09-13T00:06:59.982348201Z" level=info msg="CreateContainer within sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\""
Sep 13 00:06:59.985379 containerd[1481]: time="2025-09-13T00:06:59.984396697Z" level=info msg="StartContainer for \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\""
Sep 13 00:07:00.042198 systemd[1]: Started cri-containerd-eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6.scope - libcontainer container eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6.
Sep 13 00:07:00.094155 containerd[1481]: time="2025-09-13T00:07:00.094086563Z" level=info msg="StartContainer for \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\" returns successfully"
Sep 13 00:07:00.286044 kubelet[2500]: I0913 00:07:00.285951 2500 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:07:00.345648 systemd[1]: Created slice kubepods-burstable-pod1feb0270_87f7_470a_9492_3e491c2bfcb9.slice - libcontainer container kubepods-burstable-pod1feb0270_87f7_470a_9492_3e491c2bfcb9.slice.
Sep 13 00:07:00.364102 systemd[1]: Created slice kubepods-burstable-pod90efa26e_0a31_4ed0_bf4a_4b1c9337e597.slice - libcontainer container kubepods-burstable-pod90efa26e_0a31_4ed0_bf4a_4b1c9337e597.slice.
Sep 13 00:07:00.468697 kubelet[2500]: I0913 00:07:00.468559 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrrvc\" (UniqueName: \"kubernetes.io/projected/1feb0270-87f7-470a-9492-3e491c2bfcb9-kube-api-access-nrrvc\") pod \"coredns-7c65d6cfc9-x25cr\" (UID: \"1feb0270-87f7-470a-9492-3e491c2bfcb9\") " pod="kube-system/coredns-7c65d6cfc9-x25cr"
Sep 13 00:07:00.471410 kubelet[2500]: I0913 00:07:00.469468 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90efa26e-0a31-4ed0-bf4a-4b1c9337e597-config-volume\") pod \"coredns-7c65d6cfc9-gg2j6\" (UID: \"90efa26e-0a31-4ed0-bf4a-4b1c9337e597\") " pod="kube-system/coredns-7c65d6cfc9-gg2j6"
Sep 13 00:07:00.471410 kubelet[2500]: I0913 00:07:00.469531 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1feb0270-87f7-470a-9492-3e491c2bfcb9-config-volume\") pod \"coredns-7c65d6cfc9-x25cr\" (UID: \"1feb0270-87f7-470a-9492-3e491c2bfcb9\") " pod="kube-system/coredns-7c65d6cfc9-x25cr"
Sep 13 00:07:00.471410 kubelet[2500]: I0913 00:07:00.469565 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xtl7\" (UniqueName: \"kubernetes.io/projected/90efa26e-0a31-4ed0-bf4a-4b1c9337e597-kube-api-access-2xtl7\") pod \"coredns-7c65d6cfc9-gg2j6\" (UID: \"90efa26e-0a31-4ed0-bf4a-4b1c9337e597\") " pod="kube-system/coredns-7c65d6cfc9-gg2j6"
Sep 13 00:07:00.654741 kubelet[2500]: E0913 00:07:00.654574 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:00.672129 kubelet[2500]: E0913 00:07:00.669863 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:00.681048 containerd[1481]: time="2025-09-13T00:07:00.680969313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x25cr,Uid:1feb0270-87f7-470a-9492-3e491c2bfcb9,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:00.688832 containerd[1481]: time="2025-09-13T00:07:00.688600080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gg2j6,Uid:90efa26e-0a31-4ed0-bf4a-4b1c9337e597,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:00.988167 kubelet[2500]: E0913 00:07:00.988122 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:01.013401 kubelet[2500]: I0913 00:07:01.013304 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-knq8s" podStartSLOduration=6.408594966 podStartE2EDuration="17.013277177s" podCreationTimestamp="2025-09-13 00:06:44 +0000 UTC" firstStartedPulling="2025-09-13 00:06:45.22341134 +0000 UTC m=+7.748078586" lastFinishedPulling="2025-09-13 00:06:55.828093543 +0000 UTC m=+18.352760797" observedRunningTime="2025-09-13 00:07:01.011121535 +0000 UTC m=+23.535788812" watchObservedRunningTime="2025-09-13 00:07:01.013277177 +0000 UTC m=+23.537944461"
Sep 13 00:07:01.998934 kubelet[2500]: E0913 00:07:01.998844 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:02.579991 systemd-networkd[1377]: cilium_host: Link UP
Sep 13 00:07:02.582516 systemd-networkd[1377]: cilium_net: Link UP
Sep 13 00:07:02.584542 systemd-networkd[1377]: cilium_net: Gained carrier
Sep 13 00:07:02.585074 systemd-networkd[1377]: cilium_host: Gained carrier
Sep 13 00:07:02.764477 systemd-networkd[1377]: cilium_vxlan: Link UP
Sep 13 00:07:02.764491 systemd-networkd[1377]: cilium_vxlan: Gained carrier
Sep 13 00:07:02.890089 systemd-networkd[1377]: cilium_net: Gained IPv6LL
Sep 13 00:07:03.001605 kubelet[2500]: E0913 00:07:03.001515 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:03.312103 systemd-networkd[1377]: cilium_host: Gained IPv6LL
Sep 13 00:07:03.378305 kernel: NET: Registered PF_ALG protocol family
Sep 13 00:07:04.336201 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL
Sep 13 00:07:04.511707 systemd-networkd[1377]: lxc_health: Link UP
Sep 13 00:07:04.520636 systemd-networkd[1377]: lxc_health: Gained carrier
Sep 13 00:07:04.814004 systemd-networkd[1377]: lxca8902673489b: Link UP
Sep 13 00:07:04.819860 kernel: eth0: renamed from tmpb3dec
Sep 13 00:07:04.827451 systemd-networkd[1377]: lxca8902673489b: Gained carrier
Sep 13 00:07:04.849068 systemd-networkd[1377]: lxce8da5d3e334b: Link UP
Sep 13 00:07:04.858850 kernel: eth0: renamed from tmpefe4d
Sep 13 00:07:04.871259 systemd-networkd[1377]: lxce8da5d3e334b: Gained carrier
Sep 13 00:07:05.081767 kubelet[2500]: E0913 00:07:05.081597 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:06.034146 kubelet[2500]: E0913 00:07:06.034096 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:06.062042 systemd-networkd[1377]: lxca8902673489b: Gained IPv6LL
Sep 13 00:07:06.254043 systemd-networkd[1377]: lxc_health: Gained IPv6LL
Sep 13 00:07:06.766199 systemd-networkd[1377]: lxce8da5d3e334b: Gained IPv6LL
Sep 13 00:07:07.037525 kubelet[2500]: E0913 00:07:07.036578 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:10.769612 containerd[1481]: time="2025-09-13T00:07:10.766247380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:10.769612 containerd[1481]: time="2025-09-13T00:07:10.766341011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:10.769612 containerd[1481]: time="2025-09-13T00:07:10.766390368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:10.769612 containerd[1481]: time="2025-09-13T00:07:10.769233648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:10.830056 systemd[1]: Started cri-containerd-b3decddf43e78d7dae335831821216e347217ccc6e7647b88a80d16afbf3db22.scope - libcontainer container b3decddf43e78d7dae335831821216e347217ccc6e7647b88a80d16afbf3db22.
Sep 13 00:07:10.832650 containerd[1481]: time="2025-09-13T00:07:10.827656546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:10.832650 containerd[1481]: time="2025-09-13T00:07:10.827723341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:10.832650 containerd[1481]: time="2025-09-13T00:07:10.827740015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:10.832650 containerd[1481]: time="2025-09-13T00:07:10.829577284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:10.894016 systemd[1]: Started cri-containerd-efe4dd5da36088a8e5c445a772d28bfd5b994179f6ffa266a219b1c86eb4827c.scope - libcontainer container efe4dd5da36088a8e5c445a772d28bfd5b994179f6ffa266a219b1c86eb4827c.
Sep 13 00:07:10.964411 containerd[1481]: time="2025-09-13T00:07:10.963322484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gg2j6,Uid:90efa26e-0a31-4ed0-bf4a-4b1c9337e597,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3decddf43e78d7dae335831821216e347217ccc6e7647b88a80d16afbf3db22\""
Sep 13 00:07:10.966951 kubelet[2500]: E0913 00:07:10.966735 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:10.974697 containerd[1481]: time="2025-09-13T00:07:10.974640348Z" level=info msg="CreateContainer within sandbox \"b3decddf43e78d7dae335831821216e347217ccc6e7647b88a80d16afbf3db22\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:07:11.028933 containerd[1481]: time="2025-09-13T00:07:11.026616221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x25cr,Uid:1feb0270-87f7-470a-9492-3e491c2bfcb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"efe4dd5da36088a8e5c445a772d28bfd5b994179f6ffa266a219b1c86eb4827c\""
Sep 13 00:07:11.031568 kubelet[2500]: E0913 00:07:11.030941 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:11.036408 containerd[1481]: time="2025-09-13T00:07:11.036352936Z" level=info msg="CreateContainer within sandbox \"efe4dd5da36088a8e5c445a772d28bfd5b994179f6ffa266a219b1c86eb4827c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:07:11.057829 containerd[1481]: time="2025-09-13T00:07:11.056666250Z" level=info msg="CreateContainer within sandbox \"b3decddf43e78d7dae335831821216e347217ccc6e7647b88a80d16afbf3db22\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38e1d4eb7b352b18c3715c012a43eeca14b3f5ab4eb65cd84aaf16a72cdb0ea5\""
Sep 13 00:07:11.060939 containerd[1481]: time="2025-09-13T00:07:11.060884712Z" level=info msg="StartContainer for \"38e1d4eb7b352b18c3715c012a43eeca14b3f5ab4eb65cd84aaf16a72cdb0ea5\""
Sep 13 00:07:11.081311 containerd[1481]: time="2025-09-13T00:07:11.081251474Z" level=info msg="CreateContainer within sandbox \"efe4dd5da36088a8e5c445a772d28bfd5b994179f6ffa266a219b1c86eb4827c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7ffa252ce637b7d8df3dc689dd6f556ea3edd34ad2c63f6a767fe43e9208877\""
Sep 13 00:07:11.084460 containerd[1481]: time="2025-09-13T00:07:11.084118793Z" level=info msg="StartContainer for \"f7ffa252ce637b7d8df3dc689dd6f556ea3edd34ad2c63f6a767fe43e9208877\""
Sep 13 00:07:11.129147 systemd[1]: Started cri-containerd-38e1d4eb7b352b18c3715c012a43eeca14b3f5ab4eb65cd84aaf16a72cdb0ea5.scope - libcontainer container 38e1d4eb7b352b18c3715c012a43eeca14b3f5ab4eb65cd84aaf16a72cdb0ea5.
Sep 13 00:07:11.138133 systemd[1]: Started cri-containerd-f7ffa252ce637b7d8df3dc689dd6f556ea3edd34ad2c63f6a767fe43e9208877.scope - libcontainer container f7ffa252ce637b7d8df3dc689dd6f556ea3edd34ad2c63f6a767fe43e9208877.
Sep 13 00:07:11.211614 containerd[1481]: time="2025-09-13T00:07:11.211400226Z" level=info msg="StartContainer for \"38e1d4eb7b352b18c3715c012a43eeca14b3f5ab4eb65cd84aaf16a72cdb0ea5\" returns successfully"
Sep 13 00:07:11.211614 containerd[1481]: time="2025-09-13T00:07:11.211401047Z" level=info msg="StartContainer for \"f7ffa252ce637b7d8df3dc689dd6f556ea3edd34ad2c63f6a767fe43e9208877\" returns successfully"
Sep 13 00:07:11.786297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3306163862.mount: Deactivated successfully.
Sep 13 00:07:12.090234 kubelet[2500]: E0913 00:07:12.089824 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:12.092447 kubelet[2500]: E0913 00:07:12.092169 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:12.105119 kubelet[2500]: I0913 00:07:12.105048 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gg2j6" podStartSLOduration=28.105025463 podStartE2EDuration="28.105025463s" podCreationTimestamp="2025-09-13 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:12.104086489 +0000 UTC m=+34.628753761" watchObservedRunningTime="2025-09-13 00:07:12.105025463 +0000 UTC m=+34.629692734"
Sep 13 00:07:12.123996 kubelet[2500]: I0913 00:07:12.123861 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-x25cr" podStartSLOduration=28.123831066 podStartE2EDuration="28.123831066s" podCreationTimestamp="2025-09-13 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:12.122053661 +0000 UTC m=+34.646720937" watchObservedRunningTime="2025-09-13 00:07:12.123831066 +0000 UTC m=+34.648498345"
Sep 13 00:07:13.095522 kubelet[2500]: E0913 00:07:13.095430 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:13.096180 kubelet[2500]: E0913 00:07:13.095934 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:14.099341 kubelet[2500]: E0913 00:07:14.098149 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:14.099341 kubelet[2500]: E0913 00:07:14.099078 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:07:22.752620 systemd[1]: Started sshd@7-161.35.231.245:22-139.178.68.195:53264.service - OpenSSH per-connection server daemon (139.178.68.195:53264).
Sep 13 00:07:22.849348 sshd[3894]: Accepted publickey for core from 139.178.68.195 port 53264 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:07:22.852024 sshd[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:07:22.860448 systemd-logind[1459]: New session 8 of user core.
Sep 13 00:07:22.868513 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 13 00:07:23.592036 sshd[3894]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:23.599720 systemd[1]: sshd@7-161.35.231.245:22-139.178.68.195:53264.service: Deactivated successfully.
Sep 13 00:07:23.603962 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:07:23.606262 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:07:23.607838 systemd-logind[1459]: Removed session 8.
Sep 13 00:07:28.614253 systemd[1]: Started sshd@8-161.35.231.245:22-139.178.68.195:53270.service - OpenSSH per-connection server daemon (139.178.68.195:53270).
Sep 13 00:07:28.666102 sshd[3908]: Accepted publickey for core from 139.178.68.195 port 53270 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:07:28.668608 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:07:28.676232 systemd-logind[1459]: New session 9 of user core.
Sep 13 00:07:28.681083 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 00:07:28.853131 sshd[3908]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:28.857752 systemd[1]: sshd@8-161.35.231.245:22-139.178.68.195:53270.service: Deactivated successfully.
Sep 13 00:07:28.860304 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:07:28.861374 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:07:28.862846 systemd-logind[1459]: Removed session 9.
Sep 13 00:07:33.884373 systemd[1]: Started sshd@9-161.35.231.245:22-139.178.68.195:59004.service - OpenSSH per-connection server daemon (139.178.68.195:59004).
Sep 13 00:07:33.931755 sshd[3922]: Accepted publickey for core from 139.178.68.195 port 59004 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:07:33.934460 sshd[3922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:07:33.941907 systemd-logind[1459]: New session 10 of user core.
Sep 13 00:07:33.953158 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 00:07:34.109564 sshd[3922]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:34.116059 systemd[1]: sshd@9-161.35.231.245:22-139.178.68.195:59004.service: Deactivated successfully.
Sep 13 00:07:34.119325 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:07:34.121825 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:07:34.123040 systemd-logind[1459]: Removed session 10.
Sep 13 00:07:39.138653 systemd[1]: Started sshd@10-161.35.231.245:22-139.178.68.195:59006.service - OpenSSH per-connection server daemon (139.178.68.195:59006).
Sep 13 00:07:39.189852 sshd[3937]: Accepted publickey for core from 139.178.68.195 port 59006 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:07:39.191636 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:07:39.200045 systemd-logind[1459]: New session 11 of user core.
Sep 13 00:07:39.207191 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:07:39.369461 sshd[3937]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:39.386570 systemd[1]: sshd@10-161.35.231.245:22-139.178.68.195:59006.service: Deactivated successfully.
Sep 13 00:07:39.389850 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:07:39.392897 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:07:39.405783 systemd[1]: Started sshd@11-161.35.231.245:22-139.178.68.195:59010.service - OpenSSH per-connection server daemon (139.178.68.195:59010).
Sep 13 00:07:39.408202 systemd-logind[1459]: Removed session 11.
Sep 13 00:07:39.461078 sshd[3951]: Accepted publickey for core from 139.178.68.195 port 59010 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:07:39.463853 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:07:39.472492 systemd-logind[1459]: New session 12 of user core.
Sep 13 00:07:39.482207 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:07:39.712174 sshd[3951]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:39.724330 systemd[1]: sshd@11-161.35.231.245:22-139.178.68.195:59010.service: Deactivated successfully.
Sep 13 00:07:39.727711 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:07:39.730751 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:07:39.740031 systemd[1]: Started sshd@12-161.35.231.245:22-139.178.68.195:59012.service - OpenSSH per-connection server daemon (139.178.68.195:59012).
Sep 13 00:07:39.742908 systemd-logind[1459]: Removed session 12.
Sep 13 00:07:39.806989 sshd[3961]: Accepted publickey for core from 139.178.68.195 port 59012 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:07:39.811241 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:07:39.820987 systemd-logind[1459]: New session 13 of user core.
Sep 13 00:07:39.832157 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:07:39.986998 sshd[3961]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:39.992046 systemd[1]: sshd@12-161.35.231.245:22-139.178.68.195:59012.service: Deactivated successfully.
Sep 13 00:07:39.995005 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:07:39.996848 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:07:39.998400 systemd-logind[1459]: Removed session 13.
Sep 13 00:07:45.001906 systemd[1]: Started sshd@13-161.35.231.245:22-139.178.68.195:58882.service - OpenSSH per-connection server daemon (139.178.68.195:58882).
Sep 13 00:07:45.067511 sshd[3976]: Accepted publickey for core from 139.178.68.195 port 58882 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:07:45.069633 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:07:45.076739 systemd-logind[1459]: New session 14 of user core.
Sep 13 00:07:45.082125 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:07:45.234588 sshd[3976]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:45.237888 systemd[1]: sshd@13-161.35.231.245:22-139.178.68.195:58882.service: Deactivated successfully.
Sep 13 00:07:45.240611 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:07:45.242920 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:07:45.244407 systemd-logind[1459]: Removed session 14.
Sep 13 00:07:50.252178 systemd[1]: Started sshd@14-161.35.231.245:22-139.178.68.195:60082.service - OpenSSH per-connection server daemon (139.178.68.195:60082).
Sep 13 00:07:50.315821 sshd[3990]: Accepted publickey for core from 139.178.68.195 port 60082 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:07:50.317740 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:07:50.323155 systemd-logind[1459]: New session 15 of user core.
Sep 13 00:07:50.333749 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:07:50.484208 sshd[3990]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:50.497344 systemd[1]: sshd@14-161.35.231.245:22-139.178.68.195:60082.service: Deactivated successfully.
Sep 13 00:07:50.500105 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:07:50.502265 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:07:50.509316 systemd[1]: Started sshd@15-161.35.231.245:22-139.178.68.195:60092.service - OpenSSH per-connection server daemon (139.178.68.195:60092).
Sep 13 00:07:50.511943 systemd-logind[1459]: Removed session 15.
Sep 13 00:07:50.566995 sshd[4003]: Accepted publickey for core from 139.178.68.195 port 60092 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:07:50.568977 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:07:50.575082 systemd-logind[1459]: New session 16 of user core.
Sep 13 00:07:50.583147 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:07:50.861202 sshd[4003]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:50.875084 systemd[1]: sshd@15-161.35.231.245:22-139.178.68.195:60092.service: Deactivated successfully. Sep 13 00:07:50.877748 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:07:50.878960 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:07:50.887423 systemd[1]: Started sshd@16-161.35.231.245:22-139.178.68.195:60096.service - OpenSSH per-connection server daemon (139.178.68.195:60096). Sep 13 00:07:50.890147 systemd-logind[1459]: Removed session 16. Sep 13 00:07:50.962680 sshd[4014]: Accepted publickey for core from 139.178.68.195 port 60096 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:50.964850 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:50.970468 systemd-logind[1459]: New session 17 of user core. Sep 13 00:07:50.979142 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:07:51.680324 kubelet[2500]: E0913 00:07:51.680216 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:07:52.681520 kubelet[2500]: E0913 00:07:52.680974 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:07:52.729987 sshd[4014]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:52.746078 systemd[1]: sshd@16-161.35.231.245:22-139.178.68.195:60096.service: Deactivated successfully. Sep 13 00:07:52.749021 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:07:52.756180 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. 
Sep 13 00:07:52.766989 systemd[1]: Started sshd@17-161.35.231.245:22-139.178.68.195:60102.service - OpenSSH per-connection server daemon (139.178.68.195:60102). Sep 13 00:07:52.772318 systemd-logind[1459]: Removed session 17. Sep 13 00:07:52.846315 sshd[4031]: Accepted publickey for core from 139.178.68.195 port 60102 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:52.848835 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:52.858133 systemd-logind[1459]: New session 18 of user core. Sep 13 00:07:52.862279 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:07:53.221116 sshd[4031]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:53.234350 systemd[1]: sshd@17-161.35.231.245:22-139.178.68.195:60102.service: Deactivated successfully. Sep 13 00:07:53.238623 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:07:53.241103 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:07:53.249399 systemd[1]: Started sshd@18-161.35.231.245:22-139.178.68.195:60108.service - OpenSSH per-connection server daemon (139.178.68.195:60108). Sep 13 00:07:53.253224 systemd-logind[1459]: Removed session 18. Sep 13 00:07:53.315493 sshd[4043]: Accepted publickey for core from 139.178.68.195 port 60108 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:53.318135 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:53.325332 systemd-logind[1459]: New session 19 of user core. Sep 13 00:07:53.333255 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:07:53.470369 sshd[4043]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:53.476180 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:07:53.477124 systemd[1]: sshd@18-161.35.231.245:22-139.178.68.195:60108.service: Deactivated successfully. 
Sep 13 00:07:53.481720 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:07:53.484005 systemd-logind[1459]: Removed session 19. Sep 13 00:07:57.681542 kubelet[2500]: E0913 00:07:57.681292 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:07:58.493311 systemd[1]: Started sshd@19-161.35.231.245:22-139.178.68.195:60116.service - OpenSSH per-connection server daemon (139.178.68.195:60116). Sep 13 00:07:58.543285 sshd[4059]: Accepted publickey for core from 139.178.68.195 port 60116 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:58.545513 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:58.551482 systemd-logind[1459]: New session 20 of user core. Sep 13 00:07:58.559169 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:07:58.703262 sshd[4059]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:58.706885 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:07:58.707604 systemd[1]: sshd@19-161.35.231.245:22-139.178.68.195:60116.service: Deactivated successfully. Sep 13 00:07:58.710705 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:07:58.713587 systemd-logind[1459]: Removed session 20. Sep 13 00:08:03.726590 systemd[1]: Started sshd@20-161.35.231.245:22-139.178.68.195:47102.service - OpenSSH per-connection server daemon (139.178.68.195:47102). Sep 13 00:08:03.776099 sshd[4072]: Accepted publickey for core from 139.178.68.195 port 47102 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:03.778374 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:03.785314 systemd-logind[1459]: New session 21 of user core. 
Sep 13 00:08:03.793176 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:08:03.956125 sshd[4072]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:03.961546 systemd[1]: sshd@20-161.35.231.245:22-139.178.68.195:47102.service: Deactivated successfully. Sep 13 00:08:03.964574 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:08:03.966370 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:08:03.967672 systemd-logind[1459]: Removed session 21. Sep 13 00:08:08.979871 systemd[1]: Started sshd@21-161.35.231.245:22-139.178.68.195:47106.service - OpenSSH per-connection server daemon (139.178.68.195:47106). Sep 13 00:08:09.030220 sshd[4084]: Accepted publickey for core from 139.178.68.195 port 47106 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:09.031053 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:09.039269 systemd-logind[1459]: New session 22 of user core. Sep 13 00:08:09.042381 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:08:09.195992 sshd[4084]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:09.200382 systemd[1]: sshd@21-161.35.231.245:22-139.178.68.195:47106.service: Deactivated successfully. Sep 13 00:08:09.204047 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:08:09.206528 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:08:09.208309 systemd-logind[1459]: Removed session 22. 
Sep 13 00:08:09.680456 kubelet[2500]: E0913 00:08:09.680066 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:08:11.680902 kubelet[2500]: E0913 00:08:11.680239 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:08:14.214340 systemd[1]: Started sshd@22-161.35.231.245:22-139.178.68.195:44220.service - OpenSSH per-connection server daemon (139.178.68.195:44220). Sep 13 00:08:14.271362 sshd[4096]: Accepted publickey for core from 139.178.68.195 port 44220 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:14.273142 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:14.279966 systemd-logind[1459]: New session 23 of user core. Sep 13 00:08:14.289082 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:08:14.454465 sshd[4096]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:14.465099 systemd[1]: sshd@22-161.35.231.245:22-139.178.68.195:44220.service: Deactivated successfully. Sep 13 00:08:14.468004 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:08:14.470576 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:08:14.476314 systemd[1]: Started sshd@23-161.35.231.245:22-139.178.68.195:44236.service - OpenSSH per-connection server daemon (139.178.68.195:44236). Sep 13 00:08:14.478785 systemd-logind[1459]: Removed session 23. 
Sep 13 00:08:14.547810 sshd[4109]: Accepted publickey for core from 139.178.68.195 port 44236 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:14.550042 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:14.557347 systemd-logind[1459]: New session 24 of user core. Sep 13 00:08:14.562116 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:08:16.558619 containerd[1481]: time="2025-09-13T00:08:16.558319551Z" level=info msg="StopContainer for \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\" with timeout 30 (s)" Sep 13 00:08:16.565522 containerd[1481]: time="2025-09-13T00:08:16.565111673Z" level=info msg="Stop container \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\" with signal terminated" Sep 13 00:08:16.587954 containerd[1481]: time="2025-09-13T00:08:16.587858644Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:08:16.591197 systemd[1]: cri-containerd-1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5.scope: Deactivated successfully. 
Sep 13 00:08:16.601579 containerd[1481]: time="2025-09-13T00:08:16.601526964Z" level=info msg="StopContainer for \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\" with timeout 2 (s)" Sep 13 00:08:16.603362 containerd[1481]: time="2025-09-13T00:08:16.603320301Z" level=info msg="Stop container \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\" with signal terminated" Sep 13 00:08:16.617414 systemd-networkd[1377]: lxc_health: Link DOWN Sep 13 00:08:16.619647 systemd-networkd[1377]: lxc_health: Lost carrier Sep 13 00:08:16.651411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5-rootfs.mount: Deactivated successfully. Sep 13 00:08:16.653036 systemd[1]: cri-containerd-eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6.scope: Deactivated successfully. Sep 13 00:08:16.654038 systemd[1]: cri-containerd-eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6.scope: Consumed 10.246s CPU time. Sep 13 00:08:16.665958 containerd[1481]: time="2025-09-13T00:08:16.665645670Z" level=info msg="shim disconnected" id=1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5 namespace=k8s.io Sep 13 00:08:16.665958 containerd[1481]: time="2025-09-13T00:08:16.665741568Z" level=warning msg="cleaning up after shim disconnected" id=1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5 namespace=k8s.io Sep 13 00:08:16.665958 containerd[1481]: time="2025-09-13T00:08:16.665760974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:16.693036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:16.695488 containerd[1481]: time="2025-09-13T00:08:16.694530158Z" level=info msg="shim disconnected" id=eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6 namespace=k8s.io Sep 13 00:08:16.695488 containerd[1481]: time="2025-09-13T00:08:16.694666062Z" level=warning msg="cleaning up after shim disconnected" id=eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6 namespace=k8s.io Sep 13 00:08:16.695488 containerd[1481]: time="2025-09-13T00:08:16.694681107Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:16.697078 containerd[1481]: time="2025-09-13T00:08:16.697035445Z" level=info msg="StopContainer for \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\" returns successfully" Sep 13 00:08:16.700832 containerd[1481]: time="2025-09-13T00:08:16.699140737Z" level=info msg="StopPodSandbox for \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\"" Sep 13 00:08:16.700832 containerd[1481]: time="2025-09-13T00:08:16.699213537Z" level=info msg="Container to stop \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:16.706998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044-shm.mount: Deactivated successfully. Sep 13 00:08:16.727048 systemd[1]: cri-containerd-2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044.scope: Deactivated successfully. 
Sep 13 00:08:16.736999 containerd[1481]: time="2025-09-13T00:08:16.736937104Z" level=info msg="StopContainer for \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\" returns successfully" Sep 13 00:08:16.739439 containerd[1481]: time="2025-09-13T00:08:16.739375779Z" level=info msg="StopPodSandbox for \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\"" Sep 13 00:08:16.739439 containerd[1481]: time="2025-09-13T00:08:16.739442926Z" level=info msg="Container to stop \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:16.739696 containerd[1481]: time="2025-09-13T00:08:16.739460605Z" level=info msg="Container to stop \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:16.739696 containerd[1481]: time="2025-09-13T00:08:16.739477529Z" level=info msg="Container to stop \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:16.739696 containerd[1481]: time="2025-09-13T00:08:16.739492034Z" level=info msg="Container to stop \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:16.739696 containerd[1481]: time="2025-09-13T00:08:16.739506488Z" level=info msg="Container to stop \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:16.748987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956-shm.mount: Deactivated successfully. Sep 13 00:08:16.760712 systemd[1]: cri-containerd-026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956.scope: Deactivated successfully. 
Sep 13 00:08:16.800759 containerd[1481]: time="2025-09-13T00:08:16.800633377Z" level=info msg="shim disconnected" id=026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956 namespace=k8s.io Sep 13 00:08:16.800759 containerd[1481]: time="2025-09-13T00:08:16.800748679Z" level=warning msg="cleaning up after shim disconnected" id=026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956 namespace=k8s.io Sep 13 00:08:16.800759 containerd[1481]: time="2025-09-13T00:08:16.800760890Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:16.801524 containerd[1481]: time="2025-09-13T00:08:16.801262147Z" level=info msg="shim disconnected" id=2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044 namespace=k8s.io Sep 13 00:08:16.801524 containerd[1481]: time="2025-09-13T00:08:16.801310114Z" level=warning msg="cleaning up after shim disconnected" id=2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044 namespace=k8s.io Sep 13 00:08:16.801524 containerd[1481]: time="2025-09-13T00:08:16.801319523Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:16.850967 containerd[1481]: time="2025-09-13T00:08:16.850768902Z" level=info msg="TearDown network for sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" successfully" Sep 13 00:08:16.851847 containerd[1481]: time="2025-09-13T00:08:16.851139229Z" level=info msg="StopPodSandbox for \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" returns successfully" Sep 13 00:08:16.853837 containerd[1481]: time="2025-09-13T00:08:16.853397592Z" level=info msg="TearDown network for sandbox \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\" successfully" Sep 13 00:08:16.853837 containerd[1481]: time="2025-09-13T00:08:16.853756880Z" level=info msg="StopPodSandbox for \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\" returns successfully" Sep 13 00:08:17.044882 kubelet[2500]: I0913 00:08:17.044159 2500 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:17.044882 kubelet[2500]: I0913 00:08:17.044302 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-bpf-maps\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.044882 kubelet[2500]: I0913 00:08:17.044364 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcgrb\" (UniqueName: \"kubernetes.io/projected/c4238dba-cdff-40ec-8482-ace0be595e12-kube-api-access-pcgrb\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.044882 kubelet[2500]: I0913 00:08:17.044386 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-xtables-lock\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.044882 kubelet[2500]: I0913 00:08:17.044403 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cni-path\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.044882 kubelet[2500]: I0913 00:08:17.044419 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-etc-cni-netd\") 
pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.047350 kubelet[2500]: I0913 00:08:17.044434 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-host-proc-sys-net\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.047350 kubelet[2500]: I0913 00:08:17.044450 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-hostproc\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.047350 kubelet[2500]: I0913 00:08:17.044468 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-config-path\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.047350 kubelet[2500]: I0913 00:08:17.044483 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-host-proc-sys-kernel\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.047350 kubelet[2500]: I0913 00:08:17.044499 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4238dba-cdff-40ec-8482-ace0be595e12-hubble-tls\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.047350 kubelet[2500]: I0913 00:08:17.044516 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4238dba-cdff-40ec-8482-ace0be595e12-clustermesh-secrets\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.047498 kubelet[2500]: I0913 00:08:17.044535 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c295a41-7622-4638-9fe9-5dd2d8754a2a-cilium-config-path\") pod \"0c295a41-7622-4638-9fe9-5dd2d8754a2a\" (UID: \"0c295a41-7622-4638-9fe9-5dd2d8754a2a\") " Sep 13 00:08:17.047498 kubelet[2500]: I0913 00:08:17.044552 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-cgroup\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.047498 kubelet[2500]: I0913 00:08:17.044571 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-857jd\" (UniqueName: \"kubernetes.io/projected/0c295a41-7622-4638-9fe9-5dd2d8754a2a-kube-api-access-857jd\") pod \"0c295a41-7622-4638-9fe9-5dd2d8754a2a\" (UID: \"0c295a41-7622-4638-9fe9-5dd2d8754a2a\") " Sep 13 00:08:17.047498 kubelet[2500]: I0913 00:08:17.044604 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-run\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.047498 kubelet[2500]: I0913 00:08:17.044624 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-lib-modules\") pod \"c4238dba-cdff-40ec-8482-ace0be595e12\" (UID: \"c4238dba-cdff-40ec-8482-ace0be595e12\") " Sep 13 00:08:17.047498 
kubelet[2500]: I0913 00:08:17.044666 2500 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-bpf-maps\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\"" Sep 13 00:08:17.047721 kubelet[2500]: I0913 00:08:17.044701 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:17.047721 kubelet[2500]: I0913 00:08:17.045169 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:17.047721 kubelet[2500]: I0913 00:08:17.045224 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:17.047721 kubelet[2500]: I0913 00:08:17.045251 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cni-path" (OuterVolumeSpecName: "cni-path") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:17.047721 kubelet[2500]: I0913 00:08:17.045276 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:17.047912 kubelet[2500]: I0913 00:08:17.045299 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:17.047912 kubelet[2500]: I0913 00:08:17.045322 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-hostproc" (OuterVolumeSpecName: "hostproc") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:17.050946 kubelet[2500]: I0913 00:08:17.050865 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:08:17.054825 kubelet[2500]: I0913 00:08:17.051135 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:17.054825 kubelet[2500]: I0913 00:08:17.051848 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:08:17.055051 kubelet[2500]: I0913 00:08:17.054978 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4238dba-cdff-40ec-8482-ace0be595e12-kube-api-access-pcgrb" (OuterVolumeSpecName: "kube-api-access-pcgrb") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "kube-api-access-pcgrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:08:17.058494 kubelet[2500]: I0913 00:08:17.058438 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4238dba-cdff-40ec-8482-ace0be595e12-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:08:17.059046 kubelet[2500]: I0913 00:08:17.059006 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4238dba-cdff-40ec-8482-ace0be595e12-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c4238dba-cdff-40ec-8482-ace0be595e12" (UID: "c4238dba-cdff-40ec-8482-ace0be595e12"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:08:17.060406 kubelet[2500]: I0913 00:08:17.060368 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c295a41-7622-4638-9fe9-5dd2d8754a2a-kube-api-access-857jd" (OuterVolumeSpecName: "kube-api-access-857jd") pod "0c295a41-7622-4638-9fe9-5dd2d8754a2a" (UID: "0c295a41-7622-4638-9fe9-5dd2d8754a2a"). InnerVolumeSpecName "kube-api-access-857jd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:08:17.060624 kubelet[2500]: I0913 00:08:17.060601 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c295a41-7622-4638-9fe9-5dd2d8754a2a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0c295a41-7622-4638-9fe9-5dd2d8754a2a" (UID: "0c295a41-7622-4638-9fe9-5dd2d8754a2a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:08:17.145815 kubelet[2500]: I0913 00:08:17.145647 2500 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-xtables-lock\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146354 kubelet[2500]: I0913 00:08:17.146015 2500 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-etc-cni-netd\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146354 kubelet[2500]: I0913 00:08:17.146036 2500 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-host-proc-sys-net\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146354 kubelet[2500]: I0913 00:08:17.146050 2500 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cni-path\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146354 kubelet[2500]: I0913 00:08:17.146071 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-config-path\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146354 kubelet[2500]: I0913 00:08:17.146085 2500 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-hostproc\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146354 kubelet[2500]: I0913 00:08:17.146094 2500 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4238dba-cdff-40ec-8482-ace0be595e12-hubble-tls\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146354 kubelet[2500]: I0913 00:08:17.146103 2500 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4238dba-cdff-40ec-8482-ace0be595e12-clustermesh-secrets\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146354 kubelet[2500]: I0913 00:08:17.146112 2500 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-host-proc-sys-kernel\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146662 kubelet[2500]: I0913 00:08:17.146123 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-run\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146662 kubelet[2500]: I0913 00:08:17.146133 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c295a41-7622-4638-9fe9-5dd2d8754a2a-cilium-config-path\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146662 kubelet[2500]: I0913 00:08:17.146142 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-cilium-cgroup\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146662 kubelet[2500]: I0913 00:08:17.146151 2500 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-857jd\" (UniqueName: \"kubernetes.io/projected/0c295a41-7622-4638-9fe9-5dd2d8754a2a-kube-api-access-857jd\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146662 kubelet[2500]: I0913 00:08:17.146159 2500 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4238dba-cdff-40ec-8482-ace0be595e12-lib-modules\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.146662 kubelet[2500]: I0913 00:08:17.146168 2500 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcgrb\" (UniqueName: \"kubernetes.io/projected/c4238dba-cdff-40ec-8482-ace0be595e12-kube-api-access-pcgrb\") on node \"ci-4081.3.5-n-5a30d8cd2b\" DevicePath \"\""
Sep 13 00:08:17.274079 kubelet[2500]: I0913 00:08:17.274025 2500 scope.go:117] "RemoveContainer" containerID="eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6"
Sep 13 00:08:17.289406 systemd[1]: Removed slice kubepods-burstable-podc4238dba_cdff_40ec_8482_ace0be595e12.slice - libcontainer container kubepods-burstable-podc4238dba_cdff_40ec_8482_ace0be595e12.slice.
Sep 13 00:08:17.289726 systemd[1]: kubepods-burstable-podc4238dba_cdff_40ec_8482_ace0be595e12.slice: Consumed 10.368s CPU time.
Sep 13 00:08:17.297975 containerd[1481]: time="2025-09-13T00:08:17.297416640Z" level=info msg="RemoveContainer for \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\""
Sep 13 00:08:17.307499 systemd[1]: Removed slice kubepods-besteffort-pod0c295a41_7622_4638_9fe9_5dd2d8754a2a.slice - libcontainer container kubepods-besteffort-pod0c295a41_7622_4638_9fe9_5dd2d8754a2a.slice.
Sep 13 00:08:17.311370 containerd[1481]: time="2025-09-13T00:08:17.311277793Z" level=info msg="RemoveContainer for \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\" returns successfully"
Sep 13 00:08:17.325168 kubelet[2500]: I0913 00:08:17.325065 2500 scope.go:117] "RemoveContainer" containerID="1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482"
Sep 13 00:08:17.329910 containerd[1481]: time="2025-09-13T00:08:17.329834829Z" level=info msg="RemoveContainer for \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\""
Sep 13 00:08:17.337093 containerd[1481]: time="2025-09-13T00:08:17.336873329Z" level=info msg="RemoveContainer for \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\" returns successfully"
Sep 13 00:08:17.338836 kubelet[2500]: I0913 00:08:17.337410 2500 scope.go:117] "RemoveContainer" containerID="c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c"
Sep 13 00:08:17.364251 containerd[1481]: time="2025-09-13T00:08:17.363878683Z" level=info msg="RemoveContainer for \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\""
Sep 13 00:08:17.368518 containerd[1481]: time="2025-09-13T00:08:17.368456698Z" level=info msg="RemoveContainer for \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\" returns successfully"
Sep 13 00:08:17.369265 kubelet[2500]: I0913 00:08:17.369080 2500 scope.go:117] "RemoveContainer" containerID="50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb"
Sep 13 00:08:17.371075 containerd[1481]: time="2025-09-13T00:08:17.371032906Z" level=info msg="RemoveContainer for \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\""
Sep 13 00:08:17.374739 containerd[1481]: time="2025-09-13T00:08:17.374658820Z" level=info msg="RemoveContainer for \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\" returns successfully"
Sep 13 00:08:17.375608 kubelet[2500]: I0913 00:08:17.375549 2500 scope.go:117] "RemoveContainer" containerID="0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152"
Sep 13 00:08:17.377377 containerd[1481]: time="2025-09-13T00:08:17.377330894Z" level=info msg="RemoveContainer for \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\""
Sep 13 00:08:17.381335 containerd[1481]: time="2025-09-13T00:08:17.381266050Z" level=info msg="RemoveContainer for \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\" returns successfully"
Sep 13 00:08:17.382079 kubelet[2500]: I0913 00:08:17.381621 2500 scope.go:117] "RemoveContainer" containerID="eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6"
Sep 13 00:08:17.392519 containerd[1481]: time="2025-09-13T00:08:17.383959714Z" level=error msg="ContainerStatus for \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\": not found"
Sep 13 00:08:17.393189 kubelet[2500]: E0913 00:08:17.393144 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\": not found" containerID="eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6"
Sep 13 00:08:17.404181 kubelet[2500]: I0913 00:08:17.393420 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6"} err="failed to get container status \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"eae27714d5a3093339715807867572f3319fe7b8df9df1a763c39e4194ff43a6\": not found"
Sep 13 00:08:17.404181 kubelet[2500]: I0913 00:08:17.403580 2500 scope.go:117] "RemoveContainer" containerID="1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482"
Sep 13 00:08:17.404442 containerd[1481]: time="2025-09-13T00:08:17.404030237Z" level=error msg="ContainerStatus for \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\": not found"
Sep 13 00:08:17.405355 kubelet[2500]: E0913 00:08:17.404962 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\": not found" containerID="1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482"
Sep 13 00:08:17.405355 kubelet[2500]: I0913 00:08:17.405005 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482"} err="failed to get container status \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a3b3cb3ce0174de0e0f7f1a9429f79d99591f171d26747fc6d3d8f005e49482\": not found"
Sep 13 00:08:17.405355 kubelet[2500]: I0913 00:08:17.405143 2500 scope.go:117] "RemoveContainer" containerID="c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c"
Sep 13 00:08:17.405678 containerd[1481]: time="2025-09-13T00:08:17.405617523Z" level=error msg="ContainerStatus for \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\": not found"
Sep 13 00:08:17.405884 kubelet[2500]: E0913 00:08:17.405853 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\": not found" containerID="c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c"
Sep 13 00:08:17.405945 kubelet[2500]: I0913 00:08:17.405891 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c"} err="failed to get container status \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1876e44e351709ee71aad07b2abda1f084d66abfb2ee740eddddc8fc8a1947c\": not found"
Sep 13 00:08:17.405945 kubelet[2500]: I0913 00:08:17.405926 2500 scope.go:117] "RemoveContainer" containerID="50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb"
Sep 13 00:08:17.406312 containerd[1481]: time="2025-09-13T00:08:17.406269219Z" level=error msg="ContainerStatus for \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\": not found"
Sep 13 00:08:17.406518 kubelet[2500]: E0913 00:08:17.406496 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\": not found" containerID="50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb"
Sep 13 00:08:17.406568 kubelet[2500]: I0913 00:08:17.406527 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb"} err="failed to get container status \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"50b304862c8cf4f1ce297e42295faa0183a82cb7809abe085d4fd386968c09bb\": not found"
Sep 13 00:08:17.406596 kubelet[2500]: I0913 00:08:17.406566 2500 scope.go:117] "RemoveContainer" containerID="0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152"
Sep 13 00:08:17.406868 containerd[1481]: time="2025-09-13T00:08:17.406790301Z" level=error msg="ContainerStatus for \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\": not found"
Sep 13 00:08:17.407022 kubelet[2500]: E0913 00:08:17.406944 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\": not found" containerID="0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152"
Sep 13 00:08:17.407022 kubelet[2500]: I0913 00:08:17.406969 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152"} err="failed to get container status \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\": rpc error: code = NotFound desc = an error occurred when try to find container \"0280585d11b5944cf39835386f1109ce9c02cf45ffc875533c041113463a3152\": not found"
Sep 13 00:08:17.407022 kubelet[2500]: I0913 00:08:17.406984 2500 scope.go:117] "RemoveContainer" containerID="1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5"
Sep 13 00:08:17.408495 containerd[1481]: time="2025-09-13T00:08:17.408420099Z" level=info msg="RemoveContainer for \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\""
Sep 13 00:08:17.411770 containerd[1481]: time="2025-09-13T00:08:17.411718174Z" level=info msg="RemoveContainer for \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\" returns successfully"
Sep 13 00:08:17.412234 kubelet[2500]: I0913 00:08:17.412098 2500 scope.go:117] "RemoveContainer" containerID="1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5"
Sep 13 00:08:17.412723 containerd[1481]: time="2025-09-13T00:08:17.412642778Z" level=error msg="ContainerStatus for \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\": not found"
Sep 13 00:08:17.412922 kubelet[2500]: E0913 00:08:17.412840 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\": not found" containerID="1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5"
Sep 13 00:08:17.412922 kubelet[2500]: I0913 00:08:17.412880 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5"} err="failed to get container status \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e79e909a6ebfba287b845c81db11c5d50b7bbc38abf88a96dabaf4fad10a6a5\": not found"
Sep 13 00:08:17.538294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956-rootfs.mount: Deactivated successfully.
Sep 13 00:08:17.538617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044-rootfs.mount: Deactivated successfully.
Sep 13 00:08:17.538701 systemd[1]: var-lib-kubelet-pods-c4238dba\x2dcdff\x2d40ec\x2d8482\x2dace0be595e12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpcgrb.mount: Deactivated successfully.
Sep 13 00:08:17.538782 systemd[1]: var-lib-kubelet-pods-0c295a41\x2d7622\x2d4638\x2d9fe9\x2d5dd2d8754a2a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d857jd.mount: Deactivated successfully.
Sep 13 00:08:17.538857 systemd[1]: var-lib-kubelet-pods-c4238dba\x2dcdff\x2d40ec\x2d8482\x2dace0be595e12-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:08:17.538916 systemd[1]: var-lib-kubelet-pods-c4238dba\x2dcdff\x2d40ec\x2d8482\x2dace0be595e12-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:08:17.684938 kubelet[2500]: I0913 00:08:17.684719 2500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c295a41-7622-4638-9fe9-5dd2d8754a2a" path="/var/lib/kubelet/pods/0c295a41-7622-4638-9fe9-5dd2d8754a2a/volumes"
Sep 13 00:08:17.686900 kubelet[2500]: I0913 00:08:17.686845 2500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4238dba-cdff-40ec-8482-ace0be595e12" path="/var/lib/kubelet/pods/c4238dba-cdff-40ec-8482-ace0be595e12/volumes"
Sep 13 00:08:17.812372 kubelet[2500]: E0913 00:08:17.812298 2500 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:08:18.429730 sshd[4109]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:18.439436 systemd[1]: sshd@23-161.35.231.245:22-139.178.68.195:44236.service: Deactivated successfully.
Sep 13 00:08:18.443091 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:08:18.443416 systemd[1]: session-24.scope: Consumed 1.242s CPU time.
Sep 13 00:08:18.446273 systemd-logind[1459]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:08:18.454452 systemd[1]: Started sshd@24-161.35.231.245:22-139.178.68.195:44238.service - OpenSSH per-connection server daemon (139.178.68.195:44238).
Sep 13 00:08:18.457103 systemd-logind[1459]: Removed session 24.
Sep 13 00:08:18.523097 sshd[4272]: Accepted publickey for core from 139.178.68.195 port 44238 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:08:18.525711 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:08:18.532909 systemd-logind[1459]: New session 25 of user core.
Sep 13 00:08:18.541150 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:08:18.681142 kubelet[2500]: E0913 00:08:18.680528 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:19.529788 sshd[4272]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:19.545017 systemd[1]: sshd@24-161.35.231.245:22-139.178.68.195:44238.service: Deactivated successfully.
Sep 13 00:08:19.551216 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:08:19.554326 systemd-logind[1459]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:08:19.564053 systemd[1]: Started sshd@25-161.35.231.245:22-139.178.68.195:44254.service - OpenSSH per-connection server daemon (139.178.68.195:44254).
Sep 13 00:08:19.568897 systemd-logind[1459]: Removed session 25.
Sep 13 00:08:19.611648 kubelet[2500]: E0913 00:08:19.609343 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c295a41-7622-4638-9fe9-5dd2d8754a2a" containerName="cilium-operator"
Sep 13 00:08:19.611648 kubelet[2500]: E0913 00:08:19.609382 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4238dba-cdff-40ec-8482-ace0be595e12" containerName="cilium-agent"
Sep 13 00:08:19.611648 kubelet[2500]: E0913 00:08:19.609390 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4238dba-cdff-40ec-8482-ace0be595e12" containerName="mount-cgroup"
Sep 13 00:08:19.611648 kubelet[2500]: E0913 00:08:19.609397 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4238dba-cdff-40ec-8482-ace0be595e12" containerName="apply-sysctl-overwrites"
Sep 13 00:08:19.611648 kubelet[2500]: E0913 00:08:19.609404 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4238dba-cdff-40ec-8482-ace0be595e12" containerName="mount-bpf-fs"
Sep 13 00:08:19.611648 kubelet[2500]: E0913 00:08:19.609410 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4238dba-cdff-40ec-8482-ace0be595e12" containerName="clean-cilium-state"
Sep 13 00:08:19.611648 kubelet[2500]: I0913 00:08:19.609601 2500 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c295a41-7622-4638-9fe9-5dd2d8754a2a" containerName="cilium-operator"
Sep 13 00:08:19.611648 kubelet[2500]: I0913 00:08:19.609619 2500 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4238dba-cdff-40ec-8482-ace0be595e12" containerName="cilium-agent"
Sep 13 00:08:19.625869 sshd[4284]: Accepted publickey for core from 139.178.68.195 port 44254 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:08:19.627732 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:08:19.650875 systemd-logind[1459]: New session 26 of user core.
Sep 13 00:08:19.657055 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 00:08:19.663594 kubelet[2500]: I0913 00:08:19.663251 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-host-proc-sys-kernel\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.663594 kubelet[2500]: I0913 00:08:19.663290 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-etc-cni-netd\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.663594 kubelet[2500]: I0913 00:08:19.663312 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-xtables-lock\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.663594 kubelet[2500]: I0913 00:08:19.663328 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-cilium-config-path\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.663594 kubelet[2500]: I0913 00:08:19.663344 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-cilium-ipsec-secrets\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.663594 kubelet[2500]: I0913 00:08:19.663361 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-bpf-maps\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.664084 kubelet[2500]: I0913 00:08:19.663376 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-cni-path\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.664084 kubelet[2500]: I0913 00:08:19.663393 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-cilium-cgroup\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.664084 kubelet[2500]: I0913 00:08:19.663406 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-host-proc-sys-net\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.664084 kubelet[2500]: I0913 00:08:19.663421 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6d9\" (UniqueName: \"kubernetes.io/projected/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-kube-api-access-4l6d9\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.664084 kubelet[2500]: I0913 00:08:19.663437 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-cilium-run\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.664084 kubelet[2500]: I0913 00:08:19.663455 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-hostproc\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.664257 kubelet[2500]: I0913 00:08:19.663469 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-lib-modules\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.664257 kubelet[2500]: I0913 00:08:19.663484 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-hubble-tls\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.664257 kubelet[2500]: I0913 00:08:19.663500 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8b36029-d9a5-4594-ab35-68f76a8aa7e0-clustermesh-secrets\") pod \"cilium-sw5ht\" (UID: \"d8b36029-d9a5-4594-ab35-68f76a8aa7e0\") " pod="kube-system/cilium-sw5ht"
Sep 13 00:08:19.668421 systemd[1]: Created slice kubepods-burstable-podd8b36029_d9a5_4594_ab35_68f76a8aa7e0.slice - libcontainer container kubepods-burstable-podd8b36029_d9a5_4594_ab35_68f76a8aa7e0.slice.
Sep 13 00:08:19.718513 kubelet[2500]: I0913 00:08:19.718194 2500 setters.go:600] "Node became not ready" node="ci-4081.3.5-n-5a30d8cd2b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:08:19Z","lastTransitionTime":"2025-09-13T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:08:19.732464 sshd[4284]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:19.744052 systemd[1]: sshd@25-161.35.231.245:22-139.178.68.195:44254.service: Deactivated successfully.
Sep 13 00:08:19.746641 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:08:19.750955 systemd-logind[1459]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:08:19.758204 systemd[1]: Started sshd@26-161.35.231.245:22-139.178.68.195:44270.service - OpenSSH per-connection server daemon (139.178.68.195:44270).
Sep 13 00:08:19.761183 systemd-logind[1459]: Removed session 26.
Sep 13 00:08:19.844130 sshd[4292]: Accepted publickey for core from 139.178.68.195 port 44270 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:08:19.846895 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:08:19.852580 systemd-logind[1459]: New session 27 of user core.
Sep 13 00:08:19.862166 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 13 00:08:19.976055 kubelet[2500]: E0913 00:08:19.975343 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:08:19.979218 containerd[1481]: time="2025-09-13T00:08:19.979126782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sw5ht,Uid:d8b36029-d9a5-4594-ab35-68f76a8aa7e0,Namespace:kube-system,Attempt:0,}" Sep 13 00:08:20.051137 containerd[1481]: time="2025-09-13T00:08:20.050740528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:20.051601 containerd[1481]: time="2025-09-13T00:08:20.051316705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:20.051887 containerd[1481]: time="2025-09-13T00:08:20.051773799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:20.052515 containerd[1481]: time="2025-09-13T00:08:20.052409463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:20.103203 systemd[1]: Started cri-containerd-8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f.scope - libcontainer container 8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f. 
Sep 13 00:08:20.150514 containerd[1481]: time="2025-09-13T00:08:20.150284368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sw5ht,Uid:d8b36029-d9a5-4594-ab35-68f76a8aa7e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\"" Sep 13 00:08:20.151945 kubelet[2500]: E0913 00:08:20.151540 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 13 00:08:20.157997 containerd[1481]: time="2025-09-13T00:08:20.157732137Z" level=info msg="CreateContainer within sandbox \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:08:20.175478 containerd[1481]: time="2025-09-13T00:08:20.175244220Z" level=info msg="CreateContainer within sandbox \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d48f08f8a55d486ba62bcc90005d6d2ecc59cff9b1036f08fccdeb6984d5e8ce\"" Sep 13 00:08:20.176641 containerd[1481]: time="2025-09-13T00:08:20.176394371Z" level=info msg="StartContainer for \"d48f08f8a55d486ba62bcc90005d6d2ecc59cff9b1036f08fccdeb6984d5e8ce\"" Sep 13 00:08:20.215221 systemd[1]: Started cri-containerd-d48f08f8a55d486ba62bcc90005d6d2ecc59cff9b1036f08fccdeb6984d5e8ce.scope - libcontainer container d48f08f8a55d486ba62bcc90005d6d2ecc59cff9b1036f08fccdeb6984d5e8ce. Sep 13 00:08:20.258260 containerd[1481]: time="2025-09-13T00:08:20.258181358Z" level=info msg="StartContainer for \"d48f08f8a55d486ba62bcc90005d6d2ecc59cff9b1036f08fccdeb6984d5e8ce\" returns successfully" Sep 13 00:08:20.270079 systemd[1]: cri-containerd-d48f08f8a55d486ba62bcc90005d6d2ecc59cff9b1036f08fccdeb6984d5e8ce.scope: Deactivated successfully. 
Sep 13 00:08:20.314533 kubelet[2500]: E0913 00:08:20.314403 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:20.322035 containerd[1481]: time="2025-09-13T00:08:20.321416878Z" level=info msg="shim disconnected" id=d48f08f8a55d486ba62bcc90005d6d2ecc59cff9b1036f08fccdeb6984d5e8ce namespace=k8s.io
Sep 13 00:08:20.322035 containerd[1481]: time="2025-09-13T00:08:20.321709710Z" level=warning msg="cleaning up after shim disconnected" id=d48f08f8a55d486ba62bcc90005d6d2ecc59cff9b1036f08fccdeb6984d5e8ce namespace=k8s.io
Sep 13 00:08:20.322035 containerd[1481]: time="2025-09-13T00:08:20.321728133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:08:20.342205 containerd[1481]: time="2025-09-13T00:08:20.342034654Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 13 00:08:21.319307 kubelet[2500]: E0913 00:08:21.319231 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:21.325080 containerd[1481]: time="2025-09-13T00:08:21.325009835Z" level=info msg="CreateContainer within sandbox \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:08:21.347696 containerd[1481]: time="2025-09-13T00:08:21.347610687Z" level=info msg="CreateContainer within sandbox \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0e64c0ff675f42c9eccedd5db2d21333b41ca89d5d9c23beaae2453494dde82\""
Sep 13 00:08:21.349905 containerd[1481]: time="2025-09-13T00:08:21.349859679Z" level=info msg="StartContainer for \"b0e64c0ff675f42c9eccedd5db2d21333b41ca89d5d9c23beaae2453494dde82\""
Sep 13 00:08:21.398185 systemd[1]: Started cri-containerd-b0e64c0ff675f42c9eccedd5db2d21333b41ca89d5d9c23beaae2453494dde82.scope - libcontainer container b0e64c0ff675f42c9eccedd5db2d21333b41ca89d5d9c23beaae2453494dde82.
Sep 13 00:08:21.436447 containerd[1481]: time="2025-09-13T00:08:21.436392161Z" level=info msg="StartContainer for \"b0e64c0ff675f42c9eccedd5db2d21333b41ca89d5d9c23beaae2453494dde82\" returns successfully"
Sep 13 00:08:21.447967 systemd[1]: cri-containerd-b0e64c0ff675f42c9eccedd5db2d21333b41ca89d5d9c23beaae2453494dde82.scope: Deactivated successfully.
Sep 13 00:08:21.506054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0e64c0ff675f42c9eccedd5db2d21333b41ca89d5d9c23beaae2453494dde82-rootfs.mount: Deactivated successfully.
Sep 13 00:08:21.510312 containerd[1481]: time="2025-09-13T00:08:21.510181014Z" level=info msg="shim disconnected" id=b0e64c0ff675f42c9eccedd5db2d21333b41ca89d5d9c23beaae2453494dde82 namespace=k8s.io
Sep 13 00:08:21.510312 containerd[1481]: time="2025-09-13T00:08:21.510272846Z" level=warning msg="cleaning up after shim disconnected" id=b0e64c0ff675f42c9eccedd5db2d21333b41ca89d5d9c23beaae2453494dde82 namespace=k8s.io
Sep 13 00:08:21.510312 containerd[1481]: time="2025-09-13T00:08:21.510288068Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:08:22.324262 kubelet[2500]: E0913 00:08:22.324193 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:22.330402 containerd[1481]: time="2025-09-13T00:08:22.330351335Z" level=info msg="CreateContainer within sandbox \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:08:22.360164 containerd[1481]: time="2025-09-13T00:08:22.358903706Z" level=info msg="CreateContainer within sandbox \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"efb2e05aec0009a479a6bdb0c14bf4d51a9d5af06f9a05f5d6c93db0425a6e52\""
Sep 13 00:08:22.363454 containerd[1481]: time="2025-09-13T00:08:22.363343800Z" level=info msg="StartContainer for \"efb2e05aec0009a479a6bdb0c14bf4d51a9d5af06f9a05f5d6c93db0425a6e52\""
Sep 13 00:08:22.431045 systemd[1]: Started cri-containerd-efb2e05aec0009a479a6bdb0c14bf4d51a9d5af06f9a05f5d6c93db0425a6e52.scope - libcontainer container efb2e05aec0009a479a6bdb0c14bf4d51a9d5af06f9a05f5d6c93db0425a6e52.
Sep 13 00:08:22.489700 containerd[1481]: time="2025-09-13T00:08:22.489099298Z" level=info msg="StartContainer for \"efb2e05aec0009a479a6bdb0c14bf4d51a9d5af06f9a05f5d6c93db0425a6e52\" returns successfully"
Sep 13 00:08:22.500177 systemd[1]: cri-containerd-efb2e05aec0009a479a6bdb0c14bf4d51a9d5af06f9a05f5d6c93db0425a6e52.scope: Deactivated successfully.
Sep 13 00:08:22.552943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efb2e05aec0009a479a6bdb0c14bf4d51a9d5af06f9a05f5d6c93db0425a6e52-rootfs.mount: Deactivated successfully.
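The kubelet dns.go:153 errors recurring through this log come from the resolver limit: kubelet applies at most three nameserver lines from the node's resolv.conf and drops the rest, and the applied line here even contains 67.207.67.2 twice, suggesting duplicate entries on the droplet. A minimal Python sketch of that truncation, not kubelet's actual code (the fourth entry, 8.8.8.8, is a hypothetical extra added only to show an omission):

```python
# Sketch of the nameserver-limit behavior behind the dns.go:153 warning:
# resolvers honor at most three "nameserver" lines; extras are ignored.
MAX_DNS_NAMESERVERS = 3  # kubelet's applied-resolver limit

def check_nameservers(resolv_conf_text):
    """Return (applied, omitted) nameserver lists from resolv.conf content."""
    servers = [
        parts[1]
        for line in resolv_conf_text.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAX_DNS_NAMESERVERS], servers[MAX_DNS_NAMESERVERS:]

# The applied line in the log lists 67.207.67.2 twice; a duplicate entry in
# resolv.conf reproduces that, and anything past the third slot is omitted.
applied, omitted = check_nameservers(
    "nameserver 67.207.67.2\nnameserver 67.207.67.3\n"
    "nameserver 67.207.67.2\nnameserver 8.8.8.8\n"  # 8.8.8.8 is hypothetical
)
```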
Sep 13 00:08:22.554841 containerd[1481]: time="2025-09-13T00:08:22.552747846Z" level=info msg="shim disconnected" id=efb2e05aec0009a479a6bdb0c14bf4d51a9d5af06f9a05f5d6c93db0425a6e52 namespace=k8s.io
Sep 13 00:08:22.554841 containerd[1481]: time="2025-09-13T00:08:22.553922865Z" level=warning msg="cleaning up after shim disconnected" id=efb2e05aec0009a479a6bdb0c14bf4d51a9d5af06f9a05f5d6c93db0425a6e52 namespace=k8s.io
Sep 13 00:08:22.554841 containerd[1481]: time="2025-09-13T00:08:22.553992616Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:08:22.813730 kubelet[2500]: E0913 00:08:22.813661 2500 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:08:23.330659 kubelet[2500]: E0913 00:08:23.330217 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:23.334887 containerd[1481]: time="2025-09-13T00:08:23.334660238Z" level=info msg="CreateContainer within sandbox \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:08:23.359784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354143777.mount: Deactivated successfully.
Sep 13 00:08:23.362278 containerd[1481]: time="2025-09-13T00:08:23.361455340Z" level=info msg="CreateContainer within sandbox \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c55a8d37103ca8792c14ab289790ac9a4e4695f340446e870c4fb50665488090\""
Sep 13 00:08:23.364235 containerd[1481]: time="2025-09-13T00:08:23.364080134Z" level=info msg="StartContainer for \"c55a8d37103ca8792c14ab289790ac9a4e4695f340446e870c4fb50665488090\""
Sep 13 00:08:23.425173 systemd[1]: Started cri-containerd-c55a8d37103ca8792c14ab289790ac9a4e4695f340446e870c4fb50665488090.scope - libcontainer container c55a8d37103ca8792c14ab289790ac9a4e4695f340446e870c4fb50665488090.
Sep 13 00:08:23.458779 systemd[1]: cri-containerd-c55a8d37103ca8792c14ab289790ac9a4e4695f340446e870c4fb50665488090.scope: Deactivated successfully.
Sep 13 00:08:23.460290 containerd[1481]: time="2025-09-13T00:08:23.459970120Z" level=info msg="StartContainer for \"c55a8d37103ca8792c14ab289790ac9a4e4695f340446e870c4fb50665488090\" returns successfully"
Sep 13 00:08:23.522700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c55a8d37103ca8792c14ab289790ac9a4e4695f340446e870c4fb50665488090-rootfs.mount: Deactivated successfully.
Sep 13 00:08:23.529110 containerd[1481]: time="2025-09-13T00:08:23.528644290Z" level=info msg="shim disconnected" id=c55a8d37103ca8792c14ab289790ac9a4e4695f340446e870c4fb50665488090 namespace=k8s.io
Sep 13 00:08:23.529110 containerd[1481]: time="2025-09-13T00:08:23.528747706Z" level=warning msg="cleaning up after shim disconnected" id=c55a8d37103ca8792c14ab289790ac9a4e4695f340446e870c4fb50665488090 namespace=k8s.io
Sep 13 00:08:23.529110 containerd[1481]: time="2025-09-13T00:08:23.528778513Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:08:24.336825 kubelet[2500]: E0913 00:08:24.336713 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:24.342045 containerd[1481]: time="2025-09-13T00:08:24.341969677Z" level=info msg="CreateContainer within sandbox \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:08:24.366914 containerd[1481]: time="2025-09-13T00:08:24.366164554Z" level=info msg="CreateContainer within sandbox \"8ae3ef8763d416248e0fef85a7c935efbd9530cbf77f46ad7d698e87c47e445f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ddb0d6a79382dc91bdf9ad21d83d9b6ccc6aadeab8f0199e57871a6160250498\""
Sep 13 00:08:24.368855 containerd[1481]: time="2025-09-13T00:08:24.367534497Z" level=info msg="StartContainer for \"ddb0d6a79382dc91bdf9ad21d83d9b6ccc6aadeab8f0199e57871a6160250498\""
Sep 13 00:08:24.411508 systemd[1]: run-containerd-runc-k8s.io-ddb0d6a79382dc91bdf9ad21d83d9b6ccc6aadeab8f0199e57871a6160250498-runc.gGzuMz.mount: Deactivated successfully.
Sep 13 00:08:24.426088 systemd[1]: Started cri-containerd-ddb0d6a79382dc91bdf9ad21d83d9b6ccc6aadeab8f0199e57871a6160250498.scope - libcontainer container ddb0d6a79382dc91bdf9ad21d83d9b6ccc6aadeab8f0199e57871a6160250498.
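The entries from 00:08:20 through 00:08:24 record the cilium-sw5ht pod's init containers running strictly in sequence inside the same sandbox: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, then clean-cilium-state, each started only after the previous one's scope deactivated and its shim was cleaned up, with the long-running cilium-agent last. A hypothetical model of that ordering constraint, not CRI or kubelet code:

```python
# Model of the init-container sequence visible in this log: each init
# container must complete before the next starts; the agent comes last.
INIT_ORDER = [
    "mount-cgroup",             # started 00:08:20
    "apply-sysctl-overwrites",  # started 00:08:21
    "mount-bpf-fs",             # started 00:08:22
    "clean-cilium-state",       # started 00:08:23
]
MAIN_CONTAINER = "cilium-agent"  # started 00:08:24, keeps running

def next_container(completed):
    """Return the next container to start, given the completed init containers."""
    for name in INIT_ORDER:
        if name not in completed:
            return name
    return MAIN_CONTAINER
```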
Sep 13 00:08:24.462543 containerd[1481]: time="2025-09-13T00:08:24.462488789Z" level=info msg="StartContainer for \"ddb0d6a79382dc91bdf9ad21d83d9b6ccc6aadeab8f0199e57871a6160250498\" returns successfully"
Sep 13 00:08:25.079051 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 00:08:25.347465 kubelet[2500]: E0913 00:08:25.346135 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:26.353857 kubelet[2500]: E0913 00:08:26.353493 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:28.848436 systemd-networkd[1377]: lxc_health: Link UP
Sep 13 00:08:28.860309 systemd-networkd[1377]: lxc_health: Gained carrier
Sep 13 00:08:29.978206 kubelet[2500]: E0913 00:08:29.977427 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:30.007963 kubelet[2500]: I0913 00:08:30.007896 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sw5ht" podStartSLOduration=11.007874611 podStartE2EDuration="11.007874611s" podCreationTimestamp="2025-09-13 00:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:25.380155519 +0000 UTC m=+107.904822801" watchObservedRunningTime="2025-09-13 00:08:30.007874611 +0000 UTC m=+112.532541882"
Sep 13 00:08:30.366237 kubelet[2500]: E0913 00:08:30.365280 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:30.606094 systemd-networkd[1377]: lxc_health: Gained IPv6LL
Sep 13 00:08:31.062228 systemd[1]: run-containerd-runc-k8s.io-ddb0d6a79382dc91bdf9ad21d83d9b6ccc6aadeab8f0199e57871a6160250498-runc.3l5x5E.mount: Deactivated successfully.
Sep 13 00:08:31.367860 kubelet[2500]: E0913 00:08:31.367298 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:32.681352 kubelet[2500]: E0913 00:08:32.681168 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 13 00:08:35.582516 sshd[4292]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:35.586698 systemd[1]: sshd@26-161.35.231.245:22-139.178.68.195:44270.service: Deactivated successfully.
Sep 13 00:08:35.590986 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:08:35.593753 systemd-logind[1459]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:08:35.595702 systemd-logind[1459]: Removed session 27.
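The pod_startup_latency_tracker entry at 00:08:30 reports podStartSLOduration=11.007874611 for cilium-sw5ht, which is just the gap between podCreationTimestamp (00:08:19) and watchObservedRunningTime (00:08:30.007874611); the zero-valued pulling timestamps mean no image pull contributed. A small sketch reproducing that arithmetic from the two timestamps (parsing only; not kubelet's tracker code):

```python
# Recomputing the reported startup SLO duration from the log line's
# creation and observed-running timestamps (both UTC).
from datetime import datetime, timezone

created = datetime(2025, 9, 13, 0, 8, 19, tzinfo=timezone.utc)
# 00:08:30.007874611 truncated to microsecond precision:
observed_running = datetime(2025, 9, 13, 0, 8, 30, 7874, tzinfo=timezone.utc)

duration = (observed_running - created).total_seconds()  # ~11.007874 s
```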
Sep 13 00:08:37.666431 containerd[1481]: time="2025-09-13T00:08:37.666372260Z" level=info msg="StopPodSandbox for \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\""
Sep 13 00:08:37.667110 containerd[1481]: time="2025-09-13T00:08:37.666521740Z" level=info msg="TearDown network for sandbox \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\" successfully"
Sep 13 00:08:37.667110 containerd[1481]: time="2025-09-13T00:08:37.666544542Z" level=info msg="StopPodSandbox for \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\" returns successfully"
Sep 13 00:08:37.667697 containerd[1481]: time="2025-09-13T00:08:37.667664292Z" level=info msg="RemovePodSandbox for \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\""
Sep 13 00:08:37.670232 containerd[1481]: time="2025-09-13T00:08:37.670187732Z" level=info msg="Forcibly stopping sandbox \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\""
Sep 13 00:08:37.670407 containerd[1481]: time="2025-09-13T00:08:37.670330157Z" level=info msg="TearDown network for sandbox \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\" successfully"
Sep 13 00:08:37.674051 containerd[1481]: time="2025-09-13T00:08:37.673953037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:08:37.674051 containerd[1481]: time="2025-09-13T00:08:37.674059713Z" level=info msg="RemovePodSandbox \"2649bff772960831b469c2561eb175d8f4de1686e71df0f05bf3b67dbcab2044\" returns successfully"
Sep 13 00:08:37.675013 containerd[1481]: time="2025-09-13T00:08:37.674954351Z" level=info msg="StopPodSandbox for \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\""
Sep 13 00:08:37.675125 containerd[1481]: time="2025-09-13T00:08:37.675083773Z" level=info msg="TearDown network for sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" successfully"
Sep 13 00:08:37.675125 containerd[1481]: time="2025-09-13T00:08:37.675102554Z" level=info msg="StopPodSandbox for \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" returns successfully"
Sep 13 00:08:37.677543 containerd[1481]: time="2025-09-13T00:08:37.675946240Z" level=info msg="RemovePodSandbox for \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\""
Sep 13 00:08:37.677543 containerd[1481]: time="2025-09-13T00:08:37.675991383Z" level=info msg="Forcibly stopping sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\""
Sep 13 00:08:37.677543 containerd[1481]: time="2025-09-13T00:08:37.676077603Z" level=info msg="TearDown network for sandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" successfully"
Sep 13 00:08:37.679892 containerd[1481]: time="2025-09-13T00:08:37.679818975Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:08:37.681752 containerd[1481]: time="2025-09-13T00:08:37.681535013Z" level=info msg="RemovePodSandbox \"026f8f55e8311a4402d5c8e45b96cba014db6dafa02ee368faa574f29acdc956\" returns successfully"
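The 00:08:37 entries show kubelet's periodic sandbox garbage collection: each old sandbox is stopped, forcibly stopped again, its network torn down, and then removed; the "not found" warning during event lookup is benign, because by that point the sandbox record is already gone and removal still returns successfully. A hypothetical sketch of that idempotent removal flow (simplified store-and-warnings model, not containerd's implementation):

```python
# Idempotent forced removal, mirroring the log: a missing sandbox during
# status lookup only produces a warning; RemovePodSandbox still succeeds.
def remove_pod_sandbox(store, sandbox_id, warnings):
    """Tear down and remove a sandbox record; succeed even if already gone."""
    if sandbox_id in store:
        del store[sandbox_id]  # tear down network, then drop the record
    else:
        # Matches the level=warning entry: event sent with nil status.
        warnings.append(f"sandbox {sandbox_id} not found; sending nil podSandboxStatus")
    return "ok"  # removal returns successfully either way
```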