Sep 12 17:36:55.925993 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:36:55.926020 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:36:55.926033 kernel: BIOS-provided physical RAM map:
Sep 12 17:36:55.926040 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 17:36:55.926046 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 17:36:55.926053 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 17:36:55.926061 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 12 17:36:55.926068 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 12 17:36:55.926074 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:36:55.926084 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 17:36:55.926091 kernel: NX (Execute Disable) protection: active
Sep 12 17:36:55.926097 kernel: APIC: Static calls initialized
Sep 12 17:36:55.926108 kernel: SMBIOS 2.8 present.
Sep 12 17:36:55.926116 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 12 17:36:55.926124 kernel: Hypervisor detected: KVM
Sep 12 17:36:55.926134 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:36:55.926156 kernel: kvm-clock: using sched offset of 2712880474 cycles
Sep 12 17:36:55.926165 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:36:55.926173 kernel: tsc: Detected 2494.140 MHz processor
Sep 12 17:36:55.926181 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:36:55.926189 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:36:55.926197 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 12 17:36:55.926208 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 17:36:55.926221 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:36:55.926240 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:36:55.926251 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 12 17:36:55.926262 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:55.926273 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:55.926285 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:55.926296 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 12 17:36:55.926307 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:55.926319 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:55.926330 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:55.926345 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:55.926356 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 12 17:36:55.926367 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 12 17:36:55.926379 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 12 17:36:55.926390 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 12 17:36:55.926403 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 12 17:36:55.926413 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 12 17:36:55.926426 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 12 17:36:55.926437 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:36:55.926446 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:36:55.926455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 17:36:55.926463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 12 17:36:55.926475 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 12 17:36:55.926484 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 12 17:36:55.926496 kernel: Zone ranges:
Sep 12 17:36:55.926504 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:36:55.926513 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 12 17:36:55.926522 kernel: Normal empty
Sep 12 17:36:55.926530 kernel: Movable zone start for each node
Sep 12 17:36:55.926538 kernel: Early memory node ranges
Sep 12 17:36:55.926547 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 17:36:55.926555 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 12 17:36:55.926563 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 12 17:36:55.926575 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:36:55.926583 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 17:36:55.926639 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 12 17:36:55.926651 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:36:55.926660 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:36:55.926668 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:36:55.926677 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:36:55.926685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:36:55.926693 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:36:55.926705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:36:55.926713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:36:55.926722 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:36:55.926730 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:36:55.926738 kernel: TSC deadline timer available
Sep 12 17:36:55.926746 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:36:55.926755 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:36:55.926763 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 12 17:36:55.926774 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:36:55.926783 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:36:55.926794 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:36:55.926802 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:36:55.926811 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:36:55.926819 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:36:55.926827 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 12 17:36:55.926837 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:36:55.926845 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:36:55.926853 kernel: random: crng init done
Sep 12 17:36:55.926864 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:36:55.926873 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:36:55.926881 kernel: Fallback order for Node 0: 0
Sep 12 17:36:55.926889 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 12 17:36:55.926897 kernel: Policy zone: DMA32
Sep 12 17:36:55.926906 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:36:55.926914 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 12 17:36:55.926923 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:36:55.926934 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:36:55.926942 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:36:55.926950 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:36:55.926959 kernel: Dynamic Preempt: voluntary
Sep 12 17:36:55.926967 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:36:55.926977 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:36:55.926985 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:36:55.926993 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:36:55.927008 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:36:55.927020 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:36:55.927036 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:36:55.927048 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:36:55.927060 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:36:55.927072 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:36:55.927087 kernel: Console: colour VGA+ 80x25
Sep 12 17:36:55.927098 kernel: printk: console [tty0] enabled
Sep 12 17:36:55.927111 kernel: printk: console [ttyS0] enabled
Sep 12 17:36:55.927124 kernel: ACPI: Core revision 20230628
Sep 12 17:36:55.927137 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:36:55.927168 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:36:55.927176 kernel: x2apic enabled
Sep 12 17:36:55.927185 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:36:55.927193 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:36:55.927201 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 12 17:36:55.927210 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep 12 17:36:55.927218 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 12 17:36:55.927227 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 12 17:36:55.927247 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:36:55.927256 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:36:55.927265 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:36:55.927276 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 12 17:36:55.927285 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:36:55.927294 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:36:55.927302 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 17:36:55.927311 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:36:55.927320 kernel: active return thunk: its_return_thunk
Sep 12 17:36:55.927335 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:36:55.927344 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:36:55.927353 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:36:55.927361 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:36:55.927370 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:36:55.927379 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 17:36:55.927388 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:36:55.927397 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:36:55.927408 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:36:55.927419 kernel: landlock: Up and running.
Sep 12 17:36:55.927433 kernel: SELinux: Initializing.
Sep 12 17:36:55.927444 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:36:55.927453 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:36:55.927462 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 12 17:36:55.927471 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:36:55.927480 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:36:55.927488 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:36:55.927501 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 12 17:36:55.927510 kernel: signal: max sigframe size: 1776
Sep 12 17:36:55.927519 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:36:55.927528 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:36:55.927537 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:36:55.927545 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:36:55.927554 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:36:55.927563 kernel: .... node #0, CPUs: #1
Sep 12 17:36:55.927574 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:36:55.927586 kernel: smpboot: Max logical packages: 1
Sep 12 17:36:55.927595 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep 12 17:36:55.927604 kernel: devtmpfs: initialized
Sep 12 17:36:55.927613 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:36:55.927622 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:36:55.927630 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:36:55.927639 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:36:55.927648 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:36:55.927656 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:36:55.927668 kernel: audit: type=2000 audit(1757698615.176:1): state=initialized audit_enabled=0 res=1
Sep 12 17:36:55.927676 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:36:55.927685 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:36:55.927693 kernel: cpuidle: using governor menu
Sep 12 17:36:55.927702 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:36:55.927711 kernel: dca service started, version 1.12.1
Sep 12 17:36:55.927720 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:36:55.927729 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:36:55.927738 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:36:55.927750 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:36:55.927759 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:36:55.927768 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:36:55.927777 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:36:55.927791 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:36:55.927800 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:36:55.927809 kernel: ACPI: Interpreter enabled
Sep 12 17:36:55.927818 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:36:55.927827 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:36:55.927838 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:36:55.927847 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:36:55.927856 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 17:36:55.927864 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:36:55.928097 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:36:55.928252 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:36:55.928354 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:36:55.928371 kernel: acpiphp: Slot [3] registered
Sep 12 17:36:55.928380 kernel: acpiphp: Slot [4] registered
Sep 12 17:36:55.928389 kernel: acpiphp: Slot [5] registered
Sep 12 17:36:55.928397 kernel: acpiphp: Slot [6] registered
Sep 12 17:36:55.928406 kernel: acpiphp: Slot [7] registered
Sep 12 17:36:55.928414 kernel: acpiphp: Slot [8] registered
Sep 12 17:36:55.928423 kernel: acpiphp: Slot [9] registered
Sep 12 17:36:55.928432 kernel: acpiphp: Slot [10] registered
Sep 12 17:36:55.928440 kernel: acpiphp: Slot [11] registered
Sep 12 17:36:55.928449 kernel: acpiphp: Slot [12] registered
Sep 12 17:36:55.928461 kernel: acpiphp: Slot [13] registered
Sep 12 17:36:55.928469 kernel: acpiphp: Slot [14] registered
Sep 12 17:36:55.928478 kernel: acpiphp: Slot [15] registered
Sep 12 17:36:55.928487 kernel: acpiphp: Slot [16] registered
Sep 12 17:36:55.928495 kernel: acpiphp: Slot [17] registered
Sep 12 17:36:55.928504 kernel: acpiphp: Slot [18] registered
Sep 12 17:36:55.928513 kernel: acpiphp: Slot [19] registered
Sep 12 17:36:55.928522 kernel: acpiphp: Slot [20] registered
Sep 12 17:36:55.928530 kernel: acpiphp: Slot [21] registered
Sep 12 17:36:55.928542 kernel: acpiphp: Slot [22] registered
Sep 12 17:36:55.928551 kernel: acpiphp: Slot [23] registered
Sep 12 17:36:55.928560 kernel: acpiphp: Slot [24] registered
Sep 12 17:36:55.928568 kernel: acpiphp: Slot [25] registered
Sep 12 17:36:55.928577 kernel: acpiphp: Slot [26] registered
Sep 12 17:36:55.928586 kernel: acpiphp: Slot [27] registered
Sep 12 17:36:55.928595 kernel: acpiphp: Slot [28] registered
Sep 12 17:36:55.928603 kernel: acpiphp: Slot [29] registered
Sep 12 17:36:55.928612 kernel: acpiphp: Slot [30] registered
Sep 12 17:36:55.928621 kernel: acpiphp: Slot [31] registered
Sep 12 17:36:55.928637 kernel: PCI host bridge to bus 0000:00
Sep 12 17:36:55.928805 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:36:55.928946 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:36:55.929087 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:36:55.929210 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 17:36:55.929325 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 12 17:36:55.929418 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:36:55.929573 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 17:36:55.929753 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 12 17:36:55.929871 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 12 17:36:55.929969 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 12 17:36:55.930065 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 12 17:36:55.931005 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 12 17:36:55.931160 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 12 17:36:55.931263 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 12 17:36:55.931395 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 12 17:36:55.931495 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 12 17:36:55.931693 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 12 17:36:55.931805 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 12 17:36:55.931909 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 12 17:36:55.932023 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 12 17:36:55.932121 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 12 17:36:55.932230 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 12 17:36:55.932339 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 12 17:36:55.932437 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 12 17:36:55.932534 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:36:55.932680 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:36:55.933377 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 12 17:36:55.933492 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 12 17:36:55.933590 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 12 17:36:55.933705 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:36:55.933803 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 12 17:36:55.933898 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 12 17:36:55.934001 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 12 17:36:55.934111 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 12 17:36:55.935314 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 12 17:36:55.935452 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 12 17:36:55.935553 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 12 17:36:55.935668 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:36:55.935767 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 17:36:55.935871 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 12 17:36:55.935965 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 12 17:36:55.936071 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:36:55.938256 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 12 17:36:55.938440 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 12 17:36:55.938557 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 12 17:36:55.938718 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 12 17:36:55.938902 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 12 17:36:55.939046 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 12 17:36:55.939060 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:36:55.939070 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:36:55.939079 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:36:55.939088 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:36:55.939097 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:36:55.939112 kernel: iommu: Default domain type: Translated
Sep 12 17:36:55.939121 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:36:55.939130 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:36:55.939275 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:36:55.939289 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 17:36:55.939298 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 12 17:36:55.939409 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 12 17:36:55.939506 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 12 17:36:55.939606 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:36:55.939618 kernel: vgaarb: loaded
Sep 12 17:36:55.939628 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:36:55.939637 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:36:55.939646 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:36:55.939655 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:36:55.939665 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:36:55.939674 kernel: pnp: PnP ACPI init
Sep 12 17:36:55.939682 kernel: pnp: PnP ACPI: found 4 devices
Sep 12 17:36:55.939695 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:36:55.939704 kernel: NET: Registered PF_INET protocol family
Sep 12 17:36:55.939740 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:36:55.939752 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:36:55.939765 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:36:55.939777 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:36:55.939791 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:36:55.939805 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:36:55.939819 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:36:55.939839 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:36:55.939848 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:36:55.939857 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:36:55.939965 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:36:55.940053 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:36:55.940138 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:36:55.941343 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 17:36:55.941436 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 12 17:36:55.941564 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 12 17:36:55.941707 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:36:55.941722 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 17:36:55.941841 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 29328 usecs
Sep 12 17:36:55.941854 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:36:55.941863 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:36:55.941874 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 12 17:36:55.941883 kernel: Initialise system trusted keyrings
Sep 12 17:36:55.941892 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:36:55.941906 kernel: Key type asymmetric registered
Sep 12 17:36:55.941914 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:36:55.941923 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:36:55.941932 kernel: io scheduler mq-deadline registered
Sep 12 17:36:55.941941 kernel: io scheduler kyber registered
Sep 12 17:36:55.941950 kernel: io scheduler bfq registered
Sep 12 17:36:55.941959 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:36:55.941968 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 12 17:36:55.941977 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 12 17:36:55.941989 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 12 17:36:55.941998 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:36:55.942007 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:36:55.942016 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:36:55.942025 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:36:55.942033 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:36:55.942177 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 12 17:36:55.942273 kernel: rtc_cmos 00:03: registered as rtc0
Sep 12 17:36:55.942366 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T17:36:55 UTC (1757698615)
Sep 12 17:36:55.942378 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 12 17:36:55.942464 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 12 17:36:55.942476 kernel: intel_pstate: CPU model not supported
Sep 12 17:36:55.942486 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:36:55.942494 kernel: Segment Routing with IPv6
Sep 12 17:36:55.942504 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:36:55.942513 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:36:55.942522 kernel: Key type dns_resolver registered
Sep 12 17:36:55.942534 kernel: IPI shorthand broadcast: enabled
Sep 12 17:36:55.942543 kernel: sched_clock: Marking stable (882005764, 84332349)->(1061671502, -95333389)
Sep 12 17:36:55.942551 kernel: registered taskstats version 1
Sep 12 17:36:55.942560 kernel: Loading compiled-in X.509 certificates
Sep 12 17:36:55.942569 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:36:55.942613 kernel: Key type .fscrypt registered
Sep 12 17:36:55.942626 kernel: Key type fscrypt-provisioning registered
Sep 12 17:36:55.942638 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:36:55.942651 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:36:55.942660 kernel: ima: No architecture policies found
Sep 12 17:36:55.942669 kernel: clk: Disabling unused clocks
Sep 12 17:36:55.942678 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:36:55.942688 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:36:55.942716 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:36:55.942728 kernel: Run /init as init process
Sep 12 17:36:55.942738 kernel: with arguments:
Sep 12 17:36:55.942747 kernel: /init
Sep 12 17:36:55.942759 kernel: with environment:
Sep 12 17:36:55.942769 kernel: HOME=/
Sep 12 17:36:55.942778 kernel: TERM=linux
Sep 12 17:36:55.942787 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:36:55.942799 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:36:55.942811 systemd[1]: Detected virtualization kvm.
Sep 12 17:36:55.942824 systemd[1]: Detected architecture x86-64.
Sep 12 17:36:55.942839 systemd[1]: Running in initrd.
Sep 12 17:36:55.942853 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:36:55.942862 systemd[1]: Hostname set to .
Sep 12 17:36:55.942872 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:36:55.942881 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:36:55.942894 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:36:55.942904 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:36:55.942914 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:36:55.942924 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:36:55.942936 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:36:55.942946 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:36:55.942957 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:36:55.942967 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:36:55.942977 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:36:55.942991 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:36:55.943003 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:36:55.943016 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:36:55.943026 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:36:55.943038 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:36:55.943048 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:36:55.943058 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:36:55.943071 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:36:55.943081 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:36:55.943090 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:36:55.943100 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:36:55.943110 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:36:55.943120 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:36:55.943130 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:36:55.943139 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:36:55.945225 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:36:55.945246 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:36:55.945257 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:36:55.945267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:36:55.945278 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:55.945288 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:36:55.945325 systemd-journald[183]: Collecting audit messages is disabled.
Sep 12 17:36:55.945352 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:36:55.945362 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:36:55.945374 systemd-journald[183]: Journal started
Sep 12 17:36:55.945398 systemd-journald[183]: Runtime Journal (/run/log/journal/d471fb85cc354147bfea378eee56713e) is 4.9M, max 39.3M, 34.4M free.
Sep 12 17:36:55.948184 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:36:55.958803 systemd-modules-load[185]: Inserted module 'overlay'
Sep 12 17:36:55.986204 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:36:55.989174 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:36:55.989210 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:55.993191 kernel: Bridge firewalling registered
Sep 12 17:36:55.994242 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 12 17:36:56.000493 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:36:56.002325 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:36:56.004203 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:36:56.004857 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:36:56.016012 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:36:56.026400 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:36:56.030213 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:36:56.035340 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:36:56.035969 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:36:56.042243 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:36:56.051858 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:36:56.054614 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:36:56.076615 dracut-cmdline[213]: dracut-dracut-053
Sep 12 17:36:56.080172 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:36:56.090212 systemd-resolved[217]: Positive Trust Anchors:
Sep 12 17:36:56.091012 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:36:56.091080 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:36:56.097024 systemd-resolved[217]: Defaulting to hostname 'linux'.
Sep 12 17:36:56.099683 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:36:56.100243 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:36:56.182224 kernel: SCSI subsystem initialized
Sep 12 17:36:56.192191 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:36:56.203192 kernel: iscsi: registered transport (tcp)
Sep 12 17:36:56.226178 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:36:56.226256 kernel: QLogic iSCSI HBA Driver
Sep 12 17:36:56.276166 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:36:56.283434 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:36:56.316226 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:36:56.316319 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:36:56.317894 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:36:56.361198 kernel: raid6: avx2x4 gen() 16280 MB/s
Sep 12 17:36:56.378203 kernel: raid6: avx2x2 gen() 17399 MB/s
Sep 12 17:36:56.395318 kernel: raid6: avx2x1 gen() 13804 MB/s
Sep 12 17:36:56.395398 kernel: raid6: using algorithm avx2x2 gen() 17399 MB/s
Sep 12 17:36:56.413529 kernel: raid6: .... xor() 20170 MB/s, rmw enabled
Sep 12 17:36:56.413622 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:36:56.435178 kernel: xor: automatically using best checksumming function avx
Sep 12 17:36:56.597189 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:36:56.611458 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:36:56.623453 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:36:56.637286 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Sep 12 17:36:56.642630 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:36:56.650721 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:36:56.669528 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Sep 12 17:36:56.717248 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:36:56.722473 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:36:56.797354 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:36:56.804662 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:36:56.837627 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:36:56.841577 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:36:56.842156 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:36:56.843845 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:36:56.851352 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:36:56.887603 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:36:56.903220 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 12 17:36:56.905658 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 12 17:36:56.908575 kernel: ACPI: bus type USB registered
Sep 12 17:36:56.908662 kernel: usbcore: registered new interface driver usbfs
Sep 12 17:36:56.921758 kernel: usbcore: registered new interface driver hub
Sep 12 17:36:56.921840 kernel: usbcore: registered new device driver usb
Sep 12 17:36:56.928991 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:36:56.929078 kernel: GPT:9289727 != 125829119
Sep 12 17:36:56.929093 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:36:56.929125 kernel: GPT:9289727 != 125829119
Sep 12 17:36:56.929139 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:36:56.929170 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:36:56.934239 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:36:56.947182 kernel: scsi host0: Virtio SCSI HBA
Sep 12 17:36:56.953204 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 12 17:36:56.962623 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Sep 12 17:36:56.967179 kernel: libata version 3.00 loaded.
Sep 12 17:36:56.970429 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 12 17:36:56.975197 kernel: scsi host1: ata_piix
Sep 12 17:36:56.976343 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:36:56.977002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:36:56.977994 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:36:56.978917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:36:56.979088 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:56.980620 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:56.985527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:56.989209 kernel: scsi host2: ata_piix
Sep 12 17:36:56.993408 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 12 17:36:56.993856 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 12 17:36:57.004177 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:36:57.012753 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:36:57.067187 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458)
Sep 12 17:36:57.073842 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (460)
Sep 12 17:36:57.094648 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:36:57.106129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:57.117506 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 12 17:36:57.117794 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 12 17:36:57.117946 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 12 17:36:57.118075 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 12 17:36:57.118226 kernel: hub 1-0:1.0: USB hub found
Sep 12 17:36:57.118479 kernel: hub 1-0:1.0: 2 ports detected
Sep 12 17:36:57.124351 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:36:57.132396 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:36:57.139048 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:36:57.139747 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:36:57.145481 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:36:57.154467 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:36:57.173595 disk-uuid[537]: Primary Header is updated.
Sep 12 17:36:57.173595 disk-uuid[537]: Secondary Entries is updated.
Sep 12 17:36:57.173595 disk-uuid[537]: Secondary Header is updated.
Sep 12 17:36:57.187194 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:36:57.191393 kernel: GPT:disk_guids don't match.
Sep 12 17:36:57.191507 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:36:57.192354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:36:57.207272 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:36:57.213180 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:36:58.205217 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:36:58.205578 disk-uuid[544]: The operation has completed successfully.
Sep 12 17:36:58.248932 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:36:58.249070 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:36:58.267517 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:36:58.272218 sh[565]: Success
Sep 12 17:36:58.289210 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 12 17:36:58.366284 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:36:58.369320 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:36:58.370481 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:36:58.405650 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:36:58.405736 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:58.405757 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:36:58.406589 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:36:58.407442 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:36:58.416590 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:36:58.418388 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:36:58.424418 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:36:58.427360 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:36:58.441186 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:58.442201 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:58.442256 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:36:58.449188 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:36:58.466027 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:36:58.466580 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:58.474743 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:36:58.479429 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:36:58.611271 ignition[656]: Ignition 2.19.0
Sep 12 17:36:58.611288 ignition[656]: Stage: fetch-offline
Sep 12 17:36:58.611357 ignition[656]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:58.611370 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:36:58.614368 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:36:58.611524 ignition[656]: parsed url from cmdline: ""
Sep 12 17:36:58.611528 ignition[656]: no config URL provided
Sep 12 17:36:58.611535 ignition[656]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:36:58.611547 ignition[656]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:36:58.611553 ignition[656]: failed to fetch config: resource requires networking
Sep 12 17:36:58.611793 ignition[656]: Ignition finished successfully
Sep 12 17:36:58.629274 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:36:58.635420 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:36:58.666955 systemd-networkd[754]: lo: Link UP
Sep 12 17:36:58.666970 systemd-networkd[754]: lo: Gained carrier
Sep 12 17:36:58.669361 systemd-networkd[754]: Enumeration completed
Sep 12 17:36:58.669770 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 12 17:36:58.669774 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 12 17:36:58.670048 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:36:58.670861 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:36:58.670866 systemd-networkd[754]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:36:58.671530 systemd-networkd[754]: eth0: Link UP
Sep 12 17:36:58.671534 systemd-networkd[754]: eth0: Gained carrier
Sep 12 17:36:58.671542 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 12 17:36:58.672223 systemd[1]: Reached target network.target - Network.
Sep 12 17:36:58.678852 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:36:58.678992 systemd-networkd[754]: eth1: Link UP
Sep 12 17:36:58.678999 systemd-networkd[754]: eth1: Gained carrier
Sep 12 17:36:58.679025 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:36:58.692245 systemd-networkd[754]: eth0: DHCPv4 address 143.244.177.186/20, gateway 143.244.176.1 acquired from 169.254.169.253
Sep 12 17:36:58.696299 systemd-networkd[754]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253
Sep 12 17:36:58.707561 ignition[756]: Ignition 2.19.0
Sep 12 17:36:58.707576 ignition[756]: Stage: fetch
Sep 12 17:36:58.707866 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:58.707883 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:36:58.708704 ignition[756]: parsed url from cmdline: ""
Sep 12 17:36:58.708711 ignition[756]: no config URL provided
Sep 12 17:36:58.708720 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:36:58.708735 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:36:58.708769 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 12 17:36:58.727335 ignition[756]: GET result: OK
Sep 12 17:36:58.727453 ignition[756]: parsing config with SHA512: dc99e5c8a3fdf34edf7cad2c1c868051989cbda8909d6a67d02f60343a7b1ef2853ff0fea1d1acf36af7979d48bc3cf66df4adee32d81954472669d1a7f04339
Sep 12 17:36:58.732277 unknown[756]: fetched base config from "system"
Sep 12 17:36:58.732293 unknown[756]: fetched base config from "system"
Sep 12 17:36:58.732304 unknown[756]: fetched user config from "digitalocean"
Sep 12 17:36:58.733089 ignition[756]: fetch: fetch complete
Sep 12 17:36:58.733099 ignition[756]: fetch: fetch passed
Sep 12 17:36:58.735267 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:36:58.733376 ignition[756]: Ignition finished successfully
Sep 12 17:36:58.741400 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:36:58.766411 ignition[763]: Ignition 2.19.0
Sep 12 17:36:58.767118 ignition[763]: Stage: kargs
Sep 12 17:36:58.767337 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:58.767350 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:36:58.769471 ignition[763]: kargs: kargs passed
Sep 12 17:36:58.769864 ignition[763]: Ignition finished successfully
Sep 12 17:36:58.771016 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:36:58.776356 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:36:58.795837 ignition[769]: Ignition 2.19.0
Sep 12 17:36:58.795850 ignition[769]: Stage: disks
Sep 12 17:36:58.796068 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:58.796080 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:36:58.800285 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:36:58.797054 ignition[769]: disks: disks passed
Sep 12 17:36:58.797106 ignition[769]: Ignition finished successfully
Sep 12 17:36:58.801587 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:36:58.802297 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:36:58.803022 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:36:58.803689 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:36:58.804501 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:36:58.809341 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:36:58.827794 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:36:58.831050 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:36:58.838249 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:36:58.934172 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:36:58.934521 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:36:58.935528 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:36:58.945316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:36:58.948366 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:36:58.950381 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Sep 12 17:36:58.958294 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (786)
Sep 12 17:36:58.961569 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:36:58.962313 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:58.963105 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:36:58.966356 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:58.966382 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:36:58.963801 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:36:58.975496 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:36:58.980170 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:36:58.991900 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:36:58.994329 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:36:59.036323 coreos-metadata[788]: Sep 12 17:36:59.036 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 17:36:59.050268 coreos-metadata[788]: Sep 12 17:36:59.050 INFO Fetch successful
Sep 12 17:36:59.052458 coreos-metadata[789]: Sep 12 17:36:59.052 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 17:36:59.056836 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:36:59.058035 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep 12 17:36:59.058205 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Sep 12 17:36:59.063663 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:36:59.065505 coreos-metadata[789]: Sep 12 17:36:59.064 INFO Fetch successful
Sep 12 17:36:59.069975 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:36:59.074228 coreos-metadata[789]: Sep 12 17:36:59.074 INFO wrote hostname ci-4081.3.6-a-bde5b7e242 to /sysroot/etc/hostname
Sep 12 17:36:59.075686 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:36:59.076988 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:36:59.172734 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:36:59.175317 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:36:59.178342 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:36:59.190252 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:59.205861 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:36:59.224258 ignition[909]: INFO : Ignition 2.19.0
Sep 12 17:36:59.224258 ignition[909]: INFO : Stage: mount
Sep 12 17:36:59.225271 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:59.225271 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:36:59.227229 ignition[909]: INFO : mount: mount passed
Sep 12 17:36:59.227229 ignition[909]: INFO : Ignition finished successfully
Sep 12 17:36:59.228014 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:36:59.244360 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:36:59.405838 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:36:59.413523 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:36:59.436193 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (919)
Sep 12 17:36:59.438788 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:59.438874 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:59.438896 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:36:59.443252 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:36:59.446038 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:36:59.480552 ignition[935]: INFO : Ignition 2.19.0
Sep 12 17:36:59.480552 ignition[935]: INFO : Stage: files
Sep 12 17:36:59.481612 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:59.481612 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:36:59.482568 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:36:59.483103 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:36:59.483103 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:36:59.485909 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:36:59.486875 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:36:59.487822 unknown[935]: wrote ssh authorized keys file for user: core
Sep 12 17:36:59.488633 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:36:59.489724 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:36:59.490377 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 17:36:59.544557 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:36:59.668753 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:36:59.668753 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:36:59.668753 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 12 17:36:59.742740 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:36:59.879091 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:36:59.879091 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:36:59.880650 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:36:59.880650 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:36:59.880650 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:36:59.880650 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:36:59.883315 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:36:59.883315 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:36:59.883315 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:36:59.883315 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:36:59.883315 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:36:59.883315 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:36:59.883315 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:36:59.883315 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:36:59.883315 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 17:37:00.122299 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:37:00.146369 systemd-networkd[754]: eth0: Gained IPv6LL
Sep 12 17:37:00.479587 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:37:00.479587 ignition[935]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 17:37:00.481086 ignition[935]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:37:00.481688 ignition[935]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:37:00.481688 ignition[935]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 17:37:00.481688 ignition[935]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:37:00.483177 ignition[935]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:37:00.483177 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:37:00.483177 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:37:00.483177 ignition[935]: INFO : files: files passed
Sep 12 17:37:00.483177 ignition[935]: INFO : Ignition finished successfully
Sep 12 17:37:00.484158 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:37:00.491426 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:37:00.493358 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:37:00.499361 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:37:00.499521 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:37:00.520038 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:37:00.520038 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:37:00.523098 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:37:00.525332 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:37:00.526459 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:37:00.538521 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:37:00.576577 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:37:00.576695 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:37:00.577792 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:37:00.578489 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:37:00.579434 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:37:00.584418 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:37:00.601363 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:37:00.615487 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:37:00.629512 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:37:00.631170 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:37:00.631905 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:37:00.632615 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:37:00.632821 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:37:00.633885 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:37:00.634899 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:37:00.635764 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:37:00.636553 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:37:00.637354 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:37:00.638254 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:37:00.639422 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:37:00.640405 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:37:00.641248 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:37:00.642136 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:37:00.642921 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:37:00.643128 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:37:00.644189 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:37:00.644828 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:37:00.645803 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:37:00.646081 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:37:00.646910 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:37:00.647101 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:37:00.648439 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:37:00.648635 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:37:00.649716 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:37:00.649882 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:37:00.650912 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 17:37:00.651137 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:37:00.658620 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:37:00.659202 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:37:00.659483 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:37:00.662688 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:37:00.663832 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:37:00.666316 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:37:00.668485 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:37:00.670266 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:37:00.678660 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:37:00.678814 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:37:00.699073 ignition[988]: INFO : Ignition 2.19.0
Sep 12 17:37:00.699073 ignition[988]: INFO : Stage: umount
Sep 12 17:37:00.699073 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:37:00.699073 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:37:00.719095 ignition[988]: INFO : umount: umount passed
Sep 12 17:37:00.719095 ignition[988]: INFO : Ignition finished successfully
Sep 12 17:37:00.700590 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:37:00.718333 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:37:00.719291 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:37:00.723526 systemd-networkd[754]: eth1: Gained IPv6LL
Sep 12 17:37:00.736815 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:37:00.736933 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:37:00.738904 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:37:00.738993 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:37:00.743446 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:37:00.743541 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:37:00.744283 systemd[1]: Stopped target network.target - Network.
Sep 12 17:37:00.744709 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:37:00.744792 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:37:00.745383 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:37:00.746087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:37:00.748383 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:37:00.748993 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:37:00.749476 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:37:00.750496 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:37:00.750567 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:37:00.751599 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:37:00.751660 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:37:00.752430 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:37:00.752509 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:37:00.753181 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:37:00.753249 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:37:00.754473 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:37:00.755129 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:37:00.756263 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:37:00.756442 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:37:00.757785 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:37:00.757920 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:37:00.758221 systemd-networkd[754]: eth1: DHCPv6 lease lost
Sep 12 17:37:00.762253 systemd-networkd[754]: eth0: DHCPv6 lease lost
Sep 12 17:37:00.764447 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:37:00.764580 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:37:00.766261 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:37:00.766766 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:37:00.775305 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:37:00.776327 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:37:00.776856 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:37:00.777358 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:37:00.777971 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:37:00.779306 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:37:00.789735 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:37:00.789977 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:37:00.793168 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:37:00.793918 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:37:00.795061 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:37:00.795114 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:37:00.795614 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:37:00.795670 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:37:00.797202 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:37:00.797263 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:37:00.798101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:37:00.798175 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:37:00.805420 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:37:00.806700 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:37:00.806792 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:37:00.807345 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:37:00.807420 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:37:00.807939 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:37:00.808003 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:37:00.811286 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:37:00.811354 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:37:00.811895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:37:00.811955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:37:00.813046 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:37:00.813221 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:37:00.814254 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:37:00.814395 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:37:00.816545 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:37:00.821463 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:37:00.836152 systemd[1]: Switching root.
Sep 12 17:37:00.884093 systemd-journald[183]: Journal stopped
Sep 12 17:37:01.996304 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:37:01.996420 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:37:01.996444 kernel: SELinux: policy capability open_perms=1
Sep 12 17:37:01.996462 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:37:01.996479 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:37:01.996496 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:37:01.996514 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:37:01.996531 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:37:01.996548 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:37:01.996565 kernel: audit: type=1403 audit(1757698621.014:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:37:01.996583 systemd[1]: Successfully loaded SELinux policy in 39.208ms.
Sep 12 17:37:01.996620 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.243ms.
Sep 12 17:37:01.996642 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:37:01.996668 systemd[1]: Detected virtualization kvm.
Sep 12 17:37:01.996687 systemd[1]: Detected architecture x86-64.
Sep 12 17:37:01.996706 systemd[1]: Detected first boot.
Sep 12 17:37:01.996732 systemd[1]: Hostname set to .
Sep 12 17:37:01.996751 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:37:01.996770 zram_generator::config[1036]: No configuration found.
Sep 12 17:37:01.996794 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:37:01.996814 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:37:01.996833 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:37:01.996853 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:37:01.996874 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:37:01.996894 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:37:01.996914 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:37:01.996957 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:37:01.996981 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:37:01.997000 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:37:01.997021 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:37:01.997039 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:37:01.997057 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:37:01.997077 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:37:01.997097 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:37:01.997117 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:37:01.997144 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:37:01.997181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:37:01.997199 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:37:01.997219 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:37:01.997239 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:37:01.997260 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:37:01.997280 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:37:01.997304 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:37:01.997325 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:37:01.997345 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:37:01.997365 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:37:01.997386 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:37:01.997415 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:37:01.997435 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:37:01.997455 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:37:01.997475 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:37:01.997495 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:37:01.997517 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:37:01.997537 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:37:01.997556 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:37:01.997576 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:37:01.997595 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:01.997615 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:37:01.997635 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:37:01.997652 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:37:01.997675 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:37:01.997695 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:37:01.997716 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:37:01.997734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:37:01.997751 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:37:01.997769 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:37:01.997787 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:37:01.997806 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:37:01.997826 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:37:01.997849 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:37:01.997868 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:37:01.997886 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:37:01.997905 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:37:01.997924 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:37:01.997941 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:37:01.997960 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:37:01.997978 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:37:01.998000 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:37:01.998018 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:37:01.998036 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:37:01.998055 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:37:01.998074 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:37:01.998092 systemd[1]: Stopped verity-setup.service.
Sep 12 17:37:01.998119 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:01.998187 systemd-journald[1101]: Collecting audit messages is disabled.
Sep 12 17:37:01.998227 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:37:01.998247 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:37:01.998266 systemd-journald[1101]: Journal started
Sep 12 17:37:01.998301 systemd-journald[1101]: Runtime Journal (/run/log/journal/d471fb85cc354147bfea378eee56713e) is 4.9M, max 39.3M, 34.4M free.
Sep 12 17:37:01.736714 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:37:01.758326 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:37:01.758961 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:37:02.009425 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:37:02.006498 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:37:02.008405 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:37:02.008934 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:37:02.009964 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:37:02.015046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:37:02.016380 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:37:02.016517 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:37:02.018584 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:37:02.018755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:37:02.020545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:37:02.020719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:37:02.040233 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:37:02.053220 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:37:02.055070 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:37:02.066315 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:37:02.080365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:37:02.083510 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:37:02.084744 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:37:02.086342 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:37:02.086416 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:37:02.092477 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:37:02.109477 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:37:02.114360 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:37:02.114947 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:37:02.123479 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:37:02.129184 kernel: fuse: init (API version 7.39)
Sep 12 17:37:02.134361 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:37:02.137617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:37:02.141386 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:37:02.145338 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:37:02.146239 kernel: ACPI: bus type drm_connector registered
Sep 12 17:37:02.147137 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:37:02.148052 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:37:02.159368 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:37:02.159732 systemd-journald[1101]: Time spent on flushing to /var/log/journal/d471fb85cc354147bfea378eee56713e is 119.005ms for 983 entries.
Sep 12 17:37:02.159732 systemd-journald[1101]: System Journal (/var/log/journal/d471fb85cc354147bfea378eee56713e) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:37:02.299542 systemd-journald[1101]: Received client request to flush runtime journal.
Sep 12 17:37:02.299627 kernel: loop: module loaded
Sep 12 17:37:02.299673 kernel: loop0: detected capacity change from 0 to 221472
Sep 12 17:37:02.299701 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:37:02.160677 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:37:02.161265 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:37:02.165771 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:37:02.166447 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:37:02.190274 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:37:02.191016 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:37:02.191191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:37:02.195557 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:37:02.207398 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:37:02.231327 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:37:02.249092 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:37:02.249633 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:37:02.262362 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:37:02.307114 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:37:02.317359 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:37:02.318039 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:37:02.322181 kernel: loop1: detected capacity change from 0 to 8
Sep 12 17:37:02.322116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:37:02.333481 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:37:02.351589 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:37:02.358743 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:37:02.359977 kernel: loop2: detected capacity change from 0 to 140768
Sep 12 17:37:02.364390 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:37:02.404101 kernel: loop3: detected capacity change from 0 to 142488
Sep 12 17:37:02.457082 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Sep 12 17:37:02.457109 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Sep 12 17:37:02.483687 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:37:02.487301 kernel: loop4: detected capacity change from 0 to 221472
Sep 12 17:37:02.505182 kernel: loop5: detected capacity change from 0 to 8
Sep 12 17:37:02.516974 kernel: loop6: detected capacity change from 0 to 140768
Sep 12 17:37:02.583094 kernel: loop7: detected capacity change from 0 to 142488
Sep 12 17:37:02.601792 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 12 17:37:02.605256 (sd-merge)[1178]: Merged extensions into '/usr'.
Sep 12 17:37:02.615819 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:37:02.616318 systemd[1]: Reloading...
Sep 12 17:37:02.828198 zram_generator::config[1207]: No configuration found.
Sep 12 17:37:02.913289 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:37:02.995366 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:37:03.054677 systemd[1]: Reloading finished in 437 ms.
Sep 12 17:37:03.094987 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:37:03.099742 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:37:03.112572 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:37:03.120578 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:37:03.147388 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:37:03.147414 systemd[1]: Reloading...
Sep 12 17:37:03.193808 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:37:03.199012 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:37:03.200937 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:37:03.201634 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 12 17:37:03.201858 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 12 17:37:03.208991 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:37:03.210430 systemd-tmpfiles[1248]: Skipping /boot
Sep 12 17:37:03.269949 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:37:03.269970 systemd-tmpfiles[1248]: Skipping /boot
Sep 12 17:37:03.328242 zram_generator::config[1284]: No configuration found.
Sep 12 17:37:03.524454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:37:03.617142 systemd[1]: Reloading finished in 469 ms.
Sep 12 17:37:03.637023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:37:03.660433 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:37:03.673262 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:37:03.677407 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:37:03.683038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:37:03.691463 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:37:03.694067 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:37:03.710596 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:37:03.723656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:03.723979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:37:03.731652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:37:03.737579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:37:03.742568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:37:03.744411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:37:03.757123 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:37:03.758271 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:03.763404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:03.763736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:37:03.764008 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:37:03.765216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:03.771904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:03.772381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:37:03.783594 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:37:03.784485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:37:03.784734 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:03.797183 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:37:03.808688 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:37:03.813596 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Sep 12 17:37:03.820696 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:37:03.822616 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:37:03.828711 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:37:03.830742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:37:03.831246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:37:03.835577 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:37:03.858345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:37:03.872860 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:37:03.874895 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:37:03.875130 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:37:03.876803 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:37:03.878348 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:37:03.879583 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:37:03.880841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:37:03.892263 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:37:03.918639 augenrules[1366]: No rules
Sep 12 17:37:03.923252 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:37:03.939277 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:37:03.954460 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:37:04.004416 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:37:04.006506 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:37:04.026091 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:37:04.081367 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep 12 17:37:04.081948 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:04.082211 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:37:04.084570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:37:04.093544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:37:04.097485 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:37:04.099222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:37:04.099293 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:37:04.099317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:37:04.146173 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 12 17:37:04.149985 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep 12 17:37:04.181757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:37:04.182013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:37:04.195558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:37:04.197253 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:37:04.199108 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:37:04.208711 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:37:04.210292 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:37:04.214370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:37:04.238354 systemd-networkd[1354]: lo: Link UP
Sep 12 17:37:04.240245 systemd-networkd[1354]: lo: Gained carrier
Sep 12 17:37:04.245930 systemd-networkd[1354]: Enumeration completed
Sep 12 17:37:04.246230 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1377)
Sep 12 17:37:04.248418 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:37:04.248733 systemd-networkd[1354]: eth0: Configuring with /run/systemd/network/10-6a:1f:fb:19:2d:2c.network.
Sep 12 17:37:04.251750 systemd-networkd[1354]: eth1: Configuring with /run/systemd/network/10-52:d0:8f:3f:5c:1a.network.
Sep 12 17:37:04.252497 systemd-networkd[1354]: eth0: Link UP
Sep 12 17:37:04.252503 systemd-networkd[1354]: eth0: Gained carrier
Sep 12 17:37:04.256668 systemd-resolved[1323]: Positive Trust Anchors:
Sep 12 17:37:04.256908 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:37:04.256974 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:37:04.259561 systemd-networkd[1354]: eth1: Link UP
Sep 12 17:37:04.259568 systemd-networkd[1354]: eth1: Gained carrier
Sep 12 17:37:04.260478 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:37:04.267884 systemd-resolved[1323]: Using system hostname 'ci-4081.3.6-a-bde5b7e242'.
Sep 12 17:37:04.271347 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:37:04.272041 systemd[1]: Reached target network.target - Network.
Sep 12 17:37:04.272552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:37:04.293306 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:37:04.294014 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:37:04.385988 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:37:04.387626 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 17:37:04.391185 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 12 17:37:04.394175 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:37:04.397443 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:37:04.443946 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:37:04.484731 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 12 17:37:04.493706 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 12 17:37:04.493834 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 12 17:37:04.515196 kernel: Console: switching to colour dummy device 80x25
Sep 12 17:37:04.519553 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 12 17:37:04.519672 kernel: [drm] features: -context_init
Sep 12 17:37:04.536182 kernel: [drm] number of scanouts: 1
Sep 12 17:37:04.536314 kernel: [drm] number of cap sets: 0
Sep 12 17:37:05.169282 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep 12 17:37:05.169063 systemd-resolved[1323]: Clock change detected. Flushing caches.
Sep 12 17:37:05.169163 systemd-timesyncd[1343]: Contacted time server 51.81.226.229:123 (0.flatcar.pool.ntp.org).
Sep 12 17:37:05.169234 systemd-timesyncd[1343]: Initial clock synchronization to Fri 2025-09-12 17:37:05.169005 UTC.
Sep 12 17:37:05.186988 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:37:05.193974 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 12 17:37:05.194081 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 17:37:05.194109 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 12 17:37:05.203396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:37:05.216540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:37:05.217903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:37:05.236353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:37:05.254732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:37:05.255033 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:37:05.271639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:37:05.412039 kernel: EDAC MC: Ver: 3.0.0
Sep 12 17:37:05.426483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:37:05.439673 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:37:05.455320 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:37:05.477058 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:37:05.509000 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:37:05.510618 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:37:05.510799 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:37:05.511087 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:37:05.511252 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:37:05.511676 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:37:05.512523 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:37:05.513025 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:37:05.513118 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:37:05.513153 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:37:05.513209 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:37:05.515243 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:37:05.517268 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:37:05.524020 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:37:05.533291 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:37:05.537902 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:37:05.538885 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:37:05.541342 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:37:05.541508 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:37:05.542752 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:37:05.542789 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:37:05.556197 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:37:05.569336 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:37:05.575845 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:37:05.583137 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:37:05.588246 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:37:05.590177 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:37:05.598839 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:37:05.612125 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:37:05.622332 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:37:05.629354 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:37:05.646239 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:37:05.650116 jq[1439]: false
Sep 12 17:37:05.649527 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:37:05.651293 dbus-daemon[1437]: [system] SELinux support is enabled
Sep 12 17:37:05.654978 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:37:05.657683 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:37:05.659520 coreos-metadata[1435]: Sep 12 17:37:05.659 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 17:37:05.669183 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:37:05.672303 coreos-metadata[1435]: Sep 12 17:37:05.672 INFO Fetch successful
Sep 12 17:37:05.673730 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:37:05.687036 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:37:05.694778 jq[1448]: true
Sep 12 17:37:05.700839 extend-filesystems[1440]: Found loop4
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found loop5
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found loop6
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found loop7
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found vda
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found vda1
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found vda2
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found vda3
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found usr
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found vda4
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found vda6
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found vda7
Sep 12 17:37:05.705814 extend-filesystems[1440]: Found vda9
Sep 12 17:37:05.705814 extend-filesystems[1440]: Checking size of /dev/vda9
Sep 12 17:37:05.708695 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:37:05.710074 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:37:05.740199 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:37:05.741053 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:37:05.755317 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:37:05.755383 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:37:05.758662 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:37:05.758816 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Sep 12 17:37:05.758846 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:37:05.782374 extend-filesystems[1440]: Resized partition /dev/vda9
Sep 12 17:37:05.797115 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:37:05.811788 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 12 17:37:05.822592 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:37:05.838395 jq[1456]: true
Sep 12 17:37:05.852123 tar[1455]: linux-amd64/helm
Sep 12 17:37:05.860501 update_engine[1447]: I20250912 17:37:05.856063 1447 main.cc:92] Flatcar Update Engine starting
Sep 12 17:37:05.873280 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:37:05.879978 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:37:05.887657 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:37:05.896283 update_engine[1447]: I20250912 17:37:05.891623 1447 update_check_scheduler.cc:74] Next update check in 5m17s
Sep 12 17:37:05.902291 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:37:05.904979 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:37:05.907048 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:37:05.943416 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1355)
Sep 12 17:37:05.993468 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 12 17:37:06.025541 systemd-logind[1446]: New seat seat0.
Sep 12 17:37:06.026477 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 17:37:06.026477 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 12 17:37:06.026477 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 12 17:37:06.061852 extend-filesystems[1440]: Resized filesystem in /dev/vda9
Sep 12 17:37:06.061852 extend-filesystems[1440]: Found vdb
Sep 12 17:37:06.035246 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:37:06.035536 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:37:06.045546 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 17:37:06.045585 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:37:06.050826 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:37:06.113671 bash[1500]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:37:06.118929 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:37:06.144370 systemd[1]: Starting sshkeys.service...
Sep 12 17:37:06.219544 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 12 17:37:06.228611 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 12 17:37:06.267995 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:37:06.274259 coreos-metadata[1506]: Sep 12 17:37:06.274 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 17:37:06.289957 coreos-metadata[1506]: Sep 12 17:37:06.288 INFO Fetch successful
Sep 12 17:37:06.304105 unknown[1506]: wrote ssh authorized keys file for user: core
Sep 12 17:37:06.356983 update-ssh-keys[1514]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:37:06.361056 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 12 17:37:06.373028 systemd[1]: Finished sshkeys.service.
Sep 12 17:37:06.446968 containerd[1469]: time="2025-09-12T17:37:06.445787482Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:37:06.468116 systemd-networkd[1354]: eth0: Gained IPv6LL
Sep 12 17:37:06.470159 systemd-networkd[1354]: eth1: Gained IPv6LL
Sep 12 17:37:06.475897 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:37:06.480510 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:37:06.490446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:37:06.503471 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:37:06.551347 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:37:06.553618 containerd[1469]: time="2025-09-12T17:37:06.553547690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:06.558274 containerd[1469]: time="2025-09-12T17:37:06.558191343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:37:06.558274 containerd[1469]: time="2025-09-12T17:37:06.558247282Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:37:06.558274 containerd[1469]: time="2025-09-12T17:37:06.558269984Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:37:06.558501 containerd[1469]: time="2025-09-12T17:37:06.558441843Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:37:06.558501 containerd[1469]: time="2025-09-12T17:37:06.558457143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:06.558604 containerd[1469]: time="2025-09-12T17:37:06.558516383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:37:06.558604 containerd[1469]: time="2025-09-12T17:37:06.558528931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:06.560093 containerd[1469]: time="2025-09-12T17:37:06.558746206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:37:06.560093 containerd[1469]: time="2025-09-12T17:37:06.558775496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:06.560093 containerd[1469]: time="2025-09-12T17:37:06.558794645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:37:06.560093 containerd[1469]: time="2025-09-12T17:37:06.558810302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:06.560093 containerd[1469]: time="2025-09-12T17:37:06.558912536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:06.562134 containerd[1469]: time="2025-09-12T17:37:06.561534654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:37:06.562134 containerd[1469]: time="2025-09-12T17:37:06.561744820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:37:06.562134 containerd[1469]: time="2025-09-12T17:37:06.561764692Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:37:06.562134 containerd[1469]: time="2025-09-12T17:37:06.561865348Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:37:06.562134 containerd[1469]: time="2025-09-12T17:37:06.561912527Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:37:06.569070 containerd[1469]: time="2025-09-12T17:37:06.569007245Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:37:06.569208 containerd[1469]: time="2025-09-12T17:37:06.569107035Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:37:06.569208 containerd[1469]: time="2025-09-12T17:37:06.569199835Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:37:06.569321 containerd[1469]: time="2025-09-12T17:37:06.569220627Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:37:06.569321 containerd[1469]: time="2025-09-12T17:37:06.569237370Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.569478028Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.569857900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570003430Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570021980Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570078084Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570096883Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570112025Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570125545Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570140469Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570156726Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570171359Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570183284Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570198199Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:37:06.570733 containerd[1469]: time="2025-09-12T17:37:06.570220206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570234633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570270450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570287947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570301174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570315425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570328338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570371370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570387551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570401963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570414732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570429176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570442915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570470511Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570495384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571354 containerd[1469]: time="2025-09-12T17:37:06.570511100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.571873 containerd[1469]: time="2025-09-12T17:37:06.570522480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:37:06.575016 containerd[1469]: time="2025-09-12T17:37:06.573650192Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:37:06.575016 containerd[1469]: time="2025-09-12T17:37:06.573705944Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:37:06.575016 containerd[1469]: time="2025-09-12T17:37:06.573724444Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:37:06.575016 containerd[1469]: time="2025-09-12T17:37:06.573750128Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:37:06.575016 containerd[1469]: time="2025-09-12T17:37:06.573762498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 17:37:06.575016 containerd[1469]: time="2025-09-12T17:37:06.573778207Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 17:37:06.575016 containerd[1469]: time="2025-09-12T17:37:06.573792526Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 17:37:06.575016 containerd[1469]: time="2025-09-12T17:37:06.573803590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Sep 12 17:37:06.575417 containerd[1469]: time="2025-09-12T17:37:06.574180158Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:37:06.575417 containerd[1469]: time="2025-09-12T17:37:06.574254763Z" level=info msg="Connect containerd service" Sep 12 17:37:06.575417 containerd[1469]: time="2025-09-12T17:37:06.574309916Z" level=info msg="using legacy CRI server" Sep 12 17:37:06.575417 containerd[1469]: time="2025-09-12T17:37:06.574317311Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:37:06.575417 containerd[1469]: time="2025-09-12T17:37:06.574421519Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:37:06.590819 containerd[1469]: time="2025-09-12T17:37:06.590373919Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:37:06.592293 containerd[1469]: time="2025-09-12T17:37:06.590995058Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:37:06.592293 containerd[1469]: time="2025-09-12T17:37:06.591054123Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 12 17:37:06.592293 containerd[1469]: time="2025-09-12T17:37:06.591115487Z" level=info msg="Start subscribing containerd event" Sep 12 17:37:06.592293 containerd[1469]: time="2025-09-12T17:37:06.591179049Z" level=info msg="Start recovering state" Sep 12 17:37:06.592293 containerd[1469]: time="2025-09-12T17:37:06.591295976Z" level=info msg="Start event monitor" Sep 12 17:37:06.592293 containerd[1469]: time="2025-09-12T17:37:06.591323674Z" level=info msg="Start snapshots syncer" Sep 12 17:37:06.592293 containerd[1469]: time="2025-09-12T17:37:06.591339730Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:37:06.592293 containerd[1469]: time="2025-09-12T17:37:06.591349347Z" level=info msg="Start streaming server" Sep 12 17:37:06.592293 containerd[1469]: time="2025-09-12T17:37:06.591433997Z" level=info msg="containerd successfully booted in 0.147525s" Sep 12 17:37:06.591986 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:37:06.595546 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:37:06.646858 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:37:06.660580 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:37:06.684492 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:37:06.684833 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:37:06.696658 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:37:06.722905 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:37:06.733556 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:37:06.744546 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:37:06.748713 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 12 17:37:07.074078 tar[1455]: linux-amd64/LICENSE Sep 12 17:37:07.074665 tar[1455]: linux-amd64/README.md Sep 12 17:37:07.100549 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:37:07.871855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:07.873120 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:37:07.875971 systemd[1]: Startup finished in 1.013s (kernel) + 5.321s (initrd) + 6.273s (userspace) = 12.608s. Sep 12 17:37:07.893407 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:37:08.581849 kubelet[1559]: E0912 17:37:08.581711 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:37:08.585094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:37:08.585265 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:37:08.585619 systemd[1]: kubelet.service: Consumed 1.342s CPU time. Sep 12 17:37:11.205300 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:37:11.216329 systemd[1]: Started sshd@0-143.244.177.186:22-147.75.109.163:47614.service - OpenSSH per-connection server daemon (147.75.109.163:47614). Sep 12 17:37:11.280709 sshd[1571]: Accepted publickey for core from 147.75.109.163 port 47614 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:37:11.283418 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:11.295982 systemd-logind[1446]: New session 1 of user core. 
Sep 12 17:37:11.297426 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:37:11.306503 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:37:11.323418 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:37:11.330323 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:37:11.342767 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:37:11.455756 systemd[1575]: Queued start job for default target default.target. Sep 12 17:37:11.467239 systemd[1575]: Created slice app.slice - User Application Slice. Sep 12 17:37:11.467274 systemd[1575]: Reached target paths.target - Paths. Sep 12 17:37:11.467290 systemd[1575]: Reached target timers.target - Timers. Sep 12 17:37:11.468951 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:37:11.483154 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:37:11.483335 systemd[1575]: Reached target sockets.target - Sockets. Sep 12 17:37:11.483358 systemd[1575]: Reached target basic.target - Basic System. Sep 12 17:37:11.483432 systemd[1575]: Reached target default.target - Main User Target. Sep 12 17:37:11.483480 systemd[1575]: Startup finished in 132ms. Sep 12 17:37:11.483619 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:37:11.495599 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:37:11.558320 systemd[1]: Started sshd@1-143.244.177.186:22-147.75.109.163:47622.service - OpenSSH per-connection server daemon (147.75.109.163:47622). 
Sep 12 17:37:11.605482 sshd[1586]: Accepted publickey for core from 147.75.109.163 port 47622 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:37:11.607236 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:11.613784 systemd-logind[1446]: New session 2 of user core. Sep 12 17:37:11.623253 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:37:11.683123 sshd[1586]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:11.699111 systemd[1]: sshd@1-143.244.177.186:22-147.75.109.163:47622.service: Deactivated successfully. Sep 12 17:37:11.701246 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:37:11.703117 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:37:11.708376 systemd[1]: Started sshd@2-143.244.177.186:22-147.75.109.163:47624.service - OpenSSH per-connection server daemon (147.75.109.163:47624). Sep 12 17:37:11.710274 systemd-logind[1446]: Removed session 2. Sep 12 17:37:11.746748 sshd[1593]: Accepted publickey for core from 147.75.109.163 port 47624 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:37:11.748554 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:11.755232 systemd-logind[1446]: New session 3 of user core. Sep 12 17:37:11.762231 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:37:11.817966 sshd[1593]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:11.830054 systemd[1]: sshd@2-143.244.177.186:22-147.75.109.163:47624.service: Deactivated successfully. Sep 12 17:37:11.832165 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:37:11.833745 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:37:11.838329 systemd[1]: Started sshd@3-143.244.177.186:22-147.75.109.163:47636.service - OpenSSH per-connection server daemon (147.75.109.163:47636). 
Sep 12 17:37:11.840281 systemd-logind[1446]: Removed session 3. Sep 12 17:37:11.886600 sshd[1600]: Accepted publickey for core from 147.75.109.163 port 47636 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:37:11.888533 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:11.893233 systemd-logind[1446]: New session 4 of user core. Sep 12 17:37:11.904364 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:37:11.966716 sshd[1600]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:11.983168 systemd[1]: sshd@3-143.244.177.186:22-147.75.109.163:47636.service: Deactivated successfully. Sep 12 17:37:11.985635 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:37:11.988138 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:37:11.994589 systemd[1]: Started sshd@4-143.244.177.186:22-147.75.109.163:47642.service - OpenSSH per-connection server daemon (147.75.109.163:47642). Sep 12 17:37:11.996096 systemd-logind[1446]: Removed session 4. Sep 12 17:37:12.032504 sshd[1607]: Accepted publickey for core from 147.75.109.163 port 47642 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:37:12.034389 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:12.040220 systemd-logind[1446]: New session 5 of user core. Sep 12 17:37:12.047242 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 17:37:12.123441 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:37:12.123837 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:37:12.145448 sudo[1610]: pam_unix(sudo:session): session closed for user root Sep 12 17:37:12.149168 sshd[1607]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:12.165220 systemd[1]: sshd@4-143.244.177.186:22-147.75.109.163:47642.service: Deactivated successfully. Sep 12 17:37:12.167348 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:37:12.169188 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:37:12.173309 systemd[1]: Started sshd@5-143.244.177.186:22-147.75.109.163:47650.service - OpenSSH per-connection server daemon (147.75.109.163:47650). Sep 12 17:37:12.175491 systemd-logind[1446]: Removed session 5. Sep 12 17:37:12.222633 sshd[1615]: Accepted publickey for core from 147.75.109.163 port 47650 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:37:12.223759 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:12.228311 systemd-logind[1446]: New session 6 of user core. Sep 12 17:37:12.241285 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 17:37:12.301892 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:37:12.302243 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:37:12.306380 sudo[1619]: pam_unix(sudo:session): session closed for user root Sep 12 17:37:12.312784 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:37:12.313464 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:37:12.334420 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:37:12.336604 auditctl[1622]: No rules Sep 12 17:37:12.337006 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:37:12.337212 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:37:12.340770 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:37:12.378527 augenrules[1640]: No rules Sep 12 17:37:12.379848 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:37:12.381319 sudo[1618]: pam_unix(sudo:session): session closed for user root Sep 12 17:37:12.385348 sshd[1615]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:12.395027 systemd[1]: sshd@5-143.244.177.186:22-147.75.109.163:47650.service: Deactivated successfully. Sep 12 17:37:12.396867 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:37:12.399198 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:37:12.404357 systemd[1]: Started sshd@6-143.244.177.186:22-147.75.109.163:47654.service - OpenSSH per-connection server daemon (147.75.109.163:47654). Sep 12 17:37:12.406333 systemd-logind[1446]: Removed session 6. 
Sep 12 17:37:12.443705 sshd[1648]: Accepted publickey for core from 147.75.109.163 port 47654 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:37:12.445391 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:12.450511 systemd-logind[1446]: New session 7 of user core. Sep 12 17:37:12.457289 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:37:12.518219 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:37:12.519012 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:37:12.927296 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:37:12.929545 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:37:13.356199 dockerd[1667]: time="2025-09-12T17:37:13.356115073Z" level=info msg="Starting up" Sep 12 17:37:13.464870 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport268898975-merged.mount: Deactivated successfully. Sep 12 17:37:13.487972 dockerd[1667]: time="2025-09-12T17:37:13.487903680Z" level=info msg="Loading containers: start." Sep 12 17:37:13.615967 kernel: Initializing XFRM netlink socket Sep 12 17:37:13.711679 systemd-networkd[1354]: docker0: Link UP Sep 12 17:37:13.729960 dockerd[1667]: time="2025-09-12T17:37:13.729898998Z" level=info msg="Loading containers: done." 
Sep 12 17:37:13.747149 dockerd[1667]: time="2025-09-12T17:37:13.747085402Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:37:13.747321 dockerd[1667]: time="2025-09-12T17:37:13.747251101Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:37:13.747442 dockerd[1667]: time="2025-09-12T17:37:13.747415053Z" level=info msg="Daemon has completed initialization" Sep 12 17:37:13.777512 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:37:13.778662 dockerd[1667]: time="2025-09-12T17:37:13.778267226Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:37:14.749828 containerd[1469]: time="2025-09-12T17:37:14.749750303Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:37:15.395371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836434505.mount: Deactivated successfully. 
Sep 12 17:37:16.514574 containerd[1469]: time="2025-09-12T17:37:16.514186079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:16.515324 containerd[1469]: time="2025-09-12T17:37:16.515282094Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:37:16.515502 containerd[1469]: time="2025-09-12T17:37:16.515478068Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:16.518424 containerd[1469]: time="2025-09-12T17:37:16.518380085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:16.520488 containerd[1469]: time="2025-09-12T17:37:16.519961445Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.769516376s" Sep 12 17:37:16.520488 containerd[1469]: time="2025-09-12T17:37:16.520009470Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:37:16.521479 containerd[1469]: time="2025-09-12T17:37:16.520637808Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:37:18.082983 containerd[1469]: time="2025-09-12T17:37:18.082655539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:18.084875 containerd[1469]: time="2025-09-12T17:37:18.084803663Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 17:37:18.086992 containerd[1469]: time="2025-09-12T17:37:18.086031893Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:18.088770 containerd[1469]: time="2025-09-12T17:37:18.088721517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:18.090141 containerd[1469]: time="2025-09-12T17:37:18.090107494Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.569437365s" Sep 12 17:37:18.090262 containerd[1469]: time="2025-09-12T17:37:18.090246921Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 17:37:18.090852 containerd[1469]: time="2025-09-12T17:37:18.090830308Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:37:18.765652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:37:18.774312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:19.106287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:37:19.118568 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:37:19.225567 kubelet[1886]: E0912 17:37:19.225483 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:37:19.232467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:37:19.232725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:37:19.447088 containerd[1469]: time="2025-09-12T17:37:19.446277152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:19.448112 containerd[1469]: time="2025-09-12T17:37:19.447886514Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 17:37:19.448980 containerd[1469]: time="2025-09-12T17:37:19.448397362Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:19.453007 containerd[1469]: time="2025-09-12T17:37:19.452628288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:19.455106 containerd[1469]: time="2025-09-12T17:37:19.454436813Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.363499472s" Sep 12 17:37:19.455106 containerd[1469]: time="2025-09-12T17:37:19.454490751Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:37:19.455918 containerd[1469]: time="2025-09-12T17:37:19.455658821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:37:20.773847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420127293.mount: Deactivated successfully. Sep 12 17:37:21.402762 containerd[1469]: time="2025-09-12T17:37:21.402671643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:21.403877 containerd[1469]: time="2025-09-12T17:37:21.403802769Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 17:37:21.404485 containerd[1469]: time="2025-09-12T17:37:21.404441684Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:21.407960 containerd[1469]: time="2025-09-12T17:37:21.407892576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:21.409139 containerd[1469]: time="2025-09-12T17:37:21.409077660Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.953374404s" Sep 12 17:37:21.409139 containerd[1469]: time="2025-09-12T17:37:21.409133725Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:37:21.410255 containerd[1469]: time="2025-09-12T17:37:21.409970170Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:37:21.412443 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Sep 12 17:37:21.878160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128085958.mount: Deactivated successfully. Sep 12 17:37:22.865055 containerd[1469]: time="2025-09-12T17:37:22.864976953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:22.866739 containerd[1469]: time="2025-09-12T17:37:22.866678410Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:37:22.867471 containerd[1469]: time="2025-09-12T17:37:22.866954857Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:22.870849 containerd[1469]: time="2025-09-12T17:37:22.870803921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:22.873061 containerd[1469]: time="2025-09-12T17:37:22.872972524Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.462931281s" Sep 12 17:37:22.873061 containerd[1469]: time="2025-09-12T17:37:22.873060830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:37:22.873886 containerd[1469]: time="2025-09-12T17:37:22.873706600Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:37:23.280783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1949625278.mount: Deactivated successfully. Sep 12 17:37:23.285146 containerd[1469]: time="2025-09-12T17:37:23.285074650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:23.287268 containerd[1469]: time="2025-09-12T17:37:23.287153200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:37:23.289004 containerd[1469]: time="2025-09-12T17:37:23.288182115Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:23.291603 containerd[1469]: time="2025-09-12T17:37:23.291561882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:23.294075 containerd[1469]: time="2025-09-12T17:37:23.294025471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 420.270863ms" Sep 12 17:37:23.294293 containerd[1469]: time="2025-09-12T17:37:23.294267860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:37:23.295171 containerd[1469]: time="2025-09-12T17:37:23.295069223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:37:23.819486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1324153542.mount: Deactivated successfully. Sep 12 17:37:24.516236 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Sep 12 17:37:25.829550 containerd[1469]: time="2025-09-12T17:37:25.829483431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:25.832096 containerd[1469]: time="2025-09-12T17:37:25.831984605Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 17:37:25.832965 containerd[1469]: time="2025-09-12T17:37:25.832548172Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:25.836934 containerd[1469]: time="2025-09-12T17:37:25.836874962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:25.839956 containerd[1469]: time="2025-09-12T17:37:25.838849219Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest 
\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.543557959s" Sep 12 17:37:25.839956 containerd[1469]: time="2025-09-12T17:37:25.838908077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:37:28.684718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:28.698472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:28.738889 systemd[1]: Reloading requested from client PID 2038 ('systemctl') (unit session-7.scope)... Sep 12 17:37:28.738912 systemd[1]: Reloading... Sep 12 17:37:28.886981 zram_generator::config[2077]: No configuration found. Sep 12 17:37:29.018727 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:37:29.107367 systemd[1]: Reloading finished in 367 ms. Sep 12 17:37:29.168176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:29.168867 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:37:29.174810 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:29.176343 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:37:29.176678 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:29.181335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:29.319539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:37:29.332590 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:37:29.396856 kubelet[2138]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:37:29.396856 kubelet[2138]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:37:29.396856 kubelet[2138]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:37:29.397363 kubelet[2138]: I0912 17:37:29.396949 2138 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:37:29.695354 kubelet[2138]: I0912 17:37:29.695195 2138 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:37:29.695354 kubelet[2138]: I0912 17:37:29.695252 2138 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:37:29.695563 kubelet[2138]: I0912 17:37:29.695537 2138 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:37:29.730904 kubelet[2138]: I0912 17:37:29.730658 2138 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:37:29.732404 kubelet[2138]: E0912 17:37:29.732361 2138 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://143.244.177.186:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.244.177.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:37:29.743408 kubelet[2138]: E0912 17:37:29.743350 2138 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:37:29.743408 kubelet[2138]: I0912 17:37:29.743408 2138 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:37:29.750428 kubelet[2138]: I0912 17:37:29.750366 2138 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:37:29.751476 kubelet[2138]: I0912 17:37:29.751426 2138 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:37:29.751794 kubelet[2138]: I0912 17:37:29.751733 2138 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:37:29.752115 kubelet[2138]: I0912 17:37:29.751797 2138 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.6-a-bde5b7e242","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:37:29.752246 kubelet[2138]: I0912 17:37:29.752159 2138 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:37:29.752246 kubelet[2138]: I0912 17:37:29.752179 2138 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:37:29.752516 kubelet[2138]: I0912 17:37:29.752398 2138 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:37:29.756531 kubelet[2138]: I0912 17:37:29.755848 2138 
kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:37:29.756531 kubelet[2138]: I0912 17:37:29.755914 2138 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:37:29.756531 kubelet[2138]: I0912 17:37:29.756002 2138 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:37:29.756531 kubelet[2138]: I0912 17:37:29.756045 2138 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:37:29.763119 kubelet[2138]: W0912 17:37:29.763054 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.244.177.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-bde5b7e242&limit=500&resourceVersion=0": dial tcp 143.244.177.186:6443: connect: connection refused Sep 12 17:37:29.763860 kubelet[2138]: E0912 17:37:29.763789 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.244.177.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-bde5b7e242&limit=500&resourceVersion=0\": dial tcp 143.244.177.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:37:29.764369 kubelet[2138]: I0912 17:37:29.764341 2138 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:37:29.770863 kubelet[2138]: I0912 17:37:29.770710 2138 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:37:29.771507 kubelet[2138]: W0912 17:37:29.771475 2138 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 12 17:37:29.772473 kubelet[2138]: I0912 17:37:29.772199 2138 server.go:1274] "Started kubelet" Sep 12 17:37:29.776363 kubelet[2138]: I0912 17:37:29.775858 2138 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:37:29.781173 kubelet[2138]: W0912 17:37:29.777070 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.244.177.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.244.177.186:6443: connect: connection refused Sep 12 17:37:29.781173 kubelet[2138]: E0912 17:37:29.780214 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.244.177.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.244.177.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:37:29.782095 kubelet[2138]: I0912 17:37:29.781531 2138 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:37:29.784660 kubelet[2138]: I0912 17:37:29.784637 2138 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:37:29.786181 kubelet[2138]: I0912 17:37:29.786154 2138 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:37:29.786471 kubelet[2138]: E0912 17:37:29.786439 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-bde5b7e242\" not found" Sep 12 17:37:29.789099 kubelet[2138]: I0912 17:37:29.787671 2138 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:37:29.789099 kubelet[2138]: I0912 17:37:29.787997 2138 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:37:29.793858 kubelet[2138]: I0912 17:37:29.793268 2138 desired_state_of_world_populator.go:147] "Desired state 
populator starts to run" Sep 12 17:37:29.793858 kubelet[2138]: I0912 17:37:29.793348 2138 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:37:29.795784 kubelet[2138]: I0912 17:37:29.795751 2138 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:37:29.800914 kubelet[2138]: E0912 17:37:29.800862 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.244.177.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-bde5b7e242?timeout=10s\": dial tcp 143.244.177.186:6443: connect: connection refused" interval="200ms" Sep 12 17:37:29.806992 kubelet[2138]: I0912 17:37:29.806105 2138 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:37:29.808250 kubelet[2138]: I0912 17:37:29.808197 2138 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:37:29.808421 kubelet[2138]: I0912 17:37:29.808276 2138 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:37:29.808421 kubelet[2138]: I0912 17:37:29.808310 2138 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:37:29.808506 kubelet[2138]: E0912 17:37:29.808382 2138 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:37:29.812971 kubelet[2138]: E0912 17:37:29.803324 2138 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.244.177.186:6443/api/v1/namespaces/default/events\": dial tcp 143.244.177.186:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-a-bde5b7e242.1864999bf9da8b23 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-a-bde5b7e242,UID:ci-4081.3.6-a-bde5b7e242,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-a-bde5b7e242,},FirstTimestamp:2025-09-12 17:37:29.772165923 +0000 UTC m=+0.432914635,LastTimestamp:2025-09-12 17:37:29.772165923 +0000 UTC m=+0.432914635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-a-bde5b7e242,}" Sep 12 17:37:29.812971 kubelet[2138]: I0912 17:37:29.810139 2138 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:37:29.812971 kubelet[2138]: I0912 17:37:29.810301 2138 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:37:29.815145 kubelet[2138]: W0912 17:37:29.815075 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.244.177.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.244.177.186:6443: connect: connection refused Sep 12 17:37:29.815365 kubelet[2138]: E0912 17:37:29.815339 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.244.177.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.244.177.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:37:29.815614 kubelet[2138]: E0912 17:37:29.815594 2138 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:37:29.818670 kubelet[2138]: W0912 17:37:29.818582 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.244.177.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.244.177.186:6443: connect: connection refused Sep 12 17:37:29.818822 kubelet[2138]: E0912 17:37:29.818673 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.244.177.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.244.177.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:37:29.820481 kubelet[2138]: I0912 17:37:29.820369 2138 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:37:29.846127 kubelet[2138]: I0912 17:37:29.845981 2138 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:37:29.846568 kubelet[2138]: I0912 17:37:29.846290 2138 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:37:29.846568 kubelet[2138]: I0912 17:37:29.846328 2138 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:37:29.848541 kubelet[2138]: I0912 17:37:29.848373 2138 policy_none.go:49] "None policy: Start" Sep 12 17:37:29.849743 kubelet[2138]: I0912 17:37:29.849447 2138 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:37:29.849743 kubelet[2138]: I0912 17:37:29.849479 2138 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:37:29.860628 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:37:29.872789 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 12 17:37:29.876928 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:37:29.886846 kubelet[2138]: E0912 17:37:29.886794 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-bde5b7e242\" not found" Sep 12 17:37:29.889005 kubelet[2138]: I0912 17:37:29.888530 2138 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:37:29.889005 kubelet[2138]: I0912 17:37:29.888761 2138 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:37:29.889005 kubelet[2138]: I0912 17:37:29.888775 2138 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:37:29.889545 kubelet[2138]: I0912 17:37:29.889518 2138 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:37:29.891421 kubelet[2138]: E0912 17:37:29.891387 2138 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-a-bde5b7e242\" not found" Sep 12 17:37:29.920511 systemd[1]: Created slice kubepods-burstable-podff8b73010dfd8a954fc2053ba30c1add.slice - libcontainer container kubepods-burstable-podff8b73010dfd8a954fc2053ba30c1add.slice. Sep 12 17:37:29.941682 systemd[1]: Created slice kubepods-burstable-pod8cd6e98cd4e2270f246ec4c43be9379a.slice - libcontainer container kubepods-burstable-pod8cd6e98cd4e2270f246ec4c43be9379a.slice. Sep 12 17:37:29.948882 systemd[1]: Created slice kubepods-burstable-pod33c11d7ea81d90cbe6c4dbad8e1a0333.slice - libcontainer container kubepods-burstable-pod33c11d7ea81d90cbe6c4dbad8e1a0333.slice. 
Sep 12 17:37:29.990549 kubelet[2138]: I0912 17:37:29.990491 2138 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:29.990908 kubelet[2138]: E0912 17:37:29.990882 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.244.177.186:6443/api/v1/nodes\": dial tcp 143.244.177.186:6443: connect: connection refused" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.001864 kubelet[2138]: E0912 17:37:30.001789 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.244.177.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-bde5b7e242?timeout=10s\": dial tcp 143.244.177.186:6443: connect: connection refused" interval="400ms" Sep 12 17:37:30.096739 kubelet[2138]: I0912 17:37:30.096292 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8cd6e98cd4e2270f246ec4c43be9379a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-bde5b7e242\" (UID: \"8cd6e98cd4e2270f246ec4c43be9379a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.096739 kubelet[2138]: I0912 17:37:30.096357 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33c11d7ea81d90cbe6c4dbad8e1a0333-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-bde5b7e242\" (UID: \"33c11d7ea81d90cbe6c4dbad8e1a0333\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.096739 kubelet[2138]: I0912 17:37:30.096385 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff8b73010dfd8a954fc2053ba30c1add-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-bde5b7e242\" (UID: 
\"ff8b73010dfd8a954fc2053ba30c1add\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.096739 kubelet[2138]: I0912 17:37:30.096486 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8cd6e98cd4e2270f246ec4c43be9379a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-bde5b7e242\" (UID: \"8cd6e98cd4e2270f246ec4c43be9379a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.096739 kubelet[2138]: I0912 17:37:30.096521 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8cd6e98cd4e2270f246ec4c43be9379a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-bde5b7e242\" (UID: \"8cd6e98cd4e2270f246ec4c43be9379a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.097075 kubelet[2138]: I0912 17:37:30.096545 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8cd6e98cd4e2270f246ec4c43be9379a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-bde5b7e242\" (UID: \"8cd6e98cd4e2270f246ec4c43be9379a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.097075 kubelet[2138]: I0912 17:37:30.096572 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8cd6e98cd4e2270f246ec4c43be9379a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-bde5b7e242\" (UID: \"8cd6e98cd4e2270f246ec4c43be9379a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.097075 kubelet[2138]: I0912 17:37:30.096598 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff8b73010dfd8a954fc2053ba30c1add-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-bde5b7e242\" (UID: \"ff8b73010dfd8a954fc2053ba30c1add\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.097075 kubelet[2138]: I0912 17:37:30.096619 2138 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff8b73010dfd8a954fc2053ba30c1add-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-bde5b7e242\" (UID: \"ff8b73010dfd8a954fc2053ba30c1add\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.192640 kubelet[2138]: I0912 17:37:30.192599 2138 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.193138 kubelet[2138]: E0912 17:37:30.193066 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.244.177.186:6443/api/v1/nodes\": dial tcp 143.244.177.186:6443: connect: connection refused" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.237922 kubelet[2138]: E0912 17:37:30.237757 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:30.239639 containerd[1469]: time="2025-09-12T17:37:30.239063275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-bde5b7e242,Uid:ff8b73010dfd8a954fc2053ba30c1add,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:30.242038 systemd-resolved[1323]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Sep 12 17:37:30.245931 kubelet[2138]: E0912 17:37:30.245879 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:30.254175 containerd[1469]: time="2025-09-12T17:37:30.254092963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-bde5b7e242,Uid:8cd6e98cd4e2270f246ec4c43be9379a,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:30.261447 kubelet[2138]: E0912 17:37:30.261275 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:30.262795 containerd[1469]: time="2025-09-12T17:37:30.262461310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-bde5b7e242,Uid:33c11d7ea81d90cbe6c4dbad8e1a0333,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:30.403500 kubelet[2138]: E0912 17:37:30.403429 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.244.177.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-bde5b7e242?timeout=10s\": dial tcp 143.244.177.186:6443: connect: connection refused" interval="800ms" Sep 12 17:37:30.594651 kubelet[2138]: I0912 17:37:30.594582 2138 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.595044 kubelet[2138]: E0912 17:37:30.595002 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.244.177.186:6443/api/v1/nodes\": dial tcp 143.244.177.186:6443: connect: connection refused" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:30.733555 kubelet[2138]: W0912 17:37:30.733465 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://143.244.177.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-bde5b7e242&limit=500&resourceVersion=0": dial tcp 143.244.177.186:6443: connect: connection refused Sep 12 17:37:30.733706 kubelet[2138]: E0912 17:37:30.733568 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.244.177.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-bde5b7e242&limit=500&resourceVersion=0\": dial tcp 143.244.177.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:37:30.745505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375073093.mount: Deactivated successfully. Sep 12 17:37:30.751010 containerd[1469]: time="2025-09-12T17:37:30.750923720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:37:30.752696 containerd[1469]: time="2025-09-12T17:37:30.752622508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:37:30.753982 containerd[1469]: time="2025-09-12T17:37:30.753046342Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:37:30.753982 containerd[1469]: time="2025-09-12T17:37:30.753741653Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:37:30.754585 containerd[1469]: time="2025-09-12T17:37:30.754526238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:37:30.754724 containerd[1469]: 
time="2025-09-12T17:37:30.754700939Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:37:30.755221 containerd[1469]: time="2025-09-12T17:37:30.755193067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:37:30.759278 containerd[1469]: time="2025-09-12T17:37:30.759216337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:37:30.761887 containerd[1469]: time="2025-09-12T17:37:30.761825990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.229239ms" Sep 12 17:37:30.765068 containerd[1469]: time="2025-09-12T17:37:30.765026150Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.875952ms" Sep 12 17:37:30.768738 containerd[1469]: time="2025-09-12T17:37:30.768682739Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 514.470379ms" Sep 12 17:37:30.862997 
kubelet[2138]: W0912 17:37:30.861567 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.244.177.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.244.177.186:6443: connect: connection refused Sep 12 17:37:30.862997 kubelet[2138]: E0912 17:37:30.861670 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.244.177.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.244.177.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:37:30.969103 containerd[1469]: time="2025-09-12T17:37:30.967333197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:30.969103 containerd[1469]: time="2025-09-12T17:37:30.967435854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:30.969103 containerd[1469]: time="2025-09-12T17:37:30.967455110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:30.969103 containerd[1469]: time="2025-09-12T17:37:30.967577605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:30.973961 containerd[1469]: time="2025-09-12T17:37:30.973181019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:30.975266 containerd[1469]: time="2025-09-12T17:37:30.975159964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:30.976763 containerd[1469]: time="2025-09-12T17:37:30.976600965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:30.978217 containerd[1469]: time="2025-09-12T17:37:30.977325377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:30.978686 containerd[1469]: time="2025-09-12T17:37:30.977127314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:30.979393 containerd[1469]: time="2025-09-12T17:37:30.979062179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:30.979393 containerd[1469]: time="2025-09-12T17:37:30.979127912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:30.984130 containerd[1469]: time="2025-09-12T17:37:30.982095957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:31.013185 systemd[1]: Started cri-containerd-ae8925cc34e0533c0d9931dcf77f3dd542ac82b028db2fc231840b46b2039793.scope - libcontainer container ae8925cc34e0533c0d9931dcf77f3dd542ac82b028db2fc231840b46b2039793. Sep 12 17:37:31.019780 systemd[1]: Started cri-containerd-87cf5d70666e45ef612f2bd01d86744b0bfa0d510a599593a3b92f193b317fba.scope - libcontainer container 87cf5d70666e45ef612f2bd01d86744b0bfa0d510a599593a3b92f193b317fba. Sep 12 17:37:31.044158 systemd[1]: Started cri-containerd-653a11377f2c22fedf417bd3696c1d8c4d1f99f576596a486399cecb2f94824a.scope - libcontainer container 653a11377f2c22fedf417bd3696c1d8c4d1f99f576596a486399cecb2f94824a. 
Sep 12 17:37:31.072998 kubelet[2138]: W0912 17:37:31.072913 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.244.177.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.244.177.186:6443: connect: connection refused Sep 12 17:37:31.073306 kubelet[2138]: E0912 17:37:31.073253 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.244.177.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.244.177.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:37:31.131301 containerd[1469]: time="2025-09-12T17:37:31.131108340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-bde5b7e242,Uid:8cd6e98cd4e2270f246ec4c43be9379a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae8925cc34e0533c0d9931dcf77f3dd542ac82b028db2fc231840b46b2039793\"" Sep 12 17:37:31.131700 containerd[1469]: time="2025-09-12T17:37:31.131639163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-bde5b7e242,Uid:ff8b73010dfd8a954fc2053ba30c1add,Namespace:kube-system,Attempt:0,} returns sandbox id \"87cf5d70666e45ef612f2bd01d86744b0bfa0d510a599593a3b92f193b317fba\"" Sep 12 17:37:31.133893 kubelet[2138]: E0912 17:37:31.133799 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:31.134484 kubelet[2138]: E0912 17:37:31.134358 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:31.139275 containerd[1469]: 
time="2025-09-12T17:37:31.139202884Z" level=info msg="CreateContainer within sandbox \"87cf5d70666e45ef612f2bd01d86744b0bfa0d510a599593a3b92f193b317fba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:37:31.140838 containerd[1469]: time="2025-09-12T17:37:31.140770905Z" level=info msg="CreateContainer within sandbox \"ae8925cc34e0533c0d9931dcf77f3dd542ac82b028db2fc231840b46b2039793\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:37:31.176179 containerd[1469]: time="2025-09-12T17:37:31.176051506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-bde5b7e242,Uid:33c11d7ea81d90cbe6c4dbad8e1a0333,Namespace:kube-system,Attempt:0,} returns sandbox id \"653a11377f2c22fedf417bd3696c1d8c4d1f99f576596a486399cecb2f94824a\"" Sep 12 17:37:31.176910 containerd[1469]: time="2025-09-12T17:37:31.176705896Z" level=info msg="CreateContainer within sandbox \"ae8925cc34e0533c0d9931dcf77f3dd542ac82b028db2fc231840b46b2039793\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e9b5f073233192dcf881fd3c670bc9b3c8d2c1c1038d95d9cd23c106991954eb\"" Sep 12 17:37:31.177618 kubelet[2138]: E0912 17:37:31.177452 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:31.179008 containerd[1469]: time="2025-09-12T17:37:31.177926754Z" level=info msg="StartContainer for \"e9b5f073233192dcf881fd3c670bc9b3c8d2c1c1038d95d9cd23c106991954eb\"" Sep 12 17:37:31.179246 containerd[1469]: time="2025-09-12T17:37:31.179222755Z" level=info msg="CreateContainer within sandbox \"87cf5d70666e45ef612f2bd01d86744b0bfa0d510a599593a3b92f193b317fba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9040baac4fd36416dabf09e106a777c535a019d874bce0423d0ad66727098e71\"" Sep 12 17:37:31.179846 containerd[1469]: 
time="2025-09-12T17:37:31.179822820Z" level=info msg="StartContainer for \"9040baac4fd36416dabf09e106a777c535a019d874bce0423d0ad66727098e71\"" Sep 12 17:37:31.183837 containerd[1469]: time="2025-09-12T17:37:31.183717991Z" level=info msg="CreateContainer within sandbox \"653a11377f2c22fedf417bd3696c1d8c4d1f99f576596a486399cecb2f94824a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:37:31.201691 containerd[1469]: time="2025-09-12T17:37:31.201641430Z" level=info msg="CreateContainer within sandbox \"653a11377f2c22fedf417bd3696c1d8c4d1f99f576596a486399cecb2f94824a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"52431cf204be7acebfbb0b179304e8638646e0e5e3ac592d44a1ebf883745986\"" Sep 12 17:37:31.202566 containerd[1469]: time="2025-09-12T17:37:31.202532020Z" level=info msg="StartContainer for \"52431cf204be7acebfbb0b179304e8638646e0e5e3ac592d44a1ebf883745986\"" Sep 12 17:37:31.204399 kubelet[2138]: E0912 17:37:31.204234 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.244.177.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-bde5b7e242?timeout=10s\": dial tcp 143.244.177.186:6443: connect: connection refused" interval="1.6s" Sep 12 17:37:31.219178 systemd[1]: Started cri-containerd-e9b5f073233192dcf881fd3c670bc9b3c8d2c1c1038d95d9cd23c106991954eb.scope - libcontainer container e9b5f073233192dcf881fd3c670bc9b3c8d2c1c1038d95d9cd23c106991954eb. Sep 12 17:37:31.247184 systemd[1]: Started cri-containerd-9040baac4fd36416dabf09e106a777c535a019d874bce0423d0ad66727098e71.scope - libcontainer container 9040baac4fd36416dabf09e106a777c535a019d874bce0423d0ad66727098e71. Sep 12 17:37:31.278297 systemd[1]: Started cri-containerd-52431cf204be7acebfbb0b179304e8638646e0e5e3ac592d44a1ebf883745986.scope - libcontainer container 52431cf204be7acebfbb0b179304e8638646e0e5e3ac592d44a1ebf883745986. 
Sep 12 17:37:31.307789 kubelet[2138]: W0912 17:37:31.306530 2138 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.244.177.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.244.177.186:6443: connect: connection refused Sep 12 17:37:31.307789 kubelet[2138]: E0912 17:37:31.306630 2138 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.244.177.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.244.177.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:37:31.330152 containerd[1469]: time="2025-09-12T17:37:31.329851994Z" level=info msg="StartContainer for \"e9b5f073233192dcf881fd3c670bc9b3c8d2c1c1038d95d9cd23c106991954eb\" returns successfully" Sep 12 17:37:31.361067 containerd[1469]: time="2025-09-12T17:37:31.360357550Z" level=info msg="StartContainer for \"9040baac4fd36416dabf09e106a777c535a019d874bce0423d0ad66727098e71\" returns successfully" Sep 12 17:37:31.385826 containerd[1469]: time="2025-09-12T17:37:31.385670940Z" level=info msg="StartContainer for \"52431cf204be7acebfbb0b179304e8638646e0e5e3ac592d44a1ebf883745986\" returns successfully" Sep 12 17:37:31.399968 kubelet[2138]: I0912 17:37:31.397801 2138 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:31.400382 kubelet[2138]: E0912 17:37:31.400335 2138 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.244.177.186:6443/api/v1/nodes\": dial tcp 143.244.177.186:6443: connect: connection refused" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:31.841735 kubelet[2138]: E0912 17:37:31.841536 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Sep 12 17:37:31.845215 kubelet[2138]: E0912 17:37:31.844661 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:31.849468 kubelet[2138]: E0912 17:37:31.849340 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:32.851834 kubelet[2138]: E0912 17:37:32.851763 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:33.002157 kubelet[2138]: I0912 17:37:33.001422 2138 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:33.285867 kubelet[2138]: E0912 17:37:33.285808 2138 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-a-bde5b7e242\" not found" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:33.380629 kubelet[2138]: I0912 17:37:33.380355 2138 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:33.380629 kubelet[2138]: E0912 17:37:33.380398 2138 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-a-bde5b7e242\": node \"ci-4081.3.6-a-bde5b7e242\" not found" Sep 12 17:37:33.395152 kubelet[2138]: E0912 17:37:33.395108 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-bde5b7e242\" not found" Sep 12 17:37:33.501707 kubelet[2138]: E0912 17:37:33.501226 2138 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-bde5b7e242\" not found" Sep 12 17:37:33.601767 kubelet[2138]: E0912 17:37:33.601614 2138 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-bde5b7e242\" not found" Sep 12 17:37:33.777248 kubelet[2138]: I0912 17:37:33.775757 2138 apiserver.go:52] "Watching apiserver" Sep 12 17:37:33.794329 kubelet[2138]: I0912 17:37:33.794261 2138 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:37:34.941577 kubelet[2138]: W0912 17:37:34.941529 2138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:37:34.942239 kubelet[2138]: E0912 17:37:34.942014 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:35.280550 systemd[1]: Reloading requested from client PID 2418 ('systemctl') (unit session-7.scope)... Sep 12 17:37:35.280576 systemd[1]: Reloading... Sep 12 17:37:35.426977 zram_generator::config[2469]: No configuration found. Sep 12 17:37:35.566599 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:37:35.668399 systemd[1]: Reloading finished in 387 ms. Sep 12 17:37:35.719278 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:35.734662 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:37:35.735001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:35.742379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:35.932997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:37:35.945545 (kubelet)[2508]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:37:36.028216 kubelet[2508]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:37:36.028856 kubelet[2508]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:37:36.028964 kubelet[2508]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:37:36.029262 kubelet[2508]: I0912 17:37:36.029207 2508 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:37:36.041392 kubelet[2508]: I0912 17:37:36.041335 2508 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:37:36.041392 kubelet[2508]: I0912 17:37:36.041371 2508 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:37:36.041719 kubelet[2508]: I0912 17:37:36.041694 2508 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:37:36.043128 kubelet[2508]: I0912 17:37:36.043090 2508 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 12 17:37:36.045634 kubelet[2508]: I0912 17:37:36.045410 2508 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:37:36.052966 kubelet[2508]: E0912 17:37:36.052811 2508 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:37:36.052966 kubelet[2508]: I0912 17:37:36.052968 2508 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:37:36.061750 kubelet[2508]: I0912 17:37:36.061652 2508 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:37:36.061997 kubelet[2508]: I0912 17:37:36.061818 2508 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:37:36.061997 kubelet[2508]: I0912 17:37:36.061953 2508 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:37:36.062374 kubelet[2508]: I0912 17:37:36.061986 2508 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.6-a-bde5b7e242","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:37:36.062506 kubelet[2508]: I0912 17:37:36.062385 2508 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:37:36.062506 kubelet[2508]: I0912 17:37:36.062396 2508 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:37:36.062506 kubelet[2508]: I0912 17:37:36.062471 2508 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:37:36.062702 kubelet[2508]: I0912 17:37:36.062686 2508 
kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:37:36.062702 kubelet[2508]: I0912 17:37:36.062705 2508 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:37:36.062821 kubelet[2508]: I0912 17:37:36.062754 2508 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:37:36.062821 kubelet[2508]: I0912 17:37:36.062772 2508 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:37:36.067042 kubelet[2508]: I0912 17:37:36.065063 2508 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:37:36.067042 kubelet[2508]: I0912 17:37:36.065843 2508 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:37:36.071226 kubelet[2508]: I0912 17:37:36.068036 2508 server.go:1274] "Started kubelet" Sep 12 17:37:36.071226 kubelet[2508]: I0912 17:37:36.070424 2508 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:37:36.081675 kubelet[2508]: I0912 17:37:36.081611 2508 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:37:36.083037 kubelet[2508]: I0912 17:37:36.083004 2508 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:37:36.085383 kubelet[2508]: I0912 17:37:36.084325 2508 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:37:36.085383 kubelet[2508]: I0912 17:37:36.084667 2508 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:37:36.085383 kubelet[2508]: I0912 17:37:36.085072 2508 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:37:36.088546 kubelet[2508]: I0912 17:37:36.087008 2508 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:37:36.088546 kubelet[2508]: 
E0912 17:37:36.087371 2508 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-bde5b7e242\" not found" Sep 12 17:37:36.090496 kubelet[2508]: I0912 17:37:36.089986 2508 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:37:36.090496 kubelet[2508]: I0912 17:37:36.090264 2508 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:37:36.093115 kubelet[2508]: I0912 17:37:36.092921 2508 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:37:36.099733 kubelet[2508]: I0912 17:37:36.099682 2508 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:37:36.099733 kubelet[2508]: I0912 17:37:36.099735 2508 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:37:36.099970 kubelet[2508]: I0912 17:37:36.099761 2508 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:37:36.099970 kubelet[2508]: E0912 17:37:36.099824 2508 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:37:36.110805 kubelet[2508]: I0912 17:37:36.110759 2508 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:37:36.111168 kubelet[2508]: I0912 17:37:36.111137 2508 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:37:36.118829 kubelet[2508]: E0912 17:37:36.118757 2508 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:37:36.119611 kubelet[2508]: I0912 17:37:36.119579 2508 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:37:36.175111 kubelet[2508]: I0912 17:37:36.175066 2508 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:37:36.175111 kubelet[2508]: I0912 17:37:36.175088 2508 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:37:36.175342 kubelet[2508]: I0912 17:37:36.175198 2508 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:37:36.175418 kubelet[2508]: I0912 17:37:36.175391 2508 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:37:36.175418 kubelet[2508]: I0912 17:37:36.175406 2508 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:37:36.175523 kubelet[2508]: I0912 17:37:36.175426 2508 policy_none.go:49] "None policy: Start" Sep 12 17:37:36.176342 kubelet[2508]: I0912 17:37:36.176309 2508 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:37:36.176342 kubelet[2508]: I0912 17:37:36.176343 2508 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:37:36.176587 kubelet[2508]: I0912 17:37:36.176567 2508 state_mem.go:75] "Updated machine memory state" Sep 12 17:37:36.182519 kubelet[2508]: I0912 17:37:36.182315 2508 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:37:36.182694 kubelet[2508]: I0912 17:37:36.182670 2508 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:37:36.182765 kubelet[2508]: I0912 17:37:36.182683 2508 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:37:36.185994 kubelet[2508]: I0912 17:37:36.182960 2508 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:37:36.218308 kubelet[2508]: W0912 17:37:36.217343 2508 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:37:36.218308 kubelet[2508]: W0912 17:37:36.217599 2508 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:37:36.218308 kubelet[2508]: E0912 17:37:36.217656 2508 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.6-a-bde5b7e242\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.221950 kubelet[2508]: W0912 17:37:36.221883 2508 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:37:36.286579 kubelet[2508]: I0912 17:37:36.286520 2508 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.291194 kubelet[2508]: I0912 17:37:36.291155 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8cd6e98cd4e2270f246ec4c43be9379a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-bde5b7e242\" (UID: \"8cd6e98cd4e2270f246ec4c43be9379a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.292254 kubelet[2508]: I0912 17:37:36.292111 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff8b73010dfd8a954fc2053ba30c1add-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-bde5b7e242\" (UID: \"ff8b73010dfd8a954fc2053ba30c1add\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.292521 kubelet[2508]: I0912 17:37:36.292504 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff8b73010dfd8a954fc2053ba30c1add-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-bde5b7e242\" (UID: \"ff8b73010dfd8a954fc2053ba30c1add\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.293272 kubelet[2508]: I0912 17:37:36.292653 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8cd6e98cd4e2270f246ec4c43be9379a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-bde5b7e242\" (UID: \"8cd6e98cd4e2270f246ec4c43be9379a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.293505 kubelet[2508]: I0912 17:37:36.293475 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8cd6e98cd4e2270f246ec4c43be9379a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-bde5b7e242\" (UID: \"8cd6e98cd4e2270f246ec4c43be9379a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.293888 kubelet[2508]: I0912 17:37:36.293673 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8cd6e98cd4e2270f246ec4c43be9379a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-bde5b7e242\" (UID: \"8cd6e98cd4e2270f246ec4c43be9379a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.293888 kubelet[2508]: I0912 17:37:36.293701 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8cd6e98cd4e2270f246ec4c43be9379a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-bde5b7e242\" (UID: \"8cd6e98cd4e2270f246ec4c43be9379a\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.293888 kubelet[2508]: I0912 17:37:36.293723 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33c11d7ea81d90cbe6c4dbad8e1a0333-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-bde5b7e242\" (UID: \"33c11d7ea81d90cbe6c4dbad8e1a0333\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.293888 kubelet[2508]: I0912 17:37:36.293739 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff8b73010dfd8a954fc2053ba30c1add-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-bde5b7e242\" (UID: \"ff8b73010dfd8a954fc2053ba30c1add\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.297504 sudo[2541]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:37:36.297919 sudo[2541]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:37:36.301610 kubelet[2508]: I0912 17:37:36.301235 2508 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.301610 kubelet[2508]: I0912 17:37:36.301331 2508 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:36.519975 kubelet[2508]: E0912 17:37:36.518142 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:36.519975 kubelet[2508]: E0912 17:37:36.518588 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:36.523125 
kubelet[2508]: E0912 17:37:36.522739 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:36.922458 sudo[2541]: pam_unix(sudo:session): session closed for user root Sep 12 17:37:37.063745 kubelet[2508]: I0912 17:37:37.063386 2508 apiserver.go:52] "Watching apiserver" Sep 12 17:37:37.090972 kubelet[2508]: I0912 17:37:37.090726 2508 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:37:37.163433 kubelet[2508]: E0912 17:37:37.162438 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:37.164225 kubelet[2508]: E0912 17:37:37.164119 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:37.174480 kubelet[2508]: W0912 17:37:37.174029 2508 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:37:37.175763 kubelet[2508]: E0912 17:37:37.175011 2508 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.6-a-bde5b7e242\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-a-bde5b7e242" Sep 12 17:37:37.175763 kubelet[2508]: E0912 17:37:37.175249 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:37.221171 kubelet[2508]: I0912 17:37:37.221108 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-a-bde5b7e242" 
podStartSLOduration=3.221086357 podStartE2EDuration="3.221086357s" podCreationTimestamp="2025-09-12 17:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:37.219487512 +0000 UTC m=+1.263616637" watchObservedRunningTime="2025-09-12 17:37:37.221086357 +0000 UTC m=+1.265215479" Sep 12 17:37:37.240521 kubelet[2508]: I0912 17:37:37.239700 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-bde5b7e242" podStartSLOduration=1.239681776 podStartE2EDuration="1.239681776s" podCreationTimestamp="2025-09-12 17:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:37.239415937 +0000 UTC m=+1.283545068" watchObservedRunningTime="2025-09-12 17:37:37.239681776 +0000 UTC m=+1.283810900" Sep 12 17:37:37.265796 kubelet[2508]: I0912 17:37:37.265449 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-a-bde5b7e242" podStartSLOduration=1.2654286319999999 podStartE2EDuration="1.265428632s" podCreationTimestamp="2025-09-12 17:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:37.265252163 +0000 UTC m=+1.309381295" watchObservedRunningTime="2025-09-12 17:37:37.265428632 +0000 UTC m=+1.309557755" Sep 12 17:37:38.165292 kubelet[2508]: E0912 17:37:38.165246 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:38.860737 sudo[1651]: pam_unix(sudo:session): session closed for user root Sep 12 17:37:38.866558 sshd[1648]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:38.870413 
systemd[1]: sshd@6-143.244.177.186:22-147.75.109.163:47654.service: Deactivated successfully. Sep 12 17:37:38.873859 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:37:38.874127 systemd[1]: session-7.scope: Consumed 5.231s CPU time, 142.8M memory peak, 0B memory swap peak. Sep 12 17:37:38.876590 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:37:38.878596 systemd-logind[1446]: Removed session 7. Sep 12 17:37:39.366810 kubelet[2508]: E0912 17:37:39.366347 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:41.517852 kubelet[2508]: E0912 17:37:41.517815 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:41.783024 kubelet[2508]: I0912 17:37:41.782394 2508 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:37:41.783278 containerd[1469]: time="2025-09-12T17:37:41.782809831Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:37:41.783671 kubelet[2508]: I0912 17:37:41.783118 2508 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:37:42.173069 kubelet[2508]: E0912 17:37:42.172703 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:42.336097 systemd[1]: Created slice kubepods-burstable-podc7d52cc0_3b1d_48be_844e_e71d1c7d2391.slice - libcontainer container kubepods-burstable-podc7d52cc0_3b1d_48be_844e_e71d1c7d2391.slice. 
Sep 12 17:37:42.352919 systemd[1]: Created slice kubepods-besteffort-pod92a40241_ab4f_4431_a9a0_de54bdae460a.slice - libcontainer container kubepods-besteffort-pod92a40241_ab4f_4431_a9a0_de54bdae460a.slice. Sep 12 17:37:42.440472 kubelet[2508]: I0912 17:37:42.439071 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq6wp\" (UniqueName: \"kubernetes.io/projected/92a40241-ab4f-4431-a9a0-de54bdae460a-kube-api-access-vq6wp\") pod \"kube-proxy-f8625\" (UID: \"92a40241-ab4f-4431-a9a0-de54bdae460a\") " pod="kube-system/kube-proxy-f8625" Sep 12 17:37:42.440472 kubelet[2508]: I0912 17:37:42.439117 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92a40241-ab4f-4431-a9a0-de54bdae460a-kube-proxy\") pod \"kube-proxy-f8625\" (UID: \"92a40241-ab4f-4431-a9a0-de54bdae460a\") " pod="kube-system/kube-proxy-f8625" Sep 12 17:37:42.440472 kubelet[2508]: I0912 17:37:42.439137 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-config-path\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440472 kubelet[2508]: I0912 17:37:42.439158 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-hubble-tls\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440472 kubelet[2508]: I0912 17:37:42.439241 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtz5n\" (UniqueName: 
\"kubernetes.io/projected/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-kube-api-access-rtz5n\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440742 kubelet[2508]: I0912 17:37:42.439321 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-bpf-maps\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440742 kubelet[2508]: I0912 17:37:42.439346 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-host-proc-sys-kernel\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440742 kubelet[2508]: I0912 17:37:42.439395 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-run\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440742 kubelet[2508]: I0912 17:37:42.439413 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-xtables-lock\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440742 kubelet[2508]: I0912 17:37:42.439444 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cni-path\") pod \"cilium-rsw5q\" (UID: 
\"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440742 kubelet[2508]: I0912 17:37:42.439468 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-etc-cni-netd\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440898 kubelet[2508]: I0912 17:37:42.439484 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92a40241-ab4f-4431-a9a0-de54bdae460a-xtables-lock\") pod \"kube-proxy-f8625\" (UID: \"92a40241-ab4f-4431-a9a0-de54bdae460a\") " pod="kube-system/kube-proxy-f8625" Sep 12 17:37:42.440898 kubelet[2508]: I0912 17:37:42.439526 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-hostproc\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440898 kubelet[2508]: I0912 17:37:42.439567 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-lib-modules\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440898 kubelet[2508]: I0912 17:37:42.439622 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-clustermesh-secrets\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440898 kubelet[2508]: I0912 17:37:42.439640 
2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-host-proc-sys-net\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.440898 kubelet[2508]: I0912 17:37:42.439662 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92a40241-ab4f-4431-a9a0-de54bdae460a-lib-modules\") pod \"kube-proxy-f8625\" (UID: \"92a40241-ab4f-4431-a9a0-de54bdae460a\") " pod="kube-system/kube-proxy-f8625" Sep 12 17:37:42.441063 kubelet[2508]: I0912 17:37:42.439696 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-cgroup\") pod \"cilium-rsw5q\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") " pod="kube-system/cilium-rsw5q" Sep 12 17:37:42.647142 kubelet[2508]: E0912 17:37:42.646932 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:42.651032 containerd[1469]: time="2025-09-12T17:37:42.650070444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsw5q,Uid:c7d52cc0-3b1d-48be-844e-e71d1c7d2391,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:42.662998 kubelet[2508]: E0912 17:37:42.662684 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:42.669917 containerd[1469]: time="2025-09-12T17:37:42.669556519Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-f8625,Uid:92a40241-ab4f-4431-a9a0-de54bdae460a,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:42.758128 containerd[1469]: time="2025-09-12T17:37:42.757587582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:42.758128 containerd[1469]: time="2025-09-12T17:37:42.757691030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:42.758128 containerd[1469]: time="2025-09-12T17:37:42.757725317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:42.758128 containerd[1469]: time="2025-09-12T17:37:42.757903829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:42.788159 containerd[1469]: time="2025-09-12T17:37:42.787710566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:42.788159 containerd[1469]: time="2025-09-12T17:37:42.787793610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:42.788159 containerd[1469]: time="2025-09-12T17:37:42.787819486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:42.792223 containerd[1469]: time="2025-09-12T17:37:42.788644964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:42.791196 systemd[1]: Started cri-containerd-9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd.scope - libcontainer container 9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd. Sep 12 17:37:42.838189 systemd[1]: Started cri-containerd-ea096d6660f34711e579bad329e0ec1dda57b840d5596478c648f582d4d9a8c3.scope - libcontainer container ea096d6660f34711e579bad329e0ec1dda57b840d5596478c648f582d4d9a8c3. Sep 12 17:37:42.855720 systemd[1]: Created slice kubepods-besteffort-pod4b13ca77_3f4e_44ec_9de5_74125e759c96.slice - libcontainer container kubepods-besteffort-pod4b13ca77_3f4e_44ec_9de5_74125e759c96.slice. Sep 12 17:37:42.907800 containerd[1469]: time="2025-09-12T17:37:42.907484629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsw5q,Uid:c7d52cc0-3b1d-48be-844e-e71d1c7d2391,Namespace:kube-system,Attempt:0,} returns sandbox id \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\"" Sep 12 17:37:42.917876 kubelet[2508]: E0912 17:37:42.916063 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:42.927493 containerd[1469]: time="2025-09-12T17:37:42.927454422Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:37:42.956411 containerd[1469]: time="2025-09-12T17:37:42.956372454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8625,Uid:92a40241-ab4f-4431-a9a0-de54bdae460a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea096d6660f34711e579bad329e0ec1dda57b840d5596478c648f582d4d9a8c3\"" Sep 12 17:37:42.958706 kubelet[2508]: E0912 17:37:42.958667 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:42.962263 kubelet[2508]: I0912 17:37:42.961707 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b13ca77-3f4e-44ec-9de5-74125e759c96-cilium-config-path\") pod \"cilium-operator-5d85765b45-brcpt\" (UID: \"4b13ca77-3f4e-44ec-9de5-74125e759c96\") " pod="kube-system/cilium-operator-5d85765b45-brcpt" Sep 12 17:37:42.962263 kubelet[2508]: I0912 17:37:42.961769 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hpvx\" (UniqueName: \"kubernetes.io/projected/4b13ca77-3f4e-44ec-9de5-74125e759c96-kube-api-access-5hpvx\") pod \"cilium-operator-5d85765b45-brcpt\" (UID: \"4b13ca77-3f4e-44ec-9de5-74125e759c96\") " pod="kube-system/cilium-operator-5d85765b45-brcpt" Sep 12 17:37:42.982254 containerd[1469]: time="2025-09-12T17:37:42.982178932Z" level=info msg="CreateContainer within sandbox \"ea096d6660f34711e579bad329e0ec1dda57b840d5596478c648f582d4d9a8c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:37:42.995111 containerd[1469]: time="2025-09-12T17:37:42.995040047Z" level=info msg="CreateContainer within sandbox \"ea096d6660f34711e579bad329e0ec1dda57b840d5596478c648f582d4d9a8c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ed59c294ca0bbc9940707a5c1f6de5e95e0844d3f1fb9419838fba9003360b11\"" Sep 12 17:37:42.998442 containerd[1469]: time="2025-09-12T17:37:42.996815761Z" level=info msg="StartContainer for \"ed59c294ca0bbc9940707a5c1f6de5e95e0844d3f1fb9419838fba9003360b11\"" Sep 12 17:37:43.031527 systemd[1]: Started cri-containerd-ed59c294ca0bbc9940707a5c1f6de5e95e0844d3f1fb9419838fba9003360b11.scope - libcontainer container ed59c294ca0bbc9940707a5c1f6de5e95e0844d3f1fb9419838fba9003360b11. 
Sep 12 17:37:43.067576 containerd[1469]: time="2025-09-12T17:37:43.067521470Z" level=info msg="StartContainer for \"ed59c294ca0bbc9940707a5c1f6de5e95e0844d3f1fb9419838fba9003360b11\" returns successfully" Sep 12 17:37:43.164276 kubelet[2508]: E0912 17:37:43.164229 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:43.169265 containerd[1469]: time="2025-09-12T17:37:43.169216734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-brcpt,Uid:4b13ca77-3f4e-44ec-9de5-74125e759c96,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:43.178927 kubelet[2508]: E0912 17:37:43.178479 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:43.201306 kubelet[2508]: I0912 17:37:43.201030 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f8625" podStartSLOduration=1.201002414 podStartE2EDuration="1.201002414s" podCreationTimestamp="2025-09-12 17:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:43.199089148 +0000 UTC m=+7.243218341" watchObservedRunningTime="2025-09-12 17:37:43.201002414 +0000 UTC m=+7.245131544" Sep 12 17:37:43.214065 containerd[1469]: time="2025-09-12T17:37:43.213861832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:43.215714 containerd[1469]: time="2025-09-12T17:37:43.214799657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:43.215714 containerd[1469]: time="2025-09-12T17:37:43.214905347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:43.215714 containerd[1469]: time="2025-09-12T17:37:43.215076900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:43.246223 systemd[1]: Started cri-containerd-947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9.scope - libcontainer container 947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9. Sep 12 17:37:43.305847 containerd[1469]: time="2025-09-12T17:37:43.305715728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-brcpt,Uid:4b13ca77-3f4e-44ec-9de5-74125e759c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9\"" Sep 12 17:37:43.307583 kubelet[2508]: E0912 17:37:43.307548 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:46.460627 kubelet[2508]: E0912 17:37:46.460139 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:47.201724 kubelet[2508]: E0912 17:37:47.201684 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:48.805928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461803668.mount: Deactivated successfully. 
Sep 12 17:37:49.376957 kubelet[2508]: E0912 17:37:49.376906 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:50.882905 containerd[1469]: time="2025-09-12T17:37:50.882716075Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:50.885073 containerd[1469]: time="2025-09-12T17:37:50.884997861Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 17:37:50.885498 containerd[1469]: time="2025-09-12T17:37:50.885450973Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:50.887159 containerd[1469]: time="2025-09-12T17:37:50.887014319Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.959362753s" Sep 12 17:37:50.887159 containerd[1469]: time="2025-09-12T17:37:50.887050555Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 17:37:50.903405 containerd[1469]: time="2025-09-12T17:37:50.902081159Z" level=info msg="CreateContainer within sandbox 
\"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:37:50.904053 containerd[1469]: time="2025-09-12T17:37:50.904006757Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:37:51.039239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176302301.mount: Deactivated successfully. Sep 12 17:37:51.068675 containerd[1469]: time="2025-09-12T17:37:51.068620441Z" level=info msg="CreateContainer within sandbox \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\"" Sep 12 17:37:51.071021 containerd[1469]: time="2025-09-12T17:37:51.069661690Z" level=info msg="StartContainer for \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\"" Sep 12 17:37:51.176160 systemd[1]: run-containerd-runc-k8s.io-6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0-runc.5EMe41.mount: Deactivated successfully. Sep 12 17:37:51.191126 systemd[1]: Started cri-containerd-6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0.scope - libcontainer container 6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0. Sep 12 17:37:51.233919 containerd[1469]: time="2025-09-12T17:37:51.233876906Z" level=info msg="StartContainer for \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\" returns successfully" Sep 12 17:37:51.244790 systemd[1]: cri-containerd-6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0.scope: Deactivated successfully. Sep 12 17:37:51.269476 update_engine[1447]: I20250912 17:37:51.268610 1447 update_attempter.cc:509] Updating boot flags... 
Sep 12 17:37:51.316346 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2946) Sep 12 17:37:51.415162 containerd[1469]: time="2025-09-12T17:37:51.399482096Z" level=info msg="shim disconnected" id=6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0 namespace=k8s.io Sep 12 17:37:51.415162 containerd[1469]: time="2025-09-12T17:37:51.414625993Z" level=warning msg="cleaning up after shim disconnected" id=6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0 namespace=k8s.io Sep 12 17:37:51.415162 containerd[1469]: time="2025-09-12T17:37:51.414896445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:37:51.425053 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2947) Sep 12 17:37:52.030747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0-rootfs.mount: Deactivated successfully. Sep 12 17:37:52.095300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088784506.mount: Deactivated successfully. 
Sep 12 17:37:52.223139 kubelet[2508]: E0912 17:37:52.223104 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:37:52.232356 containerd[1469]: time="2025-09-12T17:37:52.232308425Z" level=info msg="CreateContainer within sandbox \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:37:52.254288 containerd[1469]: time="2025-09-12T17:37:52.254139453Z" level=info msg="CreateContainer within sandbox \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\"" Sep 12 17:37:52.257638 containerd[1469]: time="2025-09-12T17:37:52.256961897Z" level=info msg="StartContainer for \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\"" Sep 12 17:37:52.306612 systemd[1]: Started cri-containerd-66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e.scope - libcontainer container 66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e. Sep 12 17:37:52.374255 containerd[1469]: time="2025-09-12T17:37:52.373990324Z" level=info msg="StartContainer for \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\" returns successfully" Sep 12 17:37:52.395732 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:37:52.401992 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:37:52.402146 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:37:52.415686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:37:52.418221 systemd[1]: cri-containerd-66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e.scope: Deactivated successfully. 
Sep 12 17:37:52.460215 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:37:52.471567 containerd[1469]: time="2025-09-12T17:37:52.471475298Z" level=info msg="shim disconnected" id=66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e namespace=k8s.io
Sep 12 17:37:52.471567 containerd[1469]: time="2025-09-12T17:37:52.471549095Z" level=warning msg="cleaning up after shim disconnected" id=66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e namespace=k8s.io
Sep 12 17:37:52.471567 containerd[1469]: time="2025-09-12T17:37:52.471562158Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:37:52.492850 containerd[1469]: time="2025-09-12T17:37:52.492800383Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:37:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 17:37:52.857929 containerd[1469]: time="2025-09-12T17:37:52.857858665Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:52.858742 containerd[1469]: time="2025-09-12T17:37:52.858702368Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 12 17:37:52.859078 containerd[1469]: time="2025-09-12T17:37:52.859048745Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:52.860751 containerd[1469]: time="2025-09-12T17:37:52.860702564Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.956529424s"
Sep 12 17:37:52.861041 containerd[1469]: time="2025-09-12T17:37:52.860903089Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 12 17:37:52.864476 containerd[1469]: time="2025-09-12T17:37:52.864423148Z" level=info msg="CreateContainer within sandbox \"947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 12 17:37:52.890652 containerd[1469]: time="2025-09-12T17:37:52.890497185Z" level=info msg="CreateContainer within sandbox \"947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\""
Sep 12 17:37:52.891328 containerd[1469]: time="2025-09-12T17:37:52.891289297Z" level=info msg="StartContainer for \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\""
Sep 12 17:37:52.932328 systemd[1]: Started cri-containerd-90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b.scope - libcontainer container 90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b.
Sep 12 17:37:52.982359 containerd[1469]: time="2025-09-12T17:37:52.982302716Z" level=info msg="StartContainer for \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\" returns successfully"
Sep 12 17:37:53.232624 kubelet[2508]: E0912 17:37:53.230659 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:37:53.239461 containerd[1469]: time="2025-09-12T17:37:53.239388289Z" level=info msg="CreateContainer within sandbox \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:37:53.241533 kubelet[2508]: E0912 17:37:53.239751 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:37:53.266231 containerd[1469]: time="2025-09-12T17:37:53.266079107Z" level=info msg="CreateContainer within sandbox \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\""
Sep 12 17:37:53.268975 containerd[1469]: time="2025-09-12T17:37:53.266706745Z" level=info msg="StartContainer for \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\""
Sep 12 17:37:53.276732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046033230.mount: Deactivated successfully.
Sep 12 17:37:53.341279 systemd[1]: Started cri-containerd-9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce.scope - libcontainer container 9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce.
Sep 12 17:37:53.394637 containerd[1469]: time="2025-09-12T17:37:53.394455121Z" level=info msg="StartContainer for \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\" returns successfully"
Sep 12 17:37:53.401231 systemd[1]: cri-containerd-9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce.scope: Deactivated successfully.
Sep 12 17:37:53.440920 containerd[1469]: time="2025-09-12T17:37:53.440708716Z" level=info msg="shim disconnected" id=9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce namespace=k8s.io
Sep 12 17:37:53.440920 containerd[1469]: time="2025-09-12T17:37:53.440787403Z" level=warning msg="cleaning up after shim disconnected" id=9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce namespace=k8s.io
Sep 12 17:37:53.440920 containerd[1469]: time="2025-09-12T17:37:53.440797323Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:37:53.467176 containerd[1469]: time="2025-09-12T17:37:53.466868214Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:37:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 17:37:54.032892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce-rootfs.mount: Deactivated successfully.
Sep 12 17:37:54.245846 kubelet[2508]: E0912 17:37:54.245113 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:37:54.249567 kubelet[2508]: E0912 17:37:54.248929 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:37:54.254145 containerd[1469]: time="2025-09-12T17:37:54.254100838Z" level=info msg="CreateContainer within sandbox \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:37:54.270973 kubelet[2508]: I0912 17:37:54.270542 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-brcpt" podStartSLOduration=2.717486577 podStartE2EDuration="12.27051646s" podCreationTimestamp="2025-09-12 17:37:42 +0000 UTC" firstStartedPulling="2025-09-12 17:37:43.309337603 +0000 UTC m=+7.353466713" lastFinishedPulling="2025-09-12 17:37:52.862367487 +0000 UTC m=+16.906496596" observedRunningTime="2025-09-12 17:37:53.484617239 +0000 UTC m=+17.528746367" watchObservedRunningTime="2025-09-12 17:37:54.27051646 +0000 UTC m=+18.314645614"
Sep 12 17:37:54.279828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3874406758.mount: Deactivated successfully.
Sep 12 17:37:54.290182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2642506894.mount: Deactivated successfully.
Sep 12 17:37:54.292470 containerd[1469]: time="2025-09-12T17:37:54.292008097Z" level=info msg="CreateContainer within sandbox \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\""
Sep 12 17:37:54.294120 containerd[1469]: time="2025-09-12T17:37:54.294061686Z" level=info msg="StartContainer for \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\""
Sep 12 17:37:54.326208 systemd[1]: Started cri-containerd-884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da.scope - libcontainer container 884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da.
Sep 12 17:37:54.360695 systemd[1]: cri-containerd-884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da.scope: Deactivated successfully.
Sep 12 17:37:54.364013 containerd[1469]: time="2025-09-12T17:37:54.362772623Z" level=info msg="StartContainer for \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\" returns successfully"
Sep 12 17:37:54.397441 containerd[1469]: time="2025-09-12T17:37:54.397303874Z" level=info msg="shim disconnected" id=884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da namespace=k8s.io
Sep 12 17:37:54.398003 containerd[1469]: time="2025-09-12T17:37:54.397858092Z" level=warning msg="cleaning up after shim disconnected" id=884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da namespace=k8s.io
Sep 12 17:37:54.398003 containerd[1469]: time="2025-09-12T17:37:54.397905665Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:37:55.250574 kubelet[2508]: E0912 17:37:55.250175 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:37:55.253512 containerd[1469]: time="2025-09-12T17:37:55.253470919Z" level=info msg="CreateContainer within sandbox \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:37:55.274764 containerd[1469]: time="2025-09-12T17:37:55.274718383Z" level=info msg="CreateContainer within sandbox \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\""
Sep 12 17:37:55.277661 containerd[1469]: time="2025-09-12T17:37:55.277140444Z" level=info msg="StartContainer for \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\""
Sep 12 17:37:55.330249 systemd[1]: Started cri-containerd-7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c.scope - libcontainer container 7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c.
Sep 12 17:37:55.366618 containerd[1469]: time="2025-09-12T17:37:55.366553108Z" level=info msg="StartContainer for \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\" returns successfully"
Sep 12 17:37:55.521783 kubelet[2508]: I0912 17:37:55.521639 2508 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 12 17:37:55.566762 systemd[1]: Created slice kubepods-burstable-pod5526caff_0ed8_466a_801d_a1beca001427.slice - libcontainer container kubepods-burstable-pod5526caff_0ed8_466a_801d_a1beca001427.slice.
Sep 12 17:37:55.578110 systemd[1]: Created slice kubepods-burstable-pod67c4f00a_74ac_4241_8a6f_d46093d18279.slice - libcontainer container kubepods-burstable-pod67c4f00a_74ac_4241_8a6f_d46093d18279.slice.
Sep 12 17:37:55.658822 kubelet[2508]: I0912 17:37:55.658774 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5526caff-0ed8-466a-801d-a1beca001427-config-volume\") pod \"coredns-7c65d6cfc9-2wx8d\" (UID: \"5526caff-0ed8-466a-801d-a1beca001427\") " pod="kube-system/coredns-7c65d6cfc9-2wx8d"
Sep 12 17:37:55.659132 kubelet[2508]: I0912 17:37:55.659091 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx7tq\" (UniqueName: \"kubernetes.io/projected/67c4f00a-74ac-4241-8a6f-d46093d18279-kube-api-access-wx7tq\") pod \"coredns-7c65d6cfc9-wqjmz\" (UID: \"67c4f00a-74ac-4241-8a6f-d46093d18279\") " pod="kube-system/coredns-7c65d6cfc9-wqjmz"
Sep 12 17:37:55.659183 kubelet[2508]: I0912 17:37:55.659136 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tth6s\" (UniqueName: \"kubernetes.io/projected/5526caff-0ed8-466a-801d-a1beca001427-kube-api-access-tth6s\") pod \"coredns-7c65d6cfc9-2wx8d\" (UID: \"5526caff-0ed8-466a-801d-a1beca001427\") " pod="kube-system/coredns-7c65d6cfc9-2wx8d"
Sep 12 17:37:55.659183 kubelet[2508]: I0912 17:37:55.659164 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c4f00a-74ac-4241-8a6f-d46093d18279-config-volume\") pod \"coredns-7c65d6cfc9-wqjmz\" (UID: \"67c4f00a-74ac-4241-8a6f-d46093d18279\") " pod="kube-system/coredns-7c65d6cfc9-wqjmz"
Sep 12 17:37:55.876007 kubelet[2508]: E0912 17:37:55.874426 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:37:55.876651 containerd[1469]: time="2025-09-12T17:37:55.876586214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2wx8d,Uid:5526caff-0ed8-466a-801d-a1beca001427,Namespace:kube-system,Attempt:0,}"
Sep 12 17:37:55.887842 kubelet[2508]: E0912 17:37:55.885980 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:37:55.888013 containerd[1469]: time="2025-09-12T17:37:55.886913316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wqjmz,Uid:67c4f00a-74ac-4241-8a6f-d46093d18279,Namespace:kube-system,Attempt:0,}"
Sep 12 17:37:56.258889 kubelet[2508]: E0912 17:37:56.256676 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:37:56.278307 kubelet[2508]: I0912 17:37:56.277764 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rsw5q" podStartSLOduration=6.310302235 podStartE2EDuration="14.277738984s" podCreationTimestamp="2025-09-12 17:37:42 +0000 UTC" firstStartedPulling="2025-09-12 17:37:42.92085286 +0000 UTC m=+6.964981984" lastFinishedPulling="2025-09-12 17:37:50.888289615 +0000 UTC m=+14.932418733" observedRunningTime="2025-09-12 17:37:56.277294589 +0000 UTC m=+20.321423721" watchObservedRunningTime="2025-09-12 17:37:56.277738984 +0000 UTC m=+20.321868134"
Sep 12 17:37:57.258982 kubelet[2508]: E0912 17:37:57.258923 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:37:57.625294 systemd-networkd[1354]: cilium_host: Link UP
Sep 12 17:37:57.625522 systemd-networkd[1354]: cilium_net: Link UP
Sep 12 17:37:57.625917 systemd-networkd[1354]: cilium_net: Gained carrier
Sep 12 17:37:57.629600 systemd-networkd[1354]: cilium_host: Gained carrier
Sep 12 17:37:57.629834 systemd-networkd[1354]: cilium_net: Gained IPv6LL
Sep 12 17:37:57.631212 systemd-networkd[1354]: cilium_host: Gained IPv6LL
Sep 12 17:37:57.774997 systemd-networkd[1354]: cilium_vxlan: Link UP
Sep 12 17:37:57.775202 systemd-networkd[1354]: cilium_vxlan: Gained carrier
Sep 12 17:37:58.189968 kernel: NET: Registered PF_ALG protocol family
Sep 12 17:37:58.261423 kubelet[2508]: E0912 17:37:58.261194 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:37:59.013052 systemd-networkd[1354]: cilium_vxlan: Gained IPv6LL
Sep 12 17:37:59.087768 systemd-networkd[1354]: lxc_health: Link UP
Sep 12 17:37:59.097798 systemd-networkd[1354]: lxc_health: Gained carrier
Sep 12 17:37:59.455518 systemd-networkd[1354]: lxc119847cebf55: Link UP
Sep 12 17:37:59.462176 kernel: eth0: renamed from tmp0204e
Sep 12 17:37:59.464891 systemd-networkd[1354]: lxc119847cebf55: Gained carrier
Sep 12 17:37:59.485625 systemd-networkd[1354]: lxcb9b706073ca4: Link UP
Sep 12 17:37:59.495286 kernel: eth0: renamed from tmp1fdce
Sep 12 17:37:59.501550 systemd-networkd[1354]: lxcb9b706073ca4: Gained carrier
Sep 12 17:38:00.548195 systemd-networkd[1354]: lxc_health: Gained IPv6LL
Sep 12 17:38:00.652328 kubelet[2508]: E0912 17:38:00.652293 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:00.868792 systemd-networkd[1354]: lxc119847cebf55: Gained IPv6LL
Sep 12 17:38:01.444173 systemd-networkd[1354]: lxcb9b706073ca4: Gained IPv6LL
Sep 12 17:38:05.392624 containerd[1469]: time="2025-09-12T17:38:05.390415744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:38:05.392624 containerd[1469]: time="2025-09-12T17:38:05.390514591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:38:05.392624 containerd[1469]: time="2025-09-12T17:38:05.390539277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:05.392624 containerd[1469]: time="2025-09-12T17:38:05.390713840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:05.429257 systemd[1]: Started cri-containerd-1fdcef7bde580c9d8835b861f074d7e4b2bceca383011e11819e9edf24f05fe5.scope - libcontainer container 1fdcef7bde580c9d8835b861f074d7e4b2bceca383011e11819e9edf24f05fe5.
Sep 12 17:38:05.505812 containerd[1469]: time="2025-09-12T17:38:05.505163453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:38:05.507148 containerd[1469]: time="2025-09-12T17:38:05.507082317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:38:05.507933 containerd[1469]: time="2025-09-12T17:38:05.507864777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:05.510326 containerd[1469]: time="2025-09-12T17:38:05.510250674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:38:05.526251 containerd[1469]: time="2025-09-12T17:38:05.526191652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2wx8d,Uid:5526caff-0ed8-466a-801d-a1beca001427,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fdcef7bde580c9d8835b861f074d7e4b2bceca383011e11819e9edf24f05fe5\""
Sep 12 17:38:05.529207 kubelet[2508]: E0912 17:38:05.528704 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:05.532742 containerd[1469]: time="2025-09-12T17:38:05.532682946Z" level=info msg="CreateContainer within sandbox \"1fdcef7bde580c9d8835b861f074d7e4b2bceca383011e11819e9edf24f05fe5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:38:05.571847 systemd[1]: Started cri-containerd-0204e8da3a1d869e5d44f192119223f6ec488cf8d40d47acc29ec50fd4c509f5.scope - libcontainer container 0204e8da3a1d869e5d44f192119223f6ec488cf8d40d47acc29ec50fd4c509f5.
Sep 12 17:38:05.600761 containerd[1469]: time="2025-09-12T17:38:05.600539665Z" level=info msg="CreateContainer within sandbox \"1fdcef7bde580c9d8835b861f074d7e4b2bceca383011e11819e9edf24f05fe5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"477f911547ebff32c56f2095b7c9f46db323bcff5e8a446c1e1a94d043d8c58c\""
Sep 12 17:38:05.601613 containerd[1469]: time="2025-09-12T17:38:05.601576059Z" level=info msg="StartContainer for \"477f911547ebff32c56f2095b7c9f46db323bcff5e8a446c1e1a94d043d8c58c\""
Sep 12 17:38:05.650518 systemd[1]: Started cri-containerd-477f911547ebff32c56f2095b7c9f46db323bcff5e8a446c1e1a94d043d8c58c.scope - libcontainer container 477f911547ebff32c56f2095b7c9f46db323bcff5e8a446c1e1a94d043d8c58c.
Sep 12 17:38:05.695217 containerd[1469]: time="2025-09-12T17:38:05.695174641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wqjmz,Uid:67c4f00a-74ac-4241-8a6f-d46093d18279,Namespace:kube-system,Attempt:0,} returns sandbox id \"0204e8da3a1d869e5d44f192119223f6ec488cf8d40d47acc29ec50fd4c509f5\""
Sep 12 17:38:05.699044 kubelet[2508]: E0912 17:38:05.697283 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:05.700262 containerd[1469]: time="2025-09-12T17:38:05.700204746Z" level=info msg="CreateContainer within sandbox \"0204e8da3a1d869e5d44f192119223f6ec488cf8d40d47acc29ec50fd4c509f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:38:05.738973 containerd[1469]: time="2025-09-12T17:38:05.736963740Z" level=info msg="CreateContainer within sandbox \"0204e8da3a1d869e5d44f192119223f6ec488cf8d40d47acc29ec50fd4c509f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14a14f61b167e113106801818bca98ad810a9ebd3e597c4925cf90057d5d0dd4\""
Sep 12 17:38:05.738973 containerd[1469]: time="2025-09-12T17:38:05.738516131Z" level=info msg="StartContainer for \"14a14f61b167e113106801818bca98ad810a9ebd3e597c4925cf90057d5d0dd4\""
Sep 12 17:38:05.757391 containerd[1469]: time="2025-09-12T17:38:05.757331607Z" level=info msg="StartContainer for \"477f911547ebff32c56f2095b7c9f46db323bcff5e8a446c1e1a94d043d8c58c\" returns successfully"
Sep 12 17:38:05.798255 systemd[1]: Started cri-containerd-14a14f61b167e113106801818bca98ad810a9ebd3e597c4925cf90057d5d0dd4.scope - libcontainer container 14a14f61b167e113106801818bca98ad810a9ebd3e597c4925cf90057d5d0dd4.
Sep 12 17:38:05.838631 containerd[1469]: time="2025-09-12T17:38:05.838499210Z" level=info msg="StartContainer for \"14a14f61b167e113106801818bca98ad810a9ebd3e597c4925cf90057d5d0dd4\" returns successfully"
Sep 12 17:38:06.286864 kubelet[2508]: E0912 17:38:06.286651 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:06.295184 kubelet[2508]: E0912 17:38:06.294683 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:06.311358 kubelet[2508]: I0912 17:38:06.311253 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2wx8d" podStartSLOduration=24.311222325 podStartE2EDuration="24.311222325s" podCreationTimestamp="2025-09-12 17:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:38:06.308565803 +0000 UTC m=+30.352694933" watchObservedRunningTime="2025-09-12 17:38:06.311222325 +0000 UTC m=+30.355351459"
Sep 12 17:38:06.387164 kubelet[2508]: I0912 17:38:06.386570 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wqjmz" podStartSLOduration=24.386545112 podStartE2EDuration="24.386545112s" podCreationTimestamp="2025-09-12 17:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:38:06.384981475 +0000 UTC m=+30.429110608" watchObservedRunningTime="2025-09-12 17:38:06.386545112 +0000 UTC m=+30.430674241"
Sep 12 17:38:06.406612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522024149.mount: Deactivated successfully.
Sep 12 17:38:07.296607 kubelet[2508]: E0912 17:38:07.296552 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:07.297798 kubelet[2508]: E0912 17:38:07.297393 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:08.299350 kubelet[2508]: E0912 17:38:08.299301 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:08.300834 kubelet[2508]: E0912 17:38:08.299439 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:09.578993 kubelet[2508]: I0912 17:38:09.578854 2508 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:38:09.580403 kubelet[2508]: E0912 17:38:09.580034 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:10.303251 kubelet[2508]: E0912 17:38:10.303153 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:15.441427 systemd[1]: Started sshd@7-143.244.177.186:22-147.75.109.163:50632.service - OpenSSH per-connection server daemon (147.75.109.163:50632).
Sep 12 17:38:15.535082 sshd[3896]: Accepted publickey for core from 147.75.109.163 port 50632 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:15.537512 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:15.543717 systemd-logind[1446]: New session 8 of user core.
Sep 12 17:38:15.551205 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:38:16.189221 sshd[3896]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:16.195394 systemd[1]: sshd@7-143.244.177.186:22-147.75.109.163:50632.service: Deactivated successfully.
Sep 12 17:38:16.199195 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:38:16.199957 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:38:16.200902 systemd-logind[1446]: Removed session 8.
Sep 12 17:38:21.204684 systemd[1]: Started sshd@8-143.244.177.186:22-147.75.109.163:39956.service - OpenSSH per-connection server daemon (147.75.109.163:39956).
Sep 12 17:38:21.252975 sshd[3910]: Accepted publickey for core from 147.75.109.163 port 39956 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:21.254918 sshd[3910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:21.261272 systemd-logind[1446]: New session 9 of user core.
Sep 12 17:38:21.271252 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:38:21.423087 sshd[3910]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:21.427888 systemd[1]: sshd@8-143.244.177.186:22-147.75.109.163:39956.service: Deactivated successfully.
Sep 12 17:38:21.430772 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:38:21.432300 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:38:21.433602 systemd-logind[1446]: Removed session 9.
Sep 12 17:38:26.454493 systemd[1]: Started sshd@9-143.244.177.186:22-147.75.109.163:39972.service - OpenSSH per-connection server daemon (147.75.109.163:39972).
Sep 12 17:38:26.510542 sshd[3924]: Accepted publickey for core from 147.75.109.163 port 39972 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:26.512480 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:26.518573 systemd-logind[1446]: New session 10 of user core.
Sep 12 17:38:26.522191 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 17:38:26.670294 sshd[3924]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:26.674449 systemd[1]: sshd@9-143.244.177.186:22-147.75.109.163:39972.service: Deactivated successfully.
Sep 12 17:38:26.678466 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 17:38:26.681338 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit.
Sep 12 17:38:26.682844 systemd-logind[1446]: Removed session 10.
Sep 12 17:38:31.687296 systemd[1]: Started sshd@10-143.244.177.186:22-147.75.109.163:32882.service - OpenSSH per-connection server daemon (147.75.109.163:32882).
Sep 12 17:38:31.740469 sshd[3938]: Accepted publickey for core from 147.75.109.163 port 32882 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:31.742796 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:31.749102 systemd-logind[1446]: New session 11 of user core.
Sep 12 17:38:31.761333 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 17:38:31.915599 sshd[3938]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:31.931045 systemd[1]: sshd@10-143.244.177.186:22-147.75.109.163:32882.service: Deactivated successfully.
Sep 12 17:38:31.934314 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 17:38:31.935861 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit.
Sep 12 17:38:31.944451 systemd[1]: Started sshd@11-143.244.177.186:22-147.75.109.163:32894.service - OpenSSH per-connection server daemon (147.75.109.163:32894).
Sep 12 17:38:31.945724 systemd-logind[1446]: Removed session 11.
Sep 12 17:38:31.993352 sshd[3951]: Accepted publickey for core from 147.75.109.163 port 32894 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:31.995326 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:32.002428 systemd-logind[1446]: New session 12 of user core.
Sep 12 17:38:32.009241 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:38:32.215302 sshd[3951]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:32.231485 systemd[1]: sshd@11-143.244.177.186:22-147.75.109.163:32894.service: Deactivated successfully.
Sep 12 17:38:32.237192 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:38:32.247234 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:38:32.253453 systemd[1]: Started sshd@12-143.244.177.186:22-147.75.109.163:32896.service - OpenSSH per-connection server daemon (147.75.109.163:32896).
Sep 12 17:38:32.257609 systemd-logind[1446]: Removed session 12.
Sep 12 17:38:32.305466 sshd[3962]: Accepted publickey for core from 147.75.109.163 port 32896 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:32.306250 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:32.312776 systemd-logind[1446]: New session 13 of user core.
Sep 12 17:38:32.318247 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:38:32.453062 sshd[3962]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:32.458529 systemd[1]: sshd@12-143.244.177.186:22-147.75.109.163:32896.service: Deactivated successfully.
Sep 12 17:38:32.461404 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:38:32.462581 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:38:32.463585 systemd-logind[1446]: Removed session 13.
Sep 12 17:38:37.468017 systemd[1]: Started sshd@13-143.244.177.186:22-147.75.109.163:32900.service - OpenSSH per-connection server daemon (147.75.109.163:32900).
Sep 12 17:38:37.529129 sshd[3980]: Accepted publickey for core from 147.75.109.163 port 32900 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:37.531331 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:37.537800 systemd-logind[1446]: New session 14 of user core.
Sep 12 17:38:37.543239 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:38:37.686096 sshd[3980]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:37.690029 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:38:37.691653 systemd[1]: sshd@13-143.244.177.186:22-147.75.109.163:32900.service: Deactivated successfully.
Sep 12 17:38:37.695225 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:38:37.696501 systemd-logind[1446]: Removed session 14.
Sep 12 17:38:42.707308 systemd[1]: Started sshd@14-143.244.177.186:22-147.75.109.163:54714.service - OpenSSH per-connection server daemon (147.75.109.163:54714).
Sep 12 17:38:42.748720 sshd[3993]: Accepted publickey for core from 147.75.109.163 port 54714 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:42.750684 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:42.756133 systemd-logind[1446]: New session 15 of user core.
Sep 12 17:38:42.760208 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:38:42.891475 sshd[3993]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:42.895144 systemd[1]: sshd@14-143.244.177.186:22-147.75.109.163:54714.service: Deactivated successfully.
Sep 12 17:38:42.897758 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:38:42.900642 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:38:42.902323 systemd-logind[1446]: Removed session 15.
Sep 12 17:38:47.914439 systemd[1]: Started sshd@15-143.244.177.186:22-147.75.109.163:54726.service - OpenSSH per-connection server daemon (147.75.109.163:54726).
Sep 12 17:38:47.969916 sshd[4009]: Accepted publickey for core from 147.75.109.163 port 54726 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:47.973071 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:47.980557 systemd-logind[1446]: New session 16 of user core.
Sep 12 17:38:47.988349 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:38:48.143743 sshd[4009]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:48.158738 systemd[1]: sshd@15-143.244.177.186:22-147.75.109.163:54726.service: Deactivated successfully.
Sep 12 17:38:48.161470 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:38:48.163002 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:38:48.175564 systemd[1]: Started sshd@16-143.244.177.186:22-147.75.109.163:54728.service - OpenSSH per-connection server daemon (147.75.109.163:54728).
Sep 12 17:38:48.179375 systemd-logind[1446]: Removed session 16.
Sep 12 17:38:48.224351 sshd[4022]: Accepted publickey for core from 147.75.109.163 port 54728 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:48.226403 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:48.234360 systemd-logind[1446]: New session 17 of user core.
Sep 12 17:38:48.241316 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:38:48.579529 sshd[4022]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:48.589432 systemd[1]: sshd@16-143.244.177.186:22-147.75.109.163:54728.service: Deactivated successfully.
Sep 12 17:38:48.591927 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:38:48.594032 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:38:48.599351 systemd[1]: Started sshd@17-143.244.177.186:22-147.75.109.163:54740.service - OpenSSH per-connection server daemon (147.75.109.163:54740).
Sep 12 17:38:48.603007 systemd-logind[1446]: Removed session 17.
Sep 12 17:38:48.670075 sshd[4033]: Accepted publickey for core from 147.75.109.163 port 54740 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:48.671596 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:48.677707 systemd-logind[1446]: New session 18 of user core.
Sep 12 17:38:48.687249 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:38:50.278550 sshd[4033]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:50.298446 systemd[1]: sshd@17-143.244.177.186:22-147.75.109.163:54740.service: Deactivated successfully.
Sep 12 17:38:50.301793 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:38:50.304709 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:38:50.313644 systemd[1]: Started sshd@18-143.244.177.186:22-147.75.109.163:53024.service - OpenSSH per-connection server daemon (147.75.109.163:53024).
Sep 12 17:38:50.314898 systemd-logind[1446]: Removed session 18.
Sep 12 17:38:50.384769 sshd[4050]: Accepted publickey for core from 147.75.109.163 port 53024 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:50.386691 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:50.395724 systemd-logind[1446]: New session 19 of user core.
Sep 12 17:38:50.398179 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:38:50.763022 sshd[4050]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:50.773684 systemd[1]: sshd@18-143.244.177.186:22-147.75.109.163:53024.service: Deactivated successfully.
Sep 12 17:38:50.778718 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:38:50.782188 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:38:50.791260 systemd[1]: Started sshd@19-143.244.177.186:22-147.75.109.163:53030.service - OpenSSH per-connection server daemon (147.75.109.163:53030).
Sep 12 17:38:50.794468 systemd-logind[1446]: Removed session 19.
Sep 12 17:38:50.834352 sshd[4062]: Accepted publickey for core from 147.75.109.163 port 53030 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:50.836779 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:50.843038 systemd-logind[1446]: New session 20 of user core.
Sep 12 17:38:50.854285 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:38:50.991454 sshd[4062]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:50.997158 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:38:50.997980 systemd[1]: sshd@19-143.244.177.186:22-147.75.109.163:53030.service: Deactivated successfully.
Sep 12 17:38:51.001872 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:38:51.004340 systemd-logind[1446]: Removed session 20.
Sep 12 17:38:56.015462 systemd[1]: Started sshd@20-143.244.177.186:22-147.75.109.163:53038.service - OpenSSH per-connection server daemon (147.75.109.163:53038).
Sep 12 17:38:56.066970 sshd[4075]: Accepted publickey for core from 147.75.109.163 port 53038 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:56.068360 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:56.075445 systemd-logind[1446]: New session 21 of user core.
Sep 12 17:38:56.085422 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:38:56.103485 kubelet[2508]: E0912 17:38:56.102088 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:38:56.268209 sshd[4075]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:56.275256 systemd[1]: sshd@20-143.244.177.186:22-147.75.109.163:53038.service: Deactivated successfully.
Sep 12 17:38:56.278484 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:38:56.280253 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:38:56.281677 systemd-logind[1446]: Removed session 21.
Sep 12 17:39:01.285382 systemd[1]: Started sshd@21-143.244.177.186:22-147.75.109.163:35552.service - OpenSSH per-connection server daemon (147.75.109.163:35552).
Sep 12 17:39:01.346648 sshd[4090]: Accepted publickey for core from 147.75.109.163 port 35552 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:01.348909 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:01.358077 systemd-logind[1446]: New session 22 of user core.
Sep 12 17:39:01.364819 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:39:01.537537 sshd[4090]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:01.543536 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:39:01.545246 systemd[1]: sshd@21-143.244.177.186:22-147.75.109.163:35552.service: Deactivated successfully.
Sep 12 17:39:01.549253 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:39:01.552694 systemd-logind[1446]: Removed session 22.
Sep 12 17:39:06.557417 systemd[1]: Started sshd@22-143.244.177.186:22-147.75.109.163:35564.service - OpenSSH per-connection server daemon (147.75.109.163:35564).
Sep 12 17:39:06.599800 sshd[4103]: Accepted publickey for core from 147.75.109.163 port 35564 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:06.602415 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:06.608867 systemd-logind[1446]: New session 23 of user core.
Sep 12 17:39:06.617216 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:39:06.748831 sshd[4103]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:06.754497 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:39:06.755664 systemd[1]: sshd@22-143.244.177.186:22-147.75.109.163:35564.service: Deactivated successfully.
Sep 12 17:39:06.759214 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:39:06.761329 systemd-logind[1446]: Removed session 23.
Sep 12 17:39:08.101376 kubelet[2508]: E0912 17:39:08.100865 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:10.101908 kubelet[2508]: E0912 17:39:10.100539 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:11.101030 kubelet[2508]: E0912 17:39:11.100992 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:11.766361 systemd[1]: Started sshd@23-143.244.177.186:22-147.75.109.163:51460.service - OpenSSH per-connection server daemon (147.75.109.163:51460).
Sep 12 17:39:11.804993 sshd[4116]: Accepted publickey for core from 147.75.109.163 port 51460 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:11.806779 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:11.812778 systemd-logind[1446]: New session 24 of user core.
Sep 12 17:39:11.821245 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:39:11.947727 sshd[4116]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:11.959162 systemd[1]: sshd@23-143.244.177.186:22-147.75.109.163:51460.service: Deactivated successfully.
Sep 12 17:39:11.961694 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:39:11.963435 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:39:11.968333 systemd[1]: Started sshd@24-143.244.177.186:22-147.75.109.163:51468.service - OpenSSH per-connection server daemon (147.75.109.163:51468).
Sep 12 17:39:11.970092 systemd-logind[1446]: Removed session 24.
Sep 12 17:39:12.010996 sshd[4129]: Accepted publickey for core from 147.75.109.163 port 51468 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:12.013073 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:12.018374 systemd-logind[1446]: New session 25 of user core.
Sep 12 17:39:12.030268 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:39:13.475459 containerd[1469]: time="2025-09-12T17:39:13.475350061Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:39:13.502682 containerd[1469]: time="2025-09-12T17:39:13.502501376Z" level=info msg="StopContainer for \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\" with timeout 2 (s)"
Sep 12 17:39:13.502682 containerd[1469]: time="2025-09-12T17:39:13.502599782Z" level=info msg="StopContainer for \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\" with timeout 30 (s)"
Sep 12 17:39:13.504161 containerd[1469]: time="2025-09-12T17:39:13.504127496Z" level=info msg="Stop container \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\" with signal terminated"
Sep 12 17:39:13.504893 containerd[1469]: time="2025-09-12T17:39:13.504825461Z" level=info msg="Stop container \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\" with signal terminated"
Sep 12 17:39:13.513213 systemd-networkd[1354]: lxc_health: Link DOWN
Sep 12 17:39:13.513222 systemd-networkd[1354]: lxc_health: Lost carrier
Sep 12 17:39:13.527266 systemd[1]: cri-containerd-90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b.scope: Deactivated successfully.
Sep 12 17:39:13.545447 systemd[1]: cri-containerd-7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c.scope: Deactivated successfully.
Sep 12 17:39:13.545781 systemd[1]: cri-containerd-7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c.scope: Consumed 9.526s CPU time.
Sep 12 17:39:13.576524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b-rootfs.mount: Deactivated successfully.
Sep 12 17:39:13.587321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c-rootfs.mount: Deactivated successfully.
Sep 12 17:39:13.597600 containerd[1469]: time="2025-09-12T17:39:13.597354723Z" level=info msg="shim disconnected" id=7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c namespace=k8s.io
Sep 12 17:39:13.597600 containerd[1469]: time="2025-09-12T17:39:13.597423359Z" level=warning msg="cleaning up after shim disconnected" id=7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c namespace=k8s.io
Sep 12 17:39:13.597600 containerd[1469]: time="2025-09-12T17:39:13.597433109Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:13.598547 containerd[1469]: time="2025-09-12T17:39:13.598401253Z" level=info msg="shim disconnected" id=90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b namespace=k8s.io
Sep 12 17:39:13.598547 containerd[1469]: time="2025-09-12T17:39:13.598470463Z" level=warning msg="cleaning up after shim disconnected" id=90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b namespace=k8s.io
Sep 12 17:39:13.598547 containerd[1469]: time="2025-09-12T17:39:13.598481514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:13.622664 containerd[1469]: time="2025-09-12T17:39:13.622605036Z" level=info msg="StopContainer for \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\" returns successfully"
Sep 12 17:39:13.623819 containerd[1469]: time="2025-09-12T17:39:13.623620063Z" level=info msg="StopPodSandbox for \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\""
Sep 12 17:39:13.623819 containerd[1469]: time="2025-09-12T17:39:13.623674925Z" level=info msg="Container to stop \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:39:13.623819 containerd[1469]: time="2025-09-12T17:39:13.623687668Z" level=info msg="Container to stop \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:39:13.623819 containerd[1469]: time="2025-09-12T17:39:13.623698401Z" level=info msg="Container to stop \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:39:13.623819 containerd[1469]: time="2025-09-12T17:39:13.623707144Z" level=info msg="Container to stop \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:39:13.623819 containerd[1469]: time="2025-09-12T17:39:13.623717380Z" level=info msg="Container to stop \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:39:13.626602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd-shm.mount: Deactivated successfully.
Sep 12 17:39:13.626899 containerd[1469]: time="2025-09-12T17:39:13.626869272Z" level=info msg="StopContainer for \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\" returns successfully"
Sep 12 17:39:13.630965 containerd[1469]: time="2025-09-12T17:39:13.628519516Z" level=info msg="StopPodSandbox for \"947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9\""
Sep 12 17:39:13.630965 containerd[1469]: time="2025-09-12T17:39:13.628574113Z" level=info msg="Container to stop \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:39:13.632867 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9-shm.mount: Deactivated successfully.
Sep 12 17:39:13.643176 systemd[1]: cri-containerd-9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd.scope: Deactivated successfully.
Sep 12 17:39:13.644659 systemd[1]: cri-containerd-947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9.scope: Deactivated successfully.
Sep 12 17:39:13.690055 containerd[1469]: time="2025-09-12T17:39:13.689993196Z" level=info msg="shim disconnected" id=9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd namespace=k8s.io
Sep 12 17:39:13.690543 containerd[1469]: time="2025-09-12T17:39:13.690318374Z" level=warning msg="cleaning up after shim disconnected" id=9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd namespace=k8s.io
Sep 12 17:39:13.690543 containerd[1469]: time="2025-09-12T17:39:13.690334649Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:13.690543 containerd[1469]: time="2025-09-12T17:39:13.690401220Z" level=info msg="shim disconnected" id=947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9 namespace=k8s.io
Sep 12 17:39:13.690543 containerd[1469]: time="2025-09-12T17:39:13.690428292Z" level=warning msg="cleaning up after shim disconnected" id=947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9 namespace=k8s.io
Sep 12 17:39:13.690543 containerd[1469]: time="2025-09-12T17:39:13.690434907Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:13.726138 containerd[1469]: time="2025-09-12T17:39:13.725538219Z" level=info msg="TearDown network for sandbox \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" successfully"
Sep 12 17:39:13.726138 containerd[1469]: time="2025-09-12T17:39:13.725590940Z" level=info msg="StopPodSandbox for \"9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd\" returns successfully"
Sep 12 17:39:13.727618 containerd[1469]: time="2025-09-12T17:39:13.727552753Z" level=info msg="TearDown network for sandbox \"947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9\" successfully"
Sep 12 17:39:13.727618 containerd[1469]: time="2025-09-12T17:39:13.727595498Z" level=info msg="StopPodSandbox for \"947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9\" returns successfully"
Sep 12 17:39:13.809965 kubelet[2508]: I0912 17:39:13.808124 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtz5n\" (UniqueName: \"kubernetes.io/projected/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-kube-api-access-rtz5n\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.809965 kubelet[2508]: I0912 17:39:13.808193 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-bpf-maps\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.809965 kubelet[2508]: I0912 17:39:13.808222 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b13ca77-3f4e-44ec-9de5-74125e759c96-cilium-config-path\") pod \"4b13ca77-3f4e-44ec-9de5-74125e759c96\" (UID: \"4b13ca77-3f4e-44ec-9de5-74125e759c96\") "
Sep 12 17:39:13.809965 kubelet[2508]: I0912 17:39:13.808249 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-lib-modules\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.809965 kubelet[2508]: I0912 17:39:13.808266 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-host-proc-sys-net\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.809965 kubelet[2508]: I0912 17:39:13.808282 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-hubble-tls\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.810586 kubelet[2508]: I0912 17:39:13.808297 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-host-proc-sys-kernel\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.810586 kubelet[2508]: I0912 17:39:13.808311 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-hostproc\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.810586 kubelet[2508]: I0912 17:39:13.808334 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-xtables-lock\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.810586 kubelet[2508]: I0912 17:39:13.808349 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-run\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.810586 kubelet[2508]: I0912 17:39:13.808364 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hpvx\" (UniqueName: \"kubernetes.io/projected/4b13ca77-3f4e-44ec-9de5-74125e759c96-kube-api-access-5hpvx\") pod \"4b13ca77-3f4e-44ec-9de5-74125e759c96\" (UID: \"4b13ca77-3f4e-44ec-9de5-74125e759c96\") "
Sep 12 17:39:13.810586 kubelet[2508]: I0912 17:39:13.808410 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-config-path\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.810787 kubelet[2508]: I0912 17:39:13.808433 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-cgroup\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.810787 kubelet[2508]: I0912 17:39:13.808448 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cni-path\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.810787 kubelet[2508]: I0912 17:39:13.808463 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-etc-cni-netd\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.810787 kubelet[2508]: I0912 17:39:13.808490 2508 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-clustermesh-secrets\") pod \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\" (UID: \"c7d52cc0-3b1d-48be-844e-e71d1c7d2391\") "
Sep 12 17:39:13.821988 kubelet[2508]: I0912 17:39:13.818617 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:39:13.821988 kubelet[2508]: I0912 17:39:13.818694 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:39:13.835966 kubelet[2508]: I0912 17:39:13.833738 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:39:13.837964 kubelet[2508]: I0912 17:39:13.836209 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cni-path" (OuterVolumeSpecName: "cni-path") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:39:13.837964 kubelet[2508]: I0912 17:39:13.836248 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:39:13.837964 kubelet[2508]: I0912 17:39:13.836407 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 12 17:39:13.837964 kubelet[2508]: I0912 17:39:13.836433 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-hostproc" (OuterVolumeSpecName: "hostproc") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:39:13.837964 kubelet[2508]: I0912 17:39:13.836466 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:39:13.838199 kubelet[2508]: I0912 17:39:13.836481 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:39:13.840968 kubelet[2508]: I0912 17:39:13.840067 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 17:39:13.846362 kubelet[2508]: I0912 17:39:13.842381 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b13ca77-3f4e-44ec-9de5-74125e759c96-kube-api-access-5hpvx" (OuterVolumeSpecName: "kube-api-access-5hpvx") pod "4b13ca77-3f4e-44ec-9de5-74125e759c96" (UID: "4b13ca77-3f4e-44ec-9de5-74125e759c96"). InnerVolumeSpecName "kube-api-access-5hpvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 17:39:13.850069 kubelet[2508]: I0912 17:39:13.846281 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:39:13.857841 kubelet[2508]: I0912 17:39:13.847094 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:39:13.857841 kubelet[2508]: I0912 17:39:13.849744 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b13ca77-3f4e-44ec-9de5-74125e759c96-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4b13ca77-3f4e-44ec-9de5-74125e759c96" (UID: "4b13ca77-3f4e-44ec-9de5-74125e759c96"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 17:39:13.860571 kubelet[2508]: I0912 17:39:13.860499 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 17:39:13.864129 kubelet[2508]: I0912 17:39:13.864073 2508 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-kube-api-access-rtz5n" (OuterVolumeSpecName: "kube-api-access-rtz5n") pod "c7d52cc0-3b1d-48be-844e-e71d1c7d2391" (UID: "c7d52cc0-3b1d-48be-844e-e71d1c7d2391"). InnerVolumeSpecName "kube-api-access-rtz5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 17:39:13.909411 kubelet[2508]: I0912 17:39:13.909365 2508 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-bpf-maps\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.909833 kubelet[2508]: I0912 17:39:13.909813 2508 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b13ca77-3f4e-44ec-9de5-74125e759c96-cilium-config-path\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.909928 kubelet[2508]: I0912 17:39:13.909917 2508 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-hostproc\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910011 kubelet[2508]: I0912 17:39:13.910002 2508 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-xtables-lock\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910057 kubelet[2508]: I0912 17:39:13.910050 2508 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-lib-modules\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910115 kubelet[2508]: I0912 17:39:13.910105 2508 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-host-proc-sys-net\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910169 kubelet[2508]: I0912 17:39:13.910153 2508 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-hubble-tls\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910236 kubelet[2508]: I0912 17:39:13.910226 2508 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-host-proc-sys-kernel\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910304 kubelet[2508]: I0912 17:39:13.910282 2508 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-run\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910355 kubelet[2508]: I0912 17:39:13.910347 2508 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hpvx\" (UniqueName: \"kubernetes.io/projected/4b13ca77-3f4e-44ec-9de5-74125e759c96-kube-api-access-5hpvx\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910423 kubelet[2508]: I0912 17:39:13.910410 2508 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-config-path\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910512 kubelet[2508]: I0912 17:39:13.910502 2508 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cilium-cgroup\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910573 kubelet[2508]: I0912 17:39:13.910556 2508 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-cni-path\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\""
Sep 12 17:39:13.910627 kubelet[2508]: I0912 17:39:13.910618 2508 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName:
\"kubernetes.io/secret/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-clustermesh-secrets\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\"" Sep 12 17:39:13.910690 kubelet[2508]: I0912 17:39:13.910681 2508 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-etc-cni-netd\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\"" Sep 12 17:39:13.910747 kubelet[2508]: I0912 17:39:13.910732 2508 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtz5n\" (UniqueName: \"kubernetes.io/projected/c7d52cc0-3b1d-48be-844e-e71d1c7d2391-kube-api-access-rtz5n\") on node \"ci-4081.3.6-a-bde5b7e242\" DevicePath \"\"" Sep 12 17:39:14.110104 systemd[1]: Removed slice kubepods-besteffort-pod4b13ca77_3f4e_44ec_9de5_74125e759c96.slice - libcontainer container kubepods-besteffort-pod4b13ca77_3f4e_44ec_9de5_74125e759c96.slice. Sep 12 17:39:14.111847 systemd[1]: Removed slice kubepods-burstable-podc7d52cc0_3b1d_48be_844e_e71d1c7d2391.slice - libcontainer container kubepods-burstable-podc7d52cc0_3b1d_48be_844e_e71d1c7d2391.slice. Sep 12 17:39:14.112198 systemd[1]: kubepods-burstable-podc7d52cc0_3b1d_48be_844e_e71d1c7d2391.slice: Consumed 9.627s CPU time. Sep 12 17:39:14.436853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-947ecef925f96c617ffaed336ebcbf0f5c2e386631bc59d6a8555e38b6abe9c9-rootfs.mount: Deactivated successfully. Sep 12 17:39:14.437008 systemd[1]: var-lib-kubelet-pods-4b13ca77\x2d3f4e\x2d44ec\x2d9de5\x2d74125e759c96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5hpvx.mount: Deactivated successfully. Sep 12 17:39:14.437099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9404e687753b5660dda12f15de7db42de87be023a4b8dee7ed39efd22e35babd-rootfs.mount: Deactivated successfully. 
Sep 12 17:39:14.437183 systemd[1]: var-lib-kubelet-pods-c7d52cc0\x2d3b1d\x2d48be\x2d844e\x2de71d1c7d2391-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drtz5n.mount: Deactivated successfully.
Sep 12 17:39:14.437247 systemd[1]: var-lib-kubelet-pods-c7d52cc0\x2d3b1d\x2d48be\x2d844e\x2de71d1c7d2391-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 12 17:39:14.437335 systemd[1]: var-lib-kubelet-pods-c7d52cc0\x2d3b1d\x2d48be\x2d844e\x2de71d1c7d2391-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 12 17:39:14.453394 kubelet[2508]: I0912 17:39:14.452229 2508 scope.go:117] "RemoveContainer" containerID="7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c"
Sep 12 17:39:14.464561 containerd[1469]: time="2025-09-12T17:39:14.464497337Z" level=info msg="RemoveContainer for \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\""
Sep 12 17:39:14.472717 containerd[1469]: time="2025-09-12T17:39:14.472661788Z" level=info msg="RemoveContainer for \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\" returns successfully"
Sep 12 17:39:14.473404 kubelet[2508]: I0912 17:39:14.473362 2508 scope.go:117] "RemoveContainer" containerID="884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da"
Sep 12 17:39:14.478814 containerd[1469]: time="2025-09-12T17:39:14.478664208Z" level=info msg="RemoveContainer for \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\""
Sep 12 17:39:14.482725 containerd[1469]: time="2025-09-12T17:39:14.482568675Z" level=info msg="RemoveContainer for \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\" returns successfully"
Sep 12 17:39:14.484138 kubelet[2508]: I0912 17:39:14.484099 2508 scope.go:117] "RemoveContainer" containerID="9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce"
Sep 12 17:39:14.487255 containerd[1469]: time="2025-09-12T17:39:14.487207671Z" level=info msg="RemoveContainer for \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\""
Sep 12 17:39:14.505303 containerd[1469]: time="2025-09-12T17:39:14.505114823Z" level=info msg="RemoveContainer for \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\" returns successfully"
Sep 12 17:39:14.505555 kubelet[2508]: I0912 17:39:14.505512 2508 scope.go:117] "RemoveContainer" containerID="66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e"
Sep 12 17:39:14.511855 containerd[1469]: time="2025-09-12T17:39:14.511687480Z" level=info msg="RemoveContainer for \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\""
Sep 12 17:39:14.518667 containerd[1469]: time="2025-09-12T17:39:14.518017550Z" level=info msg="RemoveContainer for \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\" returns successfully"
Sep 12 17:39:14.519458 kubelet[2508]: I0912 17:39:14.518309 2508 scope.go:117] "RemoveContainer" containerID="6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0"
Sep 12 17:39:14.523220 containerd[1469]: time="2025-09-12T17:39:14.523183160Z" level=info msg="RemoveContainer for \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\""
Sep 12 17:39:14.526285 containerd[1469]: time="2025-09-12T17:39:14.526162510Z" level=info msg="RemoveContainer for \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\" returns successfully"
Sep 12 17:39:14.526493 kubelet[2508]: I0912 17:39:14.526458 2508 scope.go:117] "RemoveContainer" containerID="7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c"
Sep 12 17:39:14.553810 containerd[1469]: time="2025-09-12T17:39:14.530155256Z" level=error msg="ContainerStatus for \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\": not found"
Sep 12 17:39:14.562636 kubelet[2508]: E0912 17:39:14.562300 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\": not found" containerID="7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c"
Sep 12 17:39:14.572967 kubelet[2508]: I0912 17:39:14.562545 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c"} err="failed to get container status \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a9c8cfec205f12e3e426a7b3093df07f824a5a5094997ad5f2d7bb3b39e4e5c\": not found"
Sep 12 17:39:14.573207 kubelet[2508]: I0912 17:39:14.573185 2508 scope.go:117] "RemoveContainer" containerID="884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da"
Sep 12 17:39:14.573765 containerd[1469]: time="2025-09-12T17:39:14.573640257Z" level=error msg="ContainerStatus for \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\": not found"
Sep 12 17:39:14.574030 kubelet[2508]: E0912 17:39:14.573891 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\": not found" containerID="884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da"
Sep 12 17:39:14.574030 kubelet[2508]: I0912 17:39:14.574008 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da"} err="failed to get container status \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\": rpc error: code = NotFound desc = an error occurred when try to find container \"884d36915d7305a7d06debdde786d4aaaeeeed08d46f0bbd6add67abaeaa57da\": not found"
Sep 12 17:39:14.574145 kubelet[2508]: I0912 17:39:14.574044 2508 scope.go:117] "RemoveContainer" containerID="9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce"
Sep 12 17:39:14.574444 containerd[1469]: time="2025-09-12T17:39:14.574388772Z" level=error msg="ContainerStatus for \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\": not found"
Sep 12 17:39:14.574616 kubelet[2508]: E0912 17:39:14.574582 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\": not found" containerID="9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce"
Sep 12 17:39:14.574661 kubelet[2508]: I0912 17:39:14.574622 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce"} err="failed to get container status \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e519d9b46575d0c0378db7ad94c31f55646b2b7b91bb8762d8c62cda5c7bfce\": not found"
Sep 12 17:39:14.574661 kubelet[2508]: I0912 17:39:14.574651 2508 scope.go:117] "RemoveContainer" containerID="66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e"
Sep 12 17:39:14.574982 containerd[1469]: time="2025-09-12T17:39:14.574878281Z" level=error msg="ContainerStatus for \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\": not found"
Sep 12 17:39:14.575120 kubelet[2508]: E0912 17:39:14.575062 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\": not found" containerID="66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e"
Sep 12 17:39:14.575120 kubelet[2508]: I0912 17:39:14.575107 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e"} err="failed to get container status \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\": rpc error: code = NotFound desc = an error occurred when try to find container \"66d00a6d798b617bc691d984ce3a794cb43df6e79c561ff2e88677d3a981f18e\": not found"
Sep 12 17:39:14.575181 kubelet[2508]: I0912 17:39:14.575131 2508 scope.go:117] "RemoveContainer" containerID="6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0"
Sep 12 17:39:14.575467 containerd[1469]: time="2025-09-12T17:39:14.575420533Z" level=error msg="ContainerStatus for \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\": not found"
Sep 12 17:39:14.575583 kubelet[2508]: E0912 17:39:14.575557 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\": not found" containerID="6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0"
Sep 12 17:39:14.575624 kubelet[2508]: I0912 17:39:14.575591 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0"} err="failed to get container status \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ccd8397b3a8cf3fb873a567608ce6e574f347638b3b20c28f4099fbd980cef0\": not found"
Sep 12 17:39:14.575624 kubelet[2508]: I0912 17:39:14.575613 2508 scope.go:117] "RemoveContainer" containerID="90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b"
Sep 12 17:39:14.577562 containerd[1469]: time="2025-09-12T17:39:14.577520621Z" level=info msg="RemoveContainer for \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\""
Sep 12 17:39:14.581059 containerd[1469]: time="2025-09-12T17:39:14.580915805Z" level=info msg="RemoveContainer for \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\" returns successfully"
Sep 12 17:39:14.581490 kubelet[2508]: I0912 17:39:14.581454 2508 scope.go:117] "RemoveContainer" containerID="90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b"
Sep 12 17:39:14.581756 containerd[1469]: time="2025-09-12T17:39:14.581726811Z" level=error msg="ContainerStatus for \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\": not found"
Sep 12 17:39:14.582033 kubelet[2508]: E0912 17:39:14.582007 2508 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\": not found" containerID="90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b"
Sep 12 17:39:14.582101 kubelet[2508]: I0912 17:39:14.582060 2508 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b"} err="failed to get container status \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\": rpc error: code = NotFound desc = an error occurred when try to find container \"90f4d373c5caeab8908819e331f7762f5fedca846dd2ad6ad0e430e62f34c21b\": not found"
Sep 12 17:39:15.360225 sshd[4129]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:15.374132 systemd[1]: sshd@24-143.244.177.186:22-147.75.109.163:51468.service: Deactivated successfully.
Sep 12 17:39:15.377448 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:39:15.380010 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:39:15.392409 systemd[1]: Started sshd@25-143.244.177.186:22-147.75.109.163:51476.service - OpenSSH per-connection server daemon (147.75.109.163:51476).
Sep 12 17:39:15.394323 systemd-logind[1446]: Removed session 25.
Sep 12 17:39:15.428857 sshd[4297]: Accepted publickey for core from 147.75.109.163 port 51476 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:15.430483 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:15.435125 systemd-logind[1446]: New session 26 of user core.
Sep 12 17:39:15.441172 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:39:16.025436 sshd[4297]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:16.038180 systemd[1]: sshd@25-143.244.177.186:22-147.75.109.163:51476.service: Deactivated successfully.
Sep 12 17:39:16.041451 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:39:16.048014 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:39:16.055356 systemd[1]: Started sshd@26-143.244.177.186:22-147.75.109.163:51478.service - OpenSSH per-connection server daemon (147.75.109.163:51478).
Sep 12 17:39:16.059284 systemd-logind[1446]: Removed session 26.
Sep 12 17:39:16.066963 kubelet[2508]: E0912 17:39:16.066834 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d52cc0-3b1d-48be-844e-e71d1c7d2391" containerName="cilium-agent"
Sep 12 17:39:16.066963 kubelet[2508]: E0912 17:39:16.066875 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d52cc0-3b1d-48be-844e-e71d1c7d2391" containerName="mount-cgroup"
Sep 12 17:39:16.066963 kubelet[2508]: E0912 17:39:16.066886 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b13ca77-3f4e-44ec-9de5-74125e759c96" containerName="cilium-operator"
Sep 12 17:39:16.066963 kubelet[2508]: E0912 17:39:16.066896 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d52cc0-3b1d-48be-844e-e71d1c7d2391" containerName="mount-bpf-fs"
Sep 12 17:39:16.066963 kubelet[2508]: E0912 17:39:16.066903 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d52cc0-3b1d-48be-844e-e71d1c7d2391" containerName="clean-cilium-state"
Sep 12 17:39:16.066963 kubelet[2508]: E0912 17:39:16.066910 2508 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d52cc0-3b1d-48be-844e-e71d1c7d2391" containerName="apply-sysctl-overwrites"
Sep 12 17:39:16.066963 kubelet[2508]: I0912 17:39:16.066966 2508 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7d52cc0-3b1d-48be-844e-e71d1c7d2391" containerName="cilium-agent"
Sep 12 17:39:16.066963 kubelet[2508]: I0912 17:39:16.066975 2508 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b13ca77-3f4e-44ec-9de5-74125e759c96" containerName="cilium-operator"
Sep 12 17:39:16.114248 kubelet[2508]: I0912 17:39:16.112278 2508 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b13ca77-3f4e-44ec-9de5-74125e759c96" path="/var/lib/kubelet/pods/4b13ca77-3f4e-44ec-9de5-74125e759c96/volumes"
Sep 12 17:39:16.114248 kubelet[2508]: I0912 17:39:16.112774 2508 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d52cc0-3b1d-48be-844e-e71d1c7d2391" path="/var/lib/kubelet/pods/c7d52cc0-3b1d-48be-844e-e71d1c7d2391/volumes"
Sep 12 17:39:16.120109 systemd[1]: Created slice kubepods-burstable-pod7b217f9e_e7ea_49c2_b3fa_1b8a4e8c3d4e.slice - libcontainer container kubepods-burstable-pod7b217f9e_e7ea_49c2_b3fa_1b8a4e8c3d4e.slice.
Sep 12 17:39:16.124097 kubelet[2508]: I0912 17:39:16.124049 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-lib-modules\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124097 kubelet[2508]: I0912 17:39:16.124095 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-clustermesh-secrets\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124679 kubelet[2508]: I0912 17:39:16.124127 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-etc-cni-netd\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124679 kubelet[2508]: I0912 17:39:16.124142 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-host-proc-sys-kernel\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124679 kubelet[2508]: I0912 17:39:16.124178 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-hostproc\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124679 kubelet[2508]: I0912 17:39:16.124194 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-cilium-ipsec-secrets\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124679 kubelet[2508]: I0912 17:39:16.124213 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-bpf-maps\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124679 kubelet[2508]: I0912 17:39:16.124239 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-cilium-run\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124857 kubelet[2508]: I0912 17:39:16.124264 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-cilium-cgroup\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124857 kubelet[2508]: I0912 17:39:16.124280 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-cilium-config-path\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124857 kubelet[2508]: I0912 17:39:16.124296 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-host-proc-sys-net\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.124857 kubelet[2508]: I0912 17:39:16.124310 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-cni-path\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.125287 kubelet[2508]: I0912 17:39:16.124987 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-xtables-lock\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.125287 kubelet[2508]: I0912 17:39:16.125031 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-hubble-tls\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.125287 kubelet[2508]: I0912 17:39:16.125049 2508 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjg47\" (UniqueName: \"kubernetes.io/projected/7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e-kube-api-access-kjg47\") pod \"cilium-4vfsp\" (UID: \"7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e\") " pod="kube-system/cilium-4vfsp"
Sep 12 17:39:16.132668 sshd[4308]: Accepted publickey for core from 147.75.109.163 port 51478 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:16.135599 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:16.145713 systemd-logind[1446]: New session 27 of user core.
Sep 12 17:39:16.149275 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 17:39:16.220142 sshd[4308]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:16.222241 kubelet[2508]: E0912 17:39:16.222148 2508 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 17:39:16.227908 systemd[1]: sshd@26-143.244.177.186:22-147.75.109.163:51478.service: Deactivated successfully.
Sep 12 17:39:16.231922 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 17:39:16.253159 systemd-logind[1446]: Session 27 logged out. Waiting for processes to exit.
Sep 12 17:39:16.254715 systemd-logind[1446]: Removed session 27.
Sep 12 17:39:16.261331 systemd[1]: Started sshd@27-143.244.177.186:22-147.75.109.163:51494.service - OpenSSH per-connection server daemon (147.75.109.163:51494).
Sep 12 17:39:16.314441 sshd[4319]: Accepted publickey for core from 147.75.109.163 port 51494 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:16.316868 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:16.321449 systemd-logind[1446]: New session 28 of user core.
Sep 12 17:39:16.328218 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 17:39:16.429306 kubelet[2508]: E0912 17:39:16.426032 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:16.429544 containerd[1469]: time="2025-09-12T17:39:16.426748762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vfsp,Uid:7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e,Namespace:kube-system,Attempt:0,}"
Sep 12 17:39:16.468839 containerd[1469]: time="2025-09-12T17:39:16.468438491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:39:16.469233 containerd[1469]: time="2025-09-12T17:39:16.468910745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:39:16.469433 containerd[1469]: time="2025-09-12T17:39:16.469065383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:16.471622 containerd[1469]: time="2025-09-12T17:39:16.471231128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:39:16.512177 systemd[1]: Started cri-containerd-24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1.scope - libcontainer container 24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1.
Sep 12 17:39:16.564700 containerd[1469]: time="2025-09-12T17:39:16.564286340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vfsp,Uid:7b217f9e-e7ea-49c2-b3fa-1b8a4e8c3d4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\""
Sep 12 17:39:16.566289 kubelet[2508]: E0912 17:39:16.566251 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:16.572751 containerd[1469]: time="2025-09-12T17:39:16.572671758Z" level=info msg="CreateContainer within sandbox \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:39:16.594534 containerd[1469]: time="2025-09-12T17:39:16.594423602Z" level=info msg="CreateContainer within sandbox \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"597daffc32b5bed7ff07aac682f38ca4f59b63224a1b12a2c4ba9a1d9ce5a66b\""
Sep 12 17:39:16.598578 containerd[1469]: time="2025-09-12T17:39:16.597273839Z" level=info msg="StartContainer for \"597daffc32b5bed7ff07aac682f38ca4f59b63224a1b12a2c4ba9a1d9ce5a66b\""
Sep 12 17:39:16.642438 systemd[1]: Started cri-containerd-597daffc32b5bed7ff07aac682f38ca4f59b63224a1b12a2c4ba9a1d9ce5a66b.scope - libcontainer container 597daffc32b5bed7ff07aac682f38ca4f59b63224a1b12a2c4ba9a1d9ce5a66b.
Sep 12 17:39:16.681340 containerd[1469]: time="2025-09-12T17:39:16.681284420Z" level=info msg="StartContainer for \"597daffc32b5bed7ff07aac682f38ca4f59b63224a1b12a2c4ba9a1d9ce5a66b\" returns successfully"
Sep 12 17:39:16.698292 systemd[1]: cri-containerd-597daffc32b5bed7ff07aac682f38ca4f59b63224a1b12a2c4ba9a1d9ce5a66b.scope: Deactivated successfully.
Sep 12 17:39:16.736811 containerd[1469]: time="2025-09-12T17:39:16.736723315Z" level=info msg="shim disconnected" id=597daffc32b5bed7ff07aac682f38ca4f59b63224a1b12a2c4ba9a1d9ce5a66b namespace=k8s.io
Sep 12 17:39:16.736811 containerd[1469]: time="2025-09-12T17:39:16.736797718Z" level=warning msg="cleaning up after shim disconnected" id=597daffc32b5bed7ff07aac682f38ca4f59b63224a1b12a2c4ba9a1d9ce5a66b namespace=k8s.io
Sep 12 17:39:16.736811 containerd[1469]: time="2025-09-12T17:39:16.736811703Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:17.486185 kubelet[2508]: E0912 17:39:17.486105 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:17.490108 containerd[1469]: time="2025-09-12T17:39:17.490047822Z" level=info msg="CreateContainer within sandbox \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:39:17.512146 containerd[1469]: time="2025-09-12T17:39:17.512091425Z" level=info msg="CreateContainer within sandbox \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0bc79c71e16c3718fd8009ffa6d5027e864a379e7eaf1b75950a59362a05e8d1\""
Sep 12 17:39:17.513126 containerd[1469]: time="2025-09-12T17:39:17.513060480Z" level=info msg="StartContainer for \"0bc79c71e16c3718fd8009ffa6d5027e864a379e7eaf1b75950a59362a05e8d1\""
Sep 12 17:39:17.559206 systemd[1]: Started cri-containerd-0bc79c71e16c3718fd8009ffa6d5027e864a379e7eaf1b75950a59362a05e8d1.scope - libcontainer container 0bc79c71e16c3718fd8009ffa6d5027e864a379e7eaf1b75950a59362a05e8d1.
Sep 12 17:39:17.596733 containerd[1469]: time="2025-09-12T17:39:17.595869531Z" level=info msg="StartContainer for \"0bc79c71e16c3718fd8009ffa6d5027e864a379e7eaf1b75950a59362a05e8d1\" returns successfully"
Sep 12 17:39:17.609180 systemd[1]: cri-containerd-0bc79c71e16c3718fd8009ffa6d5027e864a379e7eaf1b75950a59362a05e8d1.scope: Deactivated successfully.
Sep 12 17:39:17.640755 containerd[1469]: time="2025-09-12T17:39:17.640646753Z" level=info msg="shim disconnected" id=0bc79c71e16c3718fd8009ffa6d5027e864a379e7eaf1b75950a59362a05e8d1 namespace=k8s.io
Sep 12 17:39:17.640755 containerd[1469]: time="2025-09-12T17:39:17.640744642Z" level=warning msg="cleaning up after shim disconnected" id=0bc79c71e16c3718fd8009ffa6d5027e864a379e7eaf1b75950a59362a05e8d1 namespace=k8s.io
Sep 12 17:39:17.641049 containerd[1469]: time="2025-09-12T17:39:17.640757826Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:18.235648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bc79c71e16c3718fd8009ffa6d5027e864a379e7eaf1b75950a59362a05e8d1-rootfs.mount: Deactivated successfully.
Sep 12 17:39:18.455210 kubelet[2508]: I0912 17:39:18.455145 2508 setters.go:600] "Node became not ready" node="ci-4081.3.6-a-bde5b7e242" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:39:18Z","lastTransitionTime":"2025-09-12T17:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 17:39:18.496546 kubelet[2508]: E0912 17:39:18.493736 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:18.509189 containerd[1469]: time="2025-09-12T17:39:18.509031953Z" level=info msg="CreateContainer within sandbox \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:39:18.531750 containerd[1469]: time="2025-09-12T17:39:18.531701591Z" level=info msg="CreateContainer within sandbox \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"945f2afbb4ce7fd3fff2e51aa7e64a7c4c484a2b196bea7e3d6deb1badabc28b\""
Sep 12 17:39:18.535606 containerd[1469]: time="2025-09-12T17:39:18.535549431Z" level=info msg="StartContainer for \"945f2afbb4ce7fd3fff2e51aa7e64a7c4c484a2b196bea7e3d6deb1badabc28b\""
Sep 12 17:39:18.581652 systemd[1]: run-containerd-runc-k8s.io-945f2afbb4ce7fd3fff2e51aa7e64a7c4c484a2b196bea7e3d6deb1badabc28b-runc.LkoeTn.mount: Deactivated successfully.
Sep 12 17:39:18.588168 systemd[1]: Started cri-containerd-945f2afbb4ce7fd3fff2e51aa7e64a7c4c484a2b196bea7e3d6deb1badabc28b.scope - libcontainer container 945f2afbb4ce7fd3fff2e51aa7e64a7c4c484a2b196bea7e3d6deb1badabc28b.
Sep 12 17:39:18.623681 containerd[1469]: time="2025-09-12T17:39:18.623523214Z" level=info msg="StartContainer for \"945f2afbb4ce7fd3fff2e51aa7e64a7c4c484a2b196bea7e3d6deb1badabc28b\" returns successfully"
Sep 12 17:39:18.630655 systemd[1]: cri-containerd-945f2afbb4ce7fd3fff2e51aa7e64a7c4c484a2b196bea7e3d6deb1badabc28b.scope: Deactivated successfully.
Sep 12 17:39:18.660137 containerd[1469]: time="2025-09-12T17:39:18.660064983Z" level=info msg="shim disconnected" id=945f2afbb4ce7fd3fff2e51aa7e64a7c4c484a2b196bea7e3d6deb1badabc28b namespace=k8s.io
Sep 12 17:39:18.660137 containerd[1469]: time="2025-09-12T17:39:18.660132793Z" level=warning msg="cleaning up after shim disconnected" id=945f2afbb4ce7fd3fff2e51aa7e64a7c4c484a2b196bea7e3d6deb1badabc28b namespace=k8s.io
Sep 12 17:39:18.660137 containerd[1469]: time="2025-09-12T17:39:18.660142456Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:19.235627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-945f2afbb4ce7fd3fff2e51aa7e64a7c4c484a2b196bea7e3d6deb1badabc28b-rootfs.mount: Deactivated successfully.
Sep 12 17:39:19.498807 kubelet[2508]: E0912 17:39:19.498093 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:19.505022 containerd[1469]: time="2025-09-12T17:39:19.504732734Z" level=info msg="CreateContainer within sandbox \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:39:19.525770 containerd[1469]: time="2025-09-12T17:39:19.525674081Z" level=info msg="CreateContainer within sandbox \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0714a8dd1f3f2dc1bd693feb0f475f3d1dcaffe94fadb1c2a38216afe653ca6f\""
Sep 12 17:39:19.531082 containerd[1469]: time="2025-09-12T17:39:19.528256172Z" level=info msg="StartContainer for \"0714a8dd1f3f2dc1bd693feb0f475f3d1dcaffe94fadb1c2a38216afe653ca6f\""
Sep 12 17:39:19.565548 systemd[1]: run-containerd-runc-k8s.io-0714a8dd1f3f2dc1bd693feb0f475f3d1dcaffe94fadb1c2a38216afe653ca6f-runc.ldl0Z7.mount: Deactivated successfully.
Sep 12 17:39:19.575271 systemd[1]: Started cri-containerd-0714a8dd1f3f2dc1bd693feb0f475f3d1dcaffe94fadb1c2a38216afe653ca6f.scope - libcontainer container 0714a8dd1f3f2dc1bd693feb0f475f3d1dcaffe94fadb1c2a38216afe653ca6f.
Sep 12 17:39:19.604157 systemd[1]: cri-containerd-0714a8dd1f3f2dc1bd693feb0f475f3d1dcaffe94fadb1c2a38216afe653ca6f.scope: Deactivated successfully.
Sep 12 17:39:19.606523 containerd[1469]: time="2025-09-12T17:39:19.606453780Z" level=info msg="StartContainer for \"0714a8dd1f3f2dc1bd693feb0f475f3d1dcaffe94fadb1c2a38216afe653ca6f\" returns successfully"
Sep 12 17:39:19.636180 containerd[1469]: time="2025-09-12T17:39:19.636112382Z" level=info msg="shim disconnected" id=0714a8dd1f3f2dc1bd693feb0f475f3d1dcaffe94fadb1c2a38216afe653ca6f namespace=k8s.io
Sep 12 17:39:19.636180 containerd[1469]: time="2025-09-12T17:39:19.636173335Z" level=warning msg="cleaning up after shim disconnected" id=0714a8dd1f3f2dc1bd693feb0f475f3d1dcaffe94fadb1c2a38216afe653ca6f namespace=k8s.io
Sep 12 17:39:19.636180 containerd[1469]: time="2025-09-12T17:39:19.636184149Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:19.651888 containerd[1469]: time="2025-09-12T17:39:19.651012019Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:39:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 17:39:20.102175 kubelet[2508]: E0912 17:39:20.101480 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:20.237059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0714a8dd1f3f2dc1bd693feb0f475f3d1dcaffe94fadb1c2a38216afe653ca6f-rootfs.mount: Deactivated successfully.
Sep 12 17:39:20.503134 kubelet[2508]: E0912 17:39:20.502548 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:20.506458 containerd[1469]: time="2025-09-12T17:39:20.505589729Z" level=info msg="CreateContainer within sandbox \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:39:20.523974 containerd[1469]: time="2025-09-12T17:39:20.522224073Z" level=info msg="CreateContainer within sandbox \"24d1b70894daa6d26d6951bc26648c7fd5f6f34e74f820ab133c1a8ce39263a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"052576ca5e3afeef16f4cec0717913ac35ec60184717fee4096b9747959ac4ba\""
Sep 12 17:39:20.527389 containerd[1469]: time="2025-09-12T17:39:20.524822874Z" level=info msg="StartContainer for \"052576ca5e3afeef16f4cec0717913ac35ec60184717fee4096b9747959ac4ba\""
Sep 12 17:39:20.578856 systemd[1]: Started cri-containerd-052576ca5e3afeef16f4cec0717913ac35ec60184717fee4096b9747959ac4ba.scope - libcontainer container 052576ca5e3afeef16f4cec0717913ac35ec60184717fee4096b9747959ac4ba.
Sep 12 17:39:20.621505 containerd[1469]: time="2025-09-12T17:39:20.620697402Z" level=info msg="StartContainer for \"052576ca5e3afeef16f4cec0717913ac35ec60184717fee4096b9747959ac4ba\" returns successfully"
Sep 12 17:39:21.080996 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 12 17:39:21.101123 kubelet[2508]: E0912 17:39:21.101018 2508 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-wqjmz" podUID="67c4f00a-74ac-4241-8a6f-d46093d18279"
Sep 12 17:39:21.508733 kubelet[2508]: E0912 17:39:21.508382 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:21.533057 kubelet[2508]: I0912 17:39:21.532977 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4vfsp" podStartSLOduration=5.532953984 podStartE2EDuration="5.532953984s" podCreationTimestamp="2025-09-12 17:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:21.531116734 +0000 UTC m=+105.575245881" watchObservedRunningTime="2025-09-12 17:39:21.532953984 +0000 UTC m=+105.577083113"
Sep 12 17:39:22.511439 kubelet[2508]: E0912 17:39:22.510383 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:23.100799 kubelet[2508]: E0912 17:39:23.100752 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:24.472324 systemd-networkd[1354]: lxc_health: Link UP
Sep 12 17:39:24.475817 systemd-networkd[1354]: lxc_health: Gained carrier
Sep 12 17:39:25.203834 systemd[1]: run-containerd-runc-k8s.io-052576ca5e3afeef16f4cec0717913ac35ec60184717fee4096b9747959ac4ba-runc.frAxku.mount: Deactivated successfully.
Sep 12 17:39:26.308404 systemd-networkd[1354]: lxc_health: Gained IPv6LL
Sep 12 17:39:26.430284 kubelet[2508]: E0912 17:39:26.428697 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:26.525022 kubelet[2508]: E0912 17:39:26.524610 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:27.454107 systemd[1]: run-containerd-runc-k8s.io-052576ca5e3afeef16f4cec0717913ac35ec60184717fee4096b9747959ac4ba-runc.cJ8QAu.mount: Deactivated successfully.
Sep 12 17:39:27.527069 kubelet[2508]: E0912 17:39:27.526852 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:39:29.835375 sshd[4319]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:29.838878 systemd-logind[1446]: Session 28 logged out. Waiting for processes to exit.
Sep 12 17:39:29.839124 systemd[1]: sshd@27-143.244.177.186:22-147.75.109.163:51494.service: Deactivated successfully.
Sep 12 17:39:29.841720 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 17:39:29.847355 systemd-logind[1446]: Removed session 28.