Aug 13 07:08:15.951270 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:08:15.951297 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:08:15.951310 kernel: BIOS-provided physical RAM map:
Aug 13 07:08:15.951317 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 07:08:15.951323 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 07:08:15.951330 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 07:08:15.951343 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Aug 13 07:08:15.951350 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Aug 13 07:08:15.952617 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 07:08:15.952636 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 07:08:15.952643 kernel: NX (Execute Disable) protection: active
Aug 13 07:08:15.952650 kernel: APIC: Static calls initialized
Aug 13 07:08:15.952663 kernel: SMBIOS 2.8 present.
Aug 13 07:08:15.952670 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Aug 13 07:08:15.952679 kernel: Hypervisor detected: KVM
Aug 13 07:08:15.952690 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:08:15.952701 kernel: kvm-clock: using sched offset of 3190711611 cycles
Aug 13 07:08:15.952710 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:08:15.952718 kernel: tsc: Detected 2494.138 MHz processor
Aug 13 07:08:15.952726 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:08:15.952734 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:08:15.952742 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Aug 13 07:08:15.952750 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 07:08:15.952758 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:08:15.952769 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:08:15.952776 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Aug 13 07:08:15.952784 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:08:15.952792 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:08:15.952800 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:08:15.952808 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 07:08:15.952816 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:08:15.952823 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:08:15.952831 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:08:15.952842 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:08:15.952854 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Aug 13 07:08:15.952862 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Aug 13 07:08:15.952869 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 07:08:15.952877 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Aug 13 07:08:15.952884 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Aug 13 07:08:15.952893 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Aug 13 07:08:15.952925 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Aug 13 07:08:15.952933 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:08:15.952951 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:08:15.952959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 07:08:15.952968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 07:08:15.952979 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Aug 13 07:08:15.952987 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Aug 13 07:08:15.952999 kernel: Zone ranges:
Aug 13 07:08:15.953007 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:08:15.953031 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Aug 13 07:08:15.953040 kernel: Normal empty
Aug 13 07:08:15.953048 kernel: Movable zone start for each node
Aug 13 07:08:15.953056 kernel: Early memory node ranges
Aug 13 07:08:15.953064 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 07:08:15.953072 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Aug 13 07:08:15.953080 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Aug 13 07:08:15.953092 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:08:15.953100 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 07:08:15.953111 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Aug 13 07:08:15.953119 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:08:15.953127 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:08:15.953136 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:08:15.953144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:08:15.953153 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:08:15.953165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:08:15.953176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:08:15.953184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:08:15.953193 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:08:15.953201 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:08:15.953232 kernel: TSC deadline timer available
Aug 13 07:08:15.953240 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:08:15.953249 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:08:15.953257 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Aug 13 07:08:15.953268 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:08:15.953277 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:08:15.953289 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:08:15.953297 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:08:15.953310 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:08:15.953318 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:08:15.953326 kernel: kvm-guest: PV spinlocks disabled, no host support
Aug 13 07:08:15.953336 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:08:15.953345 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:08:15.953363 kernel: random: crng init done
Aug 13 07:08:15.953375 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:08:15.953384 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:08:15.953392 kernel: Fallback order for Node 0: 0
Aug 13 07:08:15.953400 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Aug 13 07:08:15.953411 kernel: Policy zone: DMA32
Aug 13 07:08:15.953420 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:08:15.953429 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 125148K reserved, 0K cma-reserved)
Aug 13 07:08:15.953437 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:08:15.953448 kernel: Kernel/User page tables isolation: enabled
Aug 13 07:08:15.953457 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:08:15.953465 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:08:15.953473 kernel: Dynamic Preempt: voluntary
Aug 13 07:08:15.953481 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:08:15.953491 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:08:15.953499 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:08:15.953507 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:08:15.953516 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:08:15.953524 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:08:15.953536 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:08:15.953544 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:08:15.953552 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 07:08:15.953561 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:08:15.953571 kernel: Console: colour VGA+ 80x25
Aug 13 07:08:15.953580 kernel: printk: console [tty0] enabled
Aug 13 07:08:15.953595 kernel: printk: console [ttyS0] enabled
Aug 13 07:08:15.953604 kernel: ACPI: Core revision 20230628
Aug 13 07:08:15.953612 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 07:08:15.953624 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:08:15.953632 kernel: x2apic enabled
Aug 13 07:08:15.953641 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:08:15.953649 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:08:15.953657 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Aug 13 07:08:15.953666 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Aug 13 07:08:15.953674 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 13 07:08:15.953682 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 13 07:08:15.953702 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:08:15.953711 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:08:15.953720 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:08:15.953732 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 07:08:15.953740 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:08:15.953749 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:08:15.953758 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 07:08:15.953767 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:08:15.953776 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:08:15.953790 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:08:15.953799 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:08:15.953808 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:08:15.953817 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:08:15.953836 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 07:08:15.953846 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:08:15.953854 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:08:15.953863 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:08:15.953875 kernel: landlock: Up and running.
Aug 13 07:08:15.953893 kernel: SELinux: Initializing.
Aug 13 07:08:15.953902 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 07:08:15.953911 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 07:08:15.953920 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Aug 13 07:08:15.953928 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:08:15.953937 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:08:15.953946 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:08:15.953955 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Aug 13 07:08:15.953967 kernel: signal: max sigframe size: 1776
Aug 13 07:08:15.953976 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:08:15.953985 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:08:15.953994 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:08:15.954002 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:08:15.954011 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:08:15.954024 kernel: .... node #0, CPUs: #1
Aug 13 07:08:15.954050 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:08:15.954061 kernel: smpboot: Max logical packages: 1
Aug 13 07:08:15.954097 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Aug 13 07:08:15.954123 kernel: devtmpfs: initialized
Aug 13 07:08:15.954132 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:08:15.954141 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:08:15.954149 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:08:15.954158 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:08:15.954167 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:08:15.954176 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:08:15.954185 kernel: audit: type=2000 audit(1755068895.328:1): state=initialized audit_enabled=0 res=1
Aug 13 07:08:15.954197 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:08:15.954206 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:08:15.954215 kernel: cpuidle: using governor menu
Aug 13 07:08:15.954223 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:08:15.954232 kernel: dca service started, version 1.12.1
Aug 13 07:08:15.954241 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:08:15.954250 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:08:15.954259 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:08:15.954268 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:08:15.954279 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:08:15.954288 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:08:15.954297 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:08:15.954305 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:08:15.954314 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:08:15.954323 kernel: ACPI: Interpreter enabled
Aug 13 07:08:15.954331 kernel: ACPI: PM: (supports S0 S5)
Aug 13 07:08:15.954340 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:08:15.954463 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:08:15.954486 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:08:15.954495 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 13 07:08:15.954504 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:08:15.954707 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:08:15.954910 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 07:08:15.955036 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 07:08:15.955050 kernel: acpiphp: Slot [3] registered
Aug 13 07:08:15.955064 kernel: acpiphp: Slot [4] registered
Aug 13 07:08:15.955073 kernel: acpiphp: Slot [5] registered
Aug 13 07:08:15.955087 kernel: acpiphp: Slot [6] registered
Aug 13 07:08:15.955096 kernel: acpiphp: Slot [7] registered
Aug 13 07:08:15.955105 kernel: acpiphp: Slot [8] registered
Aug 13 07:08:15.955114 kernel: acpiphp: Slot [9] registered
Aug 13 07:08:15.955122 kernel: acpiphp: Slot [10] registered
Aug 13 07:08:15.955131 kernel: acpiphp: Slot [11] registered
Aug 13 07:08:15.955140 kernel: acpiphp: Slot [12] registered
Aug 13 07:08:15.955149 kernel: acpiphp: Slot [13] registered
Aug 13 07:08:15.955161 kernel: acpiphp: Slot [14] registered
Aug 13 07:08:15.955170 kernel: acpiphp: Slot [15] registered
Aug 13 07:08:15.955179 kernel: acpiphp: Slot [16] registered
Aug 13 07:08:15.955188 kernel: acpiphp: Slot [17] registered
Aug 13 07:08:15.955198 kernel: acpiphp: Slot [18] registered
Aug 13 07:08:15.955206 kernel: acpiphp: Slot [19] registered
Aug 13 07:08:15.955215 kernel: acpiphp: Slot [20] registered
Aug 13 07:08:15.955224 kernel: acpiphp: Slot [21] registered
Aug 13 07:08:15.955233 kernel: acpiphp: Slot [22] registered
Aug 13 07:08:15.955244 kernel: acpiphp: Slot [23] registered
Aug 13 07:08:15.955253 kernel: acpiphp: Slot [24] registered
Aug 13 07:08:15.955262 kernel: acpiphp: Slot [25] registered
Aug 13 07:08:15.955270 kernel: acpiphp: Slot [26] registered
Aug 13 07:08:15.955279 kernel: acpiphp: Slot [27] registered
Aug 13 07:08:15.955287 kernel: acpiphp: Slot [28] registered
Aug 13 07:08:15.955296 kernel: acpiphp: Slot [29] registered
Aug 13 07:08:15.955305 kernel: acpiphp: Slot [30] registered
Aug 13 07:08:15.955313 kernel: acpiphp: Slot [31] registered
Aug 13 07:08:15.955325 kernel: PCI host bridge to bus 0000:00
Aug 13 07:08:15.955455 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:08:15.955549 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:08:15.955643 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:08:15.955746 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 13 07:08:15.955860 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug 13 07:08:15.955947 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:08:15.956095 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 07:08:15.956204 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 13 07:08:15.956315 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Aug 13 07:08:15.957161 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Aug 13 07:08:15.957301 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Aug 13 07:08:15.957424 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Aug 13 07:08:15.957775 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Aug 13 07:08:15.958054 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Aug 13 07:08:15.958314 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Aug 13 07:08:15.958450 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Aug 13 07:08:15.958621 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 13 07:08:15.958750 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Aug 13 07:08:15.958847 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Aug 13 07:08:15.958970 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Aug 13 07:08:15.959072 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Aug 13 07:08:15.959194 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Aug 13 07:08:15.959331 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Aug 13 07:08:15.959449 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Aug 13 07:08:15.959557 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:08:15.959682 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:08:15.959788 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Aug 13 07:08:15.959893 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Aug 13 07:08:15.959991 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Aug 13 07:08:15.960101 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:08:15.960257 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Aug 13 07:08:15.960385 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Aug 13 07:08:15.960519 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Aug 13 07:08:15.960648 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Aug 13 07:08:15.960942 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Aug 13 07:08:15.961104 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Aug 13 07:08:15.961271 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Aug 13 07:08:15.962454 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:08:15.962615 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Aug 13 07:08:15.962725 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Aug 13 07:08:15.962822 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Aug 13 07:08:15.962934 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:08:15.963039 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Aug 13 07:08:15.963154 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Aug 13 07:08:15.963252 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Aug 13 07:08:15.964549 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Aug 13 07:08:15.964695 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Aug 13 07:08:15.964814 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Aug 13 07:08:15.964834 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:08:15.964845 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:08:15.964854 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:08:15.964863 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:08:15.964872 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 07:08:15.964881 kernel: iommu: Default domain type: Translated
Aug 13 07:08:15.964895 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:08:15.964904 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:08:15.964913 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:08:15.964922 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 07:08:15.964931 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Aug 13 07:08:15.965082 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 13 07:08:15.965195 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 13 07:08:15.965330 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:08:15.965349 kernel: vgaarb: loaded
Aug 13 07:08:15.965477 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 07:08:15.965488 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 07:08:15.965497 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:08:15.965513 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:08:15.965522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:08:15.965532 kernel: pnp: PnP ACPI init
Aug 13 07:08:15.965546 kernel: pnp: PnP ACPI: found 4 devices
Aug 13 07:08:15.965559 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:08:15.965576 kernel: NET: Registered PF_INET protocol family
Aug 13 07:08:15.965585 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:08:15.965594 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 07:08:15.965603 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:08:15.965612 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:08:15.965621 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 13 07:08:15.965630 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 07:08:15.965639 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:08:15.965648 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:08:15.965661 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:08:15.965670 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:08:15.965837 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:08:15.965943 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:08:15.966030 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:08:15.966117 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 13 07:08:15.966204 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug 13 07:08:15.966308 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 13 07:08:15.966426 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 07:08:15.966440 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 13 07:08:15.966538 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 42228 usecs
Aug 13 07:08:15.966551 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:08:15.966560 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:08:15.966569 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Aug 13 07:08:15.966578 kernel: Initialise system trusted keyrings
Aug 13 07:08:15.966593 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 13 07:08:15.966606 kernel: Key type asymmetric registered
Aug 13 07:08:15.966615 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:08:15.966624 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:08:15.966633 kernel: io scheduler mq-deadline registered
Aug 13 07:08:15.966642 kernel: io scheduler kyber registered
Aug 13 07:08:15.966651 kernel: io scheduler bfq registered
Aug 13 07:08:15.966678 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:08:15.966688 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Aug 13 07:08:15.966697 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 07:08:15.966705 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 07:08:15.966718 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:08:15.966727 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:08:15.966762 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:08:15.966771 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:08:15.966780 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:08:15.966924 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 07:08:15.966940 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:08:15.967046 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 07:08:15.967141 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T07:08:15 UTC (1755068895)
Aug 13 07:08:15.967287 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Aug 13 07:08:15.967299 kernel: intel_pstate: CPU model not supported
Aug 13 07:08:15.967309 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:08:15.967318 kernel: Segment Routing with IPv6
Aug 13 07:08:15.967327 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:08:15.967336 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:08:15.967345 kernel: Key type dns_resolver registered
Aug 13 07:08:15.967383 kernel: IPI shorthand broadcast: enabled
Aug 13 07:08:15.967393 kernel: sched_clock: Marking stable (970006975, 101416721)->(1187272439, -115848743)
Aug 13 07:08:15.967402 kernel: registered taskstats version 1
Aug 13 07:08:15.967411 kernel: Loading compiled-in X.509 certificates
Aug 13 07:08:15.967420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:08:15.967429 kernel: Key type .fscrypt registered
Aug 13 07:08:15.967437 kernel: Key type fscrypt-provisioning registered
Aug 13 07:08:15.967446 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:08:15.967455 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:08:15.967467 kernel: ima: No architecture policies found
Aug 13 07:08:15.967496 kernel: clk: Disabling unused clocks
Aug 13 07:08:15.967505 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:08:15.967514 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:08:15.967552 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:08:15.967584 kernel: Run /init as init process
Aug 13 07:08:15.967596 kernel: with arguments:
Aug 13 07:08:15.967609 kernel: /init
Aug 13 07:08:15.967618 kernel: with environment:
Aug 13 07:08:15.967630 kernel: HOME=/
Aug 13 07:08:15.967639 kernel: TERM=linux
Aug 13 07:08:15.967649 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:08:15.967661 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:08:15.967674 systemd[1]: Detected virtualization kvm.
Aug 13 07:08:15.967683 systemd[1]: Detected architecture x86-64.
Aug 13 07:08:15.967693 systemd[1]: Running in initrd.
Aug 13 07:08:15.967702 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:08:15.967714 systemd[1]: Hostname set to .
Aug 13 07:08:15.967724 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:08:15.967734 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:08:15.967744 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:08:15.967753 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:08:15.967764 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:08:15.967773 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:08:15.967783 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:08:15.967796 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:08:15.967807 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:08:15.967817 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:08:15.967827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:08:15.967837 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:08:15.967847 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:08:15.967859 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:08:15.967870 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:08:15.967880 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:08:15.967892 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:08:15.967902 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:08:15.967940 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:08:15.967953 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:08:15.967963 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:08:15.967973 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:08:15.967983 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:08:15.967993 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:08:15.968003 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:08:15.968013 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:08:15.968022 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:08:15.968035 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:08:15.968045 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:08:15.968055 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:08:15.968064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:08:15.968074 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:08:15.968114 systemd-journald[184]: Collecting audit messages is disabled.
Aug 13 07:08:15.968141 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:08:15.968151 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:08:15.968169 systemd-journald[184]: Journal started
Aug 13 07:08:15.968193 systemd-journald[184]: Runtime Journal (/run/log/journal/8df142453bcd4c1cb4d1766a20016443) is 4.9M, max 39.3M, 34.4M free.
Aug 13 07:08:15.965604 systemd-modules-load[185]: Inserted module 'overlay'
Aug 13 07:08:15.975379 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:08:15.981379 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:08:16.010747 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:08:16.040172 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:08:16.040214 kernel: Bridge firewalling registered
Aug 13 07:08:16.014579 systemd-modules-load[185]: Inserted module 'br_netfilter'
Aug 13 07:08:16.041953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:08:16.049016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:08:16.050115 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:08:16.059872 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:08:16.065876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:08:16.069740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:08:16.072515 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:08:16.101900 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:08:16.105184 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:08:16.113945 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:08:16.115451 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:08:16.121872 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:08:16.149552 dracut-cmdline[220]: dracut-dracut-053
Aug 13 07:08:16.159674 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:08:16.168318 systemd-resolved[218]: Positive Trust Anchors:
Aug 13 07:08:16.168339 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:08:16.168424 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:08:16.172710 systemd-resolved[218]: Defaulting to hostname 'linux'.
Aug 13 07:08:16.175240 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:08:16.176234 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:08:16.278400 kernel: SCSI subsystem initialized
Aug 13 07:08:16.288489 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:08:16.300493 kernel: iscsi: registered transport (tcp)
Aug 13 07:08:16.324480 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:08:16.324551 kernel: QLogic iSCSI HBA Driver
Aug 13 07:08:16.377774 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:08:16.381618 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:08:16.413418 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:08:16.413514 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:08:16.413529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:08:16.461440 kernel: raid6: avx2x4 gen() 15706 MB/s
Aug 13 07:08:16.477423 kernel: raid6: avx2x2 gen() 15481 MB/s
Aug 13 07:08:16.494793 kernel: raid6: avx2x1 gen() 11166 MB/s
Aug 13 07:08:16.494873 kernel: raid6: using algorithm avx2x4 gen() 15706 MB/s
Aug 13 07:08:16.512864 kernel: raid6: .... xor() 8635 MB/s, rmw enabled
Aug 13 07:08:16.513016 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:08:16.538420 kernel: xor: automatically using best checksumming function avx
Aug 13 07:08:16.717396 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:08:16.734635 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:08:16.747739 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:08:16.764887 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Aug 13 07:08:16.770874 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:08:16.780727 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:08:16.805352 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Aug 13 07:08:16.847813 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:08:16.854761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:08:16.929491 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:08:16.941860 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:08:16.970280 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:08:16.973801 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:08:16.974972 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:08:16.975577 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:08:16.983889 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:08:17.021798 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:08:17.038402 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Aug 13 07:08:17.045155 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Aug 13 07:08:17.053426 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:08:17.062618 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:08:17.062712 kernel: GPT:9289727 != 125829119
Aug 13 07:08:17.062733 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:08:17.062751 kernel: GPT:9289727 != 125829119
Aug 13 07:08:17.062768 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:08:17.062785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:08:17.086399 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Aug 13 07:08:17.096730 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Aug 13 07:08:17.121392 kernel: scsi host0: Virtio SCSI HBA
Aug 13 07:08:17.134422 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:08:17.134542 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:08:17.156810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:08:17.156973 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:08:17.158825 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:08:17.159828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:08:17.160790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:08:17.161236 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:08:17.173820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:08:17.179399 kernel: libata version 3.00 loaded.
Aug 13 07:08:17.186744 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 13 07:08:17.194396 kernel: scsi host1: ata_piix
Aug 13 07:08:17.194803 kernel: scsi host2: ata_piix
Aug 13 07:08:17.195028 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Aug 13 07:08:17.195053 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Aug 13 07:08:17.204468 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (462)
Aug 13 07:08:17.224602 kernel: ACPI: bus type USB registered
Aug 13 07:08:17.224717 kernel: usbcore: registered new interface driver usbfs
Aug 13 07:08:17.225624 kernel: usbcore: registered new interface driver hub
Aug 13 07:08:17.225679 kernel: usbcore: registered new device driver usb
Aug 13 07:08:17.247414 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (452)
Aug 13 07:08:17.268386 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:08:17.295064 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:08:17.295815 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:08:17.301652 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:08:17.308225 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:08:17.313607 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:08:17.326799 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:08:17.330688 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:08:17.342081 disk-uuid[535]: Primary Header is updated.
Aug 13 07:08:17.342081 disk-uuid[535]: Secondary Entries is updated.
Aug 13 07:08:17.342081 disk-uuid[535]: Secondary Header is updated.
Aug 13 07:08:17.369394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:08:17.370110 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:08:17.378432 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:08:17.419502 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Aug 13 07:08:17.419888 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Aug 13 07:08:17.420098 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Aug 13 07:08:17.421399 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Aug 13 07:08:17.421695 kernel: hub 1-0:1.0: USB hub found
Aug 13 07:08:17.422990 kernel: hub 1-0:1.0: 2 ports detected
Aug 13 07:08:18.393597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:08:18.395289 disk-uuid[536]: The operation has completed successfully.
Aug 13 07:08:18.461493 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:08:18.461734 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:08:18.494817 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:08:18.499831 sh[568]: Success
Aug 13 07:08:18.517458 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 07:08:18.599101 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:08:18.601338 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:08:18.602164 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:08:18.635058 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:08:18.635161 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:08:18.635177 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:08:18.637494 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:08:18.637597 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:08:18.647711 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:08:18.649693 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:08:18.659664 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:08:18.662585 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:08:18.676212 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:08:18.676311 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:08:18.676409 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:08:18.680421 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:08:18.697098 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:08:18.699198 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:08:18.705753 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:08:18.713452 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:08:18.858959 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:08:18.866681 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:08:18.884044 ignition[649]: Ignition 2.19.0
Aug 13 07:08:18.884862 ignition[649]: Stage: fetch-offline
Aug 13 07:08:18.884937 ignition[649]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:18.884948 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:18.885057 ignition[649]: parsed url from cmdline: ""
Aug 13 07:08:18.885061 ignition[649]: no config URL provided
Aug 13 07:08:18.885066 ignition[649]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:08:18.885074 ignition[649]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:08:18.885080 ignition[649]: failed to fetch config: resource requires networking
Aug 13 07:08:18.890677 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:08:18.885965 ignition[649]: Ignition finished successfully
Aug 13 07:08:18.895006 systemd-networkd[757]: lo: Link UP
Aug 13 07:08:18.895017 systemd-networkd[757]: lo: Gained carrier
Aug 13 07:08:18.898075 systemd-networkd[757]: Enumeration completed
Aug 13 07:08:18.898672 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 13 07:08:18.898678 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Aug 13 07:08:18.899787 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:08:18.899956 systemd-networkd[757]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:08:18.899963 systemd-networkd[757]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:08:18.900718 systemd-networkd[757]: eth0: Link UP
Aug 13 07:08:18.900721 systemd[1]: Reached target network.target - Network.
Aug 13 07:08:18.900723 systemd-networkd[757]: eth0: Gained carrier
Aug 13 07:08:18.900732 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 13 07:08:18.903868 systemd-networkd[757]: eth1: Link UP
Aug 13 07:08:18.903873 systemd-networkd[757]: eth1: Gained carrier
Aug 13 07:08:18.903886 systemd-networkd[757]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:08:18.911756 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 07:08:18.917546 systemd-networkd[757]: eth0: DHCPv4 address 64.227.105.74/20, gateway 64.227.96.1 acquired from 169.254.169.253
Aug 13 07:08:18.922569 systemd-networkd[757]: eth1: DHCPv4 address 10.124.0.28/20 acquired from 169.254.169.253
Aug 13 07:08:18.952804 ignition[760]: Ignition 2.19.0
Aug 13 07:08:18.953642 ignition[760]: Stage: fetch
Aug 13 07:08:18.954011 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:18.954029 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:18.954223 ignition[760]: parsed url from cmdline: ""
Aug 13 07:08:18.954229 ignition[760]: no config URL provided
Aug 13 07:08:18.954238 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:08:18.954253 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:08:18.954346 ignition[760]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Aug 13 07:08:18.976719 ignition[760]: GET result: OK
Aug 13 07:08:18.977536 ignition[760]: parsing config with SHA512: f47b06c74aa7c22371999095950b39630cda5a45b60bfc39d105b5e6071562c24b3176098b96dccab438be17ec871a55b8be89686d7f6a08b08af3e46e722a64
Aug 13 07:08:18.983231 unknown[760]: fetched base config from "system"
Aug 13 07:08:18.983250 unknown[760]: fetched base config from "system"
Aug 13 07:08:18.983918 ignition[760]: fetch: fetch complete
Aug 13 07:08:18.983262 unknown[760]: fetched user config from "digitalocean"
Aug 13 07:08:18.983928 ignition[760]: fetch: fetch passed
Aug 13 07:08:18.983999 ignition[760]: Ignition finished successfully
Aug 13 07:08:18.986767 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 07:08:18.993684 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:08:19.030052 ignition[767]: Ignition 2.19.0
Aug 13 07:08:19.030070 ignition[767]: Stage: kargs
Aug 13 07:08:19.030482 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:19.030502 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:19.031802 ignition[767]: kargs: kargs passed
Aug 13 07:08:19.031879 ignition[767]: Ignition finished successfully
Aug 13 07:08:19.033773 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:08:19.041756 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:08:19.082226 ignition[774]: Ignition 2.19.0
Aug 13 07:08:19.082244 ignition[774]: Stage: disks
Aug 13 07:08:19.082676 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:19.082698 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:19.084428 ignition[774]: disks: disks passed
Aug 13 07:08:19.084531 ignition[774]: Ignition finished successfully
Aug 13 07:08:19.086289 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:08:19.092747 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:08:19.093495 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:08:19.094574 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:08:19.095442 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:08:19.096190 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:08:19.104747 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:08:19.141321 systemd-fsck[783]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:08:19.145547 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:08:19.150886 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:08:19.274737 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:08:19.275423 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:08:19.276545 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:08:19.285528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:08:19.288106 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:08:19.290721 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Aug 13 07:08:19.298393 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (791)
Aug 13 07:08:19.302397 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:08:19.306382 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:08:19.306450 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:08:19.302572 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 07:08:19.302997 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:08:19.303031 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:08:19.308028 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:08:19.312150 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:08:19.318505 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:08:19.321719 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:08:19.389332 coreos-metadata[793]: Aug 13 07:08:19.389 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:08:19.394321 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:08:19.403153 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:08:19.404444 coreos-metadata[793]: Aug 13 07:08:19.402 INFO Fetch successful
Aug 13 07:08:19.406197 coreos-metadata[794]: Aug 13 07:08:19.403 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:08:19.410271 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Aug 13 07:08:19.410391 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Aug 13 07:08:19.412713 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:08:19.416700 coreos-metadata[794]: Aug 13 07:08:19.416 INFO Fetch successful
Aug 13 07:08:19.420708 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:08:19.423248 coreos-metadata[794]: Aug 13 07:08:19.423 INFO wrote hostname ci-4081.3.5-9-a0c30e4e4a to /sysroot/etc/hostname
Aug 13 07:08:19.425195 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:08:19.523601 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:08:19.534605 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:08:19.536615 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:08:19.550437 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:08:19.570093 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:08:19.594410 ignition[911]: INFO : Ignition 2.19.0
Aug 13 07:08:19.594410 ignition[911]: INFO : Stage: mount
Aug 13 07:08:19.594410 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:19.594410 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:19.596299 ignition[911]: INFO : mount: mount passed
Aug 13 07:08:19.596299 ignition[911]: INFO : Ignition finished successfully
Aug 13 07:08:19.596902 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:08:19.603542 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:08:19.633870 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:08:19.637595 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:08:19.649396 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (923)
Aug 13 07:08:19.652603 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:08:19.652667 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:08:19.652681 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:08:19.656456 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:08:19.658969 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:08:19.689544 ignition[940]: INFO : Ignition 2.19.0
Aug 13 07:08:19.690378 ignition[940]: INFO : Stage: files
Aug 13 07:08:19.691472 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:19.691472 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:19.692594 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:08:19.693312 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:08:19.693312 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:08:19.695623 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:08:19.696401 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:08:19.697523 unknown[940]: wrote ssh authorized keys file for user: core
Aug 13 07:08:19.698174 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:08:19.698932 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 07:08:19.698932 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 07:08:19.740183 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:08:19.891616 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 07:08:19.891616 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 07:08:19.893241 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 07:08:20.195510 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 07:08:20.317711 systemd-networkd[757]: eth0: Gained IPv6LL
Aug 13 07:08:20.491608 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 07:08:20.491608 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:08:20.493103 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:08:20.493103 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:08:20.493103 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:08:20.493103 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:08:20.493103 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:08:20.493103 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:08:20.493103 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:08:20.497824 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:08:20.497824 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:08:20.497824 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:08:20.497824 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:08:20.497824 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:08:20.497824 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 07:08:20.573743 systemd-networkd[757]: eth1: Gained IPv6LL
Aug 13 07:08:20.719662 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 07:08:21.163408 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:08:21.163408 ignition[940]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 07:08:21.166344 ignition[940]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:08:21.167405 ignition[940]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:08:21.167405 ignition[940]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 07:08:21.167405 ignition[940]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:08:21.167405 ignition[940]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:08:21.167405 ignition[940]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:08:21.167405 ignition[940]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:08:21.167405 ignition[940]: INFO : files: files passed
Aug 13 07:08:21.167405 ignition[940]: INFO : Ignition finished successfully
Aug 13 07:08:21.169840 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:08:21.182746 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:08:21.187466 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:08:21.189458 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:08:21.190049 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:08:21.209272 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:08:21.209272 initrd-setup-root-after-ignition[968]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:08:21.211818 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:08:21.214588 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:08:21.215629 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:08:21.223624 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:08:21.269429 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:08:21.269585 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:08:21.270828 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:08:21.271555 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:08:21.272638 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:08:21.280635 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:08:21.297280 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:08:21.306680 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:08:21.321389 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:08:21.322922 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:08:21.323647 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:08:21.324587 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:08:21.324808 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:08:21.326177 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:08:21.326947 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:08:21.327673 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:08:21.328601 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:08:21.329432 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:08:21.330269 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:08:21.331135 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:08:21.331983 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:08:21.332913 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:08:21.333692 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:08:21.334345 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:08:21.334587 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:08:21.336016 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:08:21.336680 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:08:21.337530 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:08:21.337754 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:08:21.338479 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:08:21.338678 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:08:21.340224 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:08:21.340503 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:08:21.341590 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:08:21.341773 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:08:21.342716 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 07:08:21.342950 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:08:21.355747 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:08:21.356979 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:08:21.357229 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:08:21.362703 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:08:21.363214 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:08:21.363454 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:08:21.364118 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:08:21.364277 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:08:21.375828 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:08:21.380855 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:08:21.386479 ignition[992]: INFO : Ignition 2.19.0
Aug 13 07:08:21.386479 ignition[992]: INFO : Stage: umount
Aug 13 07:08:21.388979 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:21.388979 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:21.388979 ignition[992]: INFO : umount: umount passed
Aug 13 07:08:21.388979 ignition[992]: INFO : Ignition finished successfully
Aug 13 07:08:21.390047 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:08:21.390257 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:08:21.397042 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:08:21.397214 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:08:21.397827 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:08:21.397916 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:08:21.398429 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:08:21.398509 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:08:21.398999 systemd[1]: Stopped target network.target - Network.
Aug 13 07:08:21.402165 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:08:21.402277 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:08:21.403035 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:08:21.403447 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:08:21.407487 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:08:21.408261 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:08:21.408792 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:08:21.422037 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:08:21.432395 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:08:21.432988 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:08:21.433032 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:08:21.433386 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:08:21.433449 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:08:21.433838 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:08:21.433879 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:08:21.434827 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:08:21.435907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:08:21.438824 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:08:21.440450 systemd-networkd[757]: eth1: DHCPv6 lease lost
Aug 13 07:08:21.457587 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:08:21.457798 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:08:21.461384 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:08:21.461577 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:08:21.461608 systemd-networkd[757]: eth0: DHCPv6 lease lost
Aug 13 07:08:21.463109 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:08:21.463254 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:08:21.464375 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:08:21.464449 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:08:21.465764 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:08:21.465950 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:08:21.467939 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:08:21.468049 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:08:21.474673 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:08:21.475209 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:08:21.475316 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:08:21.477804 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:08:21.477902 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:08:21.478767 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:08:21.478849 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:08:21.481537 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:08:21.497202 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:08:21.497509 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:08:21.499833 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:08:21.499934 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:08:21.500582 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:08:21.500645 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:08:21.501607 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:08:21.501684 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:08:21.503698 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:08:21.503784 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:08:21.505227 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:08:21.505318 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:08:21.510310 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:08:21.510746 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:08:21.510840 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:08:21.511480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:08:21.511567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:08:21.514349 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:08:21.515547 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:08:21.530299 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:08:21.530525 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:08:21.532549 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:08:21.544696 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:08:21.554776 systemd[1]: Switching root.
Aug 13 07:08:21.584882 systemd-journald[184]: Journal stopped
Aug 13 07:08:22.886162 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:08:22.886249 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:08:22.886265 kernel: SELinux: policy capability open_perms=1
Aug 13 07:08:22.886277 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:08:22.886288 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:08:22.886303 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:08:22.886315 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:08:22.886327 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:08:22.886339 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:08:22.886355 kernel: audit: type=1403 audit(1755068901.728:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:08:22.886640 systemd[1]: Successfully loaded SELinux policy in 38.902ms.
Aug 13 07:08:22.886679 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.827ms.
Aug 13 07:08:22.886694 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:08:22.886707 systemd[1]: Detected virtualization kvm.
Aug 13 07:08:22.886724 systemd[1]: Detected architecture x86-64.
Aug 13 07:08:22.886736 systemd[1]: Detected first boot.
Aug 13 07:08:22.886749 systemd[1]: Hostname set to .
Aug 13 07:08:22.886761 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:08:22.886774 zram_generator::config[1034]: No configuration found.
Aug 13 07:08:22.886792 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:08:22.886809 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:08:22.886822 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:08:22.886839 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:08:22.886853 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:08:22.886865 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:08:22.886878 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:08:22.886891 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:08:22.886904 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:08:22.886918 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:08:22.886931 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:08:22.886946 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:08:22.886959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:08:22.886972 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:08:22.886984 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:08:22.887001 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:08:22.887014 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:08:22.887026 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:08:22.887039 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:08:22.887051 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:08:22.887066 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:08:22.887079 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:08:22.887091 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:08:22.887104 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:08:22.887118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:08:22.887131 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:08:22.887147 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:08:22.887160 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:08:22.887173 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:08:22.887186 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:08:22.887198 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:08:22.887211 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:08:22.887223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:08:22.887236 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:08:22.887249 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:08:22.887262 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:08:22.887278 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:08:22.887290 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:08:22.887303 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:08:22.887316 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:08:22.887328 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:08:22.887343 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:08:22.887367 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:08:22.887380 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:08:22.887396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:08:22.887409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:08:22.887422 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:08:22.887434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:08:22.887446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:08:22.887458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:08:22.887471 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:08:22.887483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:08:22.887500 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:08:22.887548 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:08:22.887562 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:08:22.887576 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:08:22.887590 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:08:22.887602 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:08:22.887615 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:08:22.887627 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:08:22.887640 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:08:22.887656 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:08:22.887669 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:08:22.887683 systemd[1]: Stopped verity-setup.service.
Aug 13 07:08:22.887695 kernel: loop: module loaded
Aug 13 07:08:22.887708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:08:22.887720 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:08:22.887733 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:08:22.887745 kernel: fuse: init (API version 7.39)
Aug 13 07:08:22.887757 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:08:22.888427 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:08:22.888463 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:08:22.888478 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:08:22.888491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:08:22.888504 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:08:22.888522 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:08:22.888536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:08:22.888552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:08:22.888565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:08:22.888579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:08:22.888594 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:08:22.888607 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:08:22.888620 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:08:22.888633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:08:22.888646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:08:22.888659 kernel: ACPI: bus type drm_connector registered
Aug 13 07:08:22.888672 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:08:22.888684 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:08:22.888697 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:08:22.888713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:08:22.888725 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:08:22.888738 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:08:22.888794 systemd-journald[1103]: Collecting audit messages is disabled.
Aug 13 07:08:22.888821 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:08:22.888833 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:08:22.888845 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:08:22.888862 systemd-journald[1103]: Journal started
Aug 13 07:08:22.888890 systemd-journald[1103]: Runtime Journal (/run/log/journal/8df142453bcd4c1cb4d1766a20016443) is 4.9M, max 39.3M, 34.4M free.
Aug 13 07:08:22.896470 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:08:22.896557 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:08:22.896587 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:08:22.896617 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:08:22.491013 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:08:22.516215 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 07:08:22.516924 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:08:22.902588 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:08:22.912390 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:08:22.915380 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:08:22.919595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:08:22.925402 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:08:22.929544 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:08:22.933423 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:08:22.944701 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:08:22.946926 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:08:22.949016 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:08:22.965715 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:08:22.983026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:08:23.002793 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:08:23.011784 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:08:23.033130 kernel: loop0: detected capacity change from 0 to 224512
Aug 13 07:08:23.034536 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:08:23.039074 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:08:23.048722 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:08:23.069468 systemd-journald[1103]: Time spent on flushing to /var/log/journal/8df142453bcd4c1cb4d1766a20016443 is 79.005ms for 994 entries.
Aug 13 07:08:23.069468 systemd-journald[1103]: System Journal (/var/log/journal/8df142453bcd4c1cb4d1766a20016443) is 8.0M, max 195.6M, 187.6M free.
Aug 13 07:08:23.169127 systemd-journald[1103]: Received client request to flush runtime journal.
Aug 13 07:08:23.169638 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:08:23.169701 kernel: loop1: detected capacity change from 0 to 8
Aug 13 07:08:23.169732 kernel: loop2: detected capacity change from 0 to 142488
Aug 13 07:08:23.134136 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:08:23.137151 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:08:23.174542 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:08:23.234033 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:08:23.244695 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:08:23.259800 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:08:23.267609 kernel: loop3: detected capacity change from 0 to 140768
Aug 13 07:08:23.272758 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:08:23.327420 kernel: loop4: detected capacity change from 0 to 224512
Aug 13 07:08:23.345951 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 07:08:23.353405 kernel: loop5: detected capacity change from 0 to 8
Aug 13 07:08:23.355490 kernel: loop6: detected capacity change from 0 to 142488
Aug 13 07:08:23.356878 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Aug 13 07:08:23.356911 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Aug 13 07:08:23.379575 kernel: loop7: detected capacity change from 0 to 140768
Aug 13 07:08:23.382348 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:08:23.396786 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Aug 13 07:08:23.398893 (sd-merge)[1178]: Merged extensions into '/usr'.
Aug 13 07:08:23.410829 systemd[1]: Reloading requested from client PID 1137 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:08:23.410855 systemd[1]: Reloading...
Aug 13 07:08:23.559683 zram_generator::config[1208]: No configuration found.
Aug 13 07:08:23.762514 ldconfig[1133]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:08:23.821967 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:08:23.914079 systemd[1]: Reloading finished in 500 ms.
Aug 13 07:08:23.944838 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:08:23.946142 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:08:23.960726 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:08:23.970667 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:08:23.995543 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:08:23.995573 systemd[1]: Reloading...
Aug 13 07:08:24.044735 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:08:24.046312 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:08:24.050071 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:08:24.053556 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Aug 13 07:08:24.053909 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Aug 13 07:08:24.059309 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:08:24.060631 systemd-tmpfiles[1249]: Skipping /boot
Aug 13 07:08:24.085503 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:08:24.085694 systemd-tmpfiles[1249]: Skipping /boot
Aug 13 07:08:24.151390 zram_generator::config[1282]: No configuration found.
Aug 13 07:08:24.337840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:08:24.429514 systemd[1]: Reloading finished in 433 ms.
Aug 13 07:08:24.447623 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:08:24.467696 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:08:24.474685 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:08:24.478666 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:08:24.489735 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:08:24.500646 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:08:24.515038 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:08:24.515750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:08:24.524892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:08:24.528810 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:08:24.539889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:08:24.540649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:08:24.540858 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:08:24.554905 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:08:24.557572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:08:24.557886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:08:24.558179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 13 07:08:24.558338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:08:24.573827 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:08:24.574226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:08:24.577824 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:08:24.578715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:08:24.578969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:08:24.588350 systemd[1]: Finished ensure-sysext.service. Aug 13 07:08:24.599786 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:08:24.612473 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:08:24.624642 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:08:24.627489 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:08:24.644666 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:08:24.664642 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:08:24.665348 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:08:24.676440 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:08:24.677342 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:08:24.679118 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Aug 13 07:08:24.679271 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:08:24.688429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:08:24.688674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:08:24.689559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:08:24.700763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:08:24.700967 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:08:24.702082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:08:24.730250 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:08:24.735840 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:08:24.737067 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:08:24.787387 augenrules[1364]: No rules Aug 13 07:08:24.789449 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:08:24.791999 systemd-udevd[1340]: Using default interface naming scheme 'v255'. Aug 13 07:08:24.828567 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:08:24.829474 systemd-resolved[1324]: Positive Trust Anchors: Aug 13 07:08:24.829487 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:08:24.829537 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:08:24.837180 systemd-resolved[1324]: Using system hostname 'ci-4081.3.5-9-a0c30e4e4a'. Aug 13 07:08:24.840593 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:08:24.841181 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:08:24.842956 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:08:24.857138 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 07:08:24.861923 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:08:24.959613 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 13 07:08:24.961485 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:08:24.961782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:08:24.971578 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:08:24.983461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:08:24.998709 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Aug 13 07:08:24.999312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:08:24.999431 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:08:24.999447 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:08:25.018549 systemd-networkd[1371]: lo: Link UP Aug 13 07:08:25.023061 kernel: ISO 9660 Extensions: RRIP_1991A Aug 13 07:08:25.018562 systemd-networkd[1371]: lo: Gained carrier Aug 13 07:08:25.022917 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 13 07:08:25.027672 systemd-networkd[1371]: Enumeration completed Aug 13 07:08:25.027833 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:08:25.030036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:08:25.030435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:08:25.031165 systemd-networkd[1371]: eth0: Configuring with /run/systemd/network/10-fa:0b:5a:9c:67:30.network. Aug 13 07:08:25.035968 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 07:08:25.037323 systemd-networkd[1371]: eth0: Link UP Aug 13 07:08:25.037333 systemd-networkd[1371]: eth0: Gained carrier Aug 13 07:08:25.039960 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:08:25.040727 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:08:25.043189 systemd[1]: Reached target network.target - Network. Aug 13 07:08:25.050543 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. 
Aug 13 07:08:25.055801 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:08:25.057081 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:08:25.059058 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:08:25.059335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:08:25.063506 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:08:25.148447 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 07:08:25.162414 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1379) Aug 13 07:08:25.173501 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:08:25.204158 systemd-networkd[1371]: eth1: Configuring with /run/systemd/network/10-8e:a9:e8:4b:55:50.network. Aug 13 07:08:25.205888 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Aug 13 07:08:25.206570 systemd-networkd[1371]: eth1: Link UP Aug 13 07:08:25.206578 systemd-networkd[1371]: eth1: Gained carrier Aug 13 07:08:25.210186 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Aug 13 07:08:25.211819 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Aug 13 07:08:25.236396 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 13 07:08:25.263487 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 07:08:25.335370 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:08:25.350305 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Aug 13 07:08:25.360838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:25.406128 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:08:25.419427 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:08:25.535492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:25.543990 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 13 07:08:25.547446 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 13 07:08:25.552387 kernel: Console: switching to colour dummy device 80x25 Aug 13 07:08:25.553463 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 13 07:08:25.553554 kernel: [drm] features: -context_init Aug 13 07:08:25.556456 kernel: [drm] number of scanouts: 1 Aug 13 07:08:25.556573 kernel: [drm] number of cap sets: 0 Aug 13 07:08:25.562445 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Aug 13 07:08:25.565425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:08:25.565586 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:25.565792 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:25.574517 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Aug 13 07:08:25.574627 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 07:08:25.577520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:25.583391 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 13 07:08:25.616415 kernel: EDAC MC: Ver: 3.0.0 Aug 13 07:08:25.620774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:08:25.621086 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 07:08:25.632727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:08:25.649185 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:08:25.660839 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:08:25.685793 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:08:25.708019 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:08:25.728695 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:08:25.729339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:08:25.730699 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:08:25.731058 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:08:25.731250 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:08:25.731710 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:08:25.732009 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:08:25.732130 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:08:25.732272 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:08:25.732316 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:08:25.733271 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:08:25.736094 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:08:25.739155 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Aug 13 07:08:25.751392 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:08:25.758829 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:08:25.761997 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:08:25.764151 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:08:25.765988 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:08:25.767626 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:08:25.767655 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:08:25.771263 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:08:25.779663 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:08:25.791889 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 07:08:25.799745 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:08:25.807289 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:08:25.816026 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:08:25.819114 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:08:25.827690 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:08:25.839677 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:08:25.846658 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:08:25.862706 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:08:25.879759 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 13 07:08:25.882601 jq[1442]: false Aug 13 07:08:25.883275 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:08:25.884185 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:08:25.888990 dbus-daemon[1441]: [system] SELinux support is enabled Aug 13 07:08:25.893761 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:08:25.903894 extend-filesystems[1443]: Found loop4 Aug 13 07:08:25.913783 extend-filesystems[1443]: Found loop5 Aug 13 07:08:25.913783 extend-filesystems[1443]: Found loop6 Aug 13 07:08:25.913783 extend-filesystems[1443]: Found loop7 Aug 13 07:08:25.913783 extend-filesystems[1443]: Found vda Aug 13 07:08:25.913783 extend-filesystems[1443]: Found vda1 Aug 13 07:08:25.913783 extend-filesystems[1443]: Found vda2 Aug 13 07:08:25.913783 extend-filesystems[1443]: Found vda3 Aug 13 07:08:25.913783 extend-filesystems[1443]: Found usr Aug 13 07:08:25.913783 extend-filesystems[1443]: Found vda4 Aug 13 07:08:25.913783 extend-filesystems[1443]: Found vda6 Aug 13 07:08:25.913783 extend-filesystems[1443]: Found vda7 Aug 13 07:08:25.913783 extend-filesystems[1443]: Found vda9 Aug 13 07:08:25.913783 extend-filesystems[1443]: Checking size of /dev/vda9 Aug 13 07:08:25.907773 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:08:26.015603 coreos-metadata[1440]: Aug 13 07:08:25.956 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:08:26.015603 coreos-metadata[1440]: Aug 13 07:08:25.994 INFO Fetch successful Aug 13 07:08:25.915742 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:08:25.927476 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Aug 13 07:08:25.954941 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:08:25.955750 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:08:25.969062 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:08:25.969134 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:08:25.973852 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:08:25.974567 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 13 07:08:25.974623 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:08:25.977023 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:08:26.032564 extend-filesystems[1443]: Resized partition /dev/vda9 Aug 13 07:08:26.062207 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 13 07:08:25.977980 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:08:26.066126 jq[1459]: true Aug 13 07:08:26.083570 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:08:26.027746 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:08:26.034051 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 13 07:08:26.129032 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:08:26.139475 tar[1470]: linux-amd64/LICENSE Aug 13 07:08:26.141533 tar[1470]: linux-amd64/helm Aug 13 07:08:26.148385 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381) Aug 13 07:08:26.176458 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 07:08:26.188050 update_engine[1456]: I20250813 07:08:26.187283 1456 main.cc:92] Flatcar Update Engine starting Aug 13 07:08:26.202586 update_engine[1456]: I20250813 07:08:26.190739 1456 update_check_scheduler.cc:74] Next update check in 8m33s Aug 13 07:08:26.202780 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:08:26.202780 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 07:08:26.202780 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 07:08:26.192094 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:08:26.228690 jq[1476]: true Aug 13 07:08:26.229059 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Aug 13 07:08:26.229059 extend-filesystems[1443]: Found vdb Aug 13 07:08:26.204737 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:08:26.215796 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:08:26.218764 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:08:26.287925 systemd-logind[1454]: New seat seat0. Aug 13 07:08:26.310251 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:08:26.317632 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:08:26.323943 systemd[1]: Started systemd-logind.service - User Login Management. 
Aug 13 07:08:26.335097 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 07:08:26.340764 systemd-networkd[1371]: eth0: Gained IPv6LL Aug 13 07:08:26.341123 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:08:26.341616 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Aug 13 07:08:26.347967 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:08:26.351443 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:08:26.362975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:08:26.378141 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:08:26.473299 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:08:26.480981 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:08:26.508905 systemd[1]: Starting sshkeys.service... Aug 13 07:08:26.523465 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:08:26.584867 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:08:26.616868 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 07:08:26.626614 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 07:08:26.759285 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:08:26.770109 coreos-metadata[1527]: Aug 13 07:08:26.769 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:08:26.778007 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Aug 13 07:08:26.787079 coreos-metadata[1527]: Aug 13 07:08:26.784 INFO Fetch successful Aug 13 07:08:26.801573 unknown[1527]: wrote ssh authorized keys file for user: core Aug 13 07:08:26.818028 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:08:26.855649 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:08:26.856531 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:08:26.882674 update-ssh-keys[1540]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:08:26.885140 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:08:26.890849 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 07:08:26.898456 systemd[1]: Finished sshkeys.service. Aug 13 07:08:26.966220 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:08:26.981350 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:08:26.996007 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:08:26.997792 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:08:27.011915 containerd[1475]: time="2025-08-13T07:08:27.010732339Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:08:27.080598 containerd[1475]: time="2025-08-13T07:08:27.080495385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:08:27.087857 containerd[1475]: time="2025-08-13T07:08:27.086948435Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:08:27.087857 containerd[1475]: time="2025-08-13T07:08:27.087626785Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:08:27.087857 containerd[1475]: time="2025-08-13T07:08:27.087653027Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:08:27.088157 containerd[1475]: time="2025-08-13T07:08:27.087912652Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:08:27.088157 containerd[1475]: time="2025-08-13T07:08:27.087947532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:08:27.088157 containerd[1475]: time="2025-08-13T07:08:27.088025960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:08:27.088157 containerd[1475]: time="2025-08-13T07:08:27.088038941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:08:27.088832 containerd[1475]: time="2025-08-13T07:08:27.088340745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:08:27.088832 containerd[1475]: time="2025-08-13T07:08:27.088383623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1
Aug 13 07:08:27.088832 containerd[1475]: time="2025-08-13T07:08:27.088399472Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:08:27.088832 containerd[1475]: time="2025-08-13T07:08:27.088683265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 07:08:27.088832 containerd[1475]: time="2025-08-13T07:08:27.088808117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:08:27.089372 containerd[1475]: time="2025-08-13T07:08:27.089090707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:08:27.089372 containerd[1475]: time="2025-08-13T07:08:27.089262637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:08:27.089372 containerd[1475]: time="2025-08-13T07:08:27.089281954Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 07:08:27.089473 containerd[1475]: time="2025-08-13T07:08:27.089437713Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 07:08:27.089548 containerd[1475]: time="2025-08-13T07:08:27.089512940Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.094498313Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.094610922Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.094657307Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.094677320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.094697165Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.094925425Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.095235873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.095394455Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.095421057Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.095448866Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.095477562Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.095499362Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.095518914Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 07:08:27.097067 containerd[1475]: time="2025-08-13T07:08:27.095540755Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095599302Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095623011Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095640238Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095657240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095688912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095708815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095727495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095747419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095764889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095786541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095827417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095847736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095868221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.097839 containerd[1475]: time="2025-08-13T07:08:27.095888831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.095905621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.095922787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.095944766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.095969756Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.096025618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.096047473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.096063024Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.096138524Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.096167989Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.096189823Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.096205893Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.096219379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.096316277Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 07:08:27.098261 containerd[1475]: time="2025-08-13T07:08:27.096331822Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 07:08:27.098886 containerd[1475]: time="2025-08-13T07:08:27.096347459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.097239115Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.097336949Z" level=info msg="Connect containerd service"
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.097694999Z" level=info msg="using legacy CRI server"
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.097720699Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.097887285Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.099809242Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.101841446Z" level=info msg="Start subscribing containerd event"
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.101971879Z" level=info msg="Start recovering state"
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.102124848Z" level=info msg="Start event monitor"
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.102163802Z" level=info msg="Start snapshots syncer"
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.102181363Z" level=info msg="Start cni network conf syncer for default"
Aug 13 07:08:27.098933 containerd[1475]: time="2025-08-13T07:08:27.102198373Z" level=info msg="Start streaming server"
Aug 13 07:08:27.103304 containerd[1475]: time="2025-08-13T07:08:27.102721356Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 07:08:27.103304 containerd[1475]: time="2025-08-13T07:08:27.102783996Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 07:08:27.103046 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 07:08:27.106017 containerd[1475]: time="2025-08-13T07:08:27.105149593Z" level=info msg="containerd successfully booted in 0.096349s"
Aug 13 07:08:27.166784 systemd-networkd[1371]: eth1: Gained IPv6LL
Aug 13 07:08:27.168665 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Aug 13 07:08:27.482787 tar[1470]: linux-amd64/README.md
Aug 13 07:08:27.513611 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 07:08:28.167186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:08:28.170551 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 07:08:28.176021 systemd[1]: Startup finished in 1.111s (kernel) + 6.023s (initrd) + 6.485s (userspace) = 13.620s.
Aug 13 07:08:28.188729 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:08:28.951262 kubelet[1563]: E0813 07:08:28.951161 1563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:08:28.955111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:08:28.955344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:08:28.955836 systemd[1]: kubelet.service: Consumed 1.496s CPU time.
Aug 13 07:08:29.681402 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 07:08:29.692831 systemd[1]: Started sshd@0-64.227.105.74:22-139.178.89.65:58790.service - OpenSSH per-connection server daemon (139.178.89.65:58790).
Aug 13 07:08:29.855457 sshd[1575]: Accepted publickey for core from 139.178.89.65 port 58790 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:29.859303 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:29.872479 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 07:08:29.880855 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 07:08:29.883971 systemd-logind[1454]: New session 1 of user core.
Aug 13 07:08:29.909629 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 07:08:29.916875 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 07:08:29.931262 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 07:08:30.100566 systemd[1579]: Queued start job for default target default.target.
Aug 13 07:08:30.110352 systemd[1579]: Created slice app.slice - User Application Slice.
Aug 13 07:08:30.110432 systemd[1579]: Reached target paths.target - Paths.
Aug 13 07:08:30.110456 systemd[1579]: Reached target timers.target - Timers.
Aug 13 07:08:30.112918 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 07:08:30.131041 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 07:08:30.131298 systemd[1579]: Reached target sockets.target - Sockets.
Aug 13 07:08:30.131328 systemd[1579]: Reached target basic.target - Basic System.
Aug 13 07:08:30.131889 systemd[1579]: Reached target default.target - Main User Target.
Aug 13 07:08:30.131961 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 07:08:30.131968 systemd[1579]: Startup finished in 187ms.
Aug 13 07:08:30.140721 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 07:08:30.216007 systemd[1]: Started sshd@1-64.227.105.74:22-139.178.89.65:58798.service - OpenSSH per-connection server daemon (139.178.89.65:58798).
Aug 13 07:08:30.264541 sshd[1590]: Accepted publickey for core from 139.178.89.65 port 58798 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:30.266717 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:30.272491 systemd-logind[1454]: New session 2 of user core.
Aug 13 07:08:30.287723 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 07:08:30.350224 sshd[1590]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:30.362983 systemd[1]: sshd@1-64.227.105.74:22-139.178.89.65:58798.service: Deactivated successfully.
Aug 13 07:08:30.364939 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 07:08:30.365870 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit.
Aug 13 07:08:30.376967 systemd[1]: Started sshd@2-64.227.105.74:22-139.178.89.65:58812.service - OpenSSH per-connection server daemon (139.178.89.65:58812).
Aug 13 07:08:30.378600 systemd-logind[1454]: Removed session 2.
Aug 13 07:08:30.421925 sshd[1597]: Accepted publickey for core from 139.178.89.65 port 58812 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:30.424067 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:30.431874 systemd-logind[1454]: New session 3 of user core.
Aug 13 07:08:30.439715 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 07:08:30.501185 sshd[1597]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:30.512633 systemd[1]: sshd@2-64.227.105.74:22-139.178.89.65:58812.service: Deactivated successfully.
Aug 13 07:08:30.515595 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 07:08:30.518198 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit.
Aug 13 07:08:30.524006 systemd[1]: Started sshd@3-64.227.105.74:22-139.178.89.65:58814.service - OpenSSH per-connection server daemon (139.178.89.65:58814).
Aug 13 07:08:30.525943 systemd-logind[1454]: Removed session 3.
Aug 13 07:08:30.574418 sshd[1604]: Accepted publickey for core from 139.178.89.65 port 58814 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:30.576653 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:30.582894 systemd-logind[1454]: New session 4 of user core.
Aug 13 07:08:30.593754 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 07:08:30.656858 sshd[1604]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:30.671798 systemd[1]: sshd@3-64.227.105.74:22-139.178.89.65:58814.service: Deactivated successfully.
Aug 13 07:08:30.674488 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 07:08:30.676664 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit.
Aug 13 07:08:30.681784 systemd[1]: Started sshd@4-64.227.105.74:22-139.178.89.65:58818.service - OpenSSH per-connection server daemon (139.178.89.65:58818).
Aug 13 07:08:30.683880 systemd-logind[1454]: Removed session 4.
Aug 13 07:08:30.743945 sshd[1611]: Accepted publickey for core from 139.178.89.65 port 58818 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:30.746540 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:30.752796 systemd-logind[1454]: New session 5 of user core.
Aug 13 07:08:30.761706 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 07:08:30.839070 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 07:08:30.840421 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:08:30.858691 sudo[1614]: pam_unix(sudo:session): session closed for user root
Aug 13 07:08:30.863161 sshd[1611]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:30.872973 systemd[1]: sshd@4-64.227.105.74:22-139.178.89.65:58818.service: Deactivated successfully.
Aug 13 07:08:30.875482 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 07:08:30.878722 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit.
Aug 13 07:08:30.883980 systemd[1]: Started sshd@5-64.227.105.74:22-139.178.89.65:58828.service - OpenSSH per-connection server daemon (139.178.89.65:58828).
Aug 13 07:08:30.886600 systemd-logind[1454]: Removed session 5.
Aug 13 07:08:30.942089 sshd[1619]: Accepted publickey for core from 139.178.89.65 port 58828 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:30.947657 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:30.955809 systemd-logind[1454]: New session 6 of user core.
Aug 13 07:08:30.961740 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 07:08:31.028158 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 07:08:31.028813 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:08:31.034799 sudo[1623]: pam_unix(sudo:session): session closed for user root
Aug 13 07:08:31.042701 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 07:08:31.043061 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:08:31.069803 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 07:08:31.071721 auditctl[1626]: No rules
Aug 13 07:08:31.072170 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 07:08:31.072531 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 07:08:31.075621 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:08:31.127400 augenrules[1644]: No rules
Aug 13 07:08:31.128470 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:08:31.130508 sudo[1622]: pam_unix(sudo:session): session closed for user root
Aug 13 07:08:31.134760 sshd[1619]: pam_unix(sshd:session): session closed for user core
Aug 13 07:08:31.143730 systemd[1]: sshd@5-64.227.105.74:22-139.178.89.65:58828.service: Deactivated successfully.
Aug 13 07:08:31.146474 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 07:08:31.148568 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit.
Aug 13 07:08:31.156032 systemd[1]: Started sshd@6-64.227.105.74:22-139.178.89.65:58834.service - OpenSSH per-connection server daemon (139.178.89.65:58834).
Aug 13 07:08:31.157804 systemd-logind[1454]: Removed session 6.
Aug 13 07:08:31.200336 sshd[1652]: Accepted publickey for core from 139.178.89.65 port 58834 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:08:31.201526 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:08:31.208708 systemd-logind[1454]: New session 7 of user core.
Aug 13 07:08:31.216712 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 07:08:31.277413 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 07:08:31.278349 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:08:31.796766 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 07:08:31.798283 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 07:08:32.331432 dockerd[1670]: time="2025-08-13T07:08:32.331072307Z" level=info msg="Starting up"
Aug 13 07:08:32.490456 dockerd[1670]: time="2025-08-13T07:08:32.490153829Z" level=info msg="Loading containers: start."
Aug 13 07:08:32.630578 kernel: Initializing XFRM netlink socket
Aug 13 07:08:32.667109 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Aug 13 07:08:32.668731 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Aug 13 07:08:32.730646 systemd-networkd[1371]: docker0: Link UP
Aug 13 07:08:32.731172 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Aug 13 07:08:32.748950 dockerd[1670]: time="2025-08-13T07:08:32.748773842Z" level=info msg="Loading containers: done."
Aug 13 07:08:32.772788 dockerd[1670]: time="2025-08-13T07:08:32.772584850Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 07:08:32.772788 dockerd[1670]: time="2025-08-13T07:08:32.772764939Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 07:08:32.773092 dockerd[1670]: time="2025-08-13T07:08:32.772954039Z" level=info msg="Daemon has completed initialization"
Aug 13 07:08:32.817085 dockerd[1670]: time="2025-08-13T07:08:32.816380134Z" level=info msg="API listen on /run/docker.sock"
Aug 13 07:08:32.816762 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 07:08:33.729506 containerd[1475]: time="2025-08-13T07:08:33.729346400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Aug 13 07:08:34.382586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286000878.mount: Deactivated successfully.
Aug 13 07:08:35.705393 containerd[1475]: time="2025-08-13T07:08:35.705159340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:35.709482 containerd[1475]: time="2025-08-13T07:08:35.707658787Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994"
Aug 13 07:08:35.709482 containerd[1475]: time="2025-08-13T07:08:35.708667773Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:35.712399 containerd[1475]: time="2025-08-13T07:08:35.712278226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:35.714638 containerd[1475]: time="2025-08-13T07:08:35.714051367Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.984614505s"
Aug 13 07:08:35.714638 containerd[1475]: time="2025-08-13T07:08:35.714110876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\""
Aug 13 07:08:35.715560 containerd[1475]: time="2025-08-13T07:08:35.715529580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 13 07:08:37.241738 containerd[1475]: time="2025-08-13T07:08:37.240459980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:37.241738 containerd[1475]: time="2025-08-13T07:08:37.241410925Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636"
Aug 13 07:08:37.241738 containerd[1475]: time="2025-08-13T07:08:37.241678698Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:37.245411 containerd[1475]: time="2025-08-13T07:08:37.245302436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:37.247013 containerd[1475]: time="2025-08-13T07:08:37.246952784Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.531384818s"
Aug 13 07:08:37.247013 containerd[1475]: time="2025-08-13T07:08:37.247006440Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\""
Aug 13 07:08:37.247663 containerd[1475]: time="2025-08-13T07:08:37.247526393Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 13 07:08:38.579465 containerd[1475]: time="2025-08-13T07:08:38.578907891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:38.580857 containerd[1475]: time="2025-08-13T07:08:38.580787030Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921"
Aug 13 07:08:38.581671 containerd[1475]: time="2025-08-13T07:08:38.581591454Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:38.590404 containerd[1475]: time="2025-08-13T07:08:38.589616206Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.342048751s"
Aug 13 07:08:38.590404 containerd[1475]: time="2025-08-13T07:08:38.589683396Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\""
Aug 13 07:08:38.591503 containerd[1475]: time="2025-08-13T07:08:38.591145430Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 13 07:08:38.591503 containerd[1475]: time="2025-08-13T07:08:38.591164714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:39.197040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:08:39.205273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:08:39.525725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:08:39.527699 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:08:39.611213 kubelet[1889]: E0813 07:08:39.611101 1889 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:08:39.616531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:08:39.616736 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:08:39.832774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327669464.mount: Deactivated successfully.
Aug 13 07:08:40.452862 containerd[1475]: time="2025-08-13T07:08:40.452813743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:40.453702 containerd[1475]: time="2025-08-13T07:08:40.453649015Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380"
Aug 13 07:08:40.454227 containerd[1475]: time="2025-08-13T07:08:40.454201403Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:40.456279 containerd[1475]: time="2025-08-13T07:08:40.456244680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:40.457068 containerd[1475]: time="2025-08-13T07:08:40.457035328Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.865829817s"
Aug 13 07:08:40.457188 containerd[1475]: time="2025-08-13T07:08:40.457172735Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\""
Aug 13 07:08:40.457784 containerd[1475]: time="2025-08-13T07:08:40.457748539Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 07:08:40.458925 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Aug 13 07:08:40.972911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181902273.mount: Deactivated successfully.
Aug 13 07:08:41.843546 containerd[1475]: time="2025-08-13T07:08:41.843485012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:41.846445 containerd[1475]: time="2025-08-13T07:08:41.846322870Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 07:08:41.848447 containerd[1475]: time="2025-08-13T07:08:41.846870279Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:41.852468 containerd[1475]: time="2025-08-13T07:08:41.852411391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:41.854937 containerd[1475]: time="2025-08-13T07:08:41.854881410Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.397094372s"
Aug 13 07:08:41.854937 containerd[1475]: time="2025-08-13T07:08:41.854937414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 07:08:41.855907 containerd[1475]: time="2025-08-13T07:08:41.855562318Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 07:08:42.319990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1304815233.mount: Deactivated successfully.
Aug 13 07:08:42.324604 containerd[1475]: time="2025-08-13T07:08:42.324548977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:42.325314 containerd[1475]: time="2025-08-13T07:08:42.325268465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 07:08:42.326398 containerd[1475]: time="2025-08-13T07:08:42.325808837Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:42.328205 containerd[1475]: time="2025-08-13T07:08:42.328170741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:42.329187 containerd[1475]: time="2025-08-13T07:08:42.329150867Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.543539ms"
Aug 13 07:08:42.329272 containerd[1475]: time="2025-08-13T07:08:42.329188844Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 07:08:42.329816 containerd[1475]: time="2025-08-13T07:08:42.329793835Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 13 07:08:42.809061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2618669676.mount: Deactivated successfully.
Aug 13 07:08:43.549654 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Aug 13 07:08:44.622242 containerd[1475]: time="2025-08-13T07:08:44.620902750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:44.623600 containerd[1475]: time="2025-08-13T07:08:44.623547124Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Aug 13 07:08:44.624624 containerd[1475]: time="2025-08-13T07:08:44.624578663Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:44.627604 containerd[1475]: time="2025-08-13T07:08:44.627562939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:08:44.629152 containerd[1475]: time="2025-08-13T07:08:44.629108985Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id
\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.299285518s" Aug 13 07:08:44.629152 containerd[1475]: time="2025-08-13T07:08:44.629153258Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 07:08:47.416837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:08:47.428755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:08:47.472789 systemd[1]: Reloading requested from client PID 2038 ('systemctl') (unit session-7.scope)... Aug 13 07:08:47.472815 systemd[1]: Reloading... Aug 13 07:08:47.629433 zram_generator::config[2077]: No configuration found. Aug 13 07:08:47.781088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:08:47.878707 systemd[1]: Reloading finished in 405 ms. Aug 13 07:08:47.951646 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:08:47.951797 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:08:47.952287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:08:47.958870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:08:48.147394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 07:08:48.158888 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 07:08:48.231073 kubelet[2131]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:08:48.231073 kubelet[2131]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 07:08:48.231073 kubelet[2131]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:08:48.231597 kubelet[2131]: I0813 07:08:48.231155 2131 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:08:48.791397 kubelet[2131]: I0813 07:08:48.790253 2131 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 07:08:48.791397 kubelet[2131]: I0813 07:08:48.790316 2131 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:08:48.791397 kubelet[2131]: I0813 07:08:48.790749 2131 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 07:08:48.826417 kubelet[2131]: I0813 07:08:48.826351 2131 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:08:48.830575 kubelet[2131]: E0813 07:08:48.830518 2131 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.227.105.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.227.105.74:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:08:48.840033 kubelet[2131]: E0813 07:08:48.839942 2131 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:08:48.841085 kubelet[2131]: I0813 07:08:48.840328 2131 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:08:48.845012 kubelet[2131]: I0813 07:08:48.844972 2131 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:08:48.849962 kubelet[2131]: I0813 07:08:48.849843 2131 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:08:48.850251 kubelet[2131]: I0813 07:08:48.849948 2131 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-9-a0c30e4e4a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 07:08:48.852309 kubelet[2131]: I0813 07:08:48.852240 2131 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:08:48.852309 kubelet[2131]: I0813 07:08:48.852298 2131 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 07:08:48.854017 kubelet[2131]: I0813 07:08:48.853939 2131 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:08:48.858268 kubelet[2131]: I0813 07:08:48.858131 2131 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 07:08:48.858268 kubelet[2131]: I0813 07:08:48.858192 2131 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:08:48.858268 kubelet[2131]: I0813 07:08:48.858242 2131 kubelet.go:352] "Adding apiserver pod source"
Aug 13 07:08:48.858268 kubelet[2131]: I0813 07:08:48.858269 2131 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:08:48.868604 kubelet[2131]: W0813 07:08:48.868384 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.105.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-9-a0c30e4e4a&limit=500&resourceVersion=0": dial tcp 64.227.105.74:6443: connect: connection refused
Aug 13 07:08:48.868604 kubelet[2131]: E0813 07:08:48.868456 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.227.105.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-9-a0c30e4e4a&limit=500&resourceVersion=0\": dial tcp 64.227.105.74:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:08:48.869460 kubelet[2131]: W0813 07:08:48.869287 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.105.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.105.74:6443: connect: connection refused
Aug 13 07:08:48.869460 kubelet[2131]: E0813 07:08:48.869339 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.105.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.105.74:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:08:48.869460 kubelet[2131]: I0813 07:08:48.869454 2131 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:08:48.873428 kubelet[2131]: I0813 07:08:48.873381 2131 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 07:08:48.874754 kubelet[2131]: W0813 07:08:48.874653 2131 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 07:08:48.877416 kubelet[2131]: I0813 07:08:48.877124 2131 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 07:08:48.877416 kubelet[2131]: I0813 07:08:48.877176 2131 server.go:1287] "Started kubelet"
Aug 13 07:08:48.878499 kubelet[2131]: I0813 07:08:48.877939 2131 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:08:48.879701 kubelet[2131]: I0813 07:08:48.879287 2131 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 07:08:48.886138 kubelet[2131]: I0813 07:08:48.886033 2131 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:08:48.886899 kubelet[2131]: I0813 07:08:48.886771 2131 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:08:48.892444 kubelet[2131]: I0813 07:08:48.891615 2131 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:08:48.892836 kubelet[2131]: E0813 07:08:48.889929 2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.105.74:6443/api/v1/namespaces/default/events\": dial tcp 64.227.105.74:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-9-a0c30e4e4a.185b41e45e0c3fca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-9-a0c30e4e4a,UID:ci-4081.3.5-9-a0c30e4e4a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-9-a0c30e4e4a,},FirstTimestamp:2025-08-13 07:08:48.877150154 +0000 UTC m=+0.712108451,LastTimestamp:2025-08-13 07:08:48.877150154 +0000 UTC m=+0.712108451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-9-a0c30e4e4a,}"
Aug 13 07:08:48.896419 kubelet[2131]: I0813 07:08:48.895848 2131 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:08:48.897438 kubelet[2131]: I0813 07:08:48.897412 2131 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 07:08:48.898665 kubelet[2131]: E0813 07:08:48.898457 2131 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found"
Aug 13 07:08:48.900029 kubelet[2131]: I0813 07:08:48.899888 2131 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 07:08:48.900567 kubelet[2131]: I0813 07:08:48.900486 2131 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:08:48.904289 kubelet[2131]: W0813 07:08:48.904214 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.105.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.105.74:6443: connect: connection refused
Aug 13 07:08:48.904894 kubelet[2131]: E0813 07:08:48.904547 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.105.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.105.74:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:08:48.904894 kubelet[2131]: E0813 07:08:48.904690 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.105.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-9-a0c30e4e4a?timeout=10s\": dial tcp 64.227.105.74:6443: connect: connection refused" interval="200ms"
Aug 13 07:08:48.913912 kubelet[2131]: I0813 07:08:48.913872 2131 factory.go:221] Registration of the containerd container factory successfully
Aug 13 07:08:48.914603 kubelet[2131]: I0813 07:08:48.914151 2131 factory.go:221] Registration of the systemd container factory successfully
Aug 13 07:08:48.914603 kubelet[2131]: I0813 07:08:48.914331 2131 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:08:48.933715 kubelet[2131]: I0813 07:08:48.933490 2131 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:08:48.938427 kubelet[2131]: I0813 07:08:48.938339 2131 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 07:08:48.938427 kubelet[2131]: I0813 07:08:48.938425 2131 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 07:08:48.938629 kubelet[2131]: I0813 07:08:48.938472 2131 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 07:08:48.938629 kubelet[2131]: I0813 07:08:48.938487 2131 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 07:08:48.938629 kubelet[2131]: E0813 07:08:48.938565 2131 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 07:08:48.942985 kubelet[2131]: W0813 07:08:48.942912 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.105.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.105.74:6443: connect: connection refused
Aug 13 07:08:48.943252 kubelet[2131]: E0813 07:08:48.943218 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.105.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.105.74:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:08:48.943487 kubelet[2131]: I0813 07:08:48.943464 2131 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 07:08:48.943601 kubelet[2131]: I0813 07:08:48.943584 2131 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 07:08:48.943725 kubelet[2131]: I0813 07:08:48.943707 2131 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:08:48.949341 kubelet[2131]: I0813 07:08:48.949302 2131 policy_none.go:49] "None policy: Start"
Aug 13 07:08:48.949628 kubelet[2131]: I0813 07:08:48.949599 2131 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 07:08:48.949767 kubelet[2131]: I0813 07:08:48.949752 2131 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 07:08:48.958107 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 13 07:08:48.971036 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 13 07:08:48.986189 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 13 07:08:48.989288 kubelet[2131]: I0813 07:08:48.989238 2131 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 07:08:48.989886 kubelet[2131]: I0813 07:08:48.989721 2131 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 07:08:48.989886 kubelet[2131]: I0813 07:08:48.989750 2131 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 07:08:48.991031 kubelet[2131]: I0813 07:08:48.990765 2131 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 07:08:48.993290 kubelet[2131]: E0813 07:08:48.993155 2131 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 07:08:48.993290 kubelet[2131]: E0813 07:08:48.993231 2131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-9-a0c30e4e4a\" not found"
Aug 13 07:08:49.054836 systemd[1]: Created slice kubepods-burstable-pod0db3be4d2bde5030350e0b52be479c8e.slice - libcontainer container kubepods-burstable-pod0db3be4d2bde5030350e0b52be479c8e.slice.
Aug 13 07:08:49.071713 kubelet[2131]: E0813 07:08:49.071659 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.075717 systemd[1]: Created slice kubepods-burstable-pod34620e139dfc94dffaad48ca9d3791bf.slice - libcontainer container kubepods-burstable-pod34620e139dfc94dffaad48ca9d3791bf.slice.
Aug 13 07:08:49.079540 kubelet[2131]: E0813 07:08:49.079497 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.083417 systemd[1]: Created slice kubepods-burstable-podb8bffb8f17f7ff8a42c155d662ae055a.slice - libcontainer container kubepods-burstable-podb8bffb8f17f7ff8a42c155d662ae055a.slice.
Aug 13 07:08:49.086018 kubelet[2131]: E0813 07:08:49.085964 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.091389 kubelet[2131]: I0813 07:08:49.091336 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.092067 kubelet[2131]: E0813 07:08:49.091952 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.105.74:6443/api/v1/nodes\": dial tcp 64.227.105.74:6443: connect: connection refused" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.101916 kubelet[2131]: I0813 07:08:49.101799 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0db3be4d2bde5030350e0b52be479c8e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"0db3be4d2bde5030350e0b52be479c8e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.101916 kubelet[2131]: I0813 07:08:49.101864 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34620e139dfc94dffaad48ca9d3791bf-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"34620e139dfc94dffaad48ca9d3791bf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.101916 kubelet[2131]: I0813 07:08:49.101896 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34620e139dfc94dffaad48ca9d3791bf-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"34620e139dfc94dffaad48ca9d3791bf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.101916 kubelet[2131]: I0813 07:08:49.101925 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0db3be4d2bde5030350e0b52be479c8e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"0db3be4d2bde5030350e0b52be479c8e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.102261 kubelet[2131]: I0813 07:08:49.101955 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0db3be4d2bde5030350e0b52be479c8e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"0db3be4d2bde5030350e0b52be479c8e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.102261 kubelet[2131]: I0813 07:08:49.101981 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0db3be4d2bde5030350e0b52be479c8e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"0db3be4d2bde5030350e0b52be479c8e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.102261 kubelet[2131]: I0813 07:08:49.102009 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8bffb8f17f7ff8a42c155d662ae055a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"b8bffb8f17f7ff8a42c155d662ae055a\") " pod="kube-system/kube-scheduler-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.102261 kubelet[2131]: I0813 07:08:49.102056 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34620e139dfc94dffaad48ca9d3791bf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"34620e139dfc94dffaad48ca9d3791bf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.102261 kubelet[2131]: I0813 07:08:49.102083 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0db3be4d2bde5030350e0b52be479c8e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"0db3be4d2bde5030350e0b52be479c8e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.106403 kubelet[2131]: E0813 07:08:49.106218 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.105.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-9-a0c30e4e4a?timeout=10s\": dial tcp 64.227.105.74:6443: connect: connection refused" interval="400ms"
Aug 13 07:08:49.294109 kubelet[2131]: I0813 07:08:49.294053 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.294840 kubelet[2131]: E0813 07:08:49.294801 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.105.74:6443/api/v1/nodes\": dial tcp 64.227.105.74:6443: connect: connection refused" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.372965 kubelet[2131]: E0813 07:08:49.372790 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:49.373861 containerd[1475]: time="2025-08-13T07:08:49.373807078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a,Uid:0db3be4d2bde5030350e0b52be479c8e,Namespace:kube-system,Attempt:0,}"
Aug 13 07:08:49.375633 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Aug 13 07:08:49.381279 kubelet[2131]: E0813 07:08:49.380886 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:49.386870 kubelet[2131]: E0813 07:08:49.386545 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:49.388972 containerd[1475]: time="2025-08-13T07:08:49.388592925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-9-a0c30e4e4a,Uid:34620e139dfc94dffaad48ca9d3791bf,Namespace:kube-system,Attempt:0,}"
Aug 13 07:08:49.390792 containerd[1475]: time="2025-08-13T07:08:49.388593129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-9-a0c30e4e4a,Uid:b8bffb8f17f7ff8a42c155d662ae055a,Namespace:kube-system,Attempt:0,}"
Aug 13 07:08:49.507144 kubelet[2131]: E0813 07:08:49.507083 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.105.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-9-a0c30e4e4a?timeout=10s\": dial tcp 64.227.105.74:6443: connect: connection refused" interval="800ms"
Aug 13 07:08:49.696338 kubelet[2131]: I0813 07:08:49.696264 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.696838 kubelet[2131]: E0813 07:08:49.696777 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.105.74:6443/api/v1/nodes\": dial tcp 64.227.105.74:6443: connect: connection refused" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:49.719399 kubelet[2131]: W0813 07:08:49.719235 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.105.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-9-a0c30e4e4a&limit=500&resourceVersion=0": dial tcp 64.227.105.74:6443: connect: connection refused
Aug 13 07:08:49.719399 kubelet[2131]: E0813 07:08:49.719322 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.227.105.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-9-a0c30e4e4a&limit=500&resourceVersion=0\": dial tcp 64.227.105.74:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:08:49.874007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount972270593.mount: Deactivated successfully.
Aug 13 07:08:49.878580 containerd[1475]: time="2025-08-13T07:08:49.878076102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:08:49.878938 containerd[1475]: time="2025-08-13T07:08:49.878880384Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 13 07:08:49.880392 containerd[1475]: time="2025-08-13T07:08:49.879839085Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:08:49.880550 containerd[1475]: time="2025-08-13T07:08:49.880516571Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 07:08:49.881004 containerd[1475]: time="2025-08-13T07:08:49.880976531Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:08:49.881184 containerd[1475]: time="2025-08-13T07:08:49.881155675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 07:08:49.881871 containerd[1475]: time="2025-08-13T07:08:49.881842342Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:08:49.885606 containerd[1475]: time="2025-08-13T07:08:49.885560813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:08:49.887211 containerd[1475]: time="2025-08-13T07:08:49.887157540Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.620604ms"
Aug 13 07:08:49.901538 containerd[1475]: time="2025-08-13T07:08:49.900993650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.91584ms"
Aug 13 07:08:49.904548 kubelet[2131]: W0813 07:08:49.904459 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.105.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.105.74:6443: connect: connection refused
Aug 13 07:08:49.904548 kubelet[2131]: E0813 07:08:49.904509 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.105.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.105.74:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:08:49.916912 containerd[1475]: time="2025-08-13T07:08:49.916540127Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.846449ms"
Aug 13 07:08:50.036754 kubelet[2131]: W0813 07:08:50.036512 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.105.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.105.74:6443: connect: connection refused
Aug 13 07:08:50.036754 kubelet[2131]: E0813 07:08:50.036596 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.105.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.105.74:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:08:50.067203 containerd[1475]: time="2025-08-13T07:08:50.067091814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:08:50.067916 containerd[1475]: time="2025-08-13T07:08:50.067636703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:08:50.067916 containerd[1475]: time="2025-08-13T07:08:50.067803225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:08:50.070251 containerd[1475]: time="2025-08-13T07:08:50.070142116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:08:50.078195 containerd[1475]: time="2025-08-13T07:08:50.077691822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:08:50.078573 containerd[1475]: time="2025-08-13T07:08:50.078494170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:08:50.078805 containerd[1475]: time="2025-08-13T07:08:50.078535868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:50.079315 containerd[1475]: time="2025-08-13T07:08:50.079244564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:50.083021 containerd[1475]: time="2025-08-13T07:08:50.082875382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:08:50.083021 containerd[1475]: time="2025-08-13T07:08:50.082937707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:08:50.083021 containerd[1475]: time="2025-08-13T07:08:50.082952831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:50.087780 containerd[1475]: time="2025-08-13T07:08:50.087457015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:50.115622 systemd[1]: Started cri-containerd-3f5519e758c4093b64039197177caef1a7896945dee2266882672e4bd6e8ca9b.scope - libcontainer container 3f5519e758c4093b64039197177caef1a7896945dee2266882672e4bd6e8ca9b. Aug 13 07:08:50.127816 systemd[1]: Started cri-containerd-5ddb9c5d82d0fc83c300127cc9d52529998ac4a7a31988fb095b5c033afee044.scope - libcontainer container 5ddb9c5d82d0fc83c300127cc9d52529998ac4a7a31988fb095b5c033afee044. Aug 13 07:08:50.140706 systemd[1]: Started cri-containerd-d3131da96eabe9647a32528f2e3957285d59cec3b36ccde74558071da4dcacb1.scope - libcontainer container d3131da96eabe9647a32528f2e3957285d59cec3b36ccde74558071da4dcacb1. 
Aug 13 07:08:50.206093 containerd[1475]: time="2025-08-13T07:08:50.205902405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a,Uid:0db3be4d2bde5030350e0b52be479c8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ddb9c5d82d0fc83c300127cc9d52529998ac4a7a31988fb095b5c033afee044\""
Aug 13 07:08:50.210989 kubelet[2131]: E0813 07:08:50.210518 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:50.222548 containerd[1475]: time="2025-08-13T07:08:50.222345100Z" level=info msg="CreateContainer within sandbox \"5ddb9c5d82d0fc83c300127cc9d52529998ac4a7a31988fb095b5c033afee044\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 13 07:08:50.245411 containerd[1475]: time="2025-08-13T07:08:50.244089696Z" level=info msg="CreateContainer within sandbox \"5ddb9c5d82d0fc83c300127cc9d52529998ac4a7a31988fb095b5c033afee044\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55abe7e4eb5a7b3853e26b0e5aa92ba30e628df7e5aaafbee2ed8d3fa68c0f7b\""
Aug 13 07:08:50.248955 containerd[1475]: time="2025-08-13T07:08:50.247932719Z" level=info msg="StartContainer for \"55abe7e4eb5a7b3853e26b0e5aa92ba30e628df7e5aaafbee2ed8d3fa68c0f7b\""
Aug 13 07:08:50.281179 containerd[1475]: time="2025-08-13T07:08:50.281125330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-9-a0c30e4e4a,Uid:34620e139dfc94dffaad48ca9d3791bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f5519e758c4093b64039197177caef1a7896945dee2266882672e4bd6e8ca9b\""
Aug 13 07:08:50.282027 containerd[1475]: time="2025-08-13T07:08:50.281972348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-9-a0c30e4e4a,Uid:b8bffb8f17f7ff8a42c155d662ae055a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3131da96eabe9647a32528f2e3957285d59cec3b36ccde74558071da4dcacb1\""
Aug 13 07:08:50.283150 kubelet[2131]: E0813 07:08:50.283123 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:50.286413 kubelet[2131]: E0813 07:08:50.283819 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:50.289424 containerd[1475]: time="2025-08-13T07:08:50.289206959Z" level=info msg="CreateContainer within sandbox \"d3131da96eabe9647a32528f2e3957285d59cec3b36ccde74558071da4dcacb1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 13 07:08:50.289809 containerd[1475]: time="2025-08-13T07:08:50.289775651Z" level=info msg="CreateContainer within sandbox \"3f5519e758c4093b64039197177caef1a7896945dee2266882672e4bd6e8ca9b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 13 07:08:50.308641 kubelet[2131]: E0813 07:08:50.308571 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.105.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-9-a0c30e4e4a?timeout=10s\": dial tcp 64.227.105.74:6443: connect: connection refused" interval="1.6s"
Aug 13 07:08:50.310653 containerd[1475]: time="2025-08-13T07:08:50.310019916Z" level=info msg="CreateContainer within sandbox \"3f5519e758c4093b64039197177caef1a7896945dee2266882672e4bd6e8ca9b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b43a739b758cfdb0793e1c331546175e8abcc1a85b1c433b39d41a785a2b68a2\""
Aug 13 07:08:50.310633 systemd[1]: Started cri-containerd-55abe7e4eb5a7b3853e26b0e5aa92ba30e628df7e5aaafbee2ed8d3fa68c0f7b.scope - libcontainer container 55abe7e4eb5a7b3853e26b0e5aa92ba30e628df7e5aaafbee2ed8d3fa68c0f7b.
Aug 13 07:08:50.311731 containerd[1475]: time="2025-08-13T07:08:50.311691220Z" level=info msg="StartContainer for \"b43a739b758cfdb0793e1c331546175e8abcc1a85b1c433b39d41a785a2b68a2\""
Aug 13 07:08:50.316864 containerd[1475]: time="2025-08-13T07:08:50.315842190Z" level=info msg="CreateContainer within sandbox \"d3131da96eabe9647a32528f2e3957285d59cec3b36ccde74558071da4dcacb1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5a7ef5ba285d1d31f1bcc61d4122aeefb6a5a7dacfc30b8618d2cc1c0b2f6e94\""
Aug 13 07:08:50.317647 containerd[1475]: time="2025-08-13T07:08:50.317607259Z" level=info msg="StartContainer for \"5a7ef5ba285d1d31f1bcc61d4122aeefb6a5a7dacfc30b8618d2cc1c0b2f6e94\""
Aug 13 07:08:50.380815 systemd[1]: Started cri-containerd-5a7ef5ba285d1d31f1bcc61d4122aeefb6a5a7dacfc30b8618d2cc1c0b2f6e94.scope - libcontainer container 5a7ef5ba285d1d31f1bcc61d4122aeefb6a5a7dacfc30b8618d2cc1c0b2f6e94.
Aug 13 07:08:50.390979 kubelet[2131]: W0813 07:08:50.390250 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.105.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.105.74:6443: connect: connection refused
Aug 13 07:08:50.390979 kubelet[2131]: E0813 07:08:50.390330 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.105.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.105.74:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:08:50.390628 systemd[1]: Started cri-containerd-b43a739b758cfdb0793e1c331546175e8abcc1a85b1c433b39d41a785a2b68a2.scope - libcontainer container b43a739b758cfdb0793e1c331546175e8abcc1a85b1c433b39d41a785a2b68a2.
Aug 13 07:08:50.418751 containerd[1475]: time="2025-08-13T07:08:50.418710058Z" level=info msg="StartContainer for \"55abe7e4eb5a7b3853e26b0e5aa92ba30e628df7e5aaafbee2ed8d3fa68c0f7b\" returns successfully"
Aug 13 07:08:50.458558 containerd[1475]: time="2025-08-13T07:08:50.458045675Z" level=info msg="StartContainer for \"b43a739b758cfdb0793e1c331546175e8abcc1a85b1c433b39d41a785a2b68a2\" returns successfully"
Aug 13 07:08:50.499629 kubelet[2131]: I0813 07:08:50.499262 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:50.501864 kubelet[2131]: E0813 07:08:50.501691 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.227.105.74:6443/api/v1/nodes\": dial tcp 64.227.105.74:6443: connect: connection refused" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:50.510451 containerd[1475]: time="2025-08-13T07:08:50.510346736Z" level=info msg="StartContainer for \"5a7ef5ba285d1d31f1bcc61d4122aeefb6a5a7dacfc30b8618d2cc1c0b2f6e94\" returns successfully"
Aug 13 07:08:50.953383 kubelet[2131]: E0813 07:08:50.952699 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:50.953383 kubelet[2131]: E0813 07:08:50.952898 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:50.959077 kubelet[2131]: E0813 07:08:50.958817 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:50.959077 kubelet[2131]: E0813 07:08:50.958969 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:50.964401 kubelet[2131]: E0813 07:08:50.962528 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:50.964401 kubelet[2131]: E0813 07:08:50.962701 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:51.968308 kubelet[2131]: E0813 07:08:51.968266 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:51.968861 kubelet[2131]: E0813 07:08:51.968540 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:51.968964 kubelet[2131]: E0813 07:08:51.968937 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:51.969135 kubelet[2131]: E0813 07:08:51.969108 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:52.103607 kubelet[2131]: I0813 07:08:52.103486 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:52.969341 kubelet[2131]: E0813 07:08:52.969230 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:52.970398 kubelet[2131]: E0813 07:08:52.970263 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:53.162724 kubelet[2131]: E0813 07:08:53.162682 2131 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-9-a0c30e4e4a\" not found" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:53.316055 kubelet[2131]: I0813 07:08:53.315085 2131 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:53.401389 kubelet[2131]: I0813 07:08:53.400149 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:53.416254 kubelet[2131]: E0813 07:08:53.415822 2131 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-9-a0c30e4e4a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:53.416254 kubelet[2131]: I0813 07:08:53.415873 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:53.422002 kubelet[2131]: E0813 07:08:53.421679 2131 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:53.422002 kubelet[2131]: I0813 07:08:53.421725 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:53.428086 kubelet[2131]: E0813 07:08:53.428007 2131 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-9-a0c30e4e4a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:53.872444 kubelet[2131]: I0813 07:08:53.872144 2131 apiserver.go:52] "Watching apiserver"
Aug 13 07:08:53.901103 kubelet[2131]: I0813 07:08:53.901038 2131 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 07:08:55.616389 systemd[1]: Reloading requested from client PID 2412 ('systemctl') (unit session-7.scope)...
Aug 13 07:08:55.616757 systemd[1]: Reloading...
Aug 13 07:08:55.717392 zram_generator::config[2448]: No configuration found.
Aug 13 07:08:55.875608 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:08:56.013597 systemd[1]: Reloading finished in 396 ms.
Aug 13 07:08:56.069378 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:08:56.084516 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 07:08:56.084894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:08:56.085030 systemd[1]: kubelet.service: Consumed 1.251s CPU time, 128.0M memory peak, 0B memory swap peak.
Aug 13 07:08:56.091825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:08:56.305327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:08:56.318911 (kubelet)[2502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 07:08:56.404980 kubelet[2502]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:08:56.407467 kubelet[2502]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 07:08:56.407467 kubelet[2502]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:08:56.407467 kubelet[2502]: I0813 07:08:56.405559 2502 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:08:56.416291 kubelet[2502]: I0813 07:08:56.416227 2502 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 07:08:56.418387 kubelet[2502]: I0813 07:08:56.416542 2502 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:08:56.418387 kubelet[2502]: I0813 07:08:56.417282 2502 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 07:08:56.419818 kubelet[2502]: I0813 07:08:56.419781 2502 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 07:08:56.430411 kubelet[2502]: I0813 07:08:56.429131 2502 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:08:56.442023 kubelet[2502]: E0813 07:08:56.441958 2502 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:08:56.442322 kubelet[2502]: I0813 07:08:56.442291 2502 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:08:56.448505 kubelet[2502]: I0813 07:08:56.448463 2502 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:08:56.449872 kubelet[2502]: I0813 07:08:56.449798 2502 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:08:56.450440 kubelet[2502]: I0813 07:08:56.450079 2502 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-9-a0c30e4e4a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 07:08:56.450725 kubelet[2502]: I0813 07:08:56.450701 2502 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:08:56.450828 kubelet[2502]: I0813 07:08:56.450808 2502 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 07:08:56.451048 kubelet[2502]: I0813 07:08:56.451027 2502 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:08:56.451458 kubelet[2502]: I0813 07:08:56.451438 2502 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 07:08:56.451590 kubelet[2502]: I0813 07:08:56.451575 2502 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:08:56.451684 kubelet[2502]: I0813 07:08:56.451673 2502 kubelet.go:352] "Adding apiserver pod source"
Aug 13 07:08:56.451864 kubelet[2502]: I0813 07:08:56.451846 2502 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:08:56.456183 kubelet[2502]: I0813 07:08:56.456147 2502 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:08:56.458009 kubelet[2502]: I0813 07:08:56.457896 2502 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 07:08:56.459971 kubelet[2502]: I0813 07:08:56.459939 2502 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 07:08:56.460177 kubelet[2502]: I0813 07:08:56.460010 2502 server.go:1287] "Started kubelet"
Aug 13 07:08:56.463623 kubelet[2502]: I0813 07:08:56.462900 2502 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:08:56.474167 kubelet[2502]: I0813 07:08:56.474072 2502 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:08:56.478910 kubelet[2502]: I0813 07:08:56.478867 2502 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 07:08:56.484696 kubelet[2502]: I0813 07:08:56.484572 2502 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:08:56.484963 kubelet[2502]: I0813 07:08:56.484943 2502 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:08:56.485212 kubelet[2502]: I0813 07:08:56.485193 2502 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:08:56.486925 kubelet[2502]: I0813 07:08:56.486667 2502 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 07:08:56.487126 kubelet[2502]: E0813 07:08:56.487084 2502 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-9-a0c30e4e4a\" not found"
Aug 13 07:08:56.487852 kubelet[2502]: I0813 07:08:56.487814 2502 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 07:08:56.488071 kubelet[2502]: I0813 07:08:56.488052 2502 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:08:56.500607 kubelet[2502]: I0813 07:08:56.496752 2502 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:08:56.504528 kubelet[2502]: E0813 07:08:56.504495 2502 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 07:08:56.509812 kubelet[2502]: I0813 07:08:56.509581 2502 factory.go:221] Registration of the containerd container factory successfully
Aug 13 07:08:56.510872 kubelet[2502]: I0813 07:08:56.510794 2502 factory.go:221] Registration of the systemd container factory successfully
Aug 13 07:08:56.533379 kubelet[2502]: I0813 07:08:56.532618 2502 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:08:56.540430 kubelet[2502]: I0813 07:08:56.540344 2502 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 07:08:56.540430 kubelet[2502]: I0813 07:08:56.540434 2502 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 07:08:56.541995 kubelet[2502]: I0813 07:08:56.541953 2502 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 07:08:56.541995 kubelet[2502]: I0813 07:08:56.541982 2502 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 07:08:56.542288 kubelet[2502]: E0813 07:08:56.542061 2502 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 07:08:56.611298 kubelet[2502]: I0813 07:08:56.611189 2502 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 07:08:56.611461 kubelet[2502]: I0813 07:08:56.611447 2502 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 07:08:56.612347 kubelet[2502]: I0813 07:08:56.612330 2502 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:08:56.613764 kubelet[2502]: I0813 07:08:56.612769 2502 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 07:08:56.613764 kubelet[2502]: I0813 07:08:56.612786 2502 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 07:08:56.613764 kubelet[2502]: I0813 07:08:56.612805 2502 policy_none.go:49] "None policy: Start"
Aug 13 07:08:56.613764 kubelet[2502]: I0813 07:08:56.612817 2502 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 07:08:56.613764 kubelet[2502]: I0813 07:08:56.612839 2502 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 07:08:56.613764 kubelet[2502]: I0813 07:08:56.612965 2502 state_mem.go:75] "Updated machine memory state"
Aug 13 07:08:56.621713 kubelet[2502]: I0813 07:08:56.620971 2502 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 07:08:56.621713 kubelet[2502]: I0813 07:08:56.621259 2502 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 07:08:56.621713 kubelet[2502]: I0813 07:08:56.621279 2502 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 07:08:56.622796 kubelet[2502]: I0813 07:08:56.622203 2502 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 07:08:56.639537 kubelet[2502]: E0813 07:08:56.633266 2502 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 07:08:56.642987 kubelet[2502]: I0813 07:08:56.642883 2502 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.647433 sudo[2535]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 13 07:08:56.648021 sudo[2535]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Aug 13 07:08:56.650127 kubelet[2502]: I0813 07:08:56.648715 2502 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.651305 kubelet[2502]: I0813 07:08:56.650466 2502 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.669693 kubelet[2502]: W0813 07:08:56.668060 2502 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 07:08:56.669693 kubelet[2502]: W0813 07:08:56.669627 2502 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 07:08:56.671596 kubelet[2502]: W0813 07:08:56.670571 2502 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 07:08:56.690917 kubelet[2502]: I0813 07:08:56.690554 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8bffb8f17f7ff8a42c155d662ae055a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"b8bffb8f17f7ff8a42c155d662ae055a\") " pod="kube-system/kube-scheduler-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.690917 kubelet[2502]: I0813 07:08:56.690632 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34620e139dfc94dffaad48ca9d3791bf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"34620e139dfc94dffaad48ca9d3791bf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.690917 kubelet[2502]: I0813 07:08:56.690669 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0db3be4d2bde5030350e0b52be479c8e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"0db3be4d2bde5030350e0b52be479c8e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.690917 kubelet[2502]: I0813 07:08:56.690695 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0db3be4d2bde5030350e0b52be479c8e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"0db3be4d2bde5030350e0b52be479c8e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.690917 kubelet[2502]: I0813 07:08:56.690721 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0db3be4d2bde5030350e0b52be479c8e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"0db3be4d2bde5030350e0b52be479c8e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.691258 kubelet[2502]: I0813 07:08:56.690742 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34620e139dfc94dffaad48ca9d3791bf-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"34620e139dfc94dffaad48ca9d3791bf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.691258 kubelet[2502]: I0813 07:08:56.690763 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34620e139dfc94dffaad48ca9d3791bf-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"34620e139dfc94dffaad48ca9d3791bf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.691258 kubelet[2502]: I0813 07:08:56.690783 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0db3be4d2bde5030350e0b52be479c8e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"0db3be4d2bde5030350e0b52be479c8e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.691258 kubelet[2502]: I0813 07:08:56.690807 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0db3be4d2bde5030350e0b52be479c8e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a\" (UID: \"0db3be4d2bde5030350e0b52be479c8e\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.738599 kubelet[2502]: I0813 07:08:56.737761 2502 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.762748 kubelet[2502]: I0813 07:08:56.762699 2502 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.763395 kubelet[2502]: I0813 07:08:56.763068 2502 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:56.972389 kubelet[2502]: E0813 07:08:56.970970 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:56.972389 kubelet[2502]: E0813 07:08:56.971089 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:56.972389 kubelet[2502]: E0813 07:08:56.971228 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:57.453688 kubelet[2502]: I0813 07:08:57.453282 2502 apiserver.go:52] "Watching apiserver"
Aug 13 07:08:57.472635 sudo[2535]: pam_unix(sudo:session): session closed for user root
Aug 13 07:08:57.488646 kubelet[2502]: I0813 07:08:57.488557 2502 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 07:08:57.576070 kubelet[2502]: I0813 07:08:57.575277 2502 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a"
Aug 13 07:08:57.576070 kubelet[2502]: E0813 07:08:57.575390 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:08:57.576070 kubelet[2502]: I0813
07:08:57.575990 2502 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-9-a0c30e4e4a" Aug 13 07:08:57.588419 kubelet[2502]: W0813 07:08:57.587645 2502 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:08:57.588419 kubelet[2502]: E0813 07:08:57.587775 2502 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-9-a0c30e4e4a\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-9-a0c30e4e4a" Aug 13 07:08:57.588419 kubelet[2502]: E0813 07:08:57.588046 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:08:57.588923 kubelet[2502]: W0813 07:08:57.588884 2502 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:08:57.589004 kubelet[2502]: E0813 07:08:57.588971 2502 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-9-a0c30e4e4a\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a" Aug 13 07:08:57.589248 kubelet[2502]: E0813 07:08:57.589215 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:08:57.649046 kubelet[2502]: I0813 07:08:57.648875 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-9-a0c30e4e4a" podStartSLOduration=1.64885233 podStartE2EDuration="1.64885233s" podCreationTimestamp="2025-08-13 07:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-08-13 07:08:57.648379632 +0000 UTC m=+1.322611301" watchObservedRunningTime="2025-08-13 07:08:57.64885233 +0000 UTC m=+1.323084024" Aug 13 07:08:57.683251 kubelet[2502]: I0813 07:08:57.682479 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-9-a0c30e4e4a" podStartSLOduration=1.6823513380000001 podStartE2EDuration="1.682351338s" podCreationTimestamp="2025-08-13 07:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:08:57.681544251 +0000 UTC m=+1.355775920" watchObservedRunningTime="2025-08-13 07:08:57.682351338 +0000 UTC m=+1.356583006" Aug 13 07:08:57.683990 kubelet[2502]: I0813 07:08:57.682799 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-9-a0c30e4e4a" podStartSLOduration=1.682779684 podStartE2EDuration="1.682779684s" podCreationTimestamp="2025-08-13 07:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:08:57.665722727 +0000 UTC m=+1.339954396" watchObservedRunningTime="2025-08-13 07:08:57.682779684 +0000 UTC m=+1.357011355" Aug 13 07:08:58.577022 kubelet[2502]: E0813 07:08:58.576956 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:08:58.578199 kubelet[2502]: E0813 07:08:58.577679 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:08:59.590083 kubelet[2502]: E0813 07:08:59.587198 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:08:59.647653 sudo[1655]: pam_unix(sudo:session): session closed for user root Aug 13 07:08:59.654186 sshd[1652]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:59.659938 systemd[1]: sshd@6-64.227.105.74:22-139.178.89.65:58834.service: Deactivated successfully. Aug 13 07:08:59.663645 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:08:59.664112 systemd[1]: session-7.scope: Consumed 5.693s CPU time, 143.4M memory peak, 0B memory swap peak. Aug 13 07:08:59.666465 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:08:59.668478 systemd-logind[1454]: Removed session 7. Aug 13 07:08:59.820183 kubelet[2502]: I0813 07:08:59.820126 2502 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:08:59.824192 containerd[1475]: time="2025-08-13T07:08:59.824107004Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:08:59.826159 kubelet[2502]: I0813 07:08:59.825292 2502 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:09:00.707832 systemd[1]: Created slice kubepods-besteffort-pod9e36f9a3_c0c3_4b98_af91_9c3e4d67f15c.slice - libcontainer container kubepods-besteffort-pod9e36f9a3_c0c3_4b98_af91_9c3e4d67f15c.slice. 
Aug 13 07:09:00.721776 kubelet[2502]: I0813 07:09:00.719595 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e36f9a3-c0c3-4b98-af91-9c3e4d67f15c-kube-proxy\") pod \"kube-proxy-lgl6h\" (UID: \"9e36f9a3-c0c3-4b98-af91-9c3e4d67f15c\") " pod="kube-system/kube-proxy-lgl6h" Aug 13 07:09:00.721776 kubelet[2502]: I0813 07:09:00.719717 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e36f9a3-c0c3-4b98-af91-9c3e4d67f15c-xtables-lock\") pod \"kube-proxy-lgl6h\" (UID: \"9e36f9a3-c0c3-4b98-af91-9c3e4d67f15c\") " pod="kube-system/kube-proxy-lgl6h" Aug 13 07:09:00.721776 kubelet[2502]: I0813 07:09:00.721600 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e36f9a3-c0c3-4b98-af91-9c3e4d67f15c-lib-modules\") pod \"kube-proxy-lgl6h\" (UID: \"9e36f9a3-c0c3-4b98-af91-9c3e4d67f15c\") " pod="kube-system/kube-proxy-lgl6h" Aug 13 07:09:00.721776 kubelet[2502]: I0813 07:09:00.721680 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64wrm\" (UniqueName: \"kubernetes.io/projected/9e36f9a3-c0c3-4b98-af91-9c3e4d67f15c-kube-api-access-64wrm\") pod \"kube-proxy-lgl6h\" (UID: \"9e36f9a3-c0c3-4b98-af91-9c3e4d67f15c\") " pod="kube-system/kube-proxy-lgl6h" Aug 13 07:09:00.750761 systemd[1]: Created slice kubepods-burstable-pod9dd73a7b_11ee_4504_a654_6aed087799ac.slice - libcontainer container kubepods-burstable-pod9dd73a7b_11ee_4504_a654_6aed087799ac.slice. 
Aug 13 07:09:00.823510 kubelet[2502]: I0813 07:09:00.822556 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-run\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823510 kubelet[2502]: I0813 07:09:00.822630 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-hostproc\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823510 kubelet[2502]: I0813 07:09:00.822657 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-etc-cni-netd\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823510 kubelet[2502]: I0813 07:09:00.822682 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cni-path\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823510 kubelet[2502]: I0813 07:09:00.822709 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-lib-modules\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823510 kubelet[2502]: I0813 07:09:00.822731 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9dd73a7b-11ee-4504-a654-6aed087799ac-clustermesh-secrets\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823999 kubelet[2502]: I0813 07:09:00.822758 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-config-path\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823999 kubelet[2502]: I0813 07:09:00.822811 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9dd73a7b-11ee-4504-a654-6aed087799ac-hubble-tls\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823999 kubelet[2502]: I0813 07:09:00.822871 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-cgroup\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823999 kubelet[2502]: I0813 07:09:00.822896 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-xtables-lock\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823999 kubelet[2502]: I0813 07:09:00.822939 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-host-proc-sys-kernel\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.823999 kubelet[2502]: I0813 07:09:00.822962 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-bpf-maps\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.824265 kubelet[2502]: I0813 07:09:00.822987 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-host-proc-sys-net\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.824265 kubelet[2502]: I0813 07:09:00.823012 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xlrp\" (UniqueName: \"kubernetes.io/projected/9dd73a7b-11ee-4504-a654-6aed087799ac-kube-api-access-7xlrp\") pod \"cilium-268nh\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") " pod="kube-system/cilium-268nh" Aug 13 07:09:00.931675 kubelet[2502]: I0813 07:09:00.931425 2502 status_manager.go:890] "Failed to get status for pod" podUID="3ae614ce-8cea-42f7-bec1-3855c790bfa5" pod="kube-system/cilium-operator-6c4d7847fc-mf2nh" err="pods \"cilium-operator-6c4d7847fc-mf2nh\" is forbidden: User \"system:node:ci-4081.3.5-9-a0c30e4e4a\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.5-9-a0c30e4e4a' and this object" Aug 13 07:09:00.977966 systemd[1]: Created slice kubepods-besteffort-pod3ae614ce_8cea_42f7_bec1_3855c790bfa5.slice - libcontainer container 
kubepods-besteffort-pod3ae614ce_8cea_42f7_bec1_3855c790bfa5.slice. Aug 13 07:09:01.020792 kubelet[2502]: E0813 07:09:01.020160 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:01.021163 containerd[1475]: time="2025-08-13T07:09:01.021089941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lgl6h,Uid:9e36f9a3-c0c3-4b98-af91-9c3e4d67f15c,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:01.025700 kubelet[2502]: I0813 07:09:01.025437 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ae614ce-8cea-42f7-bec1-3855c790bfa5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mf2nh\" (UID: \"3ae614ce-8cea-42f7-bec1-3855c790bfa5\") " pod="kube-system/cilium-operator-6c4d7847fc-mf2nh" Aug 13 07:09:01.025700 kubelet[2502]: I0813 07:09:01.025502 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7q5j\" (UniqueName: \"kubernetes.io/projected/3ae614ce-8cea-42f7-bec1-3855c790bfa5-kube-api-access-l7q5j\") pod \"cilium-operator-6c4d7847fc-mf2nh\" (UID: \"3ae614ce-8cea-42f7-bec1-3855c790bfa5\") " pod="kube-system/cilium-operator-6c4d7847fc-mf2nh" Aug 13 07:09:01.060473 kubelet[2502]: E0813 07:09:01.059892 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:01.062923 containerd[1475]: time="2025-08-13T07:09:01.061872102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-268nh,Uid:9dd73a7b-11ee-4504-a654-6aed087799ac,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:01.092393 containerd[1475]: time="2025-08-13T07:09:01.091725561Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:01.092393 containerd[1475]: time="2025-08-13T07:09:01.091821906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:01.092393 containerd[1475]: time="2025-08-13T07:09:01.091858846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:01.092393 containerd[1475]: time="2025-08-13T07:09:01.092016679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:01.106765 containerd[1475]: time="2025-08-13T07:09:01.106183930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:01.107153 containerd[1475]: time="2025-08-13T07:09:01.107043191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:01.107153 containerd[1475]: time="2025-08-13T07:09:01.107083779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:01.108606 containerd[1475]: time="2025-08-13T07:09:01.107499532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:01.162796 systemd[1]: Started cri-containerd-a78dc18323562a3160bdc97c8f0473cd96f80c132b0b00ee0ef3f316735df6b8.scope - libcontainer container a78dc18323562a3160bdc97c8f0473cd96f80c132b0b00ee0ef3f316735df6b8. Aug 13 07:09:01.212737 systemd[1]: Started cri-containerd-f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d.scope - libcontainer container f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d. 
Aug 13 07:09:01.242081 containerd[1475]: time="2025-08-13T07:09:01.241010718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lgl6h,Uid:9e36f9a3-c0c3-4b98-af91-9c3e4d67f15c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a78dc18323562a3160bdc97c8f0473cd96f80c132b0b00ee0ef3f316735df6b8\"" Aug 13 07:09:01.246499 kubelet[2502]: E0813 07:09:01.244679 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:01.256403 containerd[1475]: time="2025-08-13T07:09:01.256100476Z" level=info msg="CreateContainer within sandbox \"a78dc18323562a3160bdc97c8f0473cd96f80c132b0b00ee0ef3f316735df6b8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:09:01.282541 containerd[1475]: time="2025-08-13T07:09:01.282427067Z" level=info msg="CreateContainer within sandbox \"a78dc18323562a3160bdc97c8f0473cd96f80c132b0b00ee0ef3f316735df6b8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3acb6970b799fbc542a721950cf89b1cb5959c972f49dfcf4bfa090e47a92837\"" Aug 13 07:09:01.285167 containerd[1475]: time="2025-08-13T07:09:01.285027776Z" level=info msg="StartContainer for \"3acb6970b799fbc542a721950cf89b1cb5959c972f49dfcf4bfa090e47a92837\"" Aug 13 07:09:01.290767 kubelet[2502]: E0813 07:09:01.290583 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:01.292393 containerd[1475]: time="2025-08-13T07:09:01.292014105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mf2nh,Uid:3ae614ce-8cea-42f7-bec1-3855c790bfa5,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:01.322036 containerd[1475]: time="2025-08-13T07:09:01.321884286Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-268nh,Uid:9dd73a7b-11ee-4504-a654-6aed087799ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\"" Aug 13 07:09:01.330532 kubelet[2502]: E0813 07:09:01.330028 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:01.335471 containerd[1475]: time="2025-08-13T07:09:01.335411524Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 07:09:01.374595 containerd[1475]: time="2025-08-13T07:09:01.374460561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:01.374863 containerd[1475]: time="2025-08-13T07:09:01.374566960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:01.375023 containerd[1475]: time="2025-08-13T07:09:01.374968342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:01.375397 containerd[1475]: time="2025-08-13T07:09:01.375287728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:01.375956 systemd[1]: Started cri-containerd-3acb6970b799fbc542a721950cf89b1cb5959c972f49dfcf4bfa090e47a92837.scope - libcontainer container 3acb6970b799fbc542a721950cf89b1cb5959c972f49dfcf4bfa090e47a92837. Aug 13 07:09:01.409441 systemd[1]: Started cri-containerd-352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18.scope - libcontainer container 352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18. 
Aug 13 07:09:01.438040 containerd[1475]: time="2025-08-13T07:09:01.437963563Z" level=info msg="StartContainer for \"3acb6970b799fbc542a721950cf89b1cb5959c972f49dfcf4bfa090e47a92837\" returns successfully" Aug 13 07:09:01.486738 containerd[1475]: time="2025-08-13T07:09:01.486665129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mf2nh,Uid:3ae614ce-8cea-42f7-bec1-3855c790bfa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\"" Aug 13 07:09:01.489523 kubelet[2502]: E0813 07:09:01.488713 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:01.605095 kubelet[2502]: E0813 07:09:01.604891 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:01.654230 kubelet[2502]: I0813 07:09:01.653740 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lgl6h" podStartSLOduration=1.6536924160000002 podStartE2EDuration="1.653692416s" podCreationTimestamp="2025-08-13 07:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:01.629639525 +0000 UTC m=+5.303871239" watchObservedRunningTime="2025-08-13 07:09:01.653692416 +0000 UTC m=+5.327924095" Aug 13 07:09:02.556149 kubelet[2502]: E0813 07:09:02.555671 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:02.601624 kubelet[2502]: E0813 07:09:02.601484 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:04.001469 systemd-resolved[1324]: Clock change detected. Flushing caches. Aug 13 07:09:04.002644 systemd-timesyncd[1338]: Contacted time server 23.186.168.128:123 (2.flatcar.pool.ntp.org). Aug 13 07:09:04.004280 systemd-timesyncd[1338]: Initial clock synchronization to Wed 2025-08-13 07:09:04.001271 UTC. Aug 13 07:09:04.755570 kubelet[2502]: E0813 07:09:04.755510 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:05.504291 kubelet[2502]: E0813 07:09:05.503972 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:06.695017 kubelet[2502]: E0813 07:09:06.694759 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:07.525247 kubelet[2502]: E0813 07:09:07.524891 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:07.836468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471477944.mount: Deactivated successfully. 
Aug 13 07:09:08.517706 kubelet[2502]: E0813 07:09:08.517598 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:10.486989 containerd[1475]: time="2025-08-13T07:09:10.486914734Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:10.490637 containerd[1475]: time="2025-08-13T07:09:10.490519448Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 07:09:10.494041 containerd[1475]: time="2025-08-13T07:09:10.493980695Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.267910961s" Aug 13 07:09:10.494461 containerd[1475]: time="2025-08-13T07:09:10.494276396Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 07:09:10.498945 containerd[1475]: time="2025-08-13T07:09:10.497763107Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 07:09:10.508111 containerd[1475]: time="2025-08-13T07:09:10.508032176Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:10.510580 containerd[1475]: time="2025-08-13T07:09:10.510083522Z" level=info msg="CreateContainer within sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:09:10.611601 containerd[1475]: time="2025-08-13T07:09:10.611414468Z" level=info msg="CreateContainer within sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\"" Aug 13 07:09:10.614776 containerd[1475]: time="2025-08-13T07:09:10.612684243Z" level=info msg="StartContainer for \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\"" Aug 13 07:09:10.744467 systemd[1]: Started cri-containerd-4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af.scope - libcontainer container 4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af. Aug 13 07:09:10.794387 containerd[1475]: time="2025-08-13T07:09:10.794042130Z" level=info msg="StartContainer for \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\" returns successfully" Aug 13 07:09:10.814734 systemd[1]: cri-containerd-4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af.scope: Deactivated successfully. Aug 13 07:09:10.892197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af-rootfs.mount: Deactivated successfully. 
Aug 13 07:09:10.901465 containerd[1475]: time="2025-08-13T07:09:10.896314387Z" level=info msg="shim disconnected" id=4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af namespace=k8s.io Aug 13 07:09:10.901781 containerd[1475]: time="2025-08-13T07:09:10.901469009Z" level=warning msg="cleaning up after shim disconnected" id=4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af namespace=k8s.io Aug 13 07:09:10.901781 containerd[1475]: time="2025-08-13T07:09:10.901505323Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:09:11.534488 kubelet[2502]: E0813 07:09:11.534161 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:09:11.541244 containerd[1475]: time="2025-08-13T07:09:11.540997247Z" level=info msg="CreateContainer within sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:09:11.560774 containerd[1475]: time="2025-08-13T07:09:11.560067009Z" level=info msg="CreateContainer within sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\"" Aug 13 07:09:11.563788 containerd[1475]: time="2025-08-13T07:09:11.562559383Z" level=info msg="StartContainer for \"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\"" Aug 13 07:09:11.609664 systemd[1]: Started cri-containerd-2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750.scope - libcontainer container 2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750. 
Aug 13 07:09:11.704060 containerd[1475]: time="2025-08-13T07:09:11.700611612Z" level=info msg="StartContainer for \"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\" returns successfully"
Aug 13 07:09:11.730133 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:09:11.730752 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:09:11.730920 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:09:11.739885 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:09:11.749779 systemd[1]: cri-containerd-2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750.scope: Deactivated successfully.
Aug 13 07:09:11.799268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234076619.mount: Deactivated successfully.
Aug 13 07:09:11.802885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:09:11.880841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750-rootfs.mount: Deactivated successfully.
Aug 13 07:09:11.882177 containerd[1475]: time="2025-08-13T07:09:11.882078076Z" level=info msg="shim disconnected" id=2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750 namespace=k8s.io
Aug 13 07:09:11.882897 containerd[1475]: time="2025-08-13T07:09:11.882488399Z" level=warning msg="cleaning up after shim disconnected" id=2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750 namespace=k8s.io
Aug 13 07:09:11.882897 containerd[1475]: time="2025-08-13T07:09:11.882522831Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:09:12.446208 update_engine[1456]: I20250813 07:09:12.446020 1456 update_attempter.cc:509] Updating boot flags...
Aug 13 07:09:12.519727 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3040)
Aug 13 07:09:12.559561 kubelet[2502]: E0813 07:09:12.558305 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:12.597002 containerd[1475]: time="2025-08-13T07:09:12.596781944Z" level=info msg="CreateContainer within sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 07:09:12.657282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025469376.mount: Deactivated successfully.
Aug 13 07:09:12.674595 containerd[1475]: time="2025-08-13T07:09:12.674521863Z" level=info msg="CreateContainer within sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\""
Aug 13 07:09:12.677351 containerd[1475]: time="2025-08-13T07:09:12.676267678Z" level=info msg="StartContainer for \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\""
Aug 13 07:09:12.715378 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3040)
Aug 13 07:09:12.855612 systemd[1]: Started cri-containerd-3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942.scope - libcontainer container 3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942.
Aug 13 07:09:12.907623 containerd[1475]: time="2025-08-13T07:09:12.907573740Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:12.907977 containerd[1475]: time="2025-08-13T07:09:12.907859304Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Aug 13 07:09:12.908695 containerd[1475]: time="2025-08-13T07:09:12.908410087Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:12.914793 containerd[1475]: time="2025-08-13T07:09:12.914727922Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.41691077s"
Aug 13 07:09:12.914793 containerd[1475]: time="2025-08-13T07:09:12.914792637Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 07:09:12.919006 containerd[1475]: time="2025-08-13T07:09:12.918951264Z" level=info msg="CreateContainer within sandbox \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 07:09:12.935774 containerd[1475]: time="2025-08-13T07:09:12.935566541Z" level=info msg="CreateContainer within sandbox \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\""
Aug 13 07:09:12.938441 containerd[1475]: time="2025-08-13T07:09:12.937756737Z" level=info msg="StartContainer for \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\""
Aug 13 07:09:12.945399 containerd[1475]: time="2025-08-13T07:09:12.945317602Z" level=info msg="StartContainer for \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\" returns successfully"
Aug 13 07:09:12.950970 systemd[1]: cri-containerd-3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942.scope: Deactivated successfully.
Aug 13 07:09:12.992899 systemd[1]: Started cri-containerd-f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73.scope - libcontainer container f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73.
Aug 13 07:09:13.056486 containerd[1475]: time="2025-08-13T07:09:13.056188949Z" level=info msg="StartContainer for \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\" returns successfully"
Aug 13 07:09:13.057806 containerd[1475]: time="2025-08-13T07:09:13.057696888Z" level=info msg="shim disconnected" id=3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942 namespace=k8s.io
Aug 13 07:09:13.057806 containerd[1475]: time="2025-08-13T07:09:13.057818272Z" level=warning msg="cleaning up after shim disconnected" id=3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942 namespace=k8s.io
Aug 13 07:09:13.058033 containerd[1475]: time="2025-08-13T07:09:13.057836405Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:09:13.572490 kubelet[2502]: E0813 07:09:13.572185 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:13.579499 containerd[1475]: time="2025-08-13T07:09:13.578072551Z" level=info msg="CreateContainer within sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 07:09:13.580325 kubelet[2502]: E0813 07:09:13.579170 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:13.596274 containerd[1475]: time="2025-08-13T07:09:13.595062723Z" level=info msg="CreateContainer within sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\""
Aug 13 07:09:13.597683 containerd[1475]: time="2025-08-13T07:09:13.597492843Z" level=info msg="StartContainer for \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\""
Aug 13 07:09:13.655550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942-rootfs.mount: Deactivated successfully.
Aug 13 07:09:13.705494 systemd[1]: Started cri-containerd-c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900.scope - libcontainer container c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900.
Aug 13 07:09:13.810110 containerd[1475]: time="2025-08-13T07:09:13.809826348Z" level=info msg="StartContainer for \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\" returns successfully"
Aug 13 07:09:13.810866 systemd[1]: cri-containerd-c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900.scope: Deactivated successfully.
Aug 13 07:09:13.821254 kubelet[2502]: I0813 07:09:13.820710 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mf2nh" podStartSLOduration=3.28645626 podStartE2EDuration="13.8206676s" podCreationTimestamp="2025-08-13 07:09:00 +0000 UTC" firstStartedPulling="2025-08-13 07:09:01.491549626 +0000 UTC m=+5.165781278" lastFinishedPulling="2025-08-13 07:09:12.915617543 +0000 UTC m=+15.699992618" observedRunningTime="2025-08-13 07:09:13.820488357 +0000 UTC m=+16.604863453" watchObservedRunningTime="2025-08-13 07:09:13.8206676 +0000 UTC m=+16.605042702"
Aug 13 07:09:13.886440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900-rootfs.mount: Deactivated successfully.
Aug 13 07:09:13.890848 containerd[1475]: time="2025-08-13T07:09:13.890761577Z" level=info msg="shim disconnected" id=c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900 namespace=k8s.io
Aug 13 07:09:13.890848 containerd[1475]: time="2025-08-13T07:09:13.890843787Z" level=warning msg="cleaning up after shim disconnected" id=c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900 namespace=k8s.io
Aug 13 07:09:13.890848 containerd[1475]: time="2025-08-13T07:09:13.890857604Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:09:14.590806 kubelet[2502]: E0813 07:09:14.590725 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:14.593258 kubelet[2502]: E0813 07:09:14.593206 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:14.597714 containerd[1475]: time="2025-08-13T07:09:14.597527418Z" level=info msg="CreateContainer within sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 07:09:14.628841 containerd[1475]: time="2025-08-13T07:09:14.626559464Z" level=info msg="CreateContainer within sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\""
Aug 13 07:09:14.630054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308669095.mount: Deactivated successfully.
Aug 13 07:09:14.636495 containerd[1475]: time="2025-08-13T07:09:14.635882414Z" level=info msg="StartContainer for \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\""
Aug 13 07:09:14.701605 systemd[1]: Started cri-containerd-ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65.scope - libcontainer container ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65.
Aug 13 07:09:14.741104 containerd[1475]: time="2025-08-13T07:09:14.741047394Z" level=info msg="StartContainer for \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\" returns successfully"
Aug 13 07:09:14.842392 systemd[1]: run-containerd-runc-k8s.io-ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65-runc.VEjOFJ.mount: Deactivated successfully.
Aug 13 07:09:15.006800 kubelet[2502]: I0813 07:09:15.005493 2502 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 07:09:15.070547 systemd[1]: Created slice kubepods-burstable-pod2ad0d47b_a894_4f78_9f93_85bb0db5f798.slice - libcontainer container kubepods-burstable-pod2ad0d47b_a894_4f78_9f93_85bb0db5f798.slice.
Aug 13 07:09:15.079700 systemd[1]: Created slice kubepods-burstable-poddc325ca4_a5ab_4236_a5df_e1c60da433fb.slice - libcontainer container kubepods-burstable-poddc325ca4_a5ab_4236_a5df_e1c60da433fb.slice.
Aug 13 07:09:15.127629 kubelet[2502]: I0813 07:09:15.127150 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z5n7\" (UniqueName: \"kubernetes.io/projected/2ad0d47b-a894-4f78-9f93-85bb0db5f798-kube-api-access-2z5n7\") pod \"coredns-668d6bf9bc-gnjq2\" (UID: \"2ad0d47b-a894-4f78-9f93-85bb0db5f798\") " pod="kube-system/coredns-668d6bf9bc-gnjq2"
Aug 13 07:09:15.127629 kubelet[2502]: I0813 07:09:15.127198 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jmms\" (UniqueName: \"kubernetes.io/projected/dc325ca4-a5ab-4236-a5df-e1c60da433fb-kube-api-access-9jmms\") pod \"coredns-668d6bf9bc-8rxgp\" (UID: \"dc325ca4-a5ab-4236-a5df-e1c60da433fb\") " pod="kube-system/coredns-668d6bf9bc-8rxgp"
Aug 13 07:09:15.127629 kubelet[2502]: I0813 07:09:15.127234 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc325ca4-a5ab-4236-a5df-e1c60da433fb-config-volume\") pod \"coredns-668d6bf9bc-8rxgp\" (UID: \"dc325ca4-a5ab-4236-a5df-e1c60da433fb\") " pod="kube-system/coredns-668d6bf9bc-8rxgp"
Aug 13 07:09:15.127629 kubelet[2502]: I0813 07:09:15.127256 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ad0d47b-a894-4f78-9f93-85bb0db5f798-config-volume\") pod \"coredns-668d6bf9bc-gnjq2\" (UID: \"2ad0d47b-a894-4f78-9f93-85bb0db5f798\") " pod="kube-system/coredns-668d6bf9bc-gnjq2"
Aug 13 07:09:15.377797 kubelet[2502]: E0813 07:09:15.377648 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:15.380722 containerd[1475]: time="2025-08-13T07:09:15.380293616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gnjq2,Uid:2ad0d47b-a894-4f78-9f93-85bb0db5f798,Namespace:kube-system,Attempt:0,}"
Aug 13 07:09:15.386521 kubelet[2502]: E0813 07:09:15.386472 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:15.387836 containerd[1475]: time="2025-08-13T07:09:15.387786916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8rxgp,Uid:dc325ca4-a5ab-4236-a5df-e1c60da433fb,Namespace:kube-system,Attempt:0,}"
Aug 13 07:09:15.604388 kubelet[2502]: E0813 07:09:15.603159 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:15.652634 kubelet[2502]: I0813 07:09:15.651456 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-268nh" podStartSLOduration=7.379225664 podStartE2EDuration="15.651434392s" podCreationTimestamp="2025-08-13 07:09:00 +0000 UTC" firstStartedPulling="2025-08-13 07:09:01.334517097 +0000 UTC m=+5.008748748" lastFinishedPulling="2025-08-13 07:09:10.496582406 +0000 UTC m=+13.280957476" observedRunningTime="2025-08-13 07:09:15.650283929 +0000 UTC m=+18.434659032" watchObservedRunningTime="2025-08-13 07:09:15.651434392 +0000 UTC m=+18.435809531"
Aug 13 07:09:16.608663 kubelet[2502]: E0813 07:09:16.608607 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:17.379603 systemd-networkd[1371]: cilium_host: Link UP
Aug 13 07:09:17.379879 systemd-networkd[1371]: cilium_net: Link UP
Aug 13 07:09:17.380072 systemd-networkd[1371]: cilium_net: Gained carrier
Aug 13 07:09:17.381107 systemd-networkd[1371]: cilium_host: Gained carrier
Aug 13 07:09:17.388525 systemd-networkd[1371]: cilium_net: Gained IPv6LL
Aug 13 07:09:17.570006 systemd-networkd[1371]: cilium_vxlan: Link UP
Aug 13 07:09:17.570019 systemd-networkd[1371]: cilium_vxlan: Gained carrier
Aug 13 07:09:17.611547 kubelet[2502]: E0813 07:09:17.611474 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:18.051256 kernel: NET: Registered PF_ALG protocol family
Aug 13 07:09:18.103584 systemd-networkd[1371]: cilium_host: Gained IPv6LL
Aug 13 07:09:19.148951 systemd-networkd[1371]: lxc_health: Link UP
Aug 13 07:09:19.157734 systemd-networkd[1371]: lxc_health: Gained carrier
Aug 13 07:09:19.511454 systemd-networkd[1371]: cilium_vxlan: Gained IPv6LL
Aug 13 07:09:19.560594 kernel: eth0: renamed from tmpd816e
Aug 13 07:09:19.557123 systemd-networkd[1371]: lxcd563d619a783: Link UP
Aug 13 07:09:19.569036 systemd-networkd[1371]: lxcd563d619a783: Gained carrier
Aug 13 07:09:19.604817 kernel: eth0: renamed from tmp2117c
Aug 13 07:09:19.611962 systemd-networkd[1371]: lxcf2f813072f62: Link UP
Aug 13 07:09:19.618801 systemd-networkd[1371]: lxcf2f813072f62: Gained carrier
Aug 13 07:09:19.954152 kubelet[2502]: E0813 07:09:19.954100 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:20.633034 kubelet[2502]: E0813 07:09:20.631413 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:21.111507 systemd-networkd[1371]: lxc_health: Gained IPv6LL
Aug 13 07:09:21.111895 systemd-networkd[1371]: lxcf2f813072f62: Gained IPv6LL
Aug 13 07:09:21.240274 systemd-networkd[1371]: lxcd563d619a783: Gained IPv6LL
Aug 13 07:09:21.634405 kubelet[2502]: E0813 07:09:21.633872 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:24.825089 containerd[1475]: time="2025-08-13T07:09:24.824935894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:09:24.825691 containerd[1475]: time="2025-08-13T07:09:24.825047164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:09:24.825691 containerd[1475]: time="2025-08-13T07:09:24.825092192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:09:24.826425 containerd[1475]: time="2025-08-13T07:09:24.825269870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:09:24.852956 containerd[1475]: time="2025-08-13T07:09:24.852659755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:09:24.856424 containerd[1475]: time="2025-08-13T07:09:24.853821095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:09:24.856424 containerd[1475]: time="2025-08-13T07:09:24.853901591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:09:24.856424 containerd[1475]: time="2025-08-13T07:09:24.854378118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:09:24.894588 systemd[1]: Started cri-containerd-2117ce75c9ffee83032f360c31f8582a0ecd7697fe7fc0ef7361be5afeeccc2f.scope - libcontainer container 2117ce75c9ffee83032f360c31f8582a0ecd7697fe7fc0ef7361be5afeeccc2f.
Aug 13 07:09:24.915489 systemd[1]: Started cri-containerd-d816e1ce634afcf11fb43695e9787411c74f839cd51d201b53ce765388346c23.scope - libcontainer container d816e1ce634afcf11fb43695e9787411c74f839cd51d201b53ce765388346c23.
Aug 13 07:09:24.979122 containerd[1475]: time="2025-08-13T07:09:24.979043709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gnjq2,Uid:2ad0d47b-a894-4f78-9f93-85bb0db5f798,Namespace:kube-system,Attempt:0,} returns sandbox id \"2117ce75c9ffee83032f360c31f8582a0ecd7697fe7fc0ef7361be5afeeccc2f\""
Aug 13 07:09:24.981719 kubelet[2502]: E0813 07:09:24.981677 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:24.990206 containerd[1475]: time="2025-08-13T07:09:24.989184572Z" level=info msg="CreateContainer within sandbox \"2117ce75c9ffee83032f360c31f8582a0ecd7697fe7fc0ef7361be5afeeccc2f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 07:09:25.013137 containerd[1475]: time="2025-08-13T07:09:25.013041632Z" level=info msg="CreateContainer within sandbox \"2117ce75c9ffee83032f360c31f8582a0ecd7697fe7fc0ef7361be5afeeccc2f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8f91958f85e57e6923021a91971957f12bd0ab27652cf7e56366d869093369d\""
Aug 13 07:09:25.014457 containerd[1475]: time="2025-08-13T07:09:25.014425734Z" level=info msg="StartContainer for \"a8f91958f85e57e6923021a91971957f12bd0ab27652cf7e56366d869093369d\""
Aug 13 07:09:25.087464 systemd[1]: Started cri-containerd-a8f91958f85e57e6923021a91971957f12bd0ab27652cf7e56366d869093369d.scope - libcontainer container a8f91958f85e57e6923021a91971957f12bd0ab27652cf7e56366d869093369d.
Aug 13 07:09:25.097233 containerd[1475]: time="2025-08-13T07:09:25.097014454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8rxgp,Uid:dc325ca4-a5ab-4236-a5df-e1c60da433fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d816e1ce634afcf11fb43695e9787411c74f839cd51d201b53ce765388346c23\""
Aug 13 07:09:25.098759 kubelet[2502]: E0813 07:09:25.098713 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:25.106818 containerd[1475]: time="2025-08-13T07:09:25.106759050Z" level=info msg="CreateContainer within sandbox \"d816e1ce634afcf11fb43695e9787411c74f839cd51d201b53ce765388346c23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 07:09:25.125174 containerd[1475]: time="2025-08-13T07:09:25.125098703Z" level=info msg="CreateContainer within sandbox \"d816e1ce634afcf11fb43695e9787411c74f839cd51d201b53ce765388346c23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f82c5c7c852f55cc7d1c2472e18ccd59bc598726f64167d36a24cd4a7e0b6c27\""
Aug 13 07:09:25.127660 containerd[1475]: time="2025-08-13T07:09:25.127606570Z" level=info msg="StartContainer for \"f82c5c7c852f55cc7d1c2472e18ccd59bc598726f64167d36a24cd4a7e0b6c27\""
Aug 13 07:09:25.154981 containerd[1475]: time="2025-08-13T07:09:25.154914313Z" level=info msg="StartContainer for \"a8f91958f85e57e6923021a91971957f12bd0ab27652cf7e56366d869093369d\" returns successfully"
Aug 13 07:09:25.183545 systemd[1]: Started cri-containerd-f82c5c7c852f55cc7d1c2472e18ccd59bc598726f64167d36a24cd4a7e0b6c27.scope - libcontainer container f82c5c7c852f55cc7d1c2472e18ccd59bc598726f64167d36a24cd4a7e0b6c27.
Aug 13 07:09:25.224949 containerd[1475]: time="2025-08-13T07:09:25.224893276Z" level=info msg="StartContainer for \"f82c5c7c852f55cc7d1c2472e18ccd59bc598726f64167d36a24cd4a7e0b6c27\" returns successfully"
Aug 13 07:09:25.647385 kubelet[2502]: E0813 07:09:25.647092 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:25.651515 kubelet[2502]: E0813 07:09:25.651478 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:25.666079 kubelet[2502]: I0813 07:09:25.665994 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8rxgp" podStartSLOduration=25.665967836 podStartE2EDuration="25.665967836s" podCreationTimestamp="2025-08-13 07:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:25.663691424 +0000 UTC m=+28.448066515" watchObservedRunningTime="2025-08-13 07:09:25.665967836 +0000 UTC m=+28.450342931"
Aug 13 07:09:25.702609 kubelet[2502]: I0813 07:09:25.702534 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gnjq2" podStartSLOduration=25.702515392 podStartE2EDuration="25.702515392s" podCreationTimestamp="2025-08-13 07:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:25.701666055 +0000 UTC m=+28.486041139" watchObservedRunningTime="2025-08-13 07:09:25.702515392 +0000 UTC m=+28.486890483"
Aug 13 07:09:26.654162 kubelet[2502]: E0813 07:09:26.654036 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:26.654162 kubelet[2502]: E0813 07:09:26.654081 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:27.657192 kubelet[2502]: E0813 07:09:27.655537 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:27.657192 kubelet[2502]: E0813 07:09:27.657065 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:09:38.580718 systemd[1]: Started sshd@7-64.227.105.74:22-139.178.89.65:47038.service - OpenSSH per-connection server daemon (139.178.89.65:47038).
Aug 13 07:09:38.647542 sshd[3896]: Accepted publickey for core from 139.178.89.65 port 47038 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:09:38.650063 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:38.658722 systemd-logind[1454]: New session 8 of user core.
Aug 13 07:09:38.672926 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 07:09:39.352644 sshd[3896]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:39.362968 systemd[1]: sshd@7-64.227.105.74:22-139.178.89.65:47038.service: Deactivated successfully.
Aug 13 07:09:39.366634 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 07:09:39.369798 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit.
Aug 13 07:09:39.371914 systemd-logind[1454]: Removed session 8.
Aug 13 07:09:44.373893 systemd[1]: Started sshd@8-64.227.105.74:22-139.178.89.65:36618.service - OpenSSH per-connection server daemon (139.178.89.65:36618).
Aug 13 07:09:44.420081 sshd[3912]: Accepted publickey for core from 139.178.89.65 port 36618 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:09:44.422256 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:44.427748 systemd-logind[1454]: New session 9 of user core.
Aug 13 07:09:44.433610 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 07:09:44.592074 sshd[3912]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:44.596610 systemd[1]: sshd@8-64.227.105.74:22-139.178.89.65:36618.service: Deactivated successfully.
Aug 13 07:09:44.598982 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 07:09:44.600151 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit.
Aug 13 07:09:44.601548 systemd-logind[1454]: Removed session 9.
Aug 13 07:09:49.611853 systemd[1]: Started sshd@9-64.227.105.74:22-139.178.89.65:57018.service - OpenSSH per-connection server daemon (139.178.89.65:57018).
Aug 13 07:09:49.661008 sshd[3926]: Accepted publickey for core from 139.178.89.65 port 57018 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:09:49.663129 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:49.669633 systemd-logind[1454]: New session 10 of user core.
Aug 13 07:09:49.677601 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 07:09:49.826928 sshd[3926]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:49.833541 systemd[1]: sshd@9-64.227.105.74:22-139.178.89.65:57018.service: Deactivated successfully.
Aug 13 07:09:49.836326 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 07:09:49.837453 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit.
Aug 13 07:09:49.839192 systemd-logind[1454]: Removed session 10.
Aug 13 07:09:54.851807 systemd[1]: Started sshd@10-64.227.105.74:22-139.178.89.65:57026.service - OpenSSH per-connection server daemon (139.178.89.65:57026).
Aug 13 07:09:54.903988 sshd[3939]: Accepted publickey for core from 139.178.89.65 port 57026 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:09:54.906409 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:54.913519 systemd-logind[1454]: New session 11 of user core.
Aug 13 07:09:54.919495 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 07:09:55.073265 sshd[3939]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:55.089770 systemd[1]: sshd@10-64.227.105.74:22-139.178.89.65:57026.service: Deactivated successfully.
Aug 13 07:09:55.092033 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 07:09:55.092824 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit.
Aug 13 07:09:55.099622 systemd[1]: Started sshd@11-64.227.105.74:22-139.178.89.65:57040.service - OpenSSH per-connection server daemon (139.178.89.65:57040).
Aug 13 07:09:55.102239 systemd-logind[1454]: Removed session 11.
Aug 13 07:09:55.156276 sshd[3953]: Accepted publickey for core from 139.178.89.65 port 57040 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:09:55.158451 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:55.164727 systemd-logind[1454]: New session 12 of user core.
Aug 13 07:09:55.171503 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 07:09:55.356561 sshd[3953]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:55.372596 systemd[1]: sshd@11-64.227.105.74:22-139.178.89.65:57040.service: Deactivated successfully.
Aug 13 07:09:55.377527 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 07:09:55.382804 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit.
Aug 13 07:09:55.393871 systemd[1]: Started sshd@12-64.227.105.74:22-139.178.89.65:57048.service - OpenSSH per-connection server daemon (139.178.89.65:57048).
Aug 13 07:09:55.397587 systemd-logind[1454]: Removed session 12.
Aug 13 07:09:55.453557 sshd[3964]: Accepted publickey for core from 139.178.89.65 port 57048 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:09:55.455631 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:09:55.461843 systemd-logind[1454]: New session 13 of user core.
Aug 13 07:09:55.473585 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 07:09:55.606670 sshd[3964]: pam_unix(sshd:session): session closed for user core
Aug 13 07:09:55.610833 systemd[1]: sshd@12-64.227.105.74:22-139.178.89.65:57048.service: Deactivated successfully.
Aug 13 07:09:55.614935 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 07:09:55.616669 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit.
Aug 13 07:09:55.618616 systemd-logind[1454]: Removed session 13.
Aug 13 07:10:00.626759 systemd[1]: Started sshd@13-64.227.105.74:22-139.178.89.65:57672.service - OpenSSH per-connection server daemon (139.178.89.65:57672).
Aug 13 07:10:00.682345 sshd[3980]: Accepted publickey for core from 139.178.89.65 port 57672 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:00.684456 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:00.692318 systemd-logind[1454]: New session 14 of user core.
Aug 13 07:10:00.697589 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 07:10:00.851645 sshd[3980]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:00.856814 systemd[1]: sshd@13-64.227.105.74:22-139.178.89.65:57672.service: Deactivated successfully.
Aug 13 07:10:00.860204 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 07:10:00.861793 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit.
Aug 13 07:10:00.862993 systemd-logind[1454]: Removed session 14.
Aug 13 07:10:05.874117 systemd[1]: Started sshd@14-64.227.105.74:22-139.178.89.65:57684.service - OpenSSH per-connection server daemon (139.178.89.65:57684).
Aug 13 07:10:05.931266 sshd[3996]: Accepted publickey for core from 139.178.89.65 port 57684 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:05.934401 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:05.942641 systemd-logind[1454]: New session 15 of user core.
Aug 13 07:10:05.952615 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 07:10:06.128405 sshd[3996]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:06.135991 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit.
Aug 13 07:10:06.136719 systemd[1]: sshd@14-64.227.105.74:22-139.178.89.65:57684.service: Deactivated successfully.
Aug 13 07:10:06.141670 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 07:10:06.147145 systemd-logind[1454]: Removed session 15.
Aug 13 07:10:11.151791 systemd[1]: Started sshd@15-64.227.105.74:22-139.178.89.65:47864.service - OpenSSH per-connection server daemon (139.178.89.65:47864).
Aug 13 07:10:11.198146 sshd[4008]: Accepted publickey for core from 139.178.89.65 port 47864 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:11.200771 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:11.206959 systemd-logind[1454]: New session 16 of user core.
Aug 13 07:10:11.211555 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 07:10:11.376935 sshd[4008]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:11.393832 systemd[1]: sshd@15-64.227.105.74:22-139.178.89.65:47864.service: Deactivated successfully.
Aug 13 07:10:11.398453 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 07:10:11.402796 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit.
Aug 13 07:10:11.410891 systemd[1]: Started sshd@16-64.227.105.74:22-139.178.89.65:47866.service - OpenSSH per-connection server daemon (139.178.89.65:47866).
Aug 13 07:10:11.413090 systemd-logind[1454]: Removed session 16.
Aug 13 07:10:11.434080 kubelet[2502]: E0813 07:10:11.434019 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:11.469854 sshd[4021]: Accepted publickey for core from 139.178.89.65 port 47866 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:11.472219 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:11.481813 systemd-logind[1454]: New session 17 of user core.
Aug 13 07:10:11.488595 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:10:11.874639 sshd[4021]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:11.885655 systemd[1]: sshd@16-64.227.105.74:22-139.178.89.65:47866.service: Deactivated successfully.
Aug 13 07:10:11.889084 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:10:11.892889 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:10:11.905871 systemd[1]: Started sshd@17-64.227.105.74:22-139.178.89.65:47868.service - OpenSSH per-connection server daemon (139.178.89.65:47868).
Aug 13 07:10:11.908457 systemd-logind[1454]: Removed session 17.
Aug 13 07:10:11.953276 sshd[4032]: Accepted publickey for core from 139.178.89.65 port 47868 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:11.955919 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:11.965421 systemd-logind[1454]: New session 18 of user core.
Aug 13 07:10:11.970590 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:10:12.884692 sshd[4032]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:12.905165 systemd[1]: sshd@17-64.227.105.74:22-139.178.89.65:47868.service: Deactivated successfully.
Aug 13 07:10:12.915193 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:10:12.919018 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:10:12.931860 systemd[1]: Started sshd@18-64.227.105.74:22-139.178.89.65:47878.service - OpenSSH per-connection server daemon (139.178.89.65:47878).
Aug 13 07:10:12.938116 systemd-logind[1454]: Removed session 18.
Aug 13 07:10:13.024048 sshd[4049]: Accepted publickey for core from 139.178.89.65 port 47878 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:13.026785 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:13.037271 systemd-logind[1454]: New session 19 of user core.
Aug 13 07:10:13.041710 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 07:10:13.436991 kubelet[2502]: E0813 07:10:13.436862 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:13.463083 sshd[4049]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:13.476791 systemd[1]: sshd@18-64.227.105.74:22-139.178.89.65:47878.service: Deactivated successfully.
Aug 13 07:10:13.481811 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:10:13.484698 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:10:13.493695 systemd[1]: Started sshd@19-64.227.105.74:22-139.178.89.65:47894.service - OpenSSH per-connection server daemon (139.178.89.65:47894).
Aug 13 07:10:13.498943 systemd-logind[1454]: Removed session 19.
Aug 13 07:10:13.555803 sshd[4061]: Accepted publickey for core from 139.178.89.65 port 47894 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:13.558805 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:13.569770 systemd-logind[1454]: New session 20 of user core.
Aug 13 07:10:13.577613 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 07:10:13.731397 sshd[4061]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:13.738476 systemd[1]: sshd@19-64.227.105.74:22-139.178.89.65:47894.service: Deactivated successfully.
Aug 13 07:10:13.741229 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 07:10:13.743115 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit.
Aug 13 07:10:13.745106 systemd-logind[1454]: Removed session 20.
Aug 13 07:10:16.433347 kubelet[2502]: E0813 07:10:16.433095 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:18.760091 systemd[1]: Started sshd@20-64.227.105.74:22-139.178.89.65:47904.service - OpenSSH per-connection server daemon (139.178.89.65:47904).
Aug 13 07:10:18.807953 sshd[4074]: Accepted publickey for core from 139.178.89.65 port 47904 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:18.810027 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:18.818322 systemd-logind[1454]: New session 21 of user core.
Aug 13 07:10:18.828614 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 07:10:19.014153 sshd[4074]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:19.022123 systemd[1]: sshd@20-64.227.105.74:22-139.178.89.65:47904.service: Deactivated successfully.
Aug 13 07:10:19.025869 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 07:10:19.030037 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit.
Aug 13 07:10:19.032941 systemd-logind[1454]: Removed session 21.
Aug 13 07:10:24.036863 systemd[1]: Started sshd@21-64.227.105.74:22-139.178.89.65:46426.service - OpenSSH per-connection server daemon (139.178.89.65:46426).
Aug 13 07:10:24.089606 sshd[4088]: Accepted publickey for core from 139.178.89.65 port 46426 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:24.092410 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:24.100452 systemd-logind[1454]: New session 22 of user core.
Aug 13 07:10:24.111617 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 07:10:24.270614 sshd[4088]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:24.275628 systemd[1]: sshd@21-64.227.105.74:22-139.178.89.65:46426.service: Deactivated successfully.
Aug 13 07:10:24.278699 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 07:10:24.281166 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit.
Aug 13 07:10:24.283165 systemd-logind[1454]: Removed session 22.
Aug 13 07:10:27.437323 kubelet[2502]: E0813 07:10:27.436926 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:29.293355 systemd[1]: Started sshd@22-64.227.105.74:22-139.178.89.65:35984.service - OpenSSH per-connection server daemon (139.178.89.65:35984).
Aug 13 07:10:29.357377 sshd[4101]: Accepted publickey for core from 139.178.89.65 port 35984 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:29.359719 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:29.366892 systemd-logind[1454]: New session 23 of user core.
Aug 13 07:10:29.373885 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 07:10:29.536805 sshd[4101]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:29.543290 systemd[1]: sshd@22-64.227.105.74:22-139.178.89.65:35984.service: Deactivated successfully.
Aug 13 07:10:29.547822 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 07:10:29.549604 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit.
Aug 13 07:10:29.551893 systemd-logind[1454]: Removed session 23.
Aug 13 07:10:34.563514 systemd[1]: Started sshd@23-64.227.105.74:22-139.178.89.65:35992.service - OpenSSH per-connection server daemon (139.178.89.65:35992).
Aug 13 07:10:34.644914 sshd[4116]: Accepted publickey for core from 139.178.89.65 port 35992 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:34.648022 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:34.656792 systemd-logind[1454]: New session 24 of user core.
Aug 13 07:10:34.669580 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 07:10:34.863588 sshd[4116]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:34.877521 systemd[1]: sshd@23-64.227.105.74:22-139.178.89.65:35992.service: Deactivated successfully.
Aug 13 07:10:34.882074 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 07:10:34.886475 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit.
Aug 13 07:10:34.895948 systemd[1]: Started sshd@24-64.227.105.74:22-139.178.89.65:35998.service - OpenSSH per-connection server daemon (139.178.89.65:35998).
Aug 13 07:10:34.900667 systemd-logind[1454]: Removed session 24.
Aug 13 07:10:34.967932 sshd[4129]: Accepted publickey for core from 139.178.89.65 port 35998 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:34.970733 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:34.979760 systemd-logind[1454]: New session 25 of user core.
Aug 13 07:10:34.985656 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 07:10:36.961879 containerd[1475]: time="2025-08-13T07:10:36.961792358Z" level=info msg="StopContainer for \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\" with timeout 30 (s)"
Aug 13 07:10:36.977027 containerd[1475]: time="2025-08-13T07:10:36.976872646Z" level=info msg="Stop container \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\" with signal terminated"
Aug 13 07:10:36.997213 systemd[1]: run-containerd-runc-k8s.io-ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65-runc.g66joN.mount: Deactivated successfully.
Aug 13 07:10:37.021244 containerd[1475]: time="2025-08-13T07:10:37.021112110Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 07:10:37.025482 systemd[1]: cri-containerd-f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73.scope: Deactivated successfully.
Aug 13 07:10:37.038141 containerd[1475]: time="2025-08-13T07:10:37.037731434Z" level=info msg="StopContainer for \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\" with timeout 2 (s)"
Aug 13 07:10:37.038724 containerd[1475]: time="2025-08-13T07:10:37.038560672Z" level=info msg="Stop container \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\" with signal terminated"
Aug 13 07:10:37.070577 systemd-networkd[1371]: lxc_health: Link DOWN
Aug 13 07:10:37.070588 systemd-networkd[1371]: lxc_health: Lost carrier
Aug 13 07:10:37.108513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73-rootfs.mount: Deactivated successfully.
Aug 13 07:10:37.109954 systemd[1]: cri-containerd-ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65.scope: Deactivated successfully.
Aug 13 07:10:37.111702 systemd[1]: cri-containerd-ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65.scope: Consumed 9.273s CPU time.
Aug 13 07:10:37.119996 containerd[1475]: time="2025-08-13T07:10:37.119900033Z" level=info msg="shim disconnected" id=f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73 namespace=k8s.io
Aug 13 07:10:37.120641 containerd[1475]: time="2025-08-13T07:10:37.120514051Z" level=warning msg="cleaning up after shim disconnected" id=f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73 namespace=k8s.io
Aug 13 07:10:37.120641 containerd[1475]: time="2025-08-13T07:10:37.120591046Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:10:37.160175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65-rootfs.mount: Deactivated successfully.
Aug 13 07:10:37.164764 containerd[1475]: time="2025-08-13T07:10:37.164675314Z" level=info msg="shim disconnected" id=ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65 namespace=k8s.io
Aug 13 07:10:37.165707 containerd[1475]: time="2025-08-13T07:10:37.165394055Z" level=warning msg="cleaning up after shim disconnected" id=ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65 namespace=k8s.io
Aug 13 07:10:37.165707 containerd[1475]: time="2025-08-13T07:10:37.165444001Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:10:37.182063 containerd[1475]: time="2025-08-13T07:10:37.181985249Z" level=info msg="StopContainer for \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\" returns successfully"
Aug 13 07:10:37.183271 containerd[1475]: time="2025-08-13T07:10:37.183176346Z" level=info msg="StopPodSandbox for \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\""
Aug 13 07:10:37.183271 containerd[1475]: time="2025-08-13T07:10:37.183256930Z" level=info msg="Container to stop \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:10:37.187766 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18-shm.mount: Deactivated successfully.
Aug 13 07:10:37.211420 systemd[1]: cri-containerd-352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18.scope: Deactivated successfully.
Aug 13 07:10:37.221153 containerd[1475]: time="2025-08-13T07:10:37.220834669Z" level=info msg="StopContainer for \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\" returns successfully"
Aug 13 07:10:37.233944 containerd[1475]: time="2025-08-13T07:10:37.233857803Z" level=info msg="StopPodSandbox for \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\""
Aug 13 07:10:37.236341 containerd[1475]: time="2025-08-13T07:10:37.234201428Z" level=info msg="Container to stop \"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:10:37.236547 containerd[1475]: time="2025-08-13T07:10:37.236206870Z" level=info msg="Container to stop \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:10:37.237063 containerd[1475]: time="2025-08-13T07:10:37.236692451Z" level=info msg="Container to stop \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:10:37.237063 containerd[1475]: time="2025-08-13T07:10:37.236752536Z" level=info msg="Container to stop \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:10:37.237063 containerd[1475]: time="2025-08-13T07:10:37.236781466Z" level=info msg="Container to stop \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:10:37.256070 systemd[1]: cri-containerd-f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d.scope: Deactivated successfully.
Aug 13 07:10:37.281330 containerd[1475]: time="2025-08-13T07:10:37.281098019Z" level=info msg="shim disconnected" id=352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18 namespace=k8s.io
Aug 13 07:10:37.281637 containerd[1475]: time="2025-08-13T07:10:37.281406374Z" level=warning msg="cleaning up after shim disconnected" id=352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18 namespace=k8s.io
Aug 13 07:10:37.281637 containerd[1475]: time="2025-08-13T07:10:37.281433497Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:10:37.306867 containerd[1475]: time="2025-08-13T07:10:37.306321712Z" level=info msg="shim disconnected" id=f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d namespace=k8s.io
Aug 13 07:10:37.306867 containerd[1475]: time="2025-08-13T07:10:37.306596613Z" level=warning msg="cleaning up after shim disconnected" id=f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d namespace=k8s.io
Aug 13 07:10:37.306867 containerd[1475]: time="2025-08-13T07:10:37.306609426Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:10:37.337313 containerd[1475]: time="2025-08-13T07:10:37.336709588Z" level=info msg="TearDown network for sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" successfully"
Aug 13 07:10:37.337313 containerd[1475]: time="2025-08-13T07:10:37.336769060Z" level=info msg="StopPodSandbox for \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" returns successfully"
Aug 13 07:10:37.342285 containerd[1475]: time="2025-08-13T07:10:37.341770875Z" level=info msg="TearDown network for sandbox \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\" successfully"
Aug 13 07:10:37.342285 containerd[1475]: time="2025-08-13T07:10:37.341856197Z" level=info msg="StopPodSandbox for \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\" returns successfully"
Aug 13 07:10:37.482379 kubelet[2502]: I0813 07:10:37.481581 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-xtables-lock\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.482379 kubelet[2502]: I0813 07:10:37.481661 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-cgroup\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.482379 kubelet[2502]: I0813 07:10:37.481703 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-bpf-maps\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.482379 kubelet[2502]: I0813 07:10:37.481731 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-run\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.482379 kubelet[2502]: I0813 07:10:37.481757 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-hostproc\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.482379 kubelet[2502]: I0813 07:10:37.481772 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cni-path\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.483084 kubelet[2502]: I0813 07:10:37.481790 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-host-proc-sys-net\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.483084 kubelet[2502]: I0813 07:10:37.481826 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ae614ce-8cea-42f7-bec1-3855c790bfa5-cilium-config-path\") pod \"3ae614ce-8cea-42f7-bec1-3855c790bfa5\" (UID: \"3ae614ce-8cea-42f7-bec1-3855c790bfa5\") "
Aug 13 07:10:37.483084 kubelet[2502]: I0813 07:10:37.481852 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xlrp\" (UniqueName: \"kubernetes.io/projected/9dd73a7b-11ee-4504-a654-6aed087799ac-kube-api-access-7xlrp\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.483084 kubelet[2502]: I0813 07:10:37.481867 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-host-proc-sys-kernel\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.483084 kubelet[2502]: I0813 07:10:37.481886 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-config-path\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.483084 kubelet[2502]: I0813 07:10:37.481901 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-lib-modules\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.483321 kubelet[2502]: I0813 07:10:37.481921 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9dd73a7b-11ee-4504-a654-6aed087799ac-clustermesh-secrets\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.483321 kubelet[2502]: I0813 07:10:37.481938 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-etc-cni-netd\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.483321 kubelet[2502]: I0813 07:10:37.481957 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9dd73a7b-11ee-4504-a654-6aed087799ac-hubble-tls\") pod \"9dd73a7b-11ee-4504-a654-6aed087799ac\" (UID: \"9dd73a7b-11ee-4504-a654-6aed087799ac\") "
Aug 13 07:10:37.483321 kubelet[2502]: I0813 07:10:37.481977 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7q5j\" (UniqueName: \"kubernetes.io/projected/3ae614ce-8cea-42f7-bec1-3855c790bfa5-kube-api-access-l7q5j\") pod \"3ae614ce-8cea-42f7-bec1-3855c790bfa5\" (UID: \"3ae614ce-8cea-42f7-bec1-3855c790bfa5\") "
Aug 13 07:10:37.484003 kubelet[2502]: I0813 07:10:37.483754 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:10:37.484003 kubelet[2502]: I0813 07:10:37.483905 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:10:37.484003 kubelet[2502]: I0813 07:10:37.483936 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:10:37.484003 kubelet[2502]: I0813 07:10:37.483961 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:10:37.484003 kubelet[2502]: I0813 07:10:37.484002 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-hostproc" (OuterVolumeSpecName: "hostproc") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:10:37.484718 kubelet[2502]: I0813 07:10:37.484024 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cni-path" (OuterVolumeSpecName: "cni-path") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:10:37.484718 kubelet[2502]: I0813 07:10:37.484062 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:10:37.486122 kubelet[2502]: I0813 07:10:37.486078 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:10:37.487259 kubelet[2502]: I0813 07:10:37.487122 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:10:37.490041 kubelet[2502]: I0813 07:10:37.489929 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:10:37.495646 kubelet[2502]: I0813 07:10:37.495557 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae614ce-8cea-42f7-bec1-3855c790bfa5-kube-api-access-l7q5j" (OuterVolumeSpecName: "kube-api-access-l7q5j") pod "3ae614ce-8cea-42f7-bec1-3855c790bfa5" (UID: "3ae614ce-8cea-42f7-bec1-3855c790bfa5"). InnerVolumeSpecName "kube-api-access-l7q5j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 07:10:37.498564 kubelet[2502]: I0813 07:10:37.498379 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 07:10:37.499064 kubelet[2502]: I0813 07:10:37.498995 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd73a7b-11ee-4504-a654-6aed087799ac-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 07:10:37.499893 kubelet[2502]: I0813 07:10:37.499559 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae614ce-8cea-42f7-bec1-3855c790bfa5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3ae614ce-8cea-42f7-bec1-3855c790bfa5" (UID: "3ae614ce-8cea-42f7-bec1-3855c790bfa5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 07:10:37.500186 kubelet[2502]: I0813 07:10:37.500101 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd73a7b-11ee-4504-a654-6aed087799ac-kube-api-access-7xlrp" (OuterVolumeSpecName: "kube-api-access-7xlrp") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "kube-api-access-7xlrp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 07:10:37.501708 kubelet[2502]: I0813 07:10:37.501665 2502 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd73a7b-11ee-4504-a654-6aed087799ac-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9dd73a7b-11ee-4504-a654-6aed087799ac" (UID: "9dd73a7b-11ee-4504-a654-6aed087799ac"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 07:10:37.561178 kubelet[2502]: E0813 07:10:37.561044 2502 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 07:10:37.582732 kubelet[2502]: I0813 07:10:37.582641 2502 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-lib-modules\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\""
Aug 13 07:10:37.582732 kubelet[2502]: I0813 07:10:37.582690 2502 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9dd73a7b-11ee-4504-a654-6aed087799ac-clustermesh-secrets\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\""
Aug 13 07:10:37.582732 kubelet[2502]: I0813 07:10:37.582703 2502 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-etc-cni-netd\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\""
Aug 13 07:10:37.582732 kubelet[2502]: I0813 07:10:37.582718 2502 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9dd73a7b-11ee-4504-a654-6aed087799ac-hubble-tls\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\""
Aug 13 07:10:37.582732 kubelet[2502]: I0813 07:10:37.582729 2502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l7q5j\" (UniqueName: \"kubernetes.io/projected/3ae614ce-8cea-42f7-bec1-3855c790bfa5-kube-api-access-l7q5j\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\""
Aug 13 07:10:37.582732 kubelet[2502]: I0813 07:10:37.582739 2502 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-cgroup\") on node 
\"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.582732 kubelet[2502]: I0813 07:10:37.582748 2502 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-xtables-lock\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.582732 kubelet[2502]: I0813 07:10:37.582757 2502 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-run\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.583179 kubelet[2502]: I0813 07:10:37.582765 2502 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-bpf-maps\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.583179 kubelet[2502]: I0813 07:10:37.582773 2502 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-hostproc\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.583179 kubelet[2502]: I0813 07:10:37.582781 2502 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-cni-path\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.583179 kubelet[2502]: I0813 07:10:37.582789 2502 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-host-proc-sys-net\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.583179 kubelet[2502]: I0813 07:10:37.582797 2502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xlrp\" (UniqueName: \"kubernetes.io/projected/9dd73a7b-11ee-4504-a654-6aed087799ac-kube-api-access-7xlrp\") on node 
\"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.583179 kubelet[2502]: I0813 07:10:37.582806 2502 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9dd73a7b-11ee-4504-a654-6aed087799ac-host-proc-sys-kernel\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.583179 kubelet[2502]: I0813 07:10:37.582814 2502 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ae614ce-8cea-42f7-bec1-3855c790bfa5-cilium-config-path\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.583179 kubelet[2502]: I0813 07:10:37.582825 2502 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9dd73a7b-11ee-4504-a654-6aed087799ac-cilium-config-path\") on node \"ci-4081.3.5-9-a0c30e4e4a\" DevicePath \"\"" Aug 13 07:10:37.868262 kubelet[2502]: I0813 07:10:37.866946 2502 scope.go:117] "RemoveContainer" containerID="f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73" Aug 13 07:10:37.874263 containerd[1475]: time="2025-08-13T07:10:37.873862836Z" level=info msg="RemoveContainer for \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\"" Aug 13 07:10:37.876492 systemd[1]: Removed slice kubepods-besteffort-pod3ae614ce_8cea_42f7_bec1_3855c790bfa5.slice - libcontainer container kubepods-besteffort-pod3ae614ce_8cea_42f7_bec1_3855c790bfa5.slice. 
Aug 13 07:10:37.881484 containerd[1475]: time="2025-08-13T07:10:37.881347013Z" level=info msg="RemoveContainer for \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\" returns successfully" Aug 13 07:10:37.883939 kubelet[2502]: I0813 07:10:37.881868 2502 scope.go:117] "RemoveContainer" containerID="f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73" Aug 13 07:10:37.893287 systemd[1]: Removed slice kubepods-burstable-pod9dd73a7b_11ee_4504_a654_6aed087799ac.slice - libcontainer container kubepods-burstable-pod9dd73a7b_11ee_4504_a654_6aed087799ac.slice. Aug 13 07:10:37.893470 systemd[1]: kubepods-burstable-pod9dd73a7b_11ee_4504_a654_6aed087799ac.slice: Consumed 9.412s CPU time. Aug 13 07:10:37.910113 containerd[1475]: time="2025-08-13T07:10:37.883565875Z" level=error msg="ContainerStatus for \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\": not found" Aug 13 07:10:37.910839 kubelet[2502]: E0813 07:10:37.910722 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\": not found" containerID="f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73" Aug 13 07:10:37.916536 kubelet[2502]: I0813 07:10:37.910857 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73"} err="failed to get container status \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2b0916d0d8b7a2a0c4b08c446b50bad76845260271da84ffb7dba29ca700b73\": not found" Aug 13 07:10:37.916536 kubelet[2502]: I0813 07:10:37.911075 
2502 scope.go:117] "RemoveContainer" containerID="ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65" Aug 13 07:10:37.922558 containerd[1475]: time="2025-08-13T07:10:37.922180123Z" level=info msg="RemoveContainer for \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\"" Aug 13 07:10:37.941373 containerd[1475]: time="2025-08-13T07:10:37.938876683Z" level=info msg="RemoveContainer for \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\" returns successfully" Aug 13 07:10:37.942101 kubelet[2502]: I0813 07:10:37.942070 2502 scope.go:117] "RemoveContainer" containerID="c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900" Aug 13 07:10:37.952056 containerd[1475]: time="2025-08-13T07:10:37.951963062Z" level=info msg="RemoveContainer for \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\"" Aug 13 07:10:37.955593 containerd[1475]: time="2025-08-13T07:10:37.955463655Z" level=info msg="RemoveContainer for \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\" returns successfully" Aug 13 07:10:37.955906 kubelet[2502]: I0813 07:10:37.955836 2502 scope.go:117] "RemoveContainer" containerID="3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942" Aug 13 07:10:37.958038 containerd[1475]: time="2025-08-13T07:10:37.957991004Z" level=info msg="RemoveContainer for \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\"" Aug 13 07:10:37.961938 containerd[1475]: time="2025-08-13T07:10:37.961867410Z" level=info msg="RemoveContainer for \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\" returns successfully" Aug 13 07:10:37.963566 kubelet[2502]: I0813 07:10:37.962956 2502 scope.go:117] "RemoveContainer" containerID="2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750" Aug 13 07:10:37.965052 containerd[1475]: time="2025-08-13T07:10:37.964861964Z" level=info msg="RemoveContainer for 
\"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\"" Aug 13 07:10:37.968041 containerd[1475]: time="2025-08-13T07:10:37.967986983Z" level=info msg="RemoveContainer for \"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\" returns successfully" Aug 13 07:10:37.969261 kubelet[2502]: I0813 07:10:37.968558 2502 scope.go:117] "RemoveContainer" containerID="4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af" Aug 13 07:10:37.972268 containerd[1475]: time="2025-08-13T07:10:37.971613403Z" level=info msg="RemoveContainer for \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\"" Aug 13 07:10:37.976362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18-rootfs.mount: Deactivated successfully. Aug 13 07:10:37.976515 systemd[1]: var-lib-kubelet-pods-3ae614ce\x2d8cea\x2d42f7\x2dbec1\x2d3855c790bfa5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl7q5j.mount: Deactivated successfully. Aug 13 07:10:37.976589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d-rootfs.mount: Deactivated successfully. Aug 13 07:10:37.976652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d-shm.mount: Deactivated successfully. Aug 13 07:10:37.976785 systemd[1]: var-lib-kubelet-pods-9dd73a7b\x2d11ee\x2d4504\x2da654\x2d6aed087799ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7xlrp.mount: Deactivated successfully. Aug 13 07:10:37.977089 systemd[1]: var-lib-kubelet-pods-9dd73a7b\x2d11ee\x2d4504\x2da654\x2d6aed087799ac-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 07:10:37.977346 systemd[1]: var-lib-kubelet-pods-9dd73a7b\x2d11ee\x2d4504\x2da654\x2d6aed087799ac-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Aug 13 07:10:37.983771 containerd[1475]: time="2025-08-13T07:10:37.983175722Z" level=info msg="RemoveContainer for \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\" returns successfully" Aug 13 07:10:37.983921 kubelet[2502]: I0813 07:10:37.983727 2502 scope.go:117] "RemoveContainer" containerID="ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65" Aug 13 07:10:37.984343 containerd[1475]: time="2025-08-13T07:10:37.984263764Z" level=error msg="ContainerStatus for \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\": not found" Aug 13 07:10:37.984572 kubelet[2502]: E0813 07:10:37.984519 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\": not found" containerID="ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65" Aug 13 07:10:37.984802 kubelet[2502]: I0813 07:10:37.984578 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65"} err="failed to get container status \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba948f55048732f292b472f0da1f6d13238f5c253be5245e4b54ae38adb86d65\": not found" Aug 13 07:10:37.984802 kubelet[2502]: I0813 07:10:37.984623 2502 scope.go:117] "RemoveContainer" containerID="c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900" Aug 13 07:10:37.985470 containerd[1475]: time="2025-08-13T07:10:37.985426048Z" level=error msg="ContainerStatus for \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\" failed" error="rpc error: code 
= NotFound desc = an error occurred when try to find container \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\": not found" Aug 13 07:10:37.985835 kubelet[2502]: E0813 07:10:37.985799 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\": not found" containerID="c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900" Aug 13 07:10:37.985916 kubelet[2502]: I0813 07:10:37.985851 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900"} err="failed to get container status \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8b22161205b2ca407a269cb4d57295ac46cc0a4839ee4941e590f207da98900\": not found" Aug 13 07:10:37.985916 kubelet[2502]: I0813 07:10:37.985885 2502 scope.go:117] "RemoveContainer" containerID="3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942" Aug 13 07:10:37.986209 containerd[1475]: time="2025-08-13T07:10:37.986145761Z" level=error msg="ContainerStatus for \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\": not found" Aug 13 07:10:37.986458 kubelet[2502]: E0813 07:10:37.986414 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\": not found" containerID="3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942" Aug 13 07:10:37.986525 kubelet[2502]: I0813 07:10:37.986463 2502 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942"} err="failed to get container status \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fee1bb409b3e2f34cfaa089975ec720f2ef54058e5e5e8d65ff64a5999cf942\": not found" Aug 13 07:10:37.986525 kubelet[2502]: I0813 07:10:37.986496 2502 scope.go:117] "RemoveContainer" containerID="2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750" Aug 13 07:10:37.987290 containerd[1475]: time="2025-08-13T07:10:37.987190110Z" level=error msg="ContainerStatus for \"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\": not found" Aug 13 07:10:37.987659 kubelet[2502]: E0813 07:10:37.987443 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\": not found" containerID="2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750" Aug 13 07:10:37.987659 kubelet[2502]: I0813 07:10:37.987474 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750"} err="failed to get container status \"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f5eaedb578198462bf6d3fd38298c6531704d08d74e72f8dbc2d1e831d31750\": not found" Aug 13 07:10:37.987659 kubelet[2502]: I0813 07:10:37.987499 2502 scope.go:117] "RemoveContainer" 
containerID="4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af" Aug 13 07:10:37.987867 containerd[1475]: time="2025-08-13T07:10:37.987821986Z" level=error msg="ContainerStatus for \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\": not found" Aug 13 07:10:37.988033 kubelet[2502]: E0813 07:10:37.988001 2502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\": not found" containerID="4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af" Aug 13 07:10:37.988111 kubelet[2502]: I0813 07:10:37.988040 2502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af"} err="failed to get container status \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a98f3ea325dfd326c36f62513c91c986707d2da56e1129fb2be5d55c370c8af\": not found" Aug 13 07:10:38.845817 sshd[4129]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:38.865748 systemd[1]: Started sshd@25-64.227.105.74:22-139.178.89.65:36008.service - OpenSSH per-connection server daemon (139.178.89.65:36008). Aug 13 07:10:38.866462 systemd[1]: sshd@24-64.227.105.74:22-139.178.89.65:35998.service: Deactivated successfully. Aug 13 07:10:38.875209 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 07:10:38.875470 systemd[1]: session-25.scope: Consumed 1.177s CPU time. Aug 13 07:10:38.877999 systemd-logind[1454]: Session 25 logged out. Waiting for processes to exit. Aug 13 07:10:38.880774 systemd-logind[1454]: Removed session 25. 
Aug 13 07:10:38.936153 sshd[4289]: Accepted publickey for core from 139.178.89.65 port 36008 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:10:38.939111 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:38.947370 systemd-logind[1454]: New session 26 of user core. Aug 13 07:10:38.953533 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 07:10:39.436842 kubelet[2502]: I0813 07:10:39.436784 2502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae614ce-8cea-42f7-bec1-3855c790bfa5" path="/var/lib/kubelet/pods/3ae614ce-8cea-42f7-bec1-3855c790bfa5/volumes" Aug 13 07:10:39.439252 kubelet[2502]: I0813 07:10:39.438039 2502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dd73a7b-11ee-4504-a654-6aed087799ac" path="/var/lib/kubelet/pods/9dd73a7b-11ee-4504-a654-6aed087799ac/volumes" Aug 13 07:10:39.694001 sshd[4289]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:39.708498 systemd[1]: sshd@25-64.227.105.74:22-139.178.89.65:36008.service: Deactivated successfully. Aug 13 07:10:39.713979 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 07:10:39.718819 systemd-logind[1454]: Session 26 logged out. Waiting for processes to exit. Aug 13 07:10:39.730802 systemd[1]: Started sshd@26-64.227.105.74:22-139.178.89.65:36456.service - OpenSSH per-connection server daemon (139.178.89.65:36456). Aug 13 07:10:39.734302 systemd-logind[1454]: Removed session 26. 
Aug 13 07:10:39.753702 kubelet[2502]: I0813 07:10:39.751858 2502 memory_manager.go:355] "RemoveStaleState removing state" podUID="9dd73a7b-11ee-4504-a654-6aed087799ac" containerName="cilium-agent" Aug 13 07:10:39.753702 kubelet[2502]: I0813 07:10:39.751900 2502 memory_manager.go:355] "RemoveStaleState removing state" podUID="3ae614ce-8cea-42f7-bec1-3855c790bfa5" containerName="cilium-operator" Aug 13 07:10:39.774787 systemd[1]: Created slice kubepods-burstable-pod82d33246_dfbe_4941_921a_09a223460a25.slice - libcontainer container kubepods-burstable-pod82d33246_dfbe_4941_921a_09a223460a25.slice. Aug 13 07:10:39.813553 sshd[4302]: Accepted publickey for core from 139.178.89.65 port 36456 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:10:39.816169 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:39.825939 systemd-logind[1454]: New session 27 of user core. Aug 13 07:10:39.834633 systemd[1]: Started session-27.scope - Session 27 of User core. 
Aug 13 07:10:39.900028 kubelet[2502]: I0813 07:10:39.899265 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82d33246-dfbe-4941-921a-09a223460a25-host-proc-sys-net\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900028 kubelet[2502]: I0813 07:10:39.899327 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82d33246-dfbe-4941-921a-09a223460a25-bpf-maps\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900028 kubelet[2502]: I0813 07:10:39.899364 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/82d33246-dfbe-4941-921a-09a223460a25-cilium-ipsec-secrets\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900028 kubelet[2502]: I0813 07:10:39.899392 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82d33246-dfbe-4941-921a-09a223460a25-hubble-tls\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900028 kubelet[2502]: I0813 07:10:39.899422 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82d33246-dfbe-4941-921a-09a223460a25-lib-modules\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900028 kubelet[2502]: I0813 07:10:39.899450 2502 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xhc\" (UniqueName: \"kubernetes.io/projected/82d33246-dfbe-4941-921a-09a223460a25-kube-api-access-95xhc\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900517 kubelet[2502]: I0813 07:10:39.899476 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82d33246-dfbe-4941-921a-09a223460a25-cni-path\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900517 kubelet[2502]: I0813 07:10:39.899503 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82d33246-dfbe-4941-921a-09a223460a25-etc-cni-netd\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900517 kubelet[2502]: I0813 07:10:39.899530 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82d33246-dfbe-4941-921a-09a223460a25-xtables-lock\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900517 kubelet[2502]: I0813 07:10:39.899559 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82d33246-dfbe-4941-921a-09a223460a25-cilium-cgroup\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900517 kubelet[2502]: I0813 07:10:39.899583 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/82d33246-dfbe-4941-921a-09a223460a25-host-proc-sys-kernel\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900517 kubelet[2502]: I0813 07:10:39.899612 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82d33246-dfbe-4941-921a-09a223460a25-clustermesh-secrets\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900958 kubelet[2502]: I0813 07:10:39.899642 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82d33246-dfbe-4941-921a-09a223460a25-cilium-config-path\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900958 kubelet[2502]: I0813 07:10:39.899669 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82d33246-dfbe-4941-921a-09a223460a25-cilium-run\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.900958 kubelet[2502]: I0813 07:10:39.899713 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82d33246-dfbe-4941-921a-09a223460a25-hostproc\") pod \"cilium-hbwdn\" (UID: \"82d33246-dfbe-4941-921a-09a223460a25\") " pod="kube-system/cilium-hbwdn" Aug 13 07:10:39.906191 sshd[4302]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:39.917613 systemd[1]: sshd@26-64.227.105.74:22-139.178.89.65:36456.service: Deactivated successfully. Aug 13 07:10:39.921722 systemd[1]: session-27.scope: Deactivated successfully. 
Aug 13 07:10:39.925079 systemd-logind[1454]: Session 27 logged out. Waiting for processes to exit.
Aug 13 07:10:39.931789 systemd[1]: Started sshd@27-64.227.105.74:22-139.178.89.65:36458.service - OpenSSH per-connection server daemon (139.178.89.65:36458).
Aug 13 07:10:39.934725 systemd-logind[1454]: Removed session 27.
Aug 13 07:10:39.981578 sshd[4310]: Accepted publickey for core from 139.178.89.65 port 36458 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:39.984007 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:39.992500 systemd-logind[1454]: New session 28 of user core.
Aug 13 07:10:39.998588 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 07:10:40.083620 kubelet[2502]: E0813 07:10:40.083571 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:40.087665 containerd[1475]: time="2025-08-13T07:10:40.087067869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hbwdn,Uid:82d33246-dfbe-4941-921a-09a223460a25,Namespace:kube-system,Attempt:0,}"
Aug 13 07:10:40.111652 kubelet[2502]: I0813 07:10:40.111595 2502 setters.go:602] "Node became not ready" node="ci-4081.3.5-9-a0c30e4e4a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T07:10:40Z","lastTransitionTime":"2025-08-13T07:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 07:10:40.157827 containerd[1475]: time="2025-08-13T07:10:40.153341777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:10:40.157827 containerd[1475]: time="2025-08-13T07:10:40.153424718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:10:40.157827 containerd[1475]: time="2025-08-13T07:10:40.153863845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:10:40.157827 containerd[1475]: time="2025-08-13T07:10:40.157292016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:10:40.197688 systemd[1]: Started cri-containerd-63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a.scope - libcontainer container 63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a.
Aug 13 07:10:40.261860 containerd[1475]: time="2025-08-13T07:10:40.261267330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hbwdn,Uid:82d33246-dfbe-4941-921a-09a223460a25,Namespace:kube-system,Attempt:0,} returns sandbox id \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\""
Aug 13 07:10:40.262767 kubelet[2502]: E0813 07:10:40.262618 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:40.267338 containerd[1475]: time="2025-08-13T07:10:40.267099734Z" level=info msg="CreateContainer within sandbox \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 07:10:40.279604 containerd[1475]: time="2025-08-13T07:10:40.279510255Z" level=info msg="CreateContainer within sandbox \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a39a29988aa30218ffed4eb253ac31044c3a744b38ffa58b1d329b90b5e1f1a8\""
Aug 13 07:10:40.281269 containerd[1475]: time="2025-08-13T07:10:40.280528441Z" level=info msg="StartContainer for \"a39a29988aa30218ffed4eb253ac31044c3a744b38ffa58b1d329b90b5e1f1a8\""
Aug 13 07:10:40.322561 systemd[1]: Started cri-containerd-a39a29988aa30218ffed4eb253ac31044c3a744b38ffa58b1d329b90b5e1f1a8.scope - libcontainer container a39a29988aa30218ffed4eb253ac31044c3a744b38ffa58b1d329b90b5e1f1a8.
Aug 13 07:10:40.367648 containerd[1475]: time="2025-08-13T07:10:40.367503004Z" level=info msg="StartContainer for \"a39a29988aa30218ffed4eb253ac31044c3a744b38ffa58b1d329b90b5e1f1a8\" returns successfully"
Aug 13 07:10:40.380443 systemd[1]: cri-containerd-a39a29988aa30218ffed4eb253ac31044c3a744b38ffa58b1d329b90b5e1f1a8.scope: Deactivated successfully.
Aug 13 07:10:40.426410 containerd[1475]: time="2025-08-13T07:10:40.426046714Z" level=info msg="shim disconnected" id=a39a29988aa30218ffed4eb253ac31044c3a744b38ffa58b1d329b90b5e1f1a8 namespace=k8s.io
Aug 13 07:10:40.426410 containerd[1475]: time="2025-08-13T07:10:40.426127001Z" level=warning msg="cleaning up after shim disconnected" id=a39a29988aa30218ffed4eb253ac31044c3a744b38ffa58b1d329b90b5e1f1a8 namespace=k8s.io
Aug 13 07:10:40.426410 containerd[1475]: time="2025-08-13T07:10:40.426141013Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:10:40.447433 containerd[1475]: time="2025-08-13T07:10:40.447335638Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:10:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 07:10:40.897751 kubelet[2502]: E0813 07:10:40.896409 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:40.903399 containerd[1475]: time="2025-08-13T07:10:40.902332965Z" level=info msg="CreateContainer within sandbox \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 07:10:40.917668 containerd[1475]: time="2025-08-13T07:10:40.917596256Z" level=info msg="CreateContainer within sandbox \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a08c860404c36c3607dd65c80e96771419df6e27dedf38152ef1f338725ce5d8\""
Aug 13 07:10:40.920263 containerd[1475]: time="2025-08-13T07:10:40.918578748Z" level=info msg="StartContainer for \"a08c860404c36c3607dd65c80e96771419df6e27dedf38152ef1f338725ce5d8\""
Aug 13 07:10:40.964634 systemd[1]: Started cri-containerd-a08c860404c36c3607dd65c80e96771419df6e27dedf38152ef1f338725ce5d8.scope - libcontainer container a08c860404c36c3607dd65c80e96771419df6e27dedf38152ef1f338725ce5d8.
Aug 13 07:10:41.010055 containerd[1475]: time="2025-08-13T07:10:41.009849501Z" level=info msg="StartContainer for \"a08c860404c36c3607dd65c80e96771419df6e27dedf38152ef1f338725ce5d8\" returns successfully"
Aug 13 07:10:41.030715 systemd[1]: cri-containerd-a08c860404c36c3607dd65c80e96771419df6e27dedf38152ef1f338725ce5d8.scope: Deactivated successfully.
Aug 13 07:10:41.069436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a08c860404c36c3607dd65c80e96771419df6e27dedf38152ef1f338725ce5d8-rootfs.mount: Deactivated successfully.
Aug 13 07:10:41.074104 containerd[1475]: time="2025-08-13T07:10:41.073878131Z" level=info msg="shim disconnected" id=a08c860404c36c3607dd65c80e96771419df6e27dedf38152ef1f338725ce5d8 namespace=k8s.io
Aug 13 07:10:41.074626 containerd[1475]: time="2025-08-13T07:10:41.074073465Z" level=warning msg="cleaning up after shim disconnected" id=a08c860404c36c3607dd65c80e96771419df6e27dedf38152ef1f338725ce5d8 namespace=k8s.io
Aug 13 07:10:41.074626 containerd[1475]: time="2025-08-13T07:10:41.074437487Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:10:41.899358 systemd[1]: Started sshd@28-64.227.105.74:22-202.53.94.242:54594.service - OpenSSH per-connection server daemon (202.53.94.242:54594).
Aug 13 07:10:41.908298 kubelet[2502]: E0813 07:10:41.906479 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:41.919194 containerd[1475]: time="2025-08-13T07:10:41.918589852Z" level=info msg="CreateContainer within sandbox \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 07:10:41.957872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340157613.mount: Deactivated successfully.
Aug 13 07:10:41.961471 containerd[1475]: time="2025-08-13T07:10:41.959602201Z" level=info msg="CreateContainer within sandbox \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd944d68d1d8222cfed8dee8abf2e747c22a75348469b43991018b57be11508d\""
Aug 13 07:10:41.961471 containerd[1475]: time="2025-08-13T07:10:41.961267212Z" level=info msg="StartContainer for \"cd944d68d1d8222cfed8dee8abf2e747c22a75348469b43991018b57be11508d\""
Aug 13 07:10:42.005692 systemd[1]: Started cri-containerd-cd944d68d1d8222cfed8dee8abf2e747c22a75348469b43991018b57be11508d.scope - libcontainer container cd944d68d1d8222cfed8dee8abf2e747c22a75348469b43991018b57be11508d.
Aug 13 07:10:42.055821 containerd[1475]: time="2025-08-13T07:10:42.055744101Z" level=info msg="StartContainer for \"cd944d68d1d8222cfed8dee8abf2e747c22a75348469b43991018b57be11508d\" returns successfully"
Aug 13 07:10:42.069614 systemd[1]: cri-containerd-cd944d68d1d8222cfed8dee8abf2e747c22a75348469b43991018b57be11508d.scope: Deactivated successfully.
Aug 13 07:10:42.108442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd944d68d1d8222cfed8dee8abf2e747c22a75348469b43991018b57be11508d-rootfs.mount: Deactivated successfully.
Aug 13 07:10:42.113391 containerd[1475]: time="2025-08-13T07:10:42.113092666Z" level=info msg="shim disconnected" id=cd944d68d1d8222cfed8dee8abf2e747c22a75348469b43991018b57be11508d namespace=k8s.io
Aug 13 07:10:42.113391 containerd[1475]: time="2025-08-13T07:10:42.113172046Z" level=warning msg="cleaning up after shim disconnected" id=cd944d68d1d8222cfed8dee8abf2e747c22a75348469b43991018b57be11508d namespace=k8s.io
Aug 13 07:10:42.113391 containerd[1475]: time="2025-08-13T07:10:42.113182091Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:10:42.563258 kubelet[2502]: E0813 07:10:42.563131 2502 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 07:10:42.913045 kubelet[2502]: E0813 07:10:42.911651 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:42.921150 containerd[1475]: time="2025-08-13T07:10:42.920797372Z" level=info msg="CreateContainer within sandbox \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 07:10:42.944435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200210772.mount: Deactivated successfully.
Aug 13 07:10:42.954319 containerd[1475]: time="2025-08-13T07:10:42.953865617Z" level=info msg="CreateContainer within sandbox \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2346bf475f04410420f4a1ab44448e04e2e9a74cfc8201cd1c20ad17d4063c20\""
Aug 13 07:10:42.954945 containerd[1475]: time="2025-08-13T07:10:42.954893151Z" level=info msg="StartContainer for \"2346bf475f04410420f4a1ab44448e04e2e9a74cfc8201cd1c20ad17d4063c20\""
Aug 13 07:10:42.998590 systemd[1]: Started cri-containerd-2346bf475f04410420f4a1ab44448e04e2e9a74cfc8201cd1c20ad17d4063c20.scope - libcontainer container 2346bf475f04410420f4a1ab44448e04e2e9a74cfc8201cd1c20ad17d4063c20.
Aug 13 07:10:43.043106 systemd[1]: cri-containerd-2346bf475f04410420f4a1ab44448e04e2e9a74cfc8201cd1c20ad17d4063c20.scope: Deactivated successfully.
Aug 13 07:10:43.045677 containerd[1475]: time="2025-08-13T07:10:43.045626228Z" level=info msg="StartContainer for \"2346bf475f04410420f4a1ab44448e04e2e9a74cfc8201cd1c20ad17d4063c20\" returns successfully"
Aug 13 07:10:43.074261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2346bf475f04410420f4a1ab44448e04e2e9a74cfc8201cd1c20ad17d4063c20-rootfs.mount: Deactivated successfully.
Aug 13 07:10:43.077791 containerd[1475]: time="2025-08-13T07:10:43.077715095Z" level=info msg="shim disconnected" id=2346bf475f04410420f4a1ab44448e04e2e9a74cfc8201cd1c20ad17d4063c20 namespace=k8s.io
Aug 13 07:10:43.078357 containerd[1475]: time="2025-08-13T07:10:43.078076207Z" level=warning msg="cleaning up after shim disconnected" id=2346bf475f04410420f4a1ab44448e04e2e9a74cfc8201cd1c20ad17d4063c20 namespace=k8s.io
Aug 13 07:10:43.078357 containerd[1475]: time="2025-08-13T07:10:43.078104080Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:10:43.919413 kubelet[2502]: E0813 07:10:43.919353 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:43.926360 containerd[1475]: time="2025-08-13T07:10:43.925407590Z" level=info msg="CreateContainer within sandbox \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 07:10:43.969483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910232252.mount: Deactivated successfully.
Aug 13 07:10:43.974383 containerd[1475]: time="2025-08-13T07:10:43.974165949Z" level=info msg="CreateContainer within sandbox \"63cb6eeb3418e727ef9d2239de32a3dd2d551e4c68ed64cf2403167899fc017a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"61991a05d71e21ac4c6c03255b4a1f23a8c1d804aeb5b99a6d6ee5ea0050ec55\""
Aug 13 07:10:43.979258 containerd[1475]: time="2025-08-13T07:10:43.975132701Z" level=info msg="StartContainer for \"61991a05d71e21ac4c6c03255b4a1f23a8c1d804aeb5b99a6d6ee5ea0050ec55\""
Aug 13 07:10:44.065626 systemd[1]: run-containerd-runc-k8s.io-61991a05d71e21ac4c6c03255b4a1f23a8c1d804aeb5b99a6d6ee5ea0050ec55-runc.PsTRrW.mount: Deactivated successfully.
Aug 13 07:10:44.080519 systemd[1]: Started cri-containerd-61991a05d71e21ac4c6c03255b4a1f23a8c1d804aeb5b99a6d6ee5ea0050ec55.scope - libcontainer container 61991a05d71e21ac4c6c03255b4a1f23a8c1d804aeb5b99a6d6ee5ea0050ec55.
Aug 13 07:10:44.204620 containerd[1475]: time="2025-08-13T07:10:44.204446237Z" level=info msg="StartContainer for \"61991a05d71e21ac4c6c03255b4a1f23a8c1d804aeb5b99a6d6ee5ea0050ec55\" returns successfully"
Aug 13 07:10:44.822514 sshd[4487]: Invalid user user from 202.53.94.242 port 54594
Aug 13 07:10:44.883016 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 07:10:44.931935 kubelet[2502]: E0813 07:10:44.928329 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:44.966948 kubelet[2502]: I0813 07:10:44.966364 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hbwdn" podStartSLOduration=5.966332017 podStartE2EDuration="5.966332017s" podCreationTimestamp="2025-08-13 07:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:44.964063237 +0000 UTC m=+107.748438334" watchObservedRunningTime="2025-08-13 07:10:44.966332017 +0000 UTC m=+107.750707119"
Aug 13 07:10:45.433088 kubelet[2502]: E0813 07:10:45.432561 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-gnjq2" podUID="2ad0d47b-a894-4f78-9f93-85bb0db5f798"
Aug 13 07:10:45.488620 sshd[4742]: pam_faillock(sshd:auth): User unknown
Aug 13 07:10:45.493139 sshd[4487]: Postponed keyboard-interactive for invalid user user from 202.53.94.242 port 54594 ssh2 [preauth]
Aug 13 07:10:46.068451 sshd[4742]: pam_unix(sshd:auth): check pass; user unknown
Aug 13 07:10:46.068554 sshd[4742]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.53.94.242
Aug 13 07:10:46.069749 sshd[4742]: pam_faillock(sshd:auth): User unknown
Aug 13 07:10:46.086149 kubelet[2502]: E0813 07:10:46.086094 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:47.437741 kubelet[2502]: E0813 07:10:47.437676 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-gnjq2" podUID="2ad0d47b-a894-4f78-9f93-85bb0db5f798"
Aug 13 07:10:48.486459 sshd[4487]: PAM: Permission denied for illegal user user from 202.53.94.242
Aug 13 07:10:48.487783 sshd[4487]: Failed keyboard-interactive/pam for invalid user user from 202.53.94.242 port 54594 ssh2
Aug 13 07:10:48.835699 systemd-networkd[1371]: lxc_health: Link UP
Aug 13 07:10:48.872735 systemd-networkd[1371]: lxc_health: Gained carrier
Aug 13 07:10:49.402979 sshd[4487]: Connection closed by invalid user user 202.53.94.242 port 54594 [preauth]
Aug 13 07:10:49.405498 systemd[1]: sshd@28-64.227.105.74:22-202.53.94.242:54594.service: Deactivated successfully.
Aug 13 07:10:49.434650 kubelet[2502]: E0813 07:10:49.433995 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:50.088723 kubelet[2502]: E0813 07:10:50.087200 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:50.711533 systemd-networkd[1371]: lxc_health: Gained IPv6LL
Aug 13 07:10:50.952794 kubelet[2502]: E0813 07:10:50.951107 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:51.962075 kubelet[2502]: E0813 07:10:51.956747 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:53.439348 kubelet[2502]: E0813 07:10:53.436241 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:10:53.444444 systemd[1]: run-containerd-runc-k8s.io-61991a05d71e21ac4c6c03255b4a1f23a8c1d804aeb5b99a6d6ee5ea0050ec55-runc.dsCRMY.mount: Deactivated successfully.
Aug 13 07:10:55.726629 systemd[1]: run-containerd-runc-k8s.io-61991a05d71e21ac4c6c03255b4a1f23a8c1d804aeb5b99a6d6ee5ea0050ec55-runc.QrAQ05.mount: Deactivated successfully.
Aug 13 07:10:55.827886 sshd[4310]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:55.834305 systemd-logind[1454]: Session 28 logged out. Waiting for processes to exit.
Aug 13 07:10:55.834681 systemd[1]: sshd@27-64.227.105.74:22-139.178.89.65:36458.service: Deactivated successfully.
Aug 13 07:10:55.842077 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 07:10:55.848435 systemd-logind[1454]: Removed session 28.
Aug 13 07:10:57.401238 containerd[1475]: time="2025-08-13T07:10:57.401132447Z" level=info msg="StopPodSandbox for \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\""
Aug 13 07:10:57.402112 containerd[1475]: time="2025-08-13T07:10:57.401314424Z" level=info msg="TearDown network for sandbox \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\" successfully"
Aug 13 07:10:57.402112 containerd[1475]: time="2025-08-13T07:10:57.401335303Z" level=info msg="StopPodSandbox for \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\" returns successfully"
Aug 13 07:10:57.402112 containerd[1475]: time="2025-08-13T07:10:57.401985219Z" level=info msg="RemovePodSandbox for \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\""
Aug 13 07:10:57.405707 containerd[1475]: time="2025-08-13T07:10:57.405635723Z" level=info msg="Forcibly stopping sandbox \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\""
Aug 13 07:10:57.405950 containerd[1475]: time="2025-08-13T07:10:57.405808390Z" level=info msg="TearDown network for sandbox \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\" successfully"
Aug 13 07:10:57.409709 containerd[1475]: time="2025-08-13T07:10:57.409631947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 07:10:57.409905 containerd[1475]: time="2025-08-13T07:10:57.409741380Z" level=info msg="RemovePodSandbox \"352af388db0f27a73ff662759da03c3c7b94b6d115fc64f5b82d058b561e0d18\" returns successfully"
Aug 13 07:10:57.410761 containerd[1475]: time="2025-08-13T07:10:57.410529148Z" level=info msg="StopPodSandbox for \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\""
Aug 13 07:10:57.410761 containerd[1475]: time="2025-08-13T07:10:57.410638417Z" level=info msg="TearDown network for sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" successfully"
Aug 13 07:10:57.410761 containerd[1475]: time="2025-08-13T07:10:57.410657231Z" level=info msg="StopPodSandbox for \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" returns successfully"
Aug 13 07:10:57.411807 containerd[1475]: time="2025-08-13T07:10:57.411752765Z" level=info msg="RemovePodSandbox for \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\""
Aug 13 07:10:57.411807 containerd[1475]: time="2025-08-13T07:10:57.411799598Z" level=info msg="Forcibly stopping sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\""
Aug 13 07:10:57.411964 containerd[1475]: time="2025-08-13T07:10:57.411896204Z" level=info msg="TearDown network for sandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" successfully"
Aug 13 07:10:57.415597 containerd[1475]: time="2025-08-13T07:10:57.415465436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 07:10:57.415997 containerd[1475]: time="2025-08-13T07:10:57.415615771Z" level=info msg="RemovePodSandbox \"f577ffb85260e8013e8197cf88d8737b36f1f0350a57eacadac158f14ec00c2d\" returns successfully"