Aug 19 08:12:18.894122 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 18 22:19:37 -00 2025
Aug 19 08:12:18.894169 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f
Aug 19 08:12:18.894184 kernel: BIOS-provided physical RAM map:
Aug 19 08:12:18.894196 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 19 08:12:18.894207 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 19 08:12:18.894219 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 19 08:12:18.894233 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Aug 19 08:12:18.894252 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Aug 19 08:12:18.894268 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 19 08:12:18.894278 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 19 08:12:18.894290 kernel: NX (Execute Disable) protection: active
Aug 19 08:12:18.894302 kernel: APIC: Static calls initialized
Aug 19 08:12:18.894314 kernel: SMBIOS 2.8 present.
Aug 19 08:12:18.894325 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Aug 19 08:12:18.894345 kernel: DMI: Memory slots populated: 1/1
Aug 19 08:12:18.894358 kernel: Hypervisor detected: KVM
Aug 19 08:12:18.894376 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 19 08:12:18.894388 kernel: kvm-clock: using sched offset of 4364226855 cycles
Aug 19 08:12:18.894400 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 19 08:12:18.894413 kernel: tsc: Detected 2494.136 MHz processor
Aug 19 08:12:18.894427 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 19 08:12:18.894441 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 19 08:12:18.894454 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Aug 19 08:12:18.894474 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 19 08:12:18.894488 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 19 08:12:18.894501 kernel: ACPI: Early table checksum verification disabled
Aug 19 08:12:18.894514 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Aug 19 08:12:18.894526 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 08:12:18.894537 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 08:12:18.894550 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 08:12:18.894564 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 19 08:12:18.894578 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 08:12:18.895659 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 08:12:18.895677 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 08:12:18.895691 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 08:12:18.895705 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Aug 19 08:12:18.895716 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Aug 19 08:12:18.895729 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 19 08:12:18.895743 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Aug 19 08:12:18.895757 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Aug 19 08:12:18.895787 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Aug 19 08:12:18.895799 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Aug 19 08:12:18.895812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 19 08:12:18.895826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 19 08:12:18.895842 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Aug 19 08:12:18.895862 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Aug 19 08:12:18.895877 kernel: Zone ranges:
Aug 19 08:12:18.895892 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 19 08:12:18.895906 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Aug 19 08:12:18.895920 kernel: Normal empty
Aug 19 08:12:18.895931 kernel: Device empty
Aug 19 08:12:18.895947 kernel: Movable zone start for each node
Aug 19 08:12:18.895961 kernel: Early memory node ranges
Aug 19 08:12:18.895975 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 19 08:12:18.895989 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Aug 19 08:12:18.896008 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Aug 19 08:12:18.896023 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 19 08:12:18.896038 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 19 08:12:18.896052 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Aug 19 08:12:18.896064 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 19 08:12:18.896078 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 19 08:12:18.896099 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 19 08:12:18.896114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 19 08:12:18.896132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 19 08:12:18.898317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 19 08:12:18.898341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 19 08:12:18.898364 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 19 08:12:18.898377 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 19 08:12:18.898389 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 19 08:12:18.898401 kernel: TSC deadline timer available
Aug 19 08:12:18.898413 kernel: CPU topo: Max. logical packages: 1
Aug 19 08:12:18.898426 kernel: CPU topo: Max. logical dies: 1
Aug 19 08:12:18.898439 kernel: CPU topo: Max. dies per package: 1
Aug 19 08:12:18.898463 kernel: CPU topo: Max. threads per core: 1
Aug 19 08:12:18.898478 kernel: CPU topo: Num. cores per package: 2
Aug 19 08:12:18.898492 kernel: CPU topo: Num. threads per package: 2
Aug 19 08:12:18.898507 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 19 08:12:18.898522 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 19 08:12:18.898536 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Aug 19 08:12:18.898548 kernel: Booting paravirtualized kernel on KVM
Aug 19 08:12:18.898563 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 19 08:12:18.898575 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 19 08:12:18.898625 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 19 08:12:18.898640 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 19 08:12:18.898654 kernel: pcpu-alloc: [0] 0 1
Aug 19 08:12:18.898669 kernel: kvm-guest: PV spinlocks disabled, no host support
Aug 19 08:12:18.898684 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f
Aug 19 08:12:18.898699 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 19 08:12:18.898714 kernel: random: crng init done
Aug 19 08:12:18.898729 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 19 08:12:18.898744 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 19 08:12:18.898763 kernel: Fallback order for Node 0: 0
Aug 19 08:12:18.898777 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Aug 19 08:12:18.898792 kernel: Policy zone: DMA32
Aug 19 08:12:18.898806 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 19 08:12:18.898821 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 19 08:12:18.898834 kernel: Kernel/User page tables isolation: enabled
Aug 19 08:12:18.898848 kernel: ftrace: allocating 40101 entries in 157 pages
Aug 19 08:12:18.898863 kernel: ftrace: allocated 157 pages with 5 groups
Aug 19 08:12:18.898878 kernel: Dynamic Preempt: voluntary
Aug 19 08:12:18.898897 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 19 08:12:18.898914 kernel: rcu: RCU event tracing is enabled.
Aug 19 08:12:18.898929 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 19 08:12:18.898944 kernel: Trampoline variant of Tasks RCU enabled.
Aug 19 08:12:18.898958 kernel: Rude variant of Tasks RCU enabled.
Aug 19 08:12:18.898971 kernel: Tracing variant of Tasks RCU enabled.
Aug 19 08:12:18.898984 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 19 08:12:18.898997 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 19 08:12:18.899011 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 19 08:12:18.899037 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 19 08:12:18.899052 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 19 08:12:18.899066 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 19 08:12:18.899081 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 19 08:12:18.899095 kernel: Console: colour VGA+ 80x25
Aug 19 08:12:18.899109 kernel: printk: legacy console [tty0] enabled
Aug 19 08:12:18.899124 kernel: printk: legacy console [ttyS0] enabled
Aug 19 08:12:18.899138 kernel: ACPI: Core revision 20240827
Aug 19 08:12:18.899152 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 19 08:12:18.899187 kernel: APIC: Switch to symmetric I/O mode setup
Aug 19 08:12:18.899203 kernel: x2apic enabled
Aug 19 08:12:18.899223 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 19 08:12:18.899238 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 19 08:12:18.899258 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Aug 19 08:12:18.899272 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136)
Aug 19 08:12:18.899286 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 19 08:12:18.899301 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 19 08:12:18.899317 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 19 08:12:18.899337 kernel: Spectre V2 : Mitigation: Retpolines
Aug 19 08:12:18.899353 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 19 08:12:18.899368 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 19 08:12:18.899383 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 19 08:12:18.899400 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 19 08:12:18.899414 kernel: MDS: Mitigation: Clear CPU buffers
Aug 19 08:12:18.899428 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 19 08:12:18.899448 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 19 08:12:18.899464 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 19 08:12:18.899479 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 19 08:12:18.899495 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 19 08:12:18.899509 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 19 08:12:18.899525 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 19 08:12:18.899541 kernel: Freeing SMP alternatives memory: 32K
Aug 19 08:12:18.899556 kernel: pid_max: default: 32768 minimum: 301
Aug 19 08:12:18.899572 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 19 08:12:18.899614 kernel: landlock: Up and running.
Aug 19 08:12:18.899628 kernel: SELinux: Initializing.
Aug 19 08:12:18.899639 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 19 08:12:18.899651 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 19 08:12:18.899664 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Aug 19 08:12:18.899677 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Aug 19 08:12:18.899692 kernel: signal: max sigframe size: 1776
Aug 19 08:12:18.899707 kernel: rcu: Hierarchical SRCU implementation.
Aug 19 08:12:18.899723 kernel: rcu: Max phase no-delay instances is 400.
Aug 19 08:12:18.899746 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 19 08:12:18.899773 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 19 08:12:18.899789 kernel: smp: Bringing up secondary CPUs ...
Aug 19 08:12:18.899804 kernel: smpboot: x86: Booting SMP configuration:
Aug 19 08:12:18.899825 kernel: .... node #0, CPUs: #1
Aug 19 08:12:18.899840 kernel: smp: Brought up 1 node, 2 CPUs
Aug 19 08:12:18.899856 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS)
Aug 19 08:12:18.899875 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54040K init, 2928K bss, 125140K reserved, 0K cma-reserved)
Aug 19 08:12:18.899890 kernel: devtmpfs: initialized
Aug 19 08:12:18.899911 kernel: x86/mm: Memory block size: 128MB
Aug 19 08:12:18.899925 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 19 08:12:18.899940 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 19 08:12:18.899953 kernel: pinctrl core: initialized pinctrl subsystem
Aug 19 08:12:18.899968 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 19 08:12:18.899984 kernel: audit: initializing netlink subsys (disabled)
Aug 19 08:12:18.900000 kernel: audit: type=2000 audit(1755591135.285:1): state=initialized audit_enabled=0 res=1
Aug 19 08:12:18.900016 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 19 08:12:18.900031 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 19 08:12:18.900052 kernel: cpuidle: using governor menu
Aug 19 08:12:18.900067 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 19 08:12:18.900080 kernel: dca service started, version 1.12.1
Aug 19 08:12:18.900095 kernel: PCI: Using configuration type 1 for base access
Aug 19 08:12:18.900110 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 19 08:12:18.900122 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 19 08:12:18.900136 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 19 08:12:18.900150 kernel: ACPI: Added _OSI(Module Device)
Aug 19 08:12:18.900166 kernel: ACPI: Added _OSI(Processor Device)
Aug 19 08:12:18.900186 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 19 08:12:18.900201 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 19 08:12:18.900216 kernel: ACPI: Interpreter enabled
Aug 19 08:12:18.900232 kernel: ACPI: PM: (supports S0 S5)
Aug 19 08:12:18.900247 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 19 08:12:18.900262 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 19 08:12:18.900275 kernel: PCI: Using E820 reservations for host bridge windows
Aug 19 08:12:18.900289 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 19 08:12:18.900301 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 19 08:12:18.902694 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 19 08:12:18.902954 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 19 08:12:18.903117 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 19 08:12:18.903140 kernel: acpiphp: Slot [3] registered
Aug 19 08:12:18.903157 kernel: acpiphp: Slot [4] registered
Aug 19 08:12:18.903173 kernel: acpiphp: Slot [5] registered
Aug 19 08:12:18.903189 kernel: acpiphp: Slot [6] registered
Aug 19 08:12:18.903218 kernel: acpiphp: Slot [7] registered
Aug 19 08:12:18.903234 kernel: acpiphp: Slot [8] registered
Aug 19 08:12:18.903250 kernel: acpiphp: Slot [9] registered
Aug 19 08:12:18.903265 kernel: acpiphp: Slot [10] registered
Aug 19 08:12:18.903280 kernel: acpiphp: Slot [11] registered
Aug 19 08:12:18.903292 kernel: acpiphp: Slot [12] registered
Aug 19 08:12:18.903307 kernel: acpiphp: Slot [13] registered
Aug 19 08:12:18.903321 kernel: acpiphp: Slot [14] registered
Aug 19 08:12:18.903336 kernel: acpiphp: Slot [15] registered
Aug 19 08:12:18.903351 kernel: acpiphp: Slot [16] registered
Aug 19 08:12:18.903372 kernel: acpiphp: Slot [17] registered
Aug 19 08:12:18.903388 kernel: acpiphp: Slot [18] registered
Aug 19 08:12:18.903403 kernel: acpiphp: Slot [19] registered
Aug 19 08:12:18.903415 kernel: acpiphp: Slot [20] registered
Aug 19 08:12:18.903429 kernel: acpiphp: Slot [21] registered
Aug 19 08:12:18.903445 kernel: acpiphp: Slot [22] registered
Aug 19 08:12:18.903460 kernel: acpiphp: Slot [23] registered
Aug 19 08:12:18.903475 kernel: acpiphp: Slot [24] registered
Aug 19 08:12:18.903489 kernel: acpiphp: Slot [25] registered
Aug 19 08:12:18.903508 kernel: acpiphp: Slot [26] registered
Aug 19 08:12:18.903520 kernel: acpiphp: Slot [27] registered
Aug 19 08:12:18.903534 kernel: acpiphp: Slot [28] registered
Aug 19 08:12:18.903549 kernel: acpiphp: Slot [29] registered
Aug 19 08:12:18.903565 kernel: acpiphp: Slot [30] registered
Aug 19 08:12:18.903581 kernel: acpiphp: Slot [31] registered
Aug 19 08:12:18.904047 kernel: PCI host bridge to bus 0000:00
Aug 19 08:12:18.904262 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 19 08:12:18.904410 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 19 08:12:18.904561 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 19 08:12:18.904724 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 19 08:12:18.904860 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug 19 08:12:18.904993 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 19 08:12:18.905192 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Aug 19 08:12:18.905410 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Aug 19 08:12:18.908261 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Aug 19 08:12:18.908507 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Aug 19 08:12:18.908686 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Aug 19 08:12:18.908891 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Aug 19 08:12:18.909053 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Aug 19 08:12:18.909208 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Aug 19 08:12:18.909384 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Aug 19 08:12:18.909560 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Aug 19 08:12:18.909767 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Aug 19 08:12:18.909925 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Aug 19 08:12:18.910077 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Aug 19 08:12:18.910275 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Aug 19 08:12:18.910438 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Aug 19 08:12:18.915381 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Aug 19 08:12:18.915660 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Aug 19 08:12:18.915831 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Aug 19 08:12:18.915996 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 19 08:12:18.916181 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 19 08:12:18.916354 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Aug 19 08:12:18.916532 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Aug 19 08:12:18.916724 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Aug 19 08:12:18.916917 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 19 08:12:18.917081 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Aug 19 08:12:18.917244 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Aug 19 08:12:18.917400 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Aug 19 08:12:18.917581 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 19 08:12:18.918926 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Aug 19 08:12:18.919093 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Aug 19 08:12:18.919250 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Aug 19 08:12:18.919457 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Aug 19 08:12:18.919645 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Aug 19 08:12:18.919806 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Aug 19 08:12:18.919958 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Aug 19 08:12:18.920137 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Aug 19 08:12:18.920309 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Aug 19 08:12:18.920461 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Aug 19 08:12:18.922788 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Aug 19 08:12:18.923023 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Aug 19 08:12:18.923186 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Aug 19 08:12:18.923357 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Aug 19 08:12:18.923376 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 19 08:12:18.923391 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 19 08:12:18.923407 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 19 08:12:18.923424 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 19 08:12:18.923439 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 19 08:12:18.923453 kernel: iommu: Default domain type: Translated
Aug 19 08:12:18.923469 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 19 08:12:18.923485 kernel: PCI: Using ACPI for IRQ routing
Aug 19 08:12:18.923509 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 19 08:12:18.923526 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 19 08:12:18.923542 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Aug 19 08:12:18.923733 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 19 08:12:18.923900 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 19 08:12:18.924053 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 19 08:12:18.924075 kernel: vgaarb: loaded
Aug 19 08:12:18.924091 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 19 08:12:18.924107 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 19 08:12:18.924131 kernel: clocksource: Switched to clocksource kvm-clock
Aug 19 08:12:18.924146 kernel: VFS: Disk quotas dquot_6.6.0
Aug 19 08:12:18.924163 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 19 08:12:18.924179 kernel: pnp: PnP ACPI init
Aug 19 08:12:18.924193 kernel: pnp: PnP ACPI: found 4 devices
Aug 19 08:12:18.924208 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 19 08:12:18.924221 kernel: NET: Registered PF_INET protocol family
Aug 19 08:12:18.924237 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 19 08:12:18.924252 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 19 08:12:18.924271 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 19 08:12:18.924286 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 19 08:12:18.924300 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 19 08:12:18.924314 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 19 08:12:18.924329 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 19 08:12:18.924344 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 19 08:12:18.924359 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 19 08:12:18.924375 kernel: NET: Registered PF_XDP protocol family
Aug 19 08:12:18.924540 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 19 08:12:18.926692 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 19 08:12:18.926883 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 19 08:12:18.927036 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 19 08:12:18.927178 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug 19 08:12:18.927347 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 19 08:12:18.927513 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 19 08:12:18.927536 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 19 08:12:18.927741 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 25532 usecs
Aug 19 08:12:18.927766 kernel: PCI: CLS 0 bytes, default 64
Aug 19 08:12:18.927783 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 19 08:12:18.927799 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Aug 19 08:12:18.927814 kernel: Initialise system trusted keyrings
Aug 19 08:12:18.927830 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 19 08:12:18.927845 kernel: Key type asymmetric registered
Aug 19 08:12:18.927861 kernel: Asymmetric key parser 'x509' registered
Aug 19 08:12:18.927877 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 19 08:12:18.927901 kernel: io scheduler mq-deadline registered
Aug 19 08:12:18.927914 kernel: io scheduler kyber registered
Aug 19 08:12:18.927930 kernel: io scheduler bfq registered
Aug 19 08:12:18.927944 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 19 08:12:18.927960 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Aug 19 08:12:18.927974 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 19 08:12:18.927990 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 19 08:12:18.928005 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 19 08:12:18.928019 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 19 08:12:18.928040 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 19 08:12:18.928053 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 19 08:12:18.928066 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 19 08:12:18.928308 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 19 08:12:18.928336 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 19 08:12:18.928487 kernel: rtc_cmos 00:03: registered as rtc0
Aug 19 08:12:18.930762 kernel: rtc_cmos 00:03: setting system clock to 2025-08-19T08:12:18 UTC (1755591138)
Aug 19 08:12:18.930941 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Aug 19 08:12:18.930975 kernel: intel_pstate: CPU model not supported
Aug 19 08:12:18.930991 kernel: NET: Registered PF_INET6 protocol family
Aug 19 08:12:18.931007 kernel: Segment Routing with IPv6
Aug 19 08:12:18.931023 kernel: In-situ OAM (IOAM) with IPv6
Aug 19 08:12:18.931039 kernel: NET: Registered PF_PACKET protocol family
Aug 19 08:12:18.931053 kernel: Key type dns_resolver registered
Aug 19 08:12:18.931068 kernel: IPI shorthand broadcast: enabled
Aug 19 08:12:18.931080 kernel: sched_clock: Marking stable (3329003972, 100921973)->(3448429215, -18503270)
Aug 19 08:12:18.931095 kernel: registered taskstats version 1
Aug 19 08:12:18.931116 kernel: Loading compiled-in X.509 certificates
Aug 19 08:12:18.931130 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: 93a065b103c00d4b81cc5822e4e7f9674e63afaf'
Aug 19 08:12:18.931146 kernel: Demotion targets for Node 0: null
Aug 19 08:12:18.931162 kernel: Key type .fscrypt registered
Aug 19 08:12:18.931177 kernel: Key type fscrypt-provisioning registered
Aug 19 08:12:18.931199 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 19 08:12:18.931245 kernel: ima: Allocated hash algorithm: sha1
Aug 19 08:12:18.931265 kernel: ima: No architecture policies found
Aug 19 08:12:18.931282 kernel: clk: Disabling unused clocks
Aug 19 08:12:18.931296 kernel: Warning: unable to open an initial console.
Aug 19 08:12:18.931310 kernel: Freeing unused kernel image (initmem) memory: 54040K
Aug 19 08:12:18.931324 kernel: Write protecting the kernel read-only data: 24576k
Aug 19 08:12:18.931340 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 19 08:12:18.931357 kernel: Run /init as init process
Aug 19 08:12:18.931374 kernel: with arguments:
Aug 19 08:12:18.931391 kernel: /init
Aug 19 08:12:18.931407 kernel: with environment:
Aug 19 08:12:18.931422 kernel: HOME=/
Aug 19 08:12:18.931443 kernel: TERM=linux
Aug 19 08:12:18.931457 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 19 08:12:18.931474 systemd[1]: Successfully made /usr/ read-only.
Aug 19 08:12:18.931495 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 19 08:12:18.931513 systemd[1]: Detected virtualization kvm.
Aug 19 08:12:18.931529 systemd[1]: Detected architecture x86-64.
Aug 19 08:12:18.931543 systemd[1]: Running in initrd.
Aug 19 08:12:18.931562 systemd[1]: No hostname configured, using default hostname.
Aug 19 08:12:18.931579 systemd[1]: Hostname set to .
Aug 19 08:12:18.931616 systemd[1]: Initializing machine ID from VM UUID.
Aug 19 08:12:18.931633 systemd[1]: Queued start job for default target initrd.target.
Aug 19 08:12:18.931646 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 08:12:18.931663 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 19 08:12:18.931681 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 19 08:12:18.931698 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 19 08:12:18.931721 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 19 08:12:18.931738 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 19 08:12:18.931757 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 19 08:12:18.931779 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 19 08:12:18.931800 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 19 08:12:18.931817 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 19 08:12:18.931834 systemd[1]: Reached target paths.target - Path Units.
Aug 19 08:12:18.931852 systemd[1]: Reached target slices.target - Slice Units.
Aug 19 08:12:18.931869 systemd[1]: Reached target swap.target - Swaps.
Aug 19 08:12:18.931886 systemd[1]: Reached target timers.target - Timer Units.
Aug 19 08:12:18.931900 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 19 08:12:18.931915 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 19 08:12:18.931935 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 19 08:12:18.931951 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 19 08:12:18.931968 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 08:12:18.931982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 19 08:12:18.931997 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 08:12:18.932012 systemd[1]: Reached target sockets.target - Socket Units.
Aug 19 08:12:18.932028 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 19 08:12:18.932041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 19 08:12:18.932057 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 19 08:12:18.932078 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 19 08:12:18.932092 systemd[1]: Starting systemd-fsck-usr.service...
Aug 19 08:12:18.932105 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 19 08:12:18.932120 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 19 08:12:18.932135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 08:12:18.932148 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 19 08:12:18.932168 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 08:12:18.932183 systemd[1]: Finished systemd-fsck-usr.service.
Aug 19 08:12:18.932200 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 19 08:12:18.932279 systemd-journald[212]: Collecting audit messages is disabled.
Aug 19 08:12:18.932325 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 19 08:12:18.932340 systemd-journald[212]: Journal started
Aug 19 08:12:18.932368 systemd-journald[212]: Runtime Journal (/run/log/journal/7c9ba659ddef48b5bbd5e2ff46ad8e94) is 4.9M, max 39.5M, 34.6M free.
Aug 19 08:12:18.929059 systemd-modules-load[214]: Inserted module 'overlay'
Aug 19 08:12:18.956726 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 19 08:12:18.957522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 08:12:18.962782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 19 08:12:18.966778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 19 08:12:18.969942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 19 08:12:18.976778 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 19 08:12:18.976823 kernel: Bridge firewalling registered
Aug 19 08:12:18.973256 systemd-modules-load[214]: Inserted module 'br_netfilter'
Aug 19 08:12:18.976557 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 19 08:12:18.996807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 19 08:12:19.010174 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 08:12:19.010577 systemd-tmpfiles[231]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 19 08:12:19.023430 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 19 08:12:19.024395 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 19 08:12:19.026296 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 19 08:12:19.033229 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 19 08:12:19.038157 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 19 08:12:19.063131 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f
Aug 19 08:12:19.098421 systemd-resolved[252]: Positive Trust Anchors:
Aug 19 08:12:19.098439 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 19 08:12:19.098488 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 19 08:12:19.103331 systemd-resolved[252]: Defaulting to hostname 'linux'.
Aug 19 08:12:19.105880 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 19 08:12:19.106433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 19 08:12:19.197688 kernel: SCSI subsystem initialized
Aug 19 08:12:19.210628 kernel: Loading iSCSI transport class v2.0-870.
Aug 19 08:12:19.226660 kernel: iscsi: registered transport (tcp)
Aug 19 08:12:19.253662 kernel: iscsi: registered transport (qla4xxx)
Aug 19 08:12:19.253772 kernel: QLogic iSCSI HBA Driver
Aug 19 08:12:19.281037 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 19 08:12:19.303968 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 08:12:19.307497 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 19 08:12:19.371026 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 19 08:12:19.373480 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 19 08:12:19.434645 kernel: raid6: avx2x4 gen() 17786 MB/s
Aug 19 08:12:19.451633 kernel: raid6: avx2x2 gen() 18005 MB/s
Aug 19 08:12:19.468844 kernel: raid6: avx2x1 gen() 13393 MB/s
Aug 19 08:12:19.468965 kernel: raid6: using algorithm avx2x2 gen() 18005 MB/s
Aug 19 08:12:19.486773 kernel: raid6: .... xor() 21095 MB/s, rmw enabled
Aug 19 08:12:19.486883 kernel: raid6: using avx2x2 recovery algorithm
Aug 19 08:12:19.508666 kernel: xor: automatically using best checksumming function avx
Aug 19 08:12:19.696649 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 19 08:12:19.705410 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 19 08:12:19.707633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 19 08:12:19.735517 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Aug 19 08:12:19.742870 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 19 08:12:19.746819 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 19 08:12:19.781430 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Aug 19 08:12:19.820362 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 19 08:12:19.823386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 19 08:12:19.902714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 19 08:12:19.906442 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 19 08:12:20.001655 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Aug 19 08:12:20.009459 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Aug 19 08:12:20.018633 kernel: scsi host0: Virtio SCSI HBA
Aug 19 08:12:20.029725 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Aug 19 08:12:20.031617 kernel: cryptd: max_cpu_qlen set to 1000
Aug 19 08:12:20.074264 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 19 08:12:20.074344 kernel: GPT:9289727 != 125829119
Aug 19 08:12:20.074358 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 19 08:12:20.074370 kernel: GPT:9289727 != 125829119
Aug 19 08:12:20.074393 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 19 08:12:20.074405 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 19 08:12:20.081326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 19 08:12:20.081515 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 08:12:20.083806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 08:12:20.086979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 08:12:20.088780 kernel: AES CTR mode by8 optimization enabled
Aug 19 08:12:20.089840 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 19 08:12:20.090696 kernel: libata version 3.00 loaded.
Aug 19 08:12:20.104903 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Aug 19 08:12:20.126639 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 19 08:12:20.138618 kernel: scsi host1: ata_piix
Aug 19 08:12:20.138858 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Aug 19 08:12:20.140225 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Aug 19 08:12:20.141669 kernel: scsi host2: ata_piix
Aug 19 08:12:20.141895 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Aug 19 08:12:20.141911 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Aug 19 08:12:20.154646 kernel: ACPI: bus type USB registered
Aug 19 08:12:20.155616 kernel: usbcore: registered new interface driver usbfs
Aug 19 08:12:20.155665 kernel: usbcore: registered new interface driver hub
Aug 19 08:12:20.155678 kernel: usbcore: registered new device driver usb
Aug 19 08:12:20.173385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 08:12:20.368278 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Aug 19 08:12:20.373440 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Aug 19 08:12:20.373770 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Aug 19 08:12:20.375304 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 19 08:12:20.376425 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Aug 19 08:12:20.379006 kernel: hub 1-0:1.0: USB hub found
Aug 19 08:12:20.379371 kernel: hub 1-0:1.0: 2 ports detected
Aug 19 08:12:20.383761 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 19 08:12:20.393540 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 19 08:12:20.400585 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 19 08:12:20.401076 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 19 08:12:20.402643 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 19 08:12:20.404266 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 19 08:12:20.404696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 19 08:12:20.405544 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 19 08:12:20.407476 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 19 08:12:20.408688 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 19 08:12:20.436253 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 19 08:12:20.438325 disk-uuid[616]: Primary Header is updated.
Aug 19 08:12:20.438325 disk-uuid[616]: Secondary Entries is updated.
Aug 19 08:12:20.438325 disk-uuid[616]: Secondary Header is updated.
Aug 19 08:12:20.442064 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 19 08:12:21.459000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 19 08:12:21.460394 disk-uuid[624]: The operation has completed successfully.
Aug 19 08:12:21.524247 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 19 08:12:21.525053 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 19 08:12:21.547481 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 19 08:12:21.567783 sh[635]: Success
Aug 19 08:12:21.592284 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 19 08:12:21.592372 kernel: device-mapper: uevent: version 1.0.3
Aug 19 08:12:21.592396 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 19 08:12:21.601677 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Aug 19 08:12:21.661273 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 19 08:12:21.666827 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 19 08:12:21.677836 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 19 08:12:21.693735 kernel: BTRFS: device fsid 99050df3-5e04-4f37-acde-dec46aab7896 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (647)
Aug 19 08:12:21.695407 kernel: BTRFS info (device dm-0): first mount of filesystem 99050df3-5e04-4f37-acde-dec46aab7896
Aug 19 08:12:21.697658 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 19 08:12:21.697732 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 19 08:12:21.706439 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 19 08:12:21.708393 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 19 08:12:21.709835 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 19 08:12:21.712127 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 19 08:12:21.714766 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 19 08:12:21.746633 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (679)
Aug 19 08:12:21.748626 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f
Aug 19 08:12:21.748700 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 19 08:12:21.750861 kernel: BTRFS info (device vda6): using free-space-tree
Aug 19 08:12:21.760630 kernel: BTRFS info (device vda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f
Aug 19 08:12:21.764530 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 19 08:12:21.767681 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 19 08:12:21.876227 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 19 08:12:21.879192 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 19 08:12:21.943085 systemd-networkd[820]: lo: Link UP
Aug 19 08:12:21.943098 systemd-networkd[820]: lo: Gained carrier
Aug 19 08:12:21.948756 systemd-networkd[820]: Enumeration completed
Aug 19 08:12:21.949330 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 19 08:12:21.949336 systemd-networkd[820]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Aug 19 08:12:21.951121 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 19 08:12:21.951674 systemd[1]: Reached target network.target - Network.
Aug 19 08:12:21.955943 systemd-networkd[820]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 08:12:21.955954 systemd-networkd[820]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 19 08:12:21.960088 systemd-networkd[820]: eth0: Link UP
Aug 19 08:12:21.960355 systemd-networkd[820]: eth1: Link UP
Aug 19 08:12:21.960550 systemd-networkd[820]: eth0: Gained carrier
Aug 19 08:12:21.960573 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 19 08:12:21.964794 systemd-networkd[820]: eth1: Gained carrier
Aug 19 08:12:21.964821 systemd-networkd[820]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 08:12:21.986678 systemd-networkd[820]: eth0: DHCPv4 address 143.198.65.59/20, gateway 143.198.64.1 acquired from 169.254.169.253
Aug 19 08:12:21.992464 ignition[727]: Ignition 2.21.0
Aug 19 08:12:21.992483 ignition[727]: Stage: fetch-offline
Aug 19 08:12:21.996070 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 19 08:12:21.992548 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Aug 19 08:12:21.996707 systemd-networkd[820]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253
Aug 19 08:12:21.992561 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 19 08:12:21.998778 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 19 08:12:21.992743 ignition[727]: parsed url from cmdline: ""
Aug 19 08:12:21.992749 ignition[727]: no config URL provided
Aug 19 08:12:21.992758 ignition[727]: reading system config file "/usr/lib/ignition/user.ign"
Aug 19 08:12:21.992770 ignition[727]: no config at "/usr/lib/ignition/user.ign"
Aug 19 08:12:21.992779 ignition[727]: failed to fetch config: resource requires networking
Aug 19 08:12:21.993051 ignition[727]: Ignition finished successfully
Aug 19 08:12:22.031512 ignition[831]: Ignition 2.21.0
Aug 19 08:12:22.031531 ignition[831]: Stage: fetch
Aug 19 08:12:22.032774 ignition[831]: no configs at "/usr/lib/ignition/base.d"
Aug 19 08:12:22.032791 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 19 08:12:22.032888 ignition[831]: parsed url from cmdline: ""
Aug 19 08:12:22.032892 ignition[831]: no config URL provided
Aug 19 08:12:22.032898 ignition[831]: reading system config file "/usr/lib/ignition/user.ign"
Aug 19 08:12:22.032906 ignition[831]: no config at "/usr/lib/ignition/user.ign"
Aug 19 08:12:22.032946 ignition[831]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Aug 19 08:12:22.057775 ignition[831]: GET result: OK
Aug 19 08:12:22.058457 ignition[831]: parsing config with SHA512: 9e60f64bf4607c957abcbe9dd544c121cb3e86b5b43be6c4bf93d01b2042821479b92af471d47f6c2d2d65739d3fac876c80f2b53514a70d5e3baac1cef4312a
Aug 19 08:12:22.064623 unknown[831]: fetched base config from "system"
Aug 19 08:12:22.064996 ignition[831]: fetch: fetch complete
Aug 19 08:12:22.064638 unknown[831]: fetched base config from "system"
Aug 19 08:12:22.065671 ignition[831]: fetch: fetch passed
Aug 19 08:12:22.064644 unknown[831]: fetched user config from "digitalocean"
Aug 19 08:12:22.065798 ignition[831]: Ignition finished successfully
Aug 19 08:12:22.070417 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 19 08:12:22.072810 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 19 08:12:22.113892 ignition[838]: Ignition 2.21.0
Aug 19 08:12:22.113907 ignition[838]: Stage: kargs
Aug 19 08:12:22.114121 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Aug 19 08:12:22.116904 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 19 08:12:22.114132 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 19 08:12:22.115263 ignition[838]: kargs: kargs passed
Aug 19 08:12:22.115330 ignition[838]: Ignition finished successfully
Aug 19 08:12:22.121876 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 19 08:12:22.154788 ignition[845]: Ignition 2.21.0
Aug 19 08:12:22.154805 ignition[845]: Stage: disks
Aug 19 08:12:22.154989 ignition[845]: no configs at "/usr/lib/ignition/base.d"
Aug 19 08:12:22.155002 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 19 08:12:22.155827 ignition[845]: disks: disks passed
Aug 19 08:12:22.155890 ignition[845]: Ignition finished successfully
Aug 19 08:12:22.157387 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 19 08:12:22.158669 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 19 08:12:22.159071 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 19 08:12:22.159459 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 19 08:12:22.160215 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 19 08:12:22.160941 systemd[1]: Reached target basic.target - Basic System.
Aug 19 08:12:22.162952 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 19 08:12:22.195324 systemd-fsck[854]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 19 08:12:22.201715 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 19 08:12:22.205242 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 19 08:12:22.332624 kernel: EXT4-fs (vda9): mounted filesystem 41966107-04fa-426e-9830-6b4efa50e27b r/w with ordered data mode. Quota mode: none.
Aug 19 08:12:22.334083 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 19 08:12:22.335101 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 19 08:12:22.337190 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 19 08:12:22.339228 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 19 08:12:22.342829 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Aug 19 08:12:22.352344 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 19 08:12:22.353734 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 19 08:12:22.354525 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 19 08:12:22.357153 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 19 08:12:22.359864 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 19 08:12:22.392631 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (862) Aug 19 08:12:22.396568 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:12:22.396656 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:12:22.396670 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:12:22.426119 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 08:12:22.436030 coreos-metadata[864]: Aug 19 08:12:22.435 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 19 08:12:22.446700 initrd-setup-root[892]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 08:12:22.448717 coreos-metadata[865]: Aug 19 08:12:22.448 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 19 08:12:22.451346 coreos-metadata[864]: Aug 19 08:12:22.450 INFO Fetch successful Aug 19 08:12:22.456756 initrd-setup-root[899]: cut: /sysroot/etc/group: No such file or directory Aug 19 08:12:22.458214 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Aug 19 08:12:22.459231 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Aug 19 08:12:22.461872 coreos-metadata[865]: Aug 19 08:12:22.461 INFO Fetch successful Aug 19 08:12:22.467294 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 08:12:22.469271 coreos-metadata[865]: Aug 19 08:12:22.468 INFO wrote hostname ci-4426.0.0-a-0a67852594 to /sysroot/etc/hostname Aug 19 08:12:22.470298 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 19 08:12:22.474475 initrd-setup-root[915]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 08:12:22.588316 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 19 08:12:22.591378 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 19 08:12:22.593092 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 08:12:22.608625 kernel: BTRFS info (device vda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:12:22.629018 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 19 08:12:22.644056 ignition[983]: INFO : Ignition 2.21.0 Aug 19 08:12:22.644056 ignition[983]: INFO : Stage: mount Aug 19 08:12:22.644056 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:12:22.644056 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 19 08:12:22.644056 ignition[983]: INFO : mount: mount passed Aug 19 08:12:22.644056 ignition[983]: INFO : Ignition finished successfully Aug 19 08:12:22.646039 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 08:12:22.648205 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 08:12:22.692926 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 08:12:22.694912 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:12:22.721658 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (994) Aug 19 08:12:22.724065 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:12:22.724138 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:12:22.724157 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:12:22.729518 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 19 08:12:22.768614 ignition[1011]: INFO : Ignition 2.21.0 Aug 19 08:12:22.768614 ignition[1011]: INFO : Stage: files Aug 19 08:12:22.769789 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:12:22.769789 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 19 08:12:22.771233 ignition[1011]: DEBUG : files: compiled without relabeling support, skipping Aug 19 08:12:22.772618 ignition[1011]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 08:12:22.772618 ignition[1011]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 08:12:22.774818 ignition[1011]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 08:12:22.775336 ignition[1011]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 08:12:22.775336 ignition[1011]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 08:12:22.775325 unknown[1011]: wrote ssh authorized keys file for user: core Aug 19 08:12:22.777008 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 19 08:12:22.777008 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 19 08:12:22.822669 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 19 08:12:22.900925 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 19 08:12:22.900925 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 08:12:22.900925 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 19 08:12:23.126023 systemd-networkd[820]: eth1: Gained IPv6LL Aug 19 08:12:23.135820 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 19 08:12:23.239008 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 08:12:23.239008 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 19 08:12:23.240227 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 08:12:23.240227 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:12:23.240227 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:12:23.240227 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:12:23.240227 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:12:23.240227 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:12:23.247691 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:12:23.247691 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:12:23.247691 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:12:23.247691 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 19 08:12:23.247691 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 19 08:12:23.247691 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 19 08:12:23.247691 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 19 08:12:23.581767 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 19 08:12:23.638724 systemd-networkd[820]: eth0: Gained IPv6LL Aug 19 08:12:24.013142 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 19 08:12:24.013142 ignition[1011]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 19 08:12:24.015048 ignition[1011]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:12:24.016128 ignition[1011]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:12:24.016128 ignition[1011]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 19 08:12:24.016128 ignition[1011]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 19 08:12:24.019387 ignition[1011]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 08:12:24.019387 ignition[1011]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:12:24.019387 ignition[1011]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:12:24.019387 ignition[1011]: INFO : files: files passed Aug 19 08:12:24.019387 ignition[1011]: INFO : Ignition finished successfully Aug 19 08:12:24.018792 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 19 08:12:24.022880 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 08:12:24.025813 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 08:12:24.040518 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 19 08:12:24.040655 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Aug 19 08:12:24.053465 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:12:24.053465 initrd-setup-root-after-ignition[1041]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:12:24.057502 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:12:24.061128 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:12:24.062716 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 08:12:24.064160 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 19 08:12:24.128042 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 08:12:24.128220 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 08:12:24.129959 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 19 08:12:24.130403 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 19 08:12:24.131267 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 08:12:24.132341 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 19 08:12:24.164842 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:12:24.167356 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 19 08:12:24.193990 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:12:24.195539 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:12:24.196958 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 08:12:24.198097 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 08:12:24.198829 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:12:24.200462 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 08:12:24.201094 systemd[1]: Stopped target basic.target - Basic System. Aug 19 08:12:24.201864 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 08:12:24.202726 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:12:24.203521 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 08:12:24.204511 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:12:24.205254 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 08:12:24.206242 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:12:24.207001 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 08:12:24.207824 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 08:12:24.208488 systemd[1]: Stopped target swap.target - Swaps. Aug 19 08:12:24.209120 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 08:12:24.209373 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:12:24.210798 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:12:24.211876 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:12:24.212491 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Aug 19 08:12:24.212662 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:12:24.213329 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 08:12:24.213571 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 08:12:24.215054 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 08:12:24.215318 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:12:24.216479 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 08:12:24.216782 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 19 08:12:24.217584 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 19 08:12:24.217911 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 19 08:12:24.220377 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 08:12:24.220885 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 19 08:12:24.221058 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:12:24.223061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 08:12:24.228737 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 08:12:24.229027 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:12:24.231534 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 08:12:24.231791 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:12:24.241670 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 08:12:24.242663 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 08:12:24.266963 ignition[1065]: INFO : Ignition 2.21.0 Aug 19 08:12:24.267721 ignition[1065]: INFO : Stage: umount Aug 19 08:12:24.268659 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:12:24.268659 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 19 08:12:24.270815 ignition[1065]: INFO : umount: umount passed Aug 19 08:12:24.270815 ignition[1065]: INFO : Ignition finished successfully Aug 19 08:12:24.270465 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 08:12:24.274018 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 08:12:24.276720 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 08:12:24.306080 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 08:12:24.306219 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 08:12:24.307874 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 08:12:24.308360 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 08:12:24.309567 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 19 08:12:24.309838 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 19 08:12:24.316892 systemd[1]: Stopped target network.target - Network. Aug 19 08:12:24.322116 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 08:12:24.322214 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:12:24.322663 systemd[1]: Stopped target paths.target - Path Units. Aug 19 08:12:24.324665 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Aug 19 08:12:24.329863 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:12:24.330343 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 08:12:24.330646 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 08:12:24.330961 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 08:12:24.331015 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:12:24.331323 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 08:12:24.331356 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:12:24.332823 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 08:12:24.332924 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 08:12:24.333841 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 08:12:24.333909 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 08:12:24.334852 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 08:12:24.335452 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 08:12:24.337168 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 08:12:24.337325 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 08:12:24.339688 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 08:12:24.339808 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 19 08:12:24.340687 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 08:12:24.340827 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 08:12:24.345725 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 08:12:24.346432 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 19 08:12:24.346535 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:12:24.350514 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:12:24.350874 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 08:12:24.351007 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 19 08:12:24.352850 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 08:12:24.354006 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 08:12:24.354421 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 08:12:24.354465 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:12:24.356241 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 08:12:24.357928 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 08:12:24.358014 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:12:24.358771 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 08:12:24.358818 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:12:24.362766 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 08:12:24.362854 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 19 08:12:24.363426 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Aug 19 08:12:24.368914 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 08:12:24.382679 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 19 08:12:24.383873 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:12:24.384724 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 19 08:12:24.384781 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 19 08:12:24.385171 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 19 08:12:24.385206 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:12:24.385701 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 19 08:12:24.385802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:12:24.386304 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 19 08:12:24.386364 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 19 08:12:24.387964 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 19 08:12:24.388039 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:12:24.390744 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 19 08:12:24.391857 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 19 08:12:24.391952 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:12:24.392916 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 19 08:12:24.392967 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:12:24.396780 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:12:24.396860 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:12:24.398274 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 19 08:12:24.398400 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 19 08:12:24.416084 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 19 08:12:24.416249 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 19 08:12:24.418147 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 19 08:12:24.419499 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 19 08:12:24.452450 systemd[1]: Switching root. Aug 19 08:12:24.492612 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). 
Aug 19 08:12:24.492715 systemd-journald[212]: Journal stopped Aug 19 08:12:25.776360 kernel: SELinux: policy capability network_peer_controls=1 Aug 19 08:12:25.776473 kernel: SELinux: policy capability open_perms=1 Aug 19 08:12:25.776493 kernel: SELinux: policy capability extended_socket_class=1 Aug 19 08:12:25.776511 kernel: SELinux: policy capability always_check_network=0 Aug 19 08:12:25.776529 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 19 08:12:25.776556 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 19 08:12:25.776573 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 19 08:12:25.776620 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 19 08:12:25.776638 kernel: SELinux: policy capability userspace_initial_context=0 Aug 19 08:12:25.776667 kernel: audit: type=1403 audit(1755591144.674:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 19 08:12:25.776688 systemd[1]: Successfully loaded SELinux policy in 69.948ms. Aug 19 08:12:25.776736 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.985ms. Aug 19 08:12:25.776763 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:12:25.776782 systemd[1]: Detected virtualization kvm. Aug 19 08:12:25.776806 systemd[1]: Detected architecture x86-64. Aug 19 08:12:25.776823 systemd[1]: Detected first boot. Aug 19 08:12:25.776842 systemd[1]: Hostname set to <ci-4426.0.0-a-0a67852594>. Aug 19 08:12:25.776866 systemd[1]: Initializing machine ID from VM UUID. Aug 19 08:12:25.776884 zram_generator::config[1110]: No configuration found. Aug 19 08:12:25.776904 kernel: Guest personality initialized and is inactive Aug 19 08:12:25.776922 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 19 08:12:25.776940 kernel: Initialized host personality Aug 19 08:12:25.776957 kernel: NET: Registered PF_VSOCK protocol family Aug 19 08:12:25.776978 systemd[1]: Populated /etc with preset unit settings. Aug 19 08:12:25.776997 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 19 08:12:25.777014 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 19 08:12:25.777037 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 19 08:12:25.777056 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 19 08:12:25.777074 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 19 08:12:25.777092 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 19 08:12:25.777110 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 19 08:12:25.777128 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 19 08:12:25.777147 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 19 08:12:25.777166 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 19 08:12:25.777190 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 19 08:12:25.777219 systemd[1]: Created slice user.slice - User and Session Slice. Aug 19 08:12:25.777239 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 08:12:25.777258 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:12:25.777277 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 19 08:12:25.777296 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 19 08:12:25.777315 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 19 08:12:25.777339 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:12:25.777359 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 19 08:12:25.777378 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:12:25.777398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:12:25.777417 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 19 08:12:25.777436 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 19 08:12:25.777453 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 19 08:12:25.777470 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 19 08:12:25.777494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:12:25.777513 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:12:25.777531 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:12:25.777553 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:12:25.777572 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 19 08:12:25.777637 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 19 08:12:25.777659 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 19 08:12:25.777678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:12:25.777697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:12:25.777716 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:12:25.777742 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 19 08:12:25.777762 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 19 08:12:25.777781 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 19 08:12:25.777800 systemd[1]: Mounting media.mount - External Media Directory... Aug 19 08:12:25.777818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:12:25.777838 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 19 08:12:25.777858 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 19 08:12:25.777876 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 19 08:12:25.777907 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 19 08:12:25.777926 systemd[1]: Reached target machines.target - Containers. Aug 19 08:12:25.777947 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Aug 19 08:12:25.777967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:12:25.777985 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:12:25.778005 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 19 08:12:25.778025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:12:25.778043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:12:25.778063 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:12:25.778085 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 19 08:12:25.778103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:12:25.778125 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 19 08:12:25.778144 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 19 08:12:25.778162 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 19 08:12:25.778181 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 19 08:12:25.778203 systemd[1]: Stopped systemd-fsck-usr.service. Aug 19 08:12:25.778227 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:12:25.778252 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:12:25.778270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:12:25.778289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:12:25.778320 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 19 08:12:25.778344 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 19 08:12:25.778367 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:12:25.778386 systemd[1]: verity-setup.service: Deactivated successfully. Aug 19 08:12:25.778405 systemd[1]: Stopped verity-setup.service. Aug 19 08:12:25.778425 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:12:25.778443 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 19 08:12:25.778467 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 19 08:12:25.778486 systemd[1]: Mounted media.mount - External Media Directory. Aug 19 08:12:25.778505 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 19 08:12:25.778524 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 19 08:12:25.778543 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 19 08:12:25.778561 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:12:25.778580 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 19 08:12:25.778684 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Aug 19 08:12:25.778708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:12:25.778735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:12:25.780647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:12:25.780725 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 19 08:12:25.780749 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:12:25.780773 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:12:25.780796 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 19 08:12:25.780819 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:12:25.780843 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 19 08:12:25.780866 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:12:25.780896 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 19 08:12:25.780980 systemd-journald[1177]: Collecting audit messages is disabled. Aug 19 08:12:25.781025 systemd-journald[1177]: Journal started Aug 19 08:12:25.781058 systemd-journald[1177]: Runtime Journal (/run/log/journal/7c9ba659ddef48b5bbd5e2ff46ad8e94) is 4.9M, max 39.5M, 34.6M free. Aug 19 08:12:25.421919 systemd[1]: Queued start job for default target multi-user.target. Aug 19 08:12:25.446767 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 19 08:12:25.447282 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 19 08:12:25.792738 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 19 08:12:25.792861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:12:25.797681 kernel: fuse: init (API version 7.41) Aug 19 08:12:25.799618 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 19 08:12:25.811659 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 19 08:12:25.819630 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 19 08:12:25.823627 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:12:25.824630 kernel: loop: module loaded Aug 19 08:12:25.829405 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:12:25.829827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:12:25.831158 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 19 08:12:25.832672 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 19 08:12:25.833923 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:12:25.834687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:12:25.836658 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 19 08:12:25.839431 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 19 08:12:25.862837 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Aug 19 08:12:25.863446 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:12:25.863575 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:12:25.875380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:12:25.891673 systemd-journald[1177]: Time spent on flushing to /var/log/journal/7c9ba659ddef48b5bbd5e2ff46ad8e94 is 43.402ms for 1001 entries. Aug 19 08:12:25.891673 systemd-journald[1177]: System Journal (/var/log/journal/7c9ba659ddef48b5bbd5e2ff46ad8e94) is 8M, max 195.6M, 187.6M free. Aug 19 08:12:25.949874 systemd-journald[1177]: Received client request to flush runtime journal. Aug 19 08:12:25.942927 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 19 08:12:25.948079 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 19 08:12:25.963370 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 19 08:12:25.964573 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 19 08:12:25.998697 kernel: loop0: detected capacity change from 0 to 111000 Aug 19 08:12:26.010229 kernel: ACPI: bus type drm_connector registered Aug 19 08:12:26.014282 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:12:26.014580 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:12:26.050135 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 19 08:12:26.074250 kernel: loop1: detected capacity change from 0 to 229808 Aug 19 08:12:26.077719 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 19 08:12:26.087739 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 19 08:12:26.097049 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 19 08:12:26.134938 kernel: loop2: detected capacity change from 0 to 8 Aug 19 08:12:26.160637 kernel: loop3: detected capacity change from 0 to 128016 Aug 19 08:12:26.220674 kernel: loop4: detected capacity change from 0 to 111000 Aug 19 08:12:26.230569 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:12:26.262691 kernel: loop5: detected capacity change from 0 to 229808 Aug 19 08:12:26.285411 kernel: loop6: detected capacity change from 0 to 8 Aug 19 08:12:26.288112 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 19 08:12:26.293421 kernel: loop7: detected capacity change from 0 to 128016 Aug 19 08:12:26.294066 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:12:26.309378 (sd-merge)[1251]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Aug 19 08:12:26.310235 (sd-merge)[1251]: Merged extensions into '/usr'. Aug 19 08:12:26.322856 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Aug 19 08:12:26.322881 systemd[1]: Reloading... Aug 19 08:12:26.416009 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Aug 19 08:12:26.416563 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Aug 19 08:12:26.589639 zram_generator::config[1282]: No configuration found. 
Aug 19 08:12:26.900071 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 19 08:12:27.079772 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 19 08:12:27.080049 systemd[1]: Reloading finished in 756 ms. Aug 19 08:12:27.100369 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 19 08:12:27.101583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:12:27.103190 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 19 08:12:27.114630 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 19 08:12:27.129910 systemd[1]: Starting ensure-sysext.service... Aug 19 08:12:27.134981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:12:27.152222 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 19 08:12:27.182861 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)... Aug 19 08:12:27.183090 systemd[1]: Reloading... Aug 19 08:12:27.203180 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 19 08:12:27.203232 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 19 08:12:27.204564 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 19 08:12:27.205042 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 19 08:12:27.208194 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 19 08:12:27.208662 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Aug 19 08:12:27.208759 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Aug 19 08:12:27.214744 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:12:27.214762 systemd-tmpfiles[1328]: Skipping /boot Aug 19 08:12:27.229138 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:12:27.229157 systemd-tmpfiles[1328]: Skipping /boot Aug 19 08:12:27.351375 zram_generator::config[1356]: No configuration found. Aug 19 08:12:27.665383 systemd[1]: Reloading finished in 481 ms. Aug 19 08:12:27.688169 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 19 08:12:27.697803 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:12:27.707450 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:12:27.712845 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 19 08:12:27.716025 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 19 08:12:27.725389 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:12:27.731964 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:12:27.740970 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 19 08:12:27.751915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 19 08:12:27.752960 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:12:27.758727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:12:27.768060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:12:27.781791 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:12:27.782617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:12:27.782825 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:12:27.782969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:12:27.789486 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:12:27.790151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:12:27.790406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:12:27.790540 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:12:27.796227 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 19 08:12:27.796913 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:12:27.806554 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 19 08:12:27.813162 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:12:27.813585 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:12:27.821791 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:12:27.823954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:12:27.824177 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:12:27.829395 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 19 08:12:27.830337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:12:27.834807 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 19 08:12:27.850236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:12:27.851217 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Aug 19 08:12:27.859695 systemd[1]: Finished ensure-sysext.service. Aug 19 08:12:27.864801 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:12:27.865326 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:12:27.868960 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:12:27.869766 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:12:27.871453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:12:27.874963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:12:27.882783 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 19 08:12:27.894318 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:12:27.894437 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:12:27.900388 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 19 08:12:27.901275 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 08:12:27.912734 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 19 08:12:27.941374 systemd-udevd[1405]: Using default interface naming scheme 'v255'. Aug 19 08:12:27.964581 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 19 08:12:27.972080 augenrules[1445]: No rules Aug 19 08:12:27.977665 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:12:27.978712 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:12:27.989792 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:12:27.997990 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:12:28.145526 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 19 08:12:28.150314 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 08:12:28.195982 systemd-networkd[1456]: lo: Link UP Aug 19 08:12:28.195996 systemd-networkd[1456]: lo: Gained carrier Aug 19 08:12:28.197238 systemd-networkd[1456]: Enumeration completed Aug 19 08:12:28.199579 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:12:28.203472 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 19 08:12:28.207993 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 19 08:12:28.228385 systemd-resolved[1404]: Positive Trust Anchors: Aug 19 08:12:28.230661 systemd-resolved[1404]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:12:28.230728 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:12:28.236297 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 08:12:28.252604 systemd-resolved[1404]: Using system hostname 'ci-4426.0.0-a-0a67852594'. Aug 19 08:12:28.257095 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:12:28.258357 systemd[1]: Reached target network.target - Network. Aug 19 08:12:28.259239 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:12:28.260483 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:12:28.261318 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 08:12:28.262629 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 08:12:28.263575 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 19 08:12:28.264604 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 08:12:28.265540 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 08:12:28.267103 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 08:12:28.267637 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 08:12:28.267691 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:12:28.268133 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:12:28.270859 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 08:12:28.274764 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 08:12:28.282655 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 08:12:28.284016 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 19 08:12:28.285725 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 08:12:28.297809 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 08:12:28.298885 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 08:12:28.300504 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 08:12:28.302000 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:12:28.302486 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:12:28.303427 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:12:28.303470 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Aug 19 08:12:28.305959 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 08:12:28.311876 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 19 08:12:28.316251 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 08:12:28.320760 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 08:12:28.324214 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 08:12:28.327852 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 19 08:12:28.328269 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 08:12:28.337952 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 19 08:12:28.344023 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 08:12:28.357799 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 08:12:28.363921 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 08:12:28.368931 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 19 08:12:28.379907 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 08:12:28.382857 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 08:12:28.384114 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 19 08:12:28.391051 systemd[1]: Starting update-engine.service - Update Engine... Aug 19 08:12:28.394864 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 08:12:28.400639 jq[1491]: false Aug 19 08:12:28.404213 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 08:12:28.405069 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 08:12:28.405281 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 08:12:28.415549 oslogin_cache_refresh[1493]: Refreshing passwd entry cache Aug 19 08:12:28.417685 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Refreshing passwd entry cache Aug 19 08:12:28.428090 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 08:12:28.428399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 08:12:28.435746 jq[1502]: true Aug 19 08:12:28.442635 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Failure getting users, quitting Aug 19 08:12:28.444636 oslogin_cache_refresh[1493]: Failure getting users, quitting Aug 19 08:12:28.446985 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:12:28.446985 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Refreshing group entry cache Aug 19 08:12:28.446985 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Failure getting groups, quitting Aug 19 08:12:28.446985 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Aug 19 08:12:28.444711 oslogin_cache_refresh[1493]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:12:28.444809 oslogin_cache_refresh[1493]: Refreshing group entry cache Aug 19 08:12:28.445698 oslogin_cache_refresh[1493]: Failure getting groups, quitting Aug 19 08:12:28.445716 oslogin_cache_refresh[1493]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:12:28.455985 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 19 08:12:28.460412 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 19 08:12:28.485391 extend-filesystems[1492]: Found /dev/vda6 Aug 19 08:12:28.497466 dbus-daemon[1489]: [system] SELinux support is enabled Aug 19 08:12:28.497725 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 19 08:12:28.501247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 08:12:28.501278 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 08:12:28.502802 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 08:12:28.502824 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 08:12:28.514222 update_engine[1501]: I20250819 08:12:28.511044 1501 main.cc:92] Flatcar Update Engine starting Aug 19 08:12:28.513889 systemd[1]: Started update-engine.service - Update Engine. Aug 19 08:12:28.514830 update_engine[1501]: I20250819 08:12:28.514446 1501 update_check_scheduler.cc:74] Next update check in 5m44s Aug 19 08:12:28.518634 extend-filesystems[1492]: Found /dev/vda9 Aug 19 08:12:28.518105 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 08:12:28.537187 jq[1513]: true Aug 19 08:12:28.537486 extend-filesystems[1492]: Checking size of /dev/vda9 Aug 19 08:12:28.548233 (ntainerd)[1522]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 08:12:28.570927 tar[1517]: linux-amd64/LICENSE Aug 19 08:12:28.570927 tar[1517]: linux-amd64/helm Aug 19 08:12:28.571372 coreos-metadata[1488]: Aug 19 08:12:28.569 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 19 08:12:28.574156 coreos-metadata[1488]: Aug 19 08:12:28.573 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Aug 19 08:12:28.581478 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 08:12:28.582829 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 08:12:28.606615 extend-filesystems[1492]: Resized partition /dev/vda9 Aug 19 08:12:28.617846 extend-filesystems[1552]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 08:12:28.624625 bash[1550]: Updated "/home/core/.ssh/authorized_keys" Aug 19 08:12:28.626823 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 08:12:28.635681 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 19 08:12:28.631942 systemd[1]: Starting sshkeys.service... Aug 19 08:12:28.711777 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 19 08:12:28.721470 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Aug 19 08:12:28.726856 extend-filesystems[1552]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 19 08:12:28.726856 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 19 08:12:28.726856 extend-filesystems[1552]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 19 08:12:28.721803 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 08:12:28.741389 extend-filesystems[1492]: Resized filesystem in /dev/vda9 Aug 19 08:12:28.744549 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 19 08:12:28.752466 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 19 08:12:28.849321 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Aug 19 08:12:28.854820 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 19 08:12:28.859998 systemd-logind[1500]: New seat seat0. Aug 19 08:12:28.865927 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 08:12:28.868367 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 08:12:28.874165 systemd-networkd[1456]: eth1: Configuring with /run/systemd/network/10-ee:43:65:b5:15:f3.network. Aug 19 08:12:28.883586 systemd-networkd[1456]: eth1: Link UP Aug 19 08:12:28.886480 systemd-networkd[1456]: eth1: Gained carrier Aug 19 08:12:28.900134 kernel: ISO 9660 Extensions: RRIP_1991A Aug 19 08:12:28.900461 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 19 08:12:28.903672 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 19 08:12:28.905893 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 19 08:12:28.918315 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 19 08:12:28.941051 systemd-networkd[1456]: eth0: Configuring with /run/systemd/network/10-8e:0e:07:29:29:8d.network. Aug 19 08:12:28.950061 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 19 08:12:28.950750 systemd-networkd[1456]: eth0: Link UP Aug 19 08:12:28.954741 systemd-networkd[1456]: eth0: Gained carrier Aug 19 08:12:28.961285 coreos-metadata[1558]: Aug 19 08:12:28.960 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 19 08:12:28.961285 coreos-metadata[1558]: Aug 19 08:12:28.961 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Aug 19 08:12:28.964674 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 19 08:12:28.969064 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. 
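The extend-filesystems entries above record an on-line grow of the ext4 root on /dev/vda9 from 553472 to 15121403 4k blocks. As a rough sketch only, the equivalent manual step (assuming the underlying partition has already been enlarged, which this log does not show) would be:

# grow the mounted ext4 filesystem to fill /dev/vda9 (on-line resize)
resize2fs /dev/vda9
# confirm the new size matches the 15121403-block figure reported above
dumpe2fs -h /dev/vda9 | grep 'Block count'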
Aug 19 08:12:29.075924 kernel: mousedev: PS/2 mouse device common for all mice Aug 19 08:12:29.078181 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 19 08:12:29.115799 kernel: ACPI: button: Power Button [PWRF] Aug 19 08:12:29.130656 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 08:12:29.228605 containerd[1522]: time="2025-08-19T08:12:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 08:12:29.231167 containerd[1522]: time="2025-08-19T08:12:29.231123051Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 08:12:29.266434 containerd[1522]: time="2025-08-19T08:12:29.266376135Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.49µs" Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.268645733Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.268714717Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.268940821Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.268956580Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.268984491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.269042931Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.269055054Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.269343642Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.269368787Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.269388613Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.269401075Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 08:12:29.269673 containerd[1522]: time="2025-08-19T08:12:29.269524853Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 08:12:29.270901 containerd[1522]: time="2025-08-19T08:12:29.270867356Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:12:29.271687 containerd[1522]: time="2025-08-19T08:12:29.271664124Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:12:29.271768 containerd[1522]: time="2025-08-19T08:12:29.271757641Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 08:12:29.271893 containerd[1522]: time="2025-08-19T08:12:29.271874942Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 08:12:29.272292 containerd[1522]: time="2025-08-19T08:12:29.272273220Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 08:12:29.272738 containerd[1522]: time="2025-08-19T08:12:29.272718358Z" level=info msg="metadata content store policy set" policy=shared Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277688008Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277774665Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277793957Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277807495Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277821939Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277833596Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277864442Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277879858Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277896484Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277907328Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277918702Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.277932478Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.278085505Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 08:12:29.280121 containerd[1522]: time="2025-08-19T08:12:29.278111164Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278128302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278140819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278152346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278163800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278175452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278185577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278198896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278211586Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278230232Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278334669Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278356553Z" level=info msg="Start snapshots syncer" Aug 19 08:12:29.280532 containerd[1522]: time="2025-08-19T08:12:29.278401671Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 08:12:29.280787 containerd[1522]: time="2025-08-19T08:12:29.278705793Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 08:12:29.280787 containerd[1522]: time="2025-08-19T08:12:29.278794356Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.278916180Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279044504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279065075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279076059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279086303Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279099409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279110444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279123579Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279150360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 08:12:29.280968 containerd[1522]: 
time="2025-08-19T08:12:29.279160996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279170973Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279234173Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279256996Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:12:29.280968 containerd[1522]: time="2025-08-19T08:12:29.279265780Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:12:29.281252 containerd[1522]: time="2025-08-19T08:12:29.279275538Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:12:29.281252 containerd[1522]: time="2025-08-19T08:12:29.279296847Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 08:12:29.281252 containerd[1522]: time="2025-08-19T08:12:29.279325197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 08:12:29.281252 containerd[1522]: time="2025-08-19T08:12:29.279339307Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 08:12:29.281252 containerd[1522]: time="2025-08-19T08:12:29.279358934Z" level=info msg="runtime interface created" Aug 19 08:12:29.281252 containerd[1522]: time="2025-08-19T08:12:29.279364743Z" level=info msg="created NRI interface" Aug 19 08:12:29.281252 containerd[1522]: time="2025-08-19T08:12:29.279401779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 08:12:29.281252 containerd[1522]: time="2025-08-19T08:12:29.279416226Z" level=info msg="Connect containerd service" Aug 19 08:12:29.281252 containerd[1522]: time="2025-08-19T08:12:29.279445626Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 08:12:29.283620 containerd[1522]: time="2025-08-19T08:12:29.283569928Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:12:29.359284 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 19 08:12:29.359780 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 19 08:12:29.366599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 08:12:29.374026 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 19 08:12:29.476797 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Aug 19 08:12:29.487527 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 19 08:12:29.487674 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 19 08:12:29.488060 kernel: Console: switching to colour dummy device 80x25 Aug 19 08:12:29.488085 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 19 08:12:29.488106 kernel: [drm] features: -context_init Aug 19 08:12:29.489974 kernel: [drm] number of scanouts: 1 Aug 19 08:12:29.490114 kernel: [drm] number of cap sets: 0 Aug 19 08:12:29.491623 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Aug 19 08:12:29.502357 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Aug 19 08:12:29.502507 kernel: Console: switching to colour frame buffer device 128x48 Aug 19 08:12:29.503853 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 19 08:12:29.576625 coreos-metadata[1488]: Aug 19 08:12:29.576 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Aug 19 08:12:29.596214 coreos-metadata[1488]: Aug 19 08:12:29.596 INFO Fetch successful Aug 19 08:12:29.670920 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 19 08:12:29.671828 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 08:12:29.685576 containerd[1522]: time="2025-08-19T08:12:29.680985827Z" level=info msg="Start subscribing containerd event" Aug 19 08:12:29.685576 containerd[1522]: time="2025-08-19T08:12:29.683913067Z" level=info msg="Start recovering state" Aug 19 08:12:29.685576 containerd[1522]: time="2025-08-19T08:12:29.684404701Z" level=info msg="Start event monitor" Aug 19 08:12:29.685576 containerd[1522]: time="2025-08-19T08:12:29.684433701Z" level=info msg="Start cni network conf syncer for default" Aug 19 08:12:29.685576 containerd[1522]: time="2025-08-19T08:12:29.684446895Z" level=info msg="Start streaming server" Aug 19 08:12:29.685576 containerd[1522]: time="2025-08-19T08:12:29.684462396Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 08:12:29.685576 containerd[1522]: time="2025-08-19T08:12:29.684472916Z" level=info msg="runtime interface starting up..." Aug 19 08:12:29.685576 containerd[1522]: time="2025-08-19T08:12:29.684481409Z" level=info msg="starting plugins..." Aug 19 08:12:29.685576 containerd[1522]: time="2025-08-19T08:12:29.684501278Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 08:12:29.693974 containerd[1522]: time="2025-08-19T08:12:29.686406866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 19 08:12:29.693974 containerd[1522]: time="2025-08-19T08:12:29.686519106Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 08:12:29.686873 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 08:12:29.707629 containerd[1522]: time="2025-08-19T08:12:29.706565700Z" level=info msg="containerd successfully booted in 0.478573s" Aug 19 08:12:29.884796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
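With containerd reporting "containerd successfully booted" and serving on /run/containerd/containerd.sock, the same socket can be probed directly; a small sanity check, assuming the stock ctr client shipped with containerd:

# talk to the socket containerd reports serving on above
ctr --address /run/containerd/containerd.sock version
ctr --address /run/containerd/containerd.sock plugins ls | head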
Aug 19 08:12:29.963506 coreos-metadata[1558]: Aug 19 08:12:29.963 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Aug 19 08:12:29.981466 coreos-metadata[1558]: Aug 19 08:12:29.981 INFO Fetch successful Aug 19 08:12:29.999255 unknown[1558]: wrote ssh authorized keys file for user: core Aug 19 08:12:30.058506 update-ssh-keys[1620]: Updated "/home/core/.ssh/authorized_keys" Aug 19 08:12:30.062458 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 19 08:12:30.069400 systemd[1]: Finished sshkeys.service. Aug 19 08:12:30.085014 systemd-logind[1500]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 19 08:12:30.180836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:12:30.233091 systemd-logind[1500]: Watching system buttons on /dev/input/event2 (Power Button) Aug 19 08:12:30.279860 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:12:30.280378 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:12:30.283707 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:12:30.289792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:12:30.295611 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:12:30.313664 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 08:12:30.367630 kernel: EDAC MC: Ver: 3.0.0 Aug 19 08:12:30.380124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:12:30.380940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:12:30.414233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:12:30.437153 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 08:12:30.442961 tar[1517]: linux-amd64/README.md Aug 19 08:12:30.446317 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 08:12:30.479545 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 19 08:12:30.485174 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 08:12:30.485960 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 08:12:30.488052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:12:30.494152 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 08:12:30.522127 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 08:12:30.526172 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 08:12:30.530694 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 19 08:12:30.531996 systemd[1]: Reached target getty.target - Login Prompts. Aug 19 08:12:30.678061 systemd-networkd[1456]: eth0: Gained IPv6LL Aug 19 08:12:30.679259 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 19 08:12:30.682208 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 08:12:30.687492 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 08:12:30.693974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:12:30.700256 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
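Both metadata agents above eventually fetch the DigitalOcean droplet endpoint and write the returned SSH keys for the core user; the same data can be inspected by hand (endpoint and path taken from the log itself):

# the endpoint coreos-metadata polls until the network is up
curl -s http://169.254.169.254/metadata/v1.json | head -c 200; echo
# where the sshkeys agent writes the fetched keys
cat /home/core/.ssh/authorized_keys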
Aug 19 08:12:30.738774 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 19 08:12:30.870076 systemd-networkd[1456]: eth1: Gained IPv6LL Aug 19 08:12:30.870744 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 19 08:12:31.939763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:12:31.951019 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 08:12:31.954436 systemd[1]: Startup finished in 3.435s (kernel) + 5.988s (initrd) + 7.348s (userspace) = 16.772s. Aug 19 08:12:31.959835 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:12:32.493474 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 19 08:12:32.495825 systemd[1]: Started sshd@0-143.198.65.59:22-139.178.89.65:40270.service - OpenSSH per-connection server daemon (139.178.89.65:40270). Aug 19 08:12:32.606693 sshd[1686]: Accepted publickey for core from 139.178.89.65 port 40270 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:12:32.610463 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:12:32.625960 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 08:12:32.630119 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 19 08:12:32.642965 systemd-logind[1500]: New session 1 of user core. Aug 19 08:12:32.669754 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 08:12:32.675506 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 08:12:32.696548 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 08:12:32.703838 systemd-logind[1500]: New session c1 of user core. Aug 19 08:12:32.724831 kubelet[1676]: E0819 08:12:32.724681 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:12:32.729918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:12:32.730115 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:12:32.730989 systemd[1]: kubelet.service: Consumed 1.441s CPU time, 266.7M memory peak. Aug 19 08:12:32.868450 systemd[1692]: Queued start job for default target default.target. Aug 19 08:12:32.887757 systemd[1692]: Created slice app.slice - User Application Slice. Aug 19 08:12:32.888066 systemd[1692]: Reached target paths.target - Paths. Aug 19 08:12:32.888212 systemd[1692]: Reached target timers.target - Timers. Aug 19 08:12:32.890552 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 08:12:32.925307 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 08:12:32.925479 systemd[1692]: Reached target sockets.target - Sockets. Aug 19 08:12:32.925887 systemd[1692]: Reached target basic.target - Basic System. Aug 19 08:12:32.926018 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 08:12:32.927394 systemd[1692]: Reached target default.target - Main User Target. Aug 19 08:12:32.927445 systemd[1692]: Startup finished in 212ms. 
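The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file only appears once the node is initialized or joined. A hedged check, with endpoint, token and hash shown as placeholders rather than values from this log:

# the file whose absence causes the kubelet failure above
test -f /var/lib/kubelet/config.yaml || echo "kubelet not configured yet"
# typically written by kubeadm; placeholders only:
# kubeadm join <api-server>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>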
Aug 19 08:12:32.931919 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 08:12:33.006104 systemd[1]: Started sshd@1-143.198.65.59:22-139.178.89.65:40286.service - OpenSSH per-connection server daemon (139.178.89.65:40286). Aug 19 08:12:33.083306 sshd[1704]: Accepted publickey for core from 139.178.89.65 port 40286 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:12:33.085808 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:12:33.092169 systemd-logind[1500]: New session 2 of user core. Aug 19 08:12:33.099913 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 19 08:12:33.167680 sshd[1707]: Connection closed by 139.178.89.65 port 40286 Aug 19 08:12:33.168938 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Aug 19 08:12:33.181729 systemd[1]: sshd@1-143.198.65.59:22-139.178.89.65:40286.service: Deactivated successfully. Aug 19 08:12:33.184862 systemd[1]: session-2.scope: Deactivated successfully. Aug 19 08:12:33.186505 systemd-logind[1500]: Session 2 logged out. Waiting for processes to exit. Aug 19 08:12:33.192378 systemd[1]: Started sshd@2-143.198.65.59:22-139.178.89.65:40300.service - OpenSSH per-connection server daemon (139.178.89.65:40300). Aug 19 08:12:33.194053 systemd-logind[1500]: Removed session 2. Aug 19 08:12:33.262430 sshd[1713]: Accepted publickey for core from 139.178.89.65 port 40300 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:12:33.264274 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:12:33.271166 systemd-logind[1500]: New session 3 of user core. Aug 19 08:12:33.283984 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 19 08:12:33.341134 sshd[1716]: Connection closed by 139.178.89.65 port 40300 Aug 19 08:12:33.341882 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Aug 19 08:12:33.355791 systemd[1]: sshd@2-143.198.65.59:22-139.178.89.65:40300.service: Deactivated successfully. Aug 19 08:12:33.358507 systemd[1]: session-3.scope: Deactivated successfully. Aug 19 08:12:33.359555 systemd-logind[1500]: Session 3 logged out. Waiting for processes to exit. Aug 19 08:12:33.363879 systemd[1]: Started sshd@3-143.198.65.59:22-139.178.89.65:40310.service - OpenSSH per-connection server daemon (139.178.89.65:40310). Aug 19 08:12:33.365238 systemd-logind[1500]: Removed session 3. Aug 19 08:12:33.443489 sshd[1722]: Accepted publickey for core from 139.178.89.65 port 40310 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:12:33.446254 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:12:33.452099 systemd-logind[1500]: New session 4 of user core. Aug 19 08:12:33.459951 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 19 08:12:33.529835 sshd[1725]: Connection closed by 139.178.89.65 port 40310 Aug 19 08:12:33.530947 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Aug 19 08:12:33.543033 systemd[1]: sshd@3-143.198.65.59:22-139.178.89.65:40310.service: Deactivated successfully. Aug 19 08:12:33.545474 systemd[1]: session-4.scope: Deactivated successfully. Aug 19 08:12:33.546781 systemd-logind[1500]: Session 4 logged out. Waiting for processes to exit. Aug 19 08:12:33.550995 systemd[1]: Started sshd@4-143.198.65.59:22-139.178.89.65:40324.service - OpenSSH per-connection server daemon (139.178.89.65:40324). 
Aug 19 08:12:33.553759 systemd-logind[1500]: Removed session 4. Aug 19 08:12:33.623642 sshd[1731]: Accepted publickey for core from 139.178.89.65 port 40324 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:12:33.625494 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:12:33.632287 systemd-logind[1500]: New session 5 of user core. Aug 19 08:12:33.639899 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 19 08:12:33.710532 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 19 08:12:33.711035 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:12:33.722615 sudo[1735]: pam_unix(sudo:session): session closed for user root Aug 19 08:12:33.727661 sshd[1734]: Connection closed by 139.178.89.65 port 40324 Aug 19 08:12:33.726836 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Aug 19 08:12:33.743439 systemd[1]: sshd@4-143.198.65.59:22-139.178.89.65:40324.service: Deactivated successfully. Aug 19 08:12:33.746150 systemd[1]: session-5.scope: Deactivated successfully. Aug 19 08:12:33.747869 systemd-logind[1500]: Session 5 logged out. Waiting for processes to exit. Aug 19 08:12:33.751892 systemd[1]: Started sshd@5-143.198.65.59:22-139.178.89.65:40326.service - OpenSSH per-connection server daemon (139.178.89.65:40326). Aug 19 08:12:33.753080 systemd-logind[1500]: Removed session 5. Aug 19 08:12:33.818664 sshd[1741]: Accepted publickey for core from 139.178.89.65 port 40326 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:12:33.820786 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:12:33.826688 systemd-logind[1500]: New session 6 of user core. Aug 19 08:12:33.846012 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 19 08:12:33.910766 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 19 08:12:33.911893 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:12:33.919606 sudo[1746]: pam_unix(sudo:session): session closed for user root Aug 19 08:12:33.928051 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 19 08:12:33.928504 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:12:33.942660 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:12:33.999371 augenrules[1768]: No rules Aug 19 08:12:33.999857 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:12:34.000148 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:12:34.001334 sudo[1745]: pam_unix(sudo:session): session closed for user root Aug 19 08:12:34.005002 sshd[1744]: Connection closed by 139.178.89.65 port 40326 Aug 19 08:12:34.005949 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Aug 19 08:12:34.015302 systemd[1]: sshd@5-143.198.65.59:22-139.178.89.65:40326.service: Deactivated successfully. Aug 19 08:12:34.017749 systemd[1]: session-6.scope: Deactivated successfully. Aug 19 08:12:34.018803 systemd-logind[1500]: Session 6 logged out. Waiting for processes to exit. Aug 19 08:12:34.023291 systemd[1]: Started sshd@6-143.198.65.59:22-139.178.89.65:40334.service - OpenSSH per-connection server daemon (139.178.89.65:40334). 
Aug 19 08:12:34.024219 systemd-logind[1500]: Removed session 6. Aug 19 08:12:34.090221 sshd[1777]: Accepted publickey for core from 139.178.89.65 port 40334 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:12:34.092316 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:12:34.100343 systemd-logind[1500]: New session 7 of user core. Aug 19 08:12:34.106939 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 19 08:12:34.168224 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 19 08:12:34.168584 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:12:34.727389 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 19 08:12:34.744436 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 19 08:12:35.485388 dockerd[1799]: time="2025-08-19T08:12:35.484747695Z" level=info msg="Starting up" Aug 19 08:12:35.488154 dockerd[1799]: time="2025-08-19T08:12:35.488105609Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 19 08:12:35.508948 dockerd[1799]: time="2025-08-19T08:12:35.508822517Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Aug 19 08:12:35.611191 dockerd[1799]: time="2025-08-19T08:12:35.610839689Z" level=info msg="Loading containers: start." Aug 19 08:12:35.625978 kernel: Initializing XFRM netlink socket Aug 19 08:12:35.929013 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 19 08:12:35.932306 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 19 08:12:35.944060 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 19 08:12:35.999181 systemd-networkd[1456]: docker0: Link UP Aug 19 08:12:35.999539 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 19 08:12:36.002624 dockerd[1799]: time="2025-08-19T08:12:36.002559980Z" level=info msg="Loading containers: done." Aug 19 08:12:36.023460 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1012454755-merged.mount: Deactivated successfully. Aug 19 08:12:36.026828 dockerd[1799]: time="2025-08-19T08:12:36.026715544Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 19 08:12:36.026828 dockerd[1799]: time="2025-08-19T08:12:36.026831363Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Aug 19 08:12:36.027000 dockerd[1799]: time="2025-08-19T08:12:36.026929643Z" level=info msg="Initializing buildkit" Aug 19 08:12:36.051757 dockerd[1799]: time="2025-08-19T08:12:36.051704206Z" level=info msg="Completed buildkit initialization" Aug 19 08:12:36.061739 dockerd[1799]: time="2025-08-19T08:12:36.060919591Z" level=info msg="Daemon has completed initialization" Aug 19 08:12:36.061739 dockerd[1799]: time="2025-08-19T08:12:36.061050715Z" level=info msg="API listen on /run/docker.sock" Aug 19 08:12:36.062042 systemd[1]: Started docker.service - Docker Application Container Engine. 
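Once dockerd logs "API listen on /run/docker.sock", the daemon can be checked over that socket; a short sketch (the template fields are standard docker info fields, not values from this log):

# confirm the daemon answering on the socket named above
docker --host unix:///run/docker.sock version
docker info --format '{{.ServerVersion}} {{.Driver}}'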
Aug 19 08:12:36.840259 containerd[1522]: time="2025-08-19T08:12:36.840164341Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Aug 19 08:12:37.418246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439990588.mount: Deactivated successfully. Aug 19 08:12:38.807645 containerd[1522]: time="2025-08-19T08:12:38.806176578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:38.807645 containerd[1522]: time="2025-08-19T08:12:38.807429141Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Aug 19 08:12:38.808221 containerd[1522]: time="2025-08-19T08:12:38.807709015Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:38.811927 containerd[1522]: time="2025-08-19T08:12:38.811831927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:38.813429 containerd[1522]: time="2025-08-19T08:12:38.813194499Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 1.97239763s" Aug 19 08:12:38.813429 containerd[1522]: time="2025-08-19T08:12:38.813265702Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Aug 19 08:12:38.814579 containerd[1522]: time="2025-08-19T08:12:38.814531384Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Aug 19 08:12:40.348000 containerd[1522]: time="2025-08-19T08:12:40.347888453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:40.349240 containerd[1522]: time="2025-08-19T08:12:40.349169120Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Aug 19 08:12:40.351214 containerd[1522]: time="2025-08-19T08:12:40.351147814Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:40.356575 containerd[1522]: time="2025-08-19T08:12:40.356494715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:40.357881 containerd[1522]: time="2025-08-19T08:12:40.357789753Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.543211959s" Aug 19 
08:12:40.357881 containerd[1522]: time="2025-08-19T08:12:40.357839677Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Aug 19 08:12:40.358893 containerd[1522]: time="2025-08-19T08:12:40.358860103Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Aug 19 08:12:41.671940 containerd[1522]: time="2025-08-19T08:12:41.671854761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:41.673178 containerd[1522]: time="2025-08-19T08:12:41.673143345Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Aug 19 08:12:41.673341 containerd[1522]: time="2025-08-19T08:12:41.673308571Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:41.676632 containerd[1522]: time="2025-08-19T08:12:41.676271762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:41.678084 containerd[1522]: time="2025-08-19T08:12:41.677245604Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.318348791s" Aug 19 08:12:41.678084 containerd[1522]: time="2025-08-19T08:12:41.677285488Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Aug 19 08:12:41.678283 containerd[1522]: time="2025-08-19T08:12:41.678185964Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Aug 19 08:12:42.891191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819455676.mount: Deactivated successfully. Aug 19 08:12:42.894375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 19 08:12:42.899133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:12:43.118859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:12:43.131741 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:12:43.223487 kubelet[2096]: E0819 08:12:43.223254 2096 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:12:43.228074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:12:43.228276 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:12:43.228794 systemd[1]: kubelet.service: Consumed 253ms CPU time, 108.3M memory peak. 
Aug 19 08:12:43.752698 containerd[1522]: time="2025-08-19T08:12:43.752633999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:43.754625 containerd[1522]: time="2025-08-19T08:12:43.754553253Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Aug 19 08:12:43.755296 containerd[1522]: time="2025-08-19T08:12:43.755245839Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:43.756924 containerd[1522]: time="2025-08-19T08:12:43.756876510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:43.757612 containerd[1522]: time="2025-08-19T08:12:43.757395495Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.079177727s" Aug 19 08:12:43.757612 containerd[1522]: time="2025-08-19T08:12:43.757434355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Aug 19 08:12:43.758826 containerd[1522]: time="2025-08-19T08:12:43.758761517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 19 08:12:43.760384 systemd-resolved[1404]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Aug 19 08:12:44.268536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4611967.mount: Deactivated successfully. 
Aug 19 08:12:45.243230 containerd[1522]: time="2025-08-19T08:12:45.243136708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:45.244397 containerd[1522]: time="2025-08-19T08:12:45.244039150Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 19 08:12:45.245169 containerd[1522]: time="2025-08-19T08:12:45.245131434Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:45.248167 containerd[1522]: time="2025-08-19T08:12:45.248117751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:45.249482 containerd[1522]: time="2025-08-19T08:12:45.249434705Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.490603897s" Aug 19 08:12:45.249795 containerd[1522]: time="2025-08-19T08:12:45.249765975Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 19 08:12:45.250465 containerd[1522]: time="2025-08-19T08:12:45.250417941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 19 08:12:45.732346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852408062.mount: Deactivated successfully. 
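The PullImage entries above go through containerd's CRI plugin, which keeps its images under the "k8s.io" namespace; the pulled images can be listed either with ctr or, assuming crictl is installed, over the CRI socket:

# list CRI images held by containerd
ctr --namespace k8s.io images ls | awk '{print $1}' | head
# equivalent view through the CRI API (crictl assumed available)
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images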
Aug 19 08:12:45.737563 containerd[1522]: time="2025-08-19T08:12:45.736766787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:12:45.737978 containerd[1522]: time="2025-08-19T08:12:45.737939332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 19 08:12:45.738257 containerd[1522]: time="2025-08-19T08:12:45.738224829Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:12:45.742313 containerd[1522]: time="2025-08-19T08:12:45.742262421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:12:45.743256 containerd[1522]: time="2025-08-19T08:12:45.743217875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 492.643172ms" Aug 19 08:12:45.743424 containerd[1522]: time="2025-08-19T08:12:45.743402466Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 19 08:12:45.744054 containerd[1522]: time="2025-08-19T08:12:45.744015302Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 19 08:12:46.217818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1894406606.mount: Deactivated successfully. Aug 19 08:12:46.869988 systemd-resolved[1404]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
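systemd-resolved has now fallen back to plain UDP for both upstream resolvers (67.207.67.2 and 67.207.67.3); its current per-link view can be inspected directly, as a sketch using the standard resolvectl client:

# show which DNS servers each link is using
resolvectl status | grep -A3 'DNS Servers'
# exercise the same resolvers the image pulls above went through
resolvectl query registry.k8s.io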
Aug 19 08:12:48.978906 containerd[1522]: time="2025-08-19T08:12:48.978830539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:48.981298 containerd[1522]: time="2025-08-19T08:12:48.981235555Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Aug 19 08:12:48.984464 containerd[1522]: time="2025-08-19T08:12:48.984401534Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:48.991549 containerd[1522]: time="2025-08-19T08:12:48.990462661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:12:48.991549 containerd[1522]: time="2025-08-19T08:12:48.991267445Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.247113971s" Aug 19 08:12:48.991549 containerd[1522]: time="2025-08-19T08:12:48.991337513Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 19 08:12:52.877241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:12:52.878128 systemd[1]: kubelet.service: Consumed 253ms CPU time, 108.3M memory peak. Aug 19 08:12:52.882937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:12:52.933482 systemd[1]: Reload requested from client PID 2242 ('systemctl') (unit session-7.scope)... Aug 19 08:12:52.933512 systemd[1]: Reloading... Aug 19 08:12:53.117697 zram_generator::config[2285]: No configuration found. Aug 19 08:12:53.489307 systemd[1]: Reloading finished in 555 ms. Aug 19 08:12:53.565425 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 19 08:12:53.565560 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 19 08:12:53.566274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:12:53.566378 systemd[1]: kubelet.service: Consumed 136ms CPU time, 97.9M memory peak. Aug 19 08:12:53.570178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:12:53.757779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:12:53.769372 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:12:53.831096 kubelet[2339]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:12:53.831096 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 08:12:53.831096 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:12:53.831694 kubelet[2339]: I0819 08:12:53.831216 2339 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:12:54.758004 kubelet[2339]: I0819 08:12:54.757922 2339 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 19 08:12:54.759150 kubelet[2339]: I0819 08:12:54.758202 2339 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:12:54.759150 kubelet[2339]: I0819 08:12:54.758804 2339 server.go:956] "Client rotation is on, will bootstrap in background" Aug 19 08:12:54.797777 kubelet[2339]: I0819 08:12:54.797243 2339 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:12:54.799803 kubelet[2339]: E0819 08:12:54.799621 2339 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.65.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.65.59:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 19 08:12:54.812874 kubelet[2339]: I0819 08:12:54.812834 2339 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:12:54.822250 kubelet[2339]: I0819 08:12:54.822051 2339 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 19 08:12:54.824750 kubelet[2339]: I0819 08:12:54.824655 2339 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:12:54.829943 kubelet[2339]: I0819 08:12:54.824962 2339 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.0.0-a-0a67852594","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:12:54.829943 kubelet[2339]: I0819 08:12:54.829444 2339 
topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:12:54.829943 kubelet[2339]: I0819 08:12:54.829468 2339 container_manager_linux.go:303] "Creating device plugin manager" Aug 19 08:12:54.830926 kubelet[2339]: I0819 08:12:54.830893 2339 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:12:54.836460 kubelet[2339]: I0819 08:12:54.836402 2339 kubelet.go:480] "Attempting to sync node with API server" Aug 19 08:12:54.837871 kubelet[2339]: I0819 08:12:54.837070 2339 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:12:54.837871 kubelet[2339]: I0819 08:12:54.837130 2339 kubelet.go:386] "Adding apiserver pod source" Aug 19 08:12:54.840624 kubelet[2339]: I0819 08:12:54.840271 2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:12:54.849715 kubelet[2339]: E0819 08:12:54.849161 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.65.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-a-0a67852594&limit=500&resourceVersion=0\": dial tcp 143.198.65.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 19 08:12:54.850759 kubelet[2339]: E0819 08:12:54.850700 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.65.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.65.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 19 08:12:54.851384 kubelet[2339]: I0819 08:12:54.851347 2339 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:12:54.854034 kubelet[2339]: I0819 08:12:54.853430 2339 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 19 08:12:54.857186 kubelet[2339]: W0819 08:12:54.857128 2339 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
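The container_manager_linux NodeConfig dump above carries the kubelet's hard eviction thresholds for this node: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A minimal sketch of how such signal/threshold pairs are compared against observed node stats; the signals and values are taken from the log, but the evaluation loop itself is illustrative and not kubelet source:

```python
# Hard eviction thresholds as logged in the NodeConfig dump (signal -> threshold).
# Percentages apply to the capacity of the backing resource; quantities are absolute bytes.

THRESHOLDS = {
    "memory.available":   {"quantity": 100 * 1024 * 1024},  # 100Mi
    "nodefs.available":   {"percentage": 0.10},
    "nodefs.inodesFree":  {"percentage": 0.05},
    "imagefs.available":  {"percentage": 0.15},
    "imagefs.inodesFree": {"percentage": 0.05},
}

def breached(signal: str, available: float, capacity: float) -> bool:
    """True if the observed 'available' value falls below the configured threshold."""
    t = THRESHOLDS[signal]
    limit = t.get("quantity", t.get("percentage", 0) * capacity)
    return available < limit

# Example: a node with 8 GiB of memory and 150 MiB free has not breached memory.available.
print(breached("memory.available", available=150 * 1024**2, capacity=8 * 1024**3))  # False
```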
Aug 19 08:12:54.864429 kubelet[2339]: I0819 08:12:54.864392 2339 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 08:12:54.864742 kubelet[2339]: I0819 08:12:54.864723 2339 server.go:1289] "Started kubelet" Aug 19 08:12:54.874420 kubelet[2339]: E0819 08:12:54.869033 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.65.59:6443/api/v1/namespaces/default/events\": dial tcp 143.198.65.59:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.0.0-a-0a67852594.185d1cdf3c9e3a94 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.0.0-a-0a67852594,UID:ci-4426.0.0-a-0a67852594,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.0.0-a-0a67852594,},FirstTimestamp:2025-08-19 08:12:54.864648852 +0000 UTC m=+1.089298704,LastTimestamp:2025-08-19 08:12:54.864648852 +0000 UTC m=+1.089298704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.0.0-a-0a67852594,}" Aug 19 08:12:54.874420 kubelet[2339]: I0819 08:12:54.871067 2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:12:54.874420 kubelet[2339]: I0819 08:12:54.873430 2339 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:12:54.874420 kubelet[2339]: I0819 08:12:54.874152 2339 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:12:54.878840 kubelet[2339]: I0819 08:12:54.878791 2339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:12:54.881963 kubelet[2339]: I0819 08:12:54.871064 2339 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:12:54.884667 kubelet[2339]: E0819 08:12:54.881976 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-0a67852594\" not found" Aug 19 08:12:54.884667 kubelet[2339]: I0819 08:12:54.882020 2339 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 19 08:12:54.886629 kubelet[2339]: I0819 08:12:54.882038 2339 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 08:12:54.886629 kubelet[2339]: I0819 08:12:54.885012 2339 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:12:54.886629 kubelet[2339]: E0819 08:12:54.885182 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.65.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.65.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 19 08:12:54.886629 kubelet[2339]: I0819 08:12:54.885458 2339 factory.go:223] Registration of the systemd container factory successfully Aug 19 08:12:54.886629 kubelet[2339]: I0819 08:12:54.885552 2339 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:12:54.887030 kubelet[2339]: I0819 08:12:54.886954 2339 server.go:317] "Adding debug handlers to kubelet server" Aug 19 08:12:54.888475 kubelet[2339]: E0819 08:12:54.888424 2339 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.65.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-a-0a67852594?timeout=10s\": dial tcp 143.198.65.59:6443: connect: connection refused" interval="200ms" Aug 19 08:12:54.890578 kubelet[2339]: I0819 08:12:54.890551 2339 factory.go:223] Registration of the containerd container factory successfully Aug 19 08:12:54.895382 kubelet[2339]: E0819 08:12:54.895332 2339 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:12:54.912491 kubelet[2339]: I0819 08:12:54.912450 2339 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 08:12:54.912685 kubelet[2339]: I0819 08:12:54.912470 2339 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 08:12:54.912946 kubelet[2339]: I0819 08:12:54.912882 2339 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:12:54.915081 kubelet[2339]: I0819 08:12:54.915052 2339 policy_none.go:49] "None policy: Start" Aug 19 08:12:54.915293 kubelet[2339]: I0819 08:12:54.915236 2339 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 08:12:54.915293 kubelet[2339]: I0819 08:12:54.915256 2339 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:12:54.929131 kubelet[2339]: I0819 08:12:54.928821 2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 19 08:12:54.930985 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 08:12:54.934626 kubelet[2339]: I0819 08:12:54.934403 2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 19 08:12:54.934626 kubelet[2339]: I0819 08:12:54.934443 2339 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 19 08:12:54.934626 kubelet[2339]: I0819 08:12:54.934471 2339 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 19 08:12:54.934626 kubelet[2339]: I0819 08:12:54.934479 2339 kubelet.go:2436] "Starting kubelet main sync loop" Aug 19 08:12:54.934626 kubelet[2339]: E0819 08:12:54.934528 2339 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:12:54.937904 kubelet[2339]: E0819 08:12:54.937784 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.65.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.65.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 19 08:12:54.946899 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 08:12:54.953532 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 19 08:12:54.963959 kubelet[2339]: E0819 08:12:54.963928 2339 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 19 08:12:54.964324 kubelet[2339]: I0819 08:12:54.964308 2339 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:12:54.964433 kubelet[2339]: I0819 08:12:54.964402 2339 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:12:54.965838 kubelet[2339]: I0819 08:12:54.965182 2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:12:54.969352 kubelet[2339]: E0819 08:12:54.969325 2339 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 19 08:12:54.969752 kubelet[2339]: E0819 08:12:54.969726 2339 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.0.0-a-0a67852594\" not found" Aug 19 08:12:55.052984 systemd[1]: Created slice kubepods-burstable-pod7853bc82b06c8bfa9662438123bfc4da.slice - libcontainer container kubepods-burstable-pod7853bc82b06c8bfa9662438123bfc4da.slice. Aug 19 08:12:55.066647 kubelet[2339]: I0819 08:12:55.066036 2339 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.066850 kubelet[2339]: E0819 08:12:55.066610 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.65.59:6443/api/v1/nodes\": dial tcp 143.198.65.59:6443: connect: connection refused" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.073491 kubelet[2339]: E0819 08:12:55.073450 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.081584 systemd[1]: Created slice kubepods-burstable-pod58ff7715fd0ef59a3cc50a5727296eb5.slice - libcontainer container kubepods-burstable-pod58ff7715fd0ef59a3cc50a5727296eb5.slice. 
Aug 19 08:12:55.085465 kubelet[2339]: I0819 08:12:55.085417 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7853bc82b06c8bfa9662438123bfc4da-ca-certs\") pod \"kube-apiserver-ci-4426.0.0-a-0a67852594\" (UID: \"7853bc82b06c8bfa9662438123bfc4da\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.085869 kubelet[2339]: I0819 08:12:55.085483 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7853bc82b06c8bfa9662438123bfc4da-k8s-certs\") pod \"kube-apiserver-ci-4426.0.0-a-0a67852594\" (UID: \"7853bc82b06c8bfa9662438123bfc4da\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.085869 kubelet[2339]: I0819 08:12:55.085508 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58ff7715fd0ef59a3cc50a5727296eb5-ca-certs\") pod \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" (UID: \"58ff7715fd0ef59a3cc50a5727296eb5\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.085869 kubelet[2339]: I0819 08:12:55.085563 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58ff7715fd0ef59a3cc50a5727296eb5-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" (UID: \"58ff7715fd0ef59a3cc50a5727296eb5\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.085869 kubelet[2339]: I0819 08:12:55.085622 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58ff7715fd0ef59a3cc50a5727296eb5-k8s-certs\") pod \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" (UID: \"58ff7715fd0ef59a3cc50a5727296eb5\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.085869 kubelet[2339]: I0819 08:12:55.085653 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7853bc82b06c8bfa9662438123bfc4da-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.0.0-a-0a67852594\" (UID: \"7853bc82b06c8bfa9662438123bfc4da\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.086043 kubelet[2339]: I0819 08:12:55.085695 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58ff7715fd0ef59a3cc50a5727296eb5-kubeconfig\") pod \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" (UID: \"58ff7715fd0ef59a3cc50a5727296eb5\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.086043 kubelet[2339]: I0819 08:12:55.085719 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58ff7715fd0ef59a3cc50a5727296eb5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" (UID: \"58ff7715fd0ef59a3cc50a5727296eb5\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.086043 kubelet[2339]: I0819 08:12:55.085746 2339 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3340775a5b33728a9c0b5d09103e9ba-kubeconfig\") pod \"kube-scheduler-ci-4426.0.0-a-0a67852594\" (UID: \"d3340775a5b33728a9c0b5d09103e9ba\") " pod="kube-system/kube-scheduler-ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.090239 kubelet[2339]: E0819 08:12:55.090192 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.65.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-a-0a67852594?timeout=10s\": dial tcp 143.198.65.59:6443: connect: connection refused" interval="400ms" Aug 19 08:12:55.093187 kubelet[2339]: E0819 08:12:55.092902 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.098530 systemd[1]: Created slice kubepods-burstable-podd3340775a5b33728a9c0b5d09103e9ba.slice - libcontainer container kubepods-burstable-podd3340775a5b33728a9c0b5d09103e9ba.slice. Aug 19 08:12:55.101510 kubelet[2339]: E0819 08:12:55.101472 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.268883 kubelet[2339]: I0819 08:12:55.268801 2339 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.269781 kubelet[2339]: E0819 08:12:55.269700 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.65.59:6443/api/v1/nodes\": dial tcp 143.198.65.59:6443: connect: connection refused" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.377524 kubelet[2339]: E0819 08:12:55.377372 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:55.379241 containerd[1522]: time="2025-08-19T08:12:55.379183804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.0.0-a-0a67852594,Uid:7853bc82b06c8bfa9662438123bfc4da,Namespace:kube-system,Attempt:0,}" Aug 19 08:12:55.394055 kubelet[2339]: E0819 08:12:55.394010 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:55.394856 containerd[1522]: time="2025-08-19T08:12:55.394768179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.0.0-a-0a67852594,Uid:58ff7715fd0ef59a3cc50a5727296eb5,Namespace:kube-system,Attempt:0,}" Aug 19 08:12:55.403290 kubelet[2339]: E0819 08:12:55.403238 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:55.406382 containerd[1522]: time="2025-08-19T08:12:55.406319750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.0.0-a-0a67852594,Uid:d3340775a5b33728a9c0b5d09103e9ba,Namespace:kube-system,Attempt:0,}" Aug 19 08:12:55.491529 kubelet[2339]: E0819 08:12:55.491478 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://143.198.65.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-a-0a67852594?timeout=10s\": dial tcp 143.198.65.59:6443: connect: connection refused" interval="800ms" Aug 19 08:12:55.511677 containerd[1522]: time="2025-08-19T08:12:55.511616003Z" level=info msg="connecting to shim f294338c39c82800b1cba5d5d80d16d2032ade66fda6f102dd750dc055f6b598" address="unix:///run/containerd/s/346c9cb909e929f3249b2f2b739a8c3d656b49b88b5c5e1d8d84e76d0f6de2b8" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:12:55.515271 containerd[1522]: time="2025-08-19T08:12:55.515208216Z" level=info msg="connecting to shim f46f42aa93dcc90aba7888f4fec4a146f80c82f6c96bae33d0c1cbc931b8b011" address="unix:///run/containerd/s/672e5f4366f07511c384e8ab49aebdcc5177c46b95086c287ba2b58bc862cdf1" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:12:55.518009 containerd[1522]: time="2025-08-19T08:12:55.517949395Z" level=info msg="connecting to shim a6bd37bb7800352216091ae87851edf1855eb7a6eeb56e4d8eb2d7df6c544f01" address="unix:///run/containerd/s/264ceff32f8ba38fe414163ffa40aa12dbced731ed9ac5ef8457350f2f235764" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:12:55.641948 systemd[1]: Started cri-containerd-a6bd37bb7800352216091ae87851edf1855eb7a6eeb56e4d8eb2d7df6c544f01.scope - libcontainer container a6bd37bb7800352216091ae87851edf1855eb7a6eeb56e4d8eb2d7df6c544f01. Aug 19 08:12:55.645390 systemd[1]: Started cri-containerd-f294338c39c82800b1cba5d5d80d16d2032ade66fda6f102dd750dc055f6b598.scope - libcontainer container f294338c39c82800b1cba5d5d80d16d2032ade66fda6f102dd750dc055f6b598. Aug 19 08:12:55.648165 systemd[1]: Started cri-containerd-f46f42aa93dcc90aba7888f4fec4a146f80c82f6c96bae33d0c1cbc931b8b011.scope - libcontainer container f46f42aa93dcc90aba7888f4fec4a146f80c82f6c96bae33d0c1cbc931b8b011. 
Aug 19 08:12:55.672895 kubelet[2339]: I0819 08:12:55.672737 2339 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.674440 kubelet[2339]: E0819 08:12:55.674357 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.65.59:6443/api/v1/nodes\": dial tcp 143.198.65.59:6443: connect: connection refused" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:55.757369 containerd[1522]: time="2025-08-19T08:12:55.757299072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.0.0-a-0a67852594,Uid:7853bc82b06c8bfa9662438123bfc4da,Namespace:kube-system,Attempt:0,} returns sandbox id \"f294338c39c82800b1cba5d5d80d16d2032ade66fda6f102dd750dc055f6b598\"" Aug 19 08:12:55.759977 kubelet[2339]: E0819 08:12:55.759932 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:55.773186 containerd[1522]: time="2025-08-19T08:12:55.773064519Z" level=info msg="CreateContainer within sandbox \"f294338c39c82800b1cba5d5d80d16d2032ade66fda6f102dd750dc055f6b598\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 08:12:55.786741 containerd[1522]: time="2025-08-19T08:12:55.786677136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.0.0-a-0a67852594,Uid:58ff7715fd0ef59a3cc50a5727296eb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f46f42aa93dcc90aba7888f4fec4a146f80c82f6c96bae33d0c1cbc931b8b011\"" Aug 19 08:12:55.791695 kubelet[2339]: E0819 08:12:55.790263 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:55.799439 containerd[1522]: time="2025-08-19T08:12:55.799183988Z" level=info msg="Container 45addb7fed599b887e0f441b3d3c3b6bca6a3d17716867572120824ffb42adbf: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:12:55.800418 containerd[1522]: time="2025-08-19T08:12:55.799208623Z" level=info msg="CreateContainer within sandbox \"f46f42aa93dcc90aba7888f4fec4a146f80c82f6c96bae33d0c1cbc931b8b011\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 08:12:55.812761 containerd[1522]: time="2025-08-19T08:12:55.811513831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.0.0-a-0a67852594,Uid:d3340775a5b33728a9c0b5d09103e9ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6bd37bb7800352216091ae87851edf1855eb7a6eeb56e4d8eb2d7df6c544f01\"" Aug 19 08:12:55.814372 kubelet[2339]: E0819 08:12:55.814319 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:55.818479 containerd[1522]: time="2025-08-19T08:12:55.818422428Z" level=info msg="Container 52b0cc86913fc80c094657157b81373d78f84174074c0b4c12e8ecd92553ba04: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:12:55.820276 containerd[1522]: time="2025-08-19T08:12:55.820078923Z" level=info msg="CreateContainer within sandbox \"a6bd37bb7800352216091ae87851edf1855eb7a6eeb56e4d8eb2d7df6c544f01\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 08:12:55.823697 containerd[1522]: time="2025-08-19T08:12:55.823616028Z" level=info msg="CreateContainer within sandbox 
\"f294338c39c82800b1cba5d5d80d16d2032ade66fda6f102dd750dc055f6b598\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"45addb7fed599b887e0f441b3d3c3b6bca6a3d17716867572120824ffb42adbf\"" Aug 19 08:12:55.825048 containerd[1522]: time="2025-08-19T08:12:55.824972462Z" level=info msg="StartContainer for \"45addb7fed599b887e0f441b3d3c3b6bca6a3d17716867572120824ffb42adbf\"" Aug 19 08:12:55.832128 containerd[1522]: time="2025-08-19T08:12:55.831993344Z" level=info msg="connecting to shim 45addb7fed599b887e0f441b3d3c3b6bca6a3d17716867572120824ffb42adbf" address="unix:///run/containerd/s/346c9cb909e929f3249b2f2b739a8c3d656b49b88b5c5e1d8d84e76d0f6de2b8" protocol=ttrpc version=3 Aug 19 08:12:55.843185 containerd[1522]: time="2025-08-19T08:12:55.842746857Z" level=info msg="CreateContainer within sandbox \"f46f42aa93dcc90aba7888f4fec4a146f80c82f6c96bae33d0c1cbc931b8b011\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"52b0cc86913fc80c094657157b81373d78f84174074c0b4c12e8ecd92553ba04\"" Aug 19 08:12:55.845752 containerd[1522]: time="2025-08-19T08:12:55.845677180Z" level=info msg="StartContainer for \"52b0cc86913fc80c094657157b81373d78f84174074c0b4c12e8ecd92553ba04\"" Aug 19 08:12:55.847885 containerd[1522]: time="2025-08-19T08:12:55.847808894Z" level=info msg="Container fac6592d8bd8dbf7bc409933a07c70bb2644d0a9b4830e79740453e1630f0733: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:12:55.851319 containerd[1522]: time="2025-08-19T08:12:55.851236462Z" level=info msg="connecting to shim 52b0cc86913fc80c094657157b81373d78f84174074c0b4c12e8ecd92553ba04" address="unix:///run/containerd/s/672e5f4366f07511c384e8ab49aebdcc5177c46b95086c287ba2b58bc862cdf1" protocol=ttrpc version=3 Aug 19 08:12:55.867227 containerd[1522]: time="2025-08-19T08:12:55.867078014Z" level=info msg="CreateContainer within sandbox \"a6bd37bb7800352216091ae87851edf1855eb7a6eeb56e4d8eb2d7df6c544f01\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fac6592d8bd8dbf7bc409933a07c70bb2644d0a9b4830e79740453e1630f0733\"" Aug 19 08:12:55.869230 containerd[1522]: time="2025-08-19T08:12:55.869177450Z" level=info msg="StartContainer for \"fac6592d8bd8dbf7bc409933a07c70bb2644d0a9b4830e79740453e1630f0733\"" Aug 19 08:12:55.872112 containerd[1522]: time="2025-08-19T08:12:55.872058968Z" level=info msg="connecting to shim fac6592d8bd8dbf7bc409933a07c70bb2644d0a9b4830e79740453e1630f0733" address="unix:///run/containerd/s/264ceff32f8ba38fe414163ffa40aa12dbced731ed9ac5ef8457350f2f235764" protocol=ttrpc version=3 Aug 19 08:12:55.872934 systemd[1]: Started cri-containerd-45addb7fed599b887e0f441b3d3c3b6bca6a3d17716867572120824ffb42adbf.scope - libcontainer container 45addb7fed599b887e0f441b3d3c3b6bca6a3d17716867572120824ffb42adbf. Aug 19 08:12:55.898053 systemd[1]: Started cri-containerd-52b0cc86913fc80c094657157b81373d78f84174074c0b4c12e8ecd92553ba04.scope - libcontainer container 52b0cc86913fc80c094657157b81373d78f84174074c0b4c12e8ecd92553ba04. Aug 19 08:12:55.936721 systemd[1]: Started cri-containerd-fac6592d8bd8dbf7bc409933a07c70bb2644d0a9b4830e79740453e1630f0733.scope - libcontainer container fac6592d8bd8dbf7bc409933a07c70bb2644d0a9b4830e79740453e1630f0733. 
Aug 19 08:12:55.958360 kubelet[2339]: E0819 08:12:55.958268 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.65.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.65.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 19 08:12:55.996674 containerd[1522]: time="2025-08-19T08:12:55.996013378Z" level=info msg="StartContainer for \"45addb7fed599b887e0f441b3d3c3b6bca6a3d17716867572120824ffb42adbf\" returns successfully" Aug 19 08:12:56.052845 containerd[1522]: time="2025-08-19T08:12:56.052789146Z" level=info msg="StartContainer for \"52b0cc86913fc80c094657157b81373d78f84174074c0b4c12e8ecd92553ba04\" returns successfully" Aug 19 08:12:56.088704 containerd[1522]: time="2025-08-19T08:12:56.088641448Z" level=info msg="StartContainer for \"fac6592d8bd8dbf7bc409933a07c70bb2644d0a9b4830e79740453e1630f0733\" returns successfully" Aug 19 08:12:56.203442 kubelet[2339]: E0819 08:12:56.203289 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.65.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-a-0a67852594&limit=500&resourceVersion=0\": dial tcp 143.198.65.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 19 08:12:56.262625 kubelet[2339]: E0819 08:12:56.260174 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.65.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.65.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 19 08:12:56.475551 kubelet[2339]: I0819 08:12:56.475510 2339 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:56.986396 kubelet[2339]: E0819 08:12:56.986277 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:56.987982 kubelet[2339]: E0819 08:12:56.987829 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:56.990643 kubelet[2339]: E0819 08:12:56.990573 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:56.992294 kubelet[2339]: E0819 08:12:56.992255 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:56.997660 kubelet[2339]: E0819 08:12:56.996965 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:56.997860 kubelet[2339]: E0819 08:12:56.997684 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:58.000126 kubelet[2339]: E0819 08:12:57.999472 2339 kubelet.go:3305] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:58.000126 kubelet[2339]: E0819 08:12:57.999678 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:58.000126 kubelet[2339]: E0819 08:12:57.999812 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:58.000126 kubelet[2339]: E0819 08:12:57.999917 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:58.000126 kubelet[2339]: E0819 08:12:58.000041 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:58.000126 kubelet[2339]: E0819 08:12:58.000055 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:58.719985 kubelet[2339]: E0819 08:12:58.719920 2339 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:58.809061 kubelet[2339]: I0819 08:12:58.808733 2339 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:58.809061 kubelet[2339]: E0819 08:12:58.808782 2339 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4426.0.0-a-0a67852594\": node \"ci-4426.0.0-a-0a67852594\" not found" Aug 19 08:12:58.851133 kubelet[2339]: E0819 08:12:58.851072 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-0a67852594\" not found" Aug 19 08:12:58.865782 kubelet[2339]: E0819 08:12:58.865654 2339 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4426.0.0-a-0a67852594.185d1cdf3c9e3a94 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.0.0-a-0a67852594,UID:ci-4426.0.0-a-0a67852594,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.0.0-a-0a67852594,},FirstTimestamp:2025-08-19 08:12:54.864648852 +0000 UTC m=+1.089298704,LastTimestamp:2025-08-19 08:12:54.864648852 +0000 UTC m=+1.089298704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.0.0-a-0a67852594,}" Aug 19 08:12:58.951808 kubelet[2339]: E0819 08:12:58.951737 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-0a67852594\" not found" Aug 19 08:12:59.005266 kubelet[2339]: E0819 08:12:59.004450 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:59.006485 kubelet[2339]: E0819 
08:12:59.006258 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:59.009067 kubelet[2339]: E0819 08:12:59.007438 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:59.009067 kubelet[2339]: E0819 08:12:59.006355 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-0a67852594\" not found" node="ci-4426.0.0-a-0a67852594" Aug 19 08:12:59.009067 kubelet[2339]: E0819 08:12:59.007765 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:59.009067 kubelet[2339]: E0819 08:12:59.008857 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:12:59.053095 kubelet[2339]: E0819 08:12:59.053033 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-0a67852594\" not found" Aug 19 08:12:59.154207 kubelet[2339]: E0819 08:12:59.154145 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-0a67852594\" not found" Aug 19 08:12:59.254648 kubelet[2339]: E0819 08:12:59.254573 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-0a67852594\" not found" Aug 19 08:12:59.355769 kubelet[2339]: E0819 08:12:59.355265 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-0a67852594\" not found" Aug 19 08:12:59.386139 kubelet[2339]: I0819 08:12:59.386073 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.0.0-a-0a67852594" Aug 19 08:12:59.397164 kubelet[2339]: E0819 08:12:59.396871 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.0.0-a-0a67852594\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4426.0.0-a-0a67852594" Aug 19 08:12:59.397164 kubelet[2339]: I0819 08:12:59.396911 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.0.0-a-0a67852594" Aug 19 08:12:59.400746 kubelet[2339]: E0819 08:12:59.400694 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.0.0-a-0a67852594\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4426.0.0-a-0a67852594" Aug 19 08:12:59.401282 kubelet[2339]: I0819 08:12:59.400980 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:12:59.405647 kubelet[2339]: E0819 08:12:59.405174 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:12:59.850757 kubelet[2339]: I0819 08:12:59.850715 2339 apiserver.go:52] "Watching apiserver" Aug 19 08:12:59.885522 kubelet[2339]: I0819 
08:12:59.885464 2339 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 08:13:01.233929 systemd[1]: Reload requested from client PID 2615 ('systemctl') (unit session-7.scope)... Aug 19 08:13:01.233950 systemd[1]: Reloading... Aug 19 08:13:01.394647 zram_generator::config[2664]: No configuration found. Aug 19 08:13:01.864113 systemd[1]: Reloading finished in 629 ms. Aug 19 08:13:01.914260 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:13:01.931291 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 08:13:01.931636 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:13:01.931731 systemd[1]: kubelet.service: Consumed 1.647s CPU time, 127.3M memory peak. Aug 19 08:13:01.937323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:13:02.166927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:13:02.183531 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:13:02.281251 kubelet[2709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:13:02.281251 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 08:13:02.281251 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:13:02.281887 kubelet[2709]: I0819 08:13:02.281313 2709 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:13:02.300763 kubelet[2709]: I0819 08:13:02.300702 2709 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 19 08:13:02.300763 kubelet[2709]: I0819 08:13:02.300750 2709 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:13:02.301686 kubelet[2709]: I0819 08:13:02.301114 2709 server.go:956] "Client rotation is on, will bootstrap in background" Aug 19 08:13:02.304317 kubelet[2709]: I0819 08:13:02.303510 2709 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 19 08:13:02.307213 kubelet[2709]: I0819 08:13:02.307134 2709 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:13:02.320990 kubelet[2709]: I0819 08:13:02.320945 2709 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:13:02.332352 kubelet[2709]: I0819 08:13:02.330710 2709 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 08:13:02.332352 kubelet[2709]: I0819 08:13:02.331008 2709 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:13:02.332352 kubelet[2709]: I0819 08:13:02.331041 2709 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.0.0-a-0a67852594","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:13:02.332352 kubelet[2709]: I0819 08:13:02.331299 2709 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:13:02.332751 kubelet[2709]: I0819 08:13:02.331310 2709 container_manager_linux.go:303] "Creating device plugin manager" Aug 19 08:13:02.332751 kubelet[2709]: I0819 08:13:02.331360 2709 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:13:02.332751 kubelet[2709]: I0819 08:13:02.331574 2709 kubelet.go:480] "Attempting to sync node with API server" Aug 19 08:13:02.332751 kubelet[2709]: I0819 08:13:02.331611 2709 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:13:02.332751 kubelet[2709]: I0819 08:13:02.331640 2709 kubelet.go:386] "Adding apiserver pod source" Aug 19 08:13:02.332751 kubelet[2709]: I0819 08:13:02.331665 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:13:02.341286 sudo[2723]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 19 08:13:02.341876 sudo[2723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 19 08:13:02.345081 kubelet[2709]: I0819 08:13:02.345035 2709 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:13:02.351934 kubelet[2709]: I0819 08:13:02.351680 2709 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 19 08:13:02.365549 kubelet[2709]: I0819 08:13:02.364346 2709 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 08:13:02.365549 
kubelet[2709]: I0819 08:13:02.364426 2709 server.go:1289] "Started kubelet" Aug 19 08:13:02.369668 kubelet[2709]: I0819 08:13:02.369535 2709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:13:02.370640 kubelet[2709]: I0819 08:13:02.370415 2709 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:13:02.370640 kubelet[2709]: I0819 08:13:02.370520 2709 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:13:02.374568 kubelet[2709]: I0819 08:13:02.373164 2709 server.go:317] "Adding debug handlers to kubelet server" Aug 19 08:13:02.377647 kubelet[2709]: I0819 08:13:02.376740 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:13:02.384244 kubelet[2709]: I0819 08:13:02.384042 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:13:02.388649 kubelet[2709]: I0819 08:13:02.386103 2709 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 19 08:13:02.395503 kubelet[2709]: I0819 08:13:02.395265 2709 factory.go:223] Registration of the systemd container factory successfully Aug 19 08:13:02.395503 kubelet[2709]: I0819 08:13:02.395466 2709 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:13:02.400665 kubelet[2709]: I0819 08:13:02.400306 2709 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 08:13:02.403023 kubelet[2709]: I0819 08:13:02.402962 2709 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:13:02.405735 kubelet[2709]: I0819 08:13:02.404108 2709 factory.go:223] Registration of the containerd container factory successfully Aug 19 08:13:02.412894 kubelet[2709]: E0819 08:13:02.412798 2709 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:13:02.434671 kubelet[2709]: I0819 08:13:02.434370 2709 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 19 08:13:02.477304 kubelet[2709]: I0819 08:13:02.476024 2709 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 19 08:13:02.477304 kubelet[2709]: I0819 08:13:02.476161 2709 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 19 08:13:02.477304 kubelet[2709]: I0819 08:13:02.476198 2709 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
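Both kubelet starts log "Setting rate limiting for endpoint" for the podresources API with qps=100 and burstTokens=10. A minimal token-bucket sketch of what such a qps/burst pair means; the two numbers are from the log, the implementation below is illustrative and not the limiter the kubelet actually uses:

```python
import time

class TokenBucket:
    """Illustrative qps/burst limiter: refills at `qps` tokens per second, holds at most `burst`."""
    def __init__(self, qps: float = 100.0, burst: int = 10):
        self.qps, self.burst = qps, burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket()
print(sum(bucket.allow() for _ in range(20)))  # roughly the 10 burst tokens pass immediately
```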
Aug 19 08:13:02.477304 kubelet[2709]: I0819 08:13:02.476232 2709 kubelet.go:2436] "Starting kubelet main sync loop" Aug 19 08:13:02.477304 kubelet[2709]: E0819 08:13:02.476414 2709 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:13:02.552226 kubelet[2709]: I0819 08:13:02.552016 2709 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 08:13:02.552226 kubelet[2709]: I0819 08:13:02.552042 2709 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 08:13:02.552226 kubelet[2709]: I0819 08:13:02.552075 2709 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:13:02.552462 kubelet[2709]: I0819 08:13:02.552277 2709 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 19 08:13:02.552462 kubelet[2709]: I0819 08:13:02.552293 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 19 08:13:02.552462 kubelet[2709]: I0819 08:13:02.552318 2709 policy_none.go:49] "None policy: Start" Aug 19 08:13:02.552462 kubelet[2709]: I0819 08:13:02.552336 2709 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 08:13:02.552462 kubelet[2709]: I0819 08:13:02.552351 2709 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:13:02.552647 kubelet[2709]: I0819 08:13:02.552491 2709 state_mem.go:75] "Updated machine memory state" Aug 19 08:13:02.564169 kubelet[2709]: E0819 08:13:02.559727 2709 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 19 08:13:02.564169 kubelet[2709]: I0819 08:13:02.559999 2709 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:13:02.564169 kubelet[2709]: I0819 08:13:02.560017 2709 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:13:02.564169 kubelet[2709]: I0819 08:13:02.561239 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:13:02.571617 kubelet[2709]: E0819 08:13:02.570339 2709 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 19 08:13:02.580696 kubelet[2709]: I0819 08:13:02.580520 2709 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.585804 kubelet[2709]: I0819 08:13:02.581224 2709 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.585804 kubelet[2709]: I0819 08:13:02.585402 2709 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.608090 kubelet[2709]: I0819 08:13:02.608036 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3340775a5b33728a9c0b5d09103e9ba-kubeconfig\") pod \"kube-scheduler-ci-4426.0.0-a-0a67852594\" (UID: \"d3340775a5b33728a9c0b5d09103e9ba\") " pod="kube-system/kube-scheduler-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.608769 kubelet[2709]: I0819 08:13:02.608699 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7853bc82b06c8bfa9662438123bfc4da-ca-certs\") pod \"kube-apiserver-ci-4426.0.0-a-0a67852594\" (UID: \"7853bc82b06c8bfa9662438123bfc4da\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.609283 kubelet[2709]: I0819 08:13:02.609248 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7853bc82b06c8bfa9662438123bfc4da-k8s-certs\") pod \"kube-apiserver-ci-4426.0.0-a-0a67852594\" (UID: \"7853bc82b06c8bfa9662438123bfc4da\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.610043 kubelet[2709]: I0819 08:13:02.610001 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58ff7715fd0ef59a3cc50a5727296eb5-k8s-certs\") pod \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" (UID: \"58ff7715fd0ef59a3cc50a5727296eb5\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.611741 kubelet[2709]: I0819 08:13:02.610476 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58ff7715fd0ef59a3cc50a5727296eb5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" (UID: \"58ff7715fd0ef59a3cc50a5727296eb5\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.612280 kubelet[2709]: I0819 08:13:02.612187 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7853bc82b06c8bfa9662438123bfc4da-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.0.0-a-0a67852594\" (UID: \"7853bc82b06c8bfa9662438123bfc4da\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.612705 kubelet[2709]: I0819 08:13:02.612445 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58ff7715fd0ef59a3cc50a5727296eb5-ca-certs\") pod \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" (UID: \"58ff7715fd0ef59a3cc50a5727296eb5\") " 
pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.612989 kubelet[2709]: I0819 08:13:02.612829 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58ff7715fd0ef59a3cc50a5727296eb5-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" (UID: \"58ff7715fd0ef59a3cc50a5727296eb5\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.613146 kubelet[2709]: I0819 08:13:02.613097 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58ff7715fd0ef59a3cc50a5727296eb5-kubeconfig\") pod \"kube-controller-manager-ci-4426.0.0-a-0a67852594\" (UID: \"58ff7715fd0ef59a3cc50a5727296eb5\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.628315 kubelet[2709]: I0819 08:13:02.627797 2709 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 19 08:13:02.629386 kubelet[2709]: I0819 08:13:02.629347 2709 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 19 08:13:02.634291 kubelet[2709]: I0819 08:13:02.634023 2709 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 19 08:13:02.676494 kubelet[2709]: I0819 08:13:02.676437 2709 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.697158 kubelet[2709]: I0819 08:13:02.697013 2709 kubelet_node_status.go:124] "Node was previously registered" node="ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.697814 kubelet[2709]: I0819 08:13:02.697412 2709 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.0.0-a-0a67852594" Aug 19 08:13:02.930737 kubelet[2709]: E0819 08:13:02.930691 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:02.934636 kubelet[2709]: E0819 08:13:02.931319 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:02.936208 kubelet[2709]: E0819 08:13:02.934962 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:03.056728 sudo[2723]: pam_unix(sudo:session): session closed for user root Aug 19 08:13:03.344081 kubelet[2709]: I0819 08:13:03.343874 2709 apiserver.go:52] "Watching apiserver" Aug 19 08:13:03.403617 kubelet[2709]: I0819 08:13:03.403537 2709 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 08:13:03.461771 kubelet[2709]: I0819 08:13:03.459810 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.0.0-a-0a67852594" podStartSLOduration=1.459784583 podStartE2EDuration="1.459784583s" podCreationTimestamp="2025-08-19 08:13:02 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:13:03.459482554 +0000 UTC m=+1.265113044" watchObservedRunningTime="2025-08-19 08:13:03.459784583 +0000 UTC m=+1.265415071" Aug 19 08:13:03.462025 kubelet[2709]: I0819 08:13:03.461943 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426.0.0-a-0a67852594" podStartSLOduration=1.461916237 podStartE2EDuration="1.461916237s" podCreationTimestamp="2025-08-19 08:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:13:03.443368804 +0000 UTC m=+1.248999520" watchObservedRunningTime="2025-08-19 08:13:03.461916237 +0000 UTC m=+1.267546728" Aug 19 08:13:03.483332 kubelet[2709]: I0819 08:13:03.483238 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.0.0-a-0a67852594" podStartSLOduration=1.483216318 podStartE2EDuration="1.483216318s" podCreationTimestamp="2025-08-19 08:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:13:03.482822372 +0000 UTC m=+1.288452860" watchObservedRunningTime="2025-08-19 08:13:03.483216318 +0000 UTC m=+1.288846810" Aug 19 08:13:03.544306 kubelet[2709]: E0819 08:13:03.544242 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:03.545902 kubelet[2709]: E0819 08:13:03.545841 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:03.547900 kubelet[2709]: E0819 08:13:03.546750 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:04.547508 kubelet[2709]: E0819 08:13:04.547434 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:04.550840 kubelet[2709]: E0819 08:13:04.548775 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:04.922810 sudo[1781]: pam_unix(sudo:session): session closed for user root Aug 19 08:13:04.926366 sshd[1780]: Connection closed by 139.178.89.65 port 40334 Aug 19 08:13:04.927053 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Aug 19 08:13:04.933302 systemd[1]: sshd@6-143.198.65.59:22-139.178.89.65:40334.service: Deactivated successfully. Aug 19 08:13:04.937290 systemd[1]: session-7.scope: Deactivated successfully. Aug 19 08:13:04.937735 systemd[1]: session-7.scope: Consumed 6.477s CPU time, 223.2M memory peak. Aug 19 08:13:04.943820 systemd-logind[1500]: Session 7 logged out. Waiting for processes to exit. Aug 19 08:13:04.945633 systemd-logind[1500]: Removed session 7. 
Aug 19 08:13:05.097190 kubelet[2709]: E0819 08:13:05.097131 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:06.079419 kubelet[2709]: E0819 08:13:06.079054 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:06.208803 systemd-resolved[1404]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Aug 19 08:13:06.361061 systemd-timesyncd[1429]: Contacted time server 64.142.122.38:123 (2.flatcar.pool.ntp.org). Aug 19 08:13:06.361159 systemd-timesyncd[1429]: Initial clock synchronization to Tue 2025-08-19 08:13:06.474511 UTC. Aug 19 08:13:06.554922 kubelet[2709]: E0819 08:13:06.554555 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:07.418892 kubelet[2709]: I0819 08:13:07.418832 2709 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 08:13:07.420358 containerd[1522]: time="2025-08-19T08:13:07.420207556Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 08:13:07.421142 kubelet[2709]: I0819 08:13:07.420639 2709 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 08:13:08.403568 systemd[1]: Created slice kubepods-besteffort-pod76a0562b_62cb_4563_966d_dba1bb4f8877.slice - libcontainer container kubepods-besteffort-pod76a0562b_62cb_4563_966d_dba1bb4f8877.slice. Aug 19 08:13:08.423245 systemd[1]: Created slice kubepods-burstable-pod84c58333_6ada_4bef_9203_c687c293258f.slice - libcontainer container kubepods-burstable-pod84c58333_6ada_4bef_9203_c687c293258f.slice. 
Aug 19 08:13:08.452204 kubelet[2709]: I0819 08:13:08.452080 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84c58333-6ada-4bef-9203-c687c293258f-cilium-config-path\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.452204 kubelet[2709]: I0819 08:13:08.452146 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cb5c\" (UniqueName: \"kubernetes.io/projected/76a0562b-62cb-4563-966d-dba1bb4f8877-kube-api-access-5cb5c\") pod \"kube-proxy-fcqfn\" (UID: \"76a0562b-62cb-4563-966d-dba1bb4f8877\") " pod="kube-system/kube-proxy-fcqfn" Aug 19 08:13:08.453741 kubelet[2709]: I0819 08:13:08.452244 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-hostproc\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.453741 kubelet[2709]: I0819 08:13:08.452298 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cni-path\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.453741 kubelet[2709]: I0819 08:13:08.452315 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-host-proc-sys-net\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.453741 kubelet[2709]: I0819 08:13:08.452337 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76a0562b-62cb-4563-966d-dba1bb4f8877-kube-proxy\") pod \"kube-proxy-fcqfn\" (UID: \"76a0562b-62cb-4563-966d-dba1bb4f8877\") " pod="kube-system/kube-proxy-fcqfn" Aug 19 08:13:08.453741 kubelet[2709]: I0819 08:13:08.452360 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76a0562b-62cb-4563-966d-dba1bb4f8877-lib-modules\") pod \"kube-proxy-fcqfn\" (UID: \"76a0562b-62cb-4563-966d-dba1bb4f8877\") " pod="kube-system/kube-proxy-fcqfn" Aug 19 08:13:08.453741 kubelet[2709]: I0819 08:13:08.452387 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-host-proc-sys-kernel\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.454098 kubelet[2709]: I0819 08:13:08.452408 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84c58333-6ada-4bef-9203-c687c293258f-hubble-tls\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.454098 kubelet[2709]: I0819 08:13:08.452428 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76a0562b-62cb-4563-966d-dba1bb4f8877-xtables-lock\") pod \"kube-proxy-fcqfn\" (UID: \"76a0562b-62cb-4563-966d-dba1bb4f8877\") " pod="kube-system/kube-proxy-fcqfn" Aug 19 08:13:08.454098 kubelet[2709]: I0819 08:13:08.452450 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cilium-run\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.454098 kubelet[2709]: I0819 08:13:08.452468 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cilium-cgroup\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.454098 kubelet[2709]: I0819 08:13:08.452497 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84c58333-6ada-4bef-9203-c687c293258f-clustermesh-secrets\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.454098 kubelet[2709]: I0819 08:13:08.452512 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6sr2\" (UniqueName: \"kubernetes.io/projected/84c58333-6ada-4bef-9203-c687c293258f-kube-api-access-f6sr2\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.454358 kubelet[2709]: I0819 08:13:08.452529 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-bpf-maps\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.454358 kubelet[2709]: I0819 08:13:08.452545 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-etc-cni-netd\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.454358 kubelet[2709]: I0819 08:13:08.452564 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-lib-modules\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.454358 kubelet[2709]: I0819 08:13:08.452579 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-xtables-lock\") pod \"cilium-kcq8b\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " pod="kube-system/cilium-kcq8b" Aug 19 08:13:08.669012 systemd[1]: Created slice kubepods-besteffort-pod5a268a9e_c269_4baa_bc2c_583894c939b6.slice - libcontainer container kubepods-besteffort-pod5a268a9e_c269_4baa_bc2c_583894c939b6.slice. 
Aug 19 08:13:08.683742 kubelet[2709]: I0819 08:13:08.683675 2709 status_manager.go:895] "Failed to get status for pod" podUID="5a268a9e-c269-4baa-bc2c-583894c939b6" pod="kube-system/cilium-operator-6c4d7847fc-nlvfd" err="pods \"cilium-operator-6c4d7847fc-nlvfd\" is forbidden: User \"system:node:ci-4426.0.0-a-0a67852594\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4426.0.0-a-0a67852594' and this object" Aug 19 08:13:08.715286 kubelet[2709]: E0819 08:13:08.715011 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:08.716455 containerd[1522]: time="2025-08-19T08:13:08.716315981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fcqfn,Uid:76a0562b-62cb-4563-966d-dba1bb4f8877,Namespace:kube-system,Attempt:0,}" Aug 19 08:13:08.733074 kubelet[2709]: E0819 08:13:08.733012 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:08.734129 containerd[1522]: time="2025-08-19T08:13:08.734026648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kcq8b,Uid:84c58333-6ada-4bef-9203-c687c293258f,Namespace:kube-system,Attempt:0,}" Aug 19 08:13:08.765641 kubelet[2709]: I0819 08:13:08.758659 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a268a9e-c269-4baa-bc2c-583894c939b6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nlvfd\" (UID: \"5a268a9e-c269-4baa-bc2c-583894c939b6\") " pod="kube-system/cilium-operator-6c4d7847fc-nlvfd" Aug 19 08:13:08.765641 kubelet[2709]: I0819 08:13:08.759985 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb2th\" (UniqueName: \"kubernetes.io/projected/5a268a9e-c269-4baa-bc2c-583894c939b6-kube-api-access-vb2th\") pod \"cilium-operator-6c4d7847fc-nlvfd\" (UID: \"5a268a9e-c269-4baa-bc2c-583894c939b6\") " pod="kube-system/cilium-operator-6c4d7847fc-nlvfd" Aug 19 08:13:08.776325 containerd[1522]: time="2025-08-19T08:13:08.776249748Z" level=info msg="connecting to shim d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979" address="unix:///run/containerd/s/a5efd9e7943b3a868e5541c630b13c24de78dc73a4966e35837ad76e6570a5bd" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:13:08.776724 containerd[1522]: time="2025-08-19T08:13:08.776676767Z" level=info msg="connecting to shim b45f1699bb4080c503c42bcd999f2d1f4eec8d499644e0a54a8160852316bf24" address="unix:///run/containerd/s/30fb181db104e40c6af31e3e125f619d92dea3d34e98c8a19df8b36a5e8a62ad" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:13:08.839978 systemd[1]: Started cri-containerd-b45f1699bb4080c503c42bcd999f2d1f4eec8d499644e0a54a8160852316bf24.scope - libcontainer container b45f1699bb4080c503c42bcd999f2d1f4eec8d499644e0a54a8160852316bf24. Aug 19 08:13:08.843952 systemd[1]: Started cri-containerd-d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979.scope - libcontainer container d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979. 
Aug 19 08:13:08.919579 containerd[1522]: time="2025-08-19T08:13:08.919498988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kcq8b,Uid:84c58333-6ada-4bef-9203-c687c293258f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\"" Aug 19 08:13:08.922020 kubelet[2709]: E0819 08:13:08.921261 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:08.923868 containerd[1522]: time="2025-08-19T08:13:08.923762140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fcqfn,Uid:76a0562b-62cb-4563-966d-dba1bb4f8877,Namespace:kube-system,Attempt:0,} returns sandbox id \"b45f1699bb4080c503c42bcd999f2d1f4eec8d499644e0a54a8160852316bf24\"" Aug 19 08:13:08.924873 kubelet[2709]: E0819 08:13:08.924837 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:08.926605 containerd[1522]: time="2025-08-19T08:13:08.926541859Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 19 08:13:08.934420 containerd[1522]: time="2025-08-19T08:13:08.934356569Z" level=info msg="CreateContainer within sandbox \"b45f1699bb4080c503c42bcd999f2d1f4eec8d499644e0a54a8160852316bf24\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 08:13:08.951047 containerd[1522]: time="2025-08-19T08:13:08.950980168Z" level=info msg="Container 3d36f4c378ee08acc03494a21b2c3209e0a91d0bba6024035099bbba36b579c3: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:13:08.960621 containerd[1522]: time="2025-08-19T08:13:08.960492221Z" level=info msg="CreateContainer within sandbox \"b45f1699bb4080c503c42bcd999f2d1f4eec8d499644e0a54a8160852316bf24\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3d36f4c378ee08acc03494a21b2c3209e0a91d0bba6024035099bbba36b579c3\"" Aug 19 08:13:08.963936 containerd[1522]: time="2025-08-19T08:13:08.962942088Z" level=info msg="StartContainer for \"3d36f4c378ee08acc03494a21b2c3209e0a91d0bba6024035099bbba36b579c3\"" Aug 19 08:13:08.967864 containerd[1522]: time="2025-08-19T08:13:08.967797059Z" level=info msg="connecting to shim 3d36f4c378ee08acc03494a21b2c3209e0a91d0bba6024035099bbba36b579c3" address="unix:///run/containerd/s/30fb181db104e40c6af31e3e125f619d92dea3d34e98c8a19df8b36a5e8a62ad" protocol=ttrpc version=3 Aug 19 08:13:08.979885 kubelet[2709]: E0819 08:13:08.978119 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:08.981637 containerd[1522]: time="2025-08-19T08:13:08.981545844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nlvfd,Uid:5a268a9e-c269-4baa-bc2c-583894c939b6,Namespace:kube-system,Attempt:0,}" Aug 19 08:13:09.010853 systemd[1]: Started cri-containerd-3d36f4c378ee08acc03494a21b2c3209e0a91d0bba6024035099bbba36b579c3.scope - libcontainer container 3d36f4c378ee08acc03494a21b2c3209e0a91d0bba6024035099bbba36b579c3. 
Aug 19 08:13:09.025659 containerd[1522]: time="2025-08-19T08:13:09.023838966Z" level=info msg="connecting to shim 8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d" address="unix:///run/containerd/s/1924f3c709904186e9619b68d38df1a470f2de1178f4f052b5b492982c5cf888" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:13:09.083330 systemd[1]: Started cri-containerd-8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d.scope - libcontainer container 8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d. Aug 19 08:13:09.140351 containerd[1522]: time="2025-08-19T08:13:09.140266304Z" level=info msg="StartContainer for \"3d36f4c378ee08acc03494a21b2c3209e0a91d0bba6024035099bbba36b579c3\" returns successfully" Aug 19 08:13:09.187392 containerd[1522]: time="2025-08-19T08:13:09.184939554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nlvfd,Uid:5a268a9e-c269-4baa-bc2c-583894c939b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\"" Aug 19 08:13:09.192310 kubelet[2709]: E0819 08:13:09.191146 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:09.598726 kubelet[2709]: E0819 08:13:09.597302 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:09.619628 kubelet[2709]: I0819 08:13:09.619089 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fcqfn" podStartSLOduration=1.619070157 podStartE2EDuration="1.619070157s" podCreationTimestamp="2025-08-19 08:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:13:09.618782146 +0000 UTC m=+7.424412635" watchObservedRunningTime="2025-08-19 08:13:09.619070157 +0000 UTC m=+7.424700642" Aug 19 08:13:12.743947 kubelet[2709]: E0819 08:13:12.743802 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:13.609037 kubelet[2709]: E0819 08:13:13.608993 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:14.187086 update_engine[1501]: I20250819 08:13:14.186861 1501 update_attempter.cc:509] Updating boot flags... Aug 19 08:13:15.115161 kubelet[2709]: E0819 08:13:15.115113 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:15.425189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3868521965.mount: Deactivated successfully. 
Aug 19 08:13:18.137534 containerd[1522]: time="2025-08-19T08:13:18.137305397Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:13:18.138556 containerd[1522]: time="2025-08-19T08:13:18.138472811Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 19 08:13:18.139469 containerd[1522]: time="2025-08-19T08:13:18.139430055Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:13:18.141816 containerd[1522]: time="2025-08-19T08:13:18.141754544Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.214946895s" Aug 19 08:13:18.142101 containerd[1522]: time="2025-08-19T08:13:18.142049016Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 19 08:13:18.143768 containerd[1522]: time="2025-08-19T08:13:18.143721191Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 19 08:13:18.155405 containerd[1522]: time="2025-08-19T08:13:18.154917889Z" level=info msg="CreateContainer within sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 19 08:13:18.199751 containerd[1522]: time="2025-08-19T08:13:18.199701786Z" level=info msg="Container f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:13:18.203129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224208570.mount: Deactivated successfully. Aug 19 08:13:18.215819 containerd[1522]: time="2025-08-19T08:13:18.215762767Z" level=info msg="CreateContainer within sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230\"" Aug 19 08:13:18.216833 containerd[1522]: time="2025-08-19T08:13:18.216791583Z" level=info msg="StartContainer for \"f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230\"" Aug 19 08:13:18.219760 containerd[1522]: time="2025-08-19T08:13:18.219711374Z" level=info msg="connecting to shim f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230" address="unix:///run/containerd/s/a5efd9e7943b3a868e5541c630b13c24de78dc73a4966e35837ad76e6570a5bd" protocol=ttrpc version=3 Aug 19 08:13:18.252326 systemd[1]: Started cri-containerd-f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230.scope - libcontainer container f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230. 
Aug 19 08:13:18.298330 containerd[1522]: time="2025-08-19T08:13:18.298207875Z" level=info msg="StartContainer for \"f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230\" returns successfully" Aug 19 08:13:18.321776 systemd[1]: cri-containerd-f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230.scope: Deactivated successfully. Aug 19 08:13:18.322524 systemd[1]: cri-containerd-f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230.scope: Consumed 29ms CPU time, 6.7M memory peak, 4K read from disk, 3.2M written to disk. Aug 19 08:13:18.366619 containerd[1522]: time="2025-08-19T08:13:18.366541144Z" level=info msg="received exit event container_id:\"f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230\" id:\"f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230\" pid:3147 exited_at:{seconds:1755591198 nanos:327000323}" Aug 19 08:13:18.369414 containerd[1522]: time="2025-08-19T08:13:18.369345403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230\" id:\"f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230\" pid:3147 exited_at:{seconds:1755591198 nanos:327000323}" Aug 19 08:13:18.402305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230-rootfs.mount: Deactivated successfully. Aug 19 08:13:18.626141 kubelet[2709]: E0819 08:13:18.626088 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:18.632682 containerd[1522]: time="2025-08-19T08:13:18.631810883Z" level=info msg="CreateContainer within sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 19 08:13:18.644883 containerd[1522]: time="2025-08-19T08:13:18.644799989Z" level=info msg="Container 83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:13:18.660217 containerd[1522]: time="2025-08-19T08:13:18.660033553Z" level=info msg="CreateContainer within sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7\"" Aug 19 08:13:18.663335 containerd[1522]: time="2025-08-19T08:13:18.662946620Z" level=info msg="StartContainer for \"83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7\"" Aug 19 08:13:18.664925 containerd[1522]: time="2025-08-19T08:13:18.664844156Z" level=info msg="connecting to shim 83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7" address="unix:///run/containerd/s/a5efd9e7943b3a868e5541c630b13c24de78dc73a4966e35837ad76e6570a5bd" protocol=ttrpc version=3 Aug 19 08:13:18.695912 systemd[1]: Started cri-containerd-83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7.scope - libcontainer container 83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7. Aug 19 08:13:18.756571 containerd[1522]: time="2025-08-19T08:13:18.756404172Z" level=info msg="StartContainer for \"83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7\" returns successfully" Aug 19 08:13:18.778218 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Aug 19 08:13:18.778821 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:13:18.779784 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:13:18.784050 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:13:18.784305 systemd[1]: cri-containerd-83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7.scope: Deactivated successfully. Aug 19 08:13:18.789457 containerd[1522]: time="2025-08-19T08:13:18.789407873Z" level=info msg="received exit event container_id:\"83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7\" id:\"83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7\" pid:3193 exited_at:{seconds:1755591198 nanos:788985487}" Aug 19 08:13:18.790444 containerd[1522]: time="2025-08-19T08:13:18.790053231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7\" id:\"83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7\" pid:3193 exited_at:{seconds:1755591198 nanos:788985487}" Aug 19 08:13:18.822291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:13:19.632368 kubelet[2709]: E0819 08:13:19.631742 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:19.640267 containerd[1522]: time="2025-08-19T08:13:19.640217497Z" level=info msg="CreateContainer within sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 19 08:13:19.668817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461485884.mount: Deactivated successfully. Aug 19 08:13:19.670154 containerd[1522]: time="2025-08-19T08:13:19.669712038Z" level=info msg="Container 864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:13:19.684610 containerd[1522]: time="2025-08-19T08:13:19.684528924Z" level=info msg="CreateContainer within sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851\"" Aug 19 08:13:19.685511 containerd[1522]: time="2025-08-19T08:13:19.685204185Z" level=info msg="StartContainer for \"864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851\"" Aug 19 08:13:19.690653 containerd[1522]: time="2025-08-19T08:13:19.690554398Z" level=info msg="connecting to shim 864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851" address="unix:///run/containerd/s/a5efd9e7943b3a868e5541c630b13c24de78dc73a4966e35837ad76e6570a5bd" protocol=ttrpc version=3 Aug 19 08:13:19.731920 systemd[1]: Started cri-containerd-864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851.scope - libcontainer container 864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851. Aug 19 08:13:19.855601 systemd[1]: cri-containerd-864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851.scope: Deactivated successfully. 
Aug 19 08:13:19.864083 containerd[1522]: time="2025-08-19T08:13:19.863884389Z" level=info msg="received exit event container_id:\"864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851\" id:\"864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851\" pid:3239 exited_at:{seconds:1755591199 nanos:861787399}" Aug 19 08:13:19.864083 containerd[1522]: time="2025-08-19T08:13:19.864075265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851\" id:\"864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851\" pid:3239 exited_at:{seconds:1755591199 nanos:861787399}" Aug 19 08:13:19.866751 containerd[1522]: time="2025-08-19T08:13:19.865449937Z" level=info msg="StartContainer for \"864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851\" returns successfully" Aug 19 08:13:20.202051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851-rootfs.mount: Deactivated successfully. Aug 19 08:13:20.581199 containerd[1522]: time="2025-08-19T08:13:20.581115010Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:13:20.582403 containerd[1522]: time="2025-08-19T08:13:20.582337284Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 19 08:13:20.583612 containerd[1522]: time="2025-08-19T08:13:20.583014005Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:13:20.585495 containerd[1522]: time="2025-08-19T08:13:20.585433349Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.441457339s" Aug 19 08:13:20.585495 containerd[1522]: time="2025-08-19T08:13:20.585497469Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 19 08:13:20.592073 containerd[1522]: time="2025-08-19T08:13:20.591345782Z" level=info msg="CreateContainer within sandbox \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 19 08:13:20.606633 containerd[1522]: time="2025-08-19T08:13:20.605186592Z" level=info msg="Container 8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:13:20.612858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147231599.mount: Deactivated successfully. 
Aug 19 08:13:20.621516 containerd[1522]: time="2025-08-19T08:13:20.621418684Z" level=info msg="CreateContainer within sandbox \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\"" Aug 19 08:13:20.623443 containerd[1522]: time="2025-08-19T08:13:20.622796515Z" level=info msg="StartContainer for \"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\"" Aug 19 08:13:20.625624 containerd[1522]: time="2025-08-19T08:13:20.625473829Z" level=info msg="connecting to shim 8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39" address="unix:///run/containerd/s/1924f3c709904186e9619b68d38df1a470f2de1178f4f052b5b492982c5cf888" protocol=ttrpc version=3 Aug 19 08:13:20.650175 kubelet[2709]: E0819 08:13:20.650131 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:20.664727 containerd[1522]: time="2025-08-19T08:13:20.661798888Z" level=info msg="CreateContainer within sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 19 08:13:20.685964 systemd[1]: Started cri-containerd-8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39.scope - libcontainer container 8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39. Aug 19 08:13:20.705412 containerd[1522]: time="2025-08-19T08:13:20.705199273Z" level=info msg="Container f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:13:20.722109 containerd[1522]: time="2025-08-19T08:13:20.721977507Z" level=info msg="CreateContainer within sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b\"" Aug 19 08:13:20.723852 containerd[1522]: time="2025-08-19T08:13:20.723780335Z" level=info msg="StartContainer for \"f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b\"" Aug 19 08:13:20.728829 containerd[1522]: time="2025-08-19T08:13:20.728704336Z" level=info msg="connecting to shim f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b" address="unix:///run/containerd/s/a5efd9e7943b3a868e5541c630b13c24de78dc73a4966e35837ad76e6570a5bd" protocol=ttrpc version=3 Aug 19 08:13:20.772996 systemd[1]: Started cri-containerd-f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b.scope - libcontainer container f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b. Aug 19 08:13:20.799449 containerd[1522]: time="2025-08-19T08:13:20.799215120Z" level=info msg="StartContainer for \"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\" returns successfully" Aug 19 08:13:20.836344 systemd[1]: cri-containerd-f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b.scope: Deactivated successfully. 
Aug 19 08:13:20.840739 containerd[1522]: time="2025-08-19T08:13:20.840547951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b\" id:\"f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b\" pid:3319 exited_at:{seconds:1755591200 nanos:838369975}" Aug 19 08:13:20.841660 containerd[1522]: time="2025-08-19T08:13:20.840830766Z" level=info msg="received exit event container_id:\"f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b\" id:\"f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b\" pid:3319 exited_at:{seconds:1755591200 nanos:838369975}" Aug 19 08:13:20.858506 containerd[1522]: time="2025-08-19T08:13:20.858231013Z" level=info msg="StartContainer for \"f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b\" returns successfully" Aug 19 08:13:21.201099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898974181.mount: Deactivated successfully. Aug 19 08:13:21.656542 kubelet[2709]: E0819 08:13:21.656488 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:21.667890 kubelet[2709]: E0819 08:13:21.667850 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:21.674925 containerd[1522]: time="2025-08-19T08:13:21.674813669Z" level=info msg="CreateContainer within sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 19 08:13:21.717735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339178422.mount: Deactivated successfully. Aug 19 08:13:21.724519 containerd[1522]: time="2025-08-19T08:13:21.724326906Z" level=info msg="Container 04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:13:21.735492 containerd[1522]: time="2025-08-19T08:13:21.735344793Z" level=info msg="CreateContainer within sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\"" Aug 19 08:13:21.738044 containerd[1522]: time="2025-08-19T08:13:21.737987171Z" level=info msg="StartContainer for \"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\"" Aug 19 08:13:21.740066 containerd[1522]: time="2025-08-19T08:13:21.740016253Z" level=info msg="connecting to shim 04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a" address="unix:///run/containerd/s/a5efd9e7943b3a868e5541c630b13c24de78dc73a4966e35837ad76e6570a5bd" protocol=ttrpc version=3 Aug 19 08:13:21.783931 systemd[1]: Started cri-containerd-04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a.scope - libcontainer container 04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a. 
Aug 19 08:13:21.857477 kubelet[2709]: I0819 08:13:21.857285 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nlvfd" podStartSLOduration=2.4640050110000002 podStartE2EDuration="13.857253845s" podCreationTimestamp="2025-08-19 08:13:08 +0000 UTC" firstStartedPulling="2025-08-19 08:13:09.194215725 +0000 UTC m=+6.999846217" lastFinishedPulling="2025-08-19 08:13:20.587464584 +0000 UTC m=+18.393095051" observedRunningTime="2025-08-19 08:13:21.8569871 +0000 UTC m=+19.662617589" watchObservedRunningTime="2025-08-19 08:13:21.857253845 +0000 UTC m=+19.662884334" Aug 19 08:13:21.875714 containerd[1522]: time="2025-08-19T08:13:21.875662471Z" level=info msg="StartContainer for \"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" returns successfully" Aug 19 08:13:22.132531 containerd[1522]: time="2025-08-19T08:13:22.132443118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" id:\"9f88c5ac196695ecf3d664d04ed8bfc8185d8fdcca49a183ed6639caa027b0a4\" pid:3394 exited_at:{seconds:1755591202 nanos:131702467}" Aug 19 08:13:22.186170 kubelet[2709]: I0819 08:13:22.186130 2709 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 19 08:13:22.244450 systemd[1]: Created slice kubepods-burstable-pod40a1300b_2d45_4188_ad17_0e7fa3a54cee.slice - libcontainer container kubepods-burstable-pod40a1300b_2d45_4188_ad17_0e7fa3a54cee.slice. Aug 19 08:13:22.257270 kubelet[2709]: I0819 08:13:22.257227 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40a1300b-2d45-4188-ad17-0e7fa3a54cee-config-volume\") pod \"coredns-674b8bbfcf-vsgpt\" (UID: \"40a1300b-2d45-4188-ad17-0e7fa3a54cee\") " pod="kube-system/coredns-674b8bbfcf-vsgpt" Aug 19 08:13:22.257846 kubelet[2709]: I0819 08:13:22.257586 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7t6p\" (UniqueName: \"kubernetes.io/projected/40a1300b-2d45-4188-ad17-0e7fa3a54cee-kube-api-access-f7t6p\") pod \"coredns-674b8bbfcf-vsgpt\" (UID: \"40a1300b-2d45-4188-ad17-0e7fa3a54cee\") " pod="kube-system/coredns-674b8bbfcf-vsgpt" Aug 19 08:13:22.264730 systemd[1]: Created slice kubepods-burstable-podb52f709b_e815_4077_a4c0_0ccfa2731af5.slice - libcontainer container kubepods-burstable-podb52f709b_e815_4077_a4c0_0ccfa2731af5.slice. 
Aug 19 08:13:22.359632 kubelet[2709]: I0819 08:13:22.358545 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b52f709b-e815-4077-a4c0-0ccfa2731af5-config-volume\") pod \"coredns-674b8bbfcf-zxzrl\" (UID: \"b52f709b-e815-4077-a4c0-0ccfa2731af5\") " pod="kube-system/coredns-674b8bbfcf-zxzrl" Aug 19 08:13:22.360347 kubelet[2709]: I0819 08:13:22.360050 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfvsj\" (UniqueName: \"kubernetes.io/projected/b52f709b-e815-4077-a4c0-0ccfa2731af5-kube-api-access-gfvsj\") pod \"coredns-674b8bbfcf-zxzrl\" (UID: \"b52f709b-e815-4077-a4c0-0ccfa2731af5\") " pod="kube-system/coredns-674b8bbfcf-zxzrl" Aug 19 08:13:22.560820 kubelet[2709]: E0819 08:13:22.560743 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:22.562486 containerd[1522]: time="2025-08-19T08:13:22.562385144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vsgpt,Uid:40a1300b-2d45-4188-ad17-0e7fa3a54cee,Namespace:kube-system,Attempt:0,}" Aug 19 08:13:22.571021 kubelet[2709]: E0819 08:13:22.568908 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:22.572850 containerd[1522]: time="2025-08-19T08:13:22.572193840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zxzrl,Uid:b52f709b-e815-4077-a4c0-0ccfa2731af5,Namespace:kube-system,Attempt:0,}" Aug 19 08:13:22.766781 kubelet[2709]: E0819 08:13:22.766521 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:22.770238 kubelet[2709]: E0819 08:13:22.769379 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:22.912185 kubelet[2709]: I0819 08:13:22.911976 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kcq8b" podStartSLOduration=5.694146849 podStartE2EDuration="14.911948387s" podCreationTimestamp="2025-08-19 08:13:08 +0000 UTC" firstStartedPulling="2025-08-19 08:13:08.925555326 +0000 UTC m=+6.731185796" lastFinishedPulling="2025-08-19 08:13:18.143356851 +0000 UTC m=+15.948987334" observedRunningTime="2025-08-19 08:13:22.90876685 +0000 UTC m=+20.714397343" watchObservedRunningTime="2025-08-19 08:13:22.911948387 +0000 UTC m=+20.717578879" Aug 19 08:13:23.768450 kubelet[2709]: E0819 08:13:23.768050 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:24.458576 systemd-networkd[1456]: cilium_host: Link UP Aug 19 08:13:24.460060 systemd-networkd[1456]: cilium_net: Link UP Aug 19 08:13:24.460320 systemd-networkd[1456]: cilium_net: Gained carrier Aug 19 08:13:24.460552 systemd-networkd[1456]: cilium_host: Gained carrier Aug 19 08:13:24.642239 systemd-networkd[1456]: cilium_vxlan: Link UP Aug 19 08:13:24.642249 systemd-networkd[1456]: cilium_vxlan: Gained 
carrier Aug 19 08:13:24.771615 kubelet[2709]: E0819 08:13:24.771472 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:25.014227 systemd-networkd[1456]: cilium_host: Gained IPv6LL Aug 19 08:13:25.074717 kernel: NET: Registered PF_ALG protocol family Aug 19 08:13:25.141878 systemd-networkd[1456]: cilium_net: Gained IPv6LL Aug 19 08:13:26.106447 systemd-networkd[1456]: lxc_health: Link UP Aug 19 08:13:26.129409 systemd-networkd[1456]: lxc_health: Gained carrier Aug 19 08:13:26.229849 systemd-networkd[1456]: cilium_vxlan: Gained IPv6LL Aug 19 08:13:26.736538 kubelet[2709]: E0819 08:13:26.736490 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:26.773675 kernel: eth0: renamed from tmp08d1f Aug 19 08:13:26.781188 systemd-networkd[1456]: lxc1f1e4b090dd7: Link UP Aug 19 08:13:26.790009 kubelet[2709]: E0819 08:13:26.789873 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:26.802333 systemd-networkd[1456]: lxce63070d253c9: Link UP Aug 19 08:13:26.808734 kernel: eth0: renamed from tmp428d6 Aug 19 08:13:26.807074 systemd-networkd[1456]: lxc1f1e4b090dd7: Gained carrier Aug 19 08:13:26.814967 systemd-networkd[1456]: lxce63070d253c9: Gained carrier Aug 19 08:13:27.765934 systemd-networkd[1456]: lxc_health: Gained IPv6LL Aug 19 08:13:27.789390 kubelet[2709]: E0819 08:13:27.789316 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:28.085885 systemd-networkd[1456]: lxce63070d253c9: Gained IPv6LL Aug 19 08:13:28.469907 systemd-networkd[1456]: lxc1f1e4b090dd7: Gained IPv6LL Aug 19 08:13:32.508741 containerd[1522]: time="2025-08-19T08:13:32.508676991Z" level=info msg="connecting to shim 428d673609668b5966fefe44bc46b5287f291459f16744d917a06220fdeef3ed" address="unix:///run/containerd/s/b12bb0cb0461e7e6e2235e5bda7d8432bf9c512531afd6103d0340a8cbdb814e" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:13:32.533324 containerd[1522]: time="2025-08-19T08:13:32.533085975Z" level=info msg="connecting to shim 08d1f412b3033b144e9bc4a421a2e389fdcc23536aa19661b3a854862b7996db" address="unix:///run/containerd/s/05b74d95ba346fed4717409f956af630988335dfdd5a87b3d56e0d02690d4c14" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:13:32.596240 systemd[1]: Started cri-containerd-428d673609668b5966fefe44bc46b5287f291459f16744d917a06220fdeef3ed.scope - libcontainer container 428d673609668b5966fefe44bc46b5287f291459f16744d917a06220fdeef3ed. Aug 19 08:13:32.610685 systemd[1]: Started cri-containerd-08d1f412b3033b144e9bc4a421a2e389fdcc23536aa19661b3a854862b7996db.scope - libcontainer container 08d1f412b3033b144e9bc4a421a2e389fdcc23536aa19661b3a854862b7996db. 
Aug 19 08:13:32.714229 containerd[1522]: time="2025-08-19T08:13:32.714161963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zxzrl,Uid:b52f709b-e815-4077-a4c0-0ccfa2731af5,Namespace:kube-system,Attempt:0,} returns sandbox id \"08d1f412b3033b144e9bc4a421a2e389fdcc23536aa19661b3a854862b7996db\"" Aug 19 08:13:32.716977 kubelet[2709]: E0819 08:13:32.716509 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:32.721222 containerd[1522]: time="2025-08-19T08:13:32.721169180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vsgpt,Uid:40a1300b-2d45-4188-ad17-0e7fa3a54cee,Namespace:kube-system,Attempt:0,} returns sandbox id \"428d673609668b5966fefe44bc46b5287f291459f16744d917a06220fdeef3ed\"" Aug 19 08:13:32.723700 kubelet[2709]: E0819 08:13:32.723665 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:32.726224 containerd[1522]: time="2025-08-19T08:13:32.726032530Z" level=info msg="CreateContainer within sandbox \"08d1f412b3033b144e9bc4a421a2e389fdcc23536aa19661b3a854862b7996db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:13:32.729513 containerd[1522]: time="2025-08-19T08:13:32.729450835Z" level=info msg="CreateContainer within sandbox \"428d673609668b5966fefe44bc46b5287f291459f16744d917a06220fdeef3ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:13:32.757274 containerd[1522]: time="2025-08-19T08:13:32.757147773Z" level=info msg="Container 0fa037b92e4097f9554f14ddc5ca68cdbd85315d0a4d3789a9c7c04199f61bb4: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:13:32.758102 containerd[1522]: time="2025-08-19T08:13:32.757664091Z" level=info msg="Container 23157d93f8d9343874f4e6639e36275e7fa24d4aa198a37ed18f0f6448a3ec2d: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:13:32.768087 containerd[1522]: time="2025-08-19T08:13:32.767471974Z" level=info msg="CreateContainer within sandbox \"08d1f412b3033b144e9bc4a421a2e389fdcc23536aa19661b3a854862b7996db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23157d93f8d9343874f4e6639e36275e7fa24d4aa198a37ed18f0f6448a3ec2d\"" Aug 19 08:13:32.768627 containerd[1522]: time="2025-08-19T08:13:32.768422220Z" level=info msg="CreateContainer within sandbox \"428d673609668b5966fefe44bc46b5287f291459f16744d917a06220fdeef3ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fa037b92e4097f9554f14ddc5ca68cdbd85315d0a4d3789a9c7c04199f61bb4\"" Aug 19 08:13:32.769271 containerd[1522]: time="2025-08-19T08:13:32.769218647Z" level=info msg="StartContainer for \"23157d93f8d9343874f4e6639e36275e7fa24d4aa198a37ed18f0f6448a3ec2d\"" Aug 19 08:13:32.770450 containerd[1522]: time="2025-08-19T08:13:32.770401754Z" level=info msg="StartContainer for \"0fa037b92e4097f9554f14ddc5ca68cdbd85315d0a4d3789a9c7c04199f61bb4\"" Aug 19 08:13:32.773134 containerd[1522]: time="2025-08-19T08:13:32.773003703Z" level=info msg="connecting to shim 23157d93f8d9343874f4e6639e36275e7fa24d4aa198a37ed18f0f6448a3ec2d" address="unix:///run/containerd/s/05b74d95ba346fed4717409f956af630988335dfdd5a87b3d56e0d02690d4c14" protocol=ttrpc version=3 Aug 19 08:13:32.773308 containerd[1522]: time="2025-08-19T08:13:32.773280150Z" level=info msg="connecting to shim 
0fa037b92e4097f9554f14ddc5ca68cdbd85315d0a4d3789a9c7c04199f61bb4" address="unix:///run/containerd/s/b12bb0cb0461e7e6e2235e5bda7d8432bf9c512531afd6103d0340a8cbdb814e" protocol=ttrpc version=3 Aug 19 08:13:32.803905 systemd[1]: Started cri-containerd-0fa037b92e4097f9554f14ddc5ca68cdbd85315d0a4d3789a9c7c04199f61bb4.scope - libcontainer container 0fa037b92e4097f9554f14ddc5ca68cdbd85315d0a4d3789a9c7c04199f61bb4. Aug 19 08:13:32.815246 systemd[1]: Started cri-containerd-23157d93f8d9343874f4e6639e36275e7fa24d4aa198a37ed18f0f6448a3ec2d.scope - libcontainer container 23157d93f8d9343874f4e6639e36275e7fa24d4aa198a37ed18f0f6448a3ec2d. Aug 19 08:13:32.885083 containerd[1522]: time="2025-08-19T08:13:32.885031591Z" level=info msg="StartContainer for \"0fa037b92e4097f9554f14ddc5ca68cdbd85315d0a4d3789a9c7c04199f61bb4\" returns successfully" Aug 19 08:13:32.888613 containerd[1522]: time="2025-08-19T08:13:32.888531286Z" level=info msg="StartContainer for \"23157d93f8d9343874f4e6639e36275e7fa24d4aa198a37ed18f0f6448a3ec2d\" returns successfully" Aug 19 08:13:33.473272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933937800.mount: Deactivated successfully. Aug 19 08:13:33.840605 kubelet[2709]: E0819 08:13:33.840438 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:33.849504 kubelet[2709]: E0819 08:13:33.849287 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:33.866472 kubelet[2709]: I0819 08:13:33.866354 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zxzrl" podStartSLOduration=25.866336743 podStartE2EDuration="25.866336743s" podCreationTimestamp="2025-08-19 08:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:13:33.86312533 +0000 UTC m=+31.668755817" watchObservedRunningTime="2025-08-19 08:13:33.866336743 +0000 UTC m=+31.671967232" Aug 19 08:13:33.918846 kubelet[2709]: I0819 08:13:33.918549 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vsgpt" podStartSLOduration=25.918524653 podStartE2EDuration="25.918524653s" podCreationTimestamp="2025-08-19 08:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:13:33.91732901 +0000 UTC m=+31.722959500" watchObservedRunningTime="2025-08-19 08:13:33.918524653 +0000 UTC m=+31.724155140" Aug 19 08:13:34.852555 kubelet[2709]: E0819 08:13:34.852348 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:34.852555 kubelet[2709]: E0819 08:13:34.852410 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:35.854494 kubelet[2709]: E0819 08:13:35.854064 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 
08:13:35.855650 kubelet[2709]: E0819 08:13:35.855580 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:13:52.825135 systemd[1]: Started sshd@7-143.198.65.59:22-139.178.89.65:36904.service - OpenSSH per-connection server daemon (139.178.89.65:36904). Aug 19 08:13:52.951010 sshd[4052]: Accepted publickey for core from 139.178.89.65 port 36904 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:13:52.953328 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:13:52.964187 systemd-logind[1500]: New session 8 of user core. Aug 19 08:13:52.968969 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 19 08:13:53.735028 sshd[4055]: Connection closed by 139.178.89.65 port 36904 Aug 19 08:13:53.736077 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Aug 19 08:13:53.742126 systemd[1]: sshd@7-143.198.65.59:22-139.178.89.65:36904.service: Deactivated successfully. Aug 19 08:13:53.745660 systemd[1]: session-8.scope: Deactivated successfully. Aug 19 08:13:53.747196 systemd-logind[1500]: Session 8 logged out. Waiting for processes to exit. Aug 19 08:13:53.749609 systemd-logind[1500]: Removed session 8. Aug 19 08:13:58.751768 systemd[1]: Started sshd@8-143.198.65.59:22-139.178.89.65:36906.service - OpenSSH per-connection server daemon (139.178.89.65:36906). Aug 19 08:13:58.825811 sshd[4069]: Accepted publickey for core from 139.178.89.65 port 36906 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:13:58.827854 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:13:58.833967 systemd-logind[1500]: New session 9 of user core. Aug 19 08:13:58.843018 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 19 08:13:58.990416 sshd[4072]: Connection closed by 139.178.89.65 port 36906 Aug 19 08:13:58.991190 sshd-session[4069]: pam_unix(sshd:session): session closed for user core Aug 19 08:13:58.996111 systemd-logind[1500]: Session 9 logged out. Waiting for processes to exit. Aug 19 08:13:58.996425 systemd[1]: sshd@8-143.198.65.59:22-139.178.89.65:36906.service: Deactivated successfully. Aug 19 08:13:59.001243 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 08:13:59.006288 systemd-logind[1500]: Removed session 9. Aug 19 08:14:04.008992 systemd[1]: Started sshd@9-143.198.65.59:22-139.178.89.65:55278.service - OpenSSH per-connection server daemon (139.178.89.65:55278). Aug 19 08:14:04.100434 sshd[4088]: Accepted publickey for core from 139.178.89.65 port 55278 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:04.102707 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:04.109988 systemd-logind[1500]: New session 10 of user core. Aug 19 08:14:04.119958 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 19 08:14:04.265743 sshd[4091]: Connection closed by 139.178.89.65 port 55278 Aug 19 08:14:04.266646 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:04.272954 systemd[1]: sshd@9-143.198.65.59:22-139.178.89.65:55278.service: Deactivated successfully. Aug 19 08:14:04.275787 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 08:14:04.277480 systemd-logind[1500]: Session 10 logged out. Waiting for processes to exit. 
Aug 19 08:14:04.280436 systemd-logind[1500]: Removed session 10. Aug 19 08:14:09.287675 systemd[1]: Started sshd@10-143.198.65.59:22-139.178.89.65:46744.service - OpenSSH per-connection server daemon (139.178.89.65:46744). Aug 19 08:14:09.367556 sshd[4104]: Accepted publickey for core from 139.178.89.65 port 46744 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:09.370474 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:09.379126 systemd-logind[1500]: New session 11 of user core. Aug 19 08:14:09.383905 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 08:14:09.528286 sshd[4107]: Connection closed by 139.178.89.65 port 46744 Aug 19 08:14:09.529204 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:09.543682 systemd[1]: sshd@10-143.198.65.59:22-139.178.89.65:46744.service: Deactivated successfully. Aug 19 08:14:09.546546 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 08:14:09.547890 systemd-logind[1500]: Session 11 logged out. Waiting for processes to exit. Aug 19 08:14:09.552964 systemd[1]: Started sshd@11-143.198.65.59:22-139.178.89.65:46746.service - OpenSSH per-connection server daemon (139.178.89.65:46746). Aug 19 08:14:09.554342 systemd-logind[1500]: Removed session 11. Aug 19 08:14:09.619538 sshd[4122]: Accepted publickey for core from 139.178.89.65 port 46746 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:09.621773 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:09.630056 systemd-logind[1500]: New session 12 of user core. Aug 19 08:14:09.636909 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 19 08:14:09.834656 sshd[4125]: Connection closed by 139.178.89.65 port 46746 Aug 19 08:14:09.837152 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:09.852842 systemd[1]: sshd@11-143.198.65.59:22-139.178.89.65:46746.service: Deactivated successfully. Aug 19 08:14:09.857838 systemd[1]: session-12.scope: Deactivated successfully. Aug 19 08:14:09.862159 systemd-logind[1500]: Session 12 logged out. Waiting for processes to exit. Aug 19 08:14:09.873138 systemd[1]: Started sshd@12-143.198.65.59:22-139.178.89.65:46756.service - OpenSSH per-connection server daemon (139.178.89.65:46756). Aug 19 08:14:09.876414 systemd-logind[1500]: Removed session 12. Aug 19 08:14:09.976536 sshd[4135]: Accepted publickey for core from 139.178.89.65 port 46756 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:09.978860 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:09.987087 systemd-logind[1500]: New session 13 of user core. Aug 19 08:14:09.994917 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 19 08:14:10.158572 sshd[4138]: Connection closed by 139.178.89.65 port 46756 Aug 19 08:14:10.159572 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:10.165363 systemd-logind[1500]: Session 13 logged out. Waiting for processes to exit. Aug 19 08:14:10.165516 systemd[1]: sshd@12-143.198.65.59:22-139.178.89.65:46756.service: Deactivated successfully. Aug 19 08:14:10.168179 systemd[1]: session-13.scope: Deactivated successfully. Aug 19 08:14:10.172402 systemd-logind[1500]: Removed session 13. 
Aug 19 08:14:15.176047 systemd[1]: Started sshd@13-143.198.65.59:22-139.178.89.65:46770.service - OpenSSH per-connection server daemon (139.178.89.65:46770). Aug 19 08:14:15.266187 sshd[4151]: Accepted publickey for core from 139.178.89.65 port 46770 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:15.268907 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:15.276770 systemd-logind[1500]: New session 14 of user core. Aug 19 08:14:15.282883 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 19 08:14:15.434696 sshd[4154]: Connection closed by 139.178.89.65 port 46770 Aug 19 08:14:15.435932 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:15.442480 systemd[1]: sshd@13-143.198.65.59:22-139.178.89.65:46770.service: Deactivated successfully. Aug 19 08:14:15.445553 systemd[1]: session-14.scope: Deactivated successfully. Aug 19 08:14:15.447261 systemd-logind[1500]: Session 14 logged out. Waiting for processes to exit. Aug 19 08:14:15.450302 systemd-logind[1500]: Removed session 14. Aug 19 08:14:18.481887 kubelet[2709]: E0819 08:14:18.481835 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:14:20.454781 systemd[1]: Started sshd@14-143.198.65.59:22-139.178.89.65:58998.service - OpenSSH per-connection server daemon (139.178.89.65:58998). Aug 19 08:14:20.527764 sshd[4166]: Accepted publickey for core from 139.178.89.65 port 58998 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:20.529982 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:20.537798 systemd-logind[1500]: New session 15 of user core. Aug 19 08:14:20.544895 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 19 08:14:20.704914 sshd[4169]: Connection closed by 139.178.89.65 port 58998 Aug 19 08:14:20.707207 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:20.715724 systemd[1]: sshd@14-143.198.65.59:22-139.178.89.65:58998.service: Deactivated successfully. Aug 19 08:14:20.719492 systemd[1]: session-15.scope: Deactivated successfully. Aug 19 08:14:20.721565 systemd-logind[1500]: Session 15 logged out. Waiting for processes to exit. Aug 19 08:14:20.724533 systemd-logind[1500]: Removed session 15. Aug 19 08:14:22.480850 kubelet[2709]: E0819 08:14:22.480764 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:14:25.725462 systemd[1]: Started sshd@15-143.198.65.59:22-139.178.89.65:59004.service - OpenSSH per-connection server daemon (139.178.89.65:59004). Aug 19 08:14:25.793991 sshd[4180]: Accepted publickey for core from 139.178.89.65 port 59004 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:25.795547 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:25.801544 systemd-logind[1500]: New session 16 of user core. Aug 19 08:14:25.806861 systemd[1]: Started session-16.scope - Session 16 of User core. 
Aug 19 08:14:25.953503 sshd[4183]: Connection closed by 139.178.89.65 port 59004 Aug 19 08:14:25.954167 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:25.968403 systemd[1]: sshd@15-143.198.65.59:22-139.178.89.65:59004.service: Deactivated successfully. Aug 19 08:14:25.970893 systemd[1]: session-16.scope: Deactivated successfully. Aug 19 08:14:25.972178 systemd-logind[1500]: Session 16 logged out. Waiting for processes to exit. Aug 19 08:14:25.978097 systemd[1]: Started sshd@16-143.198.65.59:22-139.178.89.65:59012.service - OpenSSH per-connection server daemon (139.178.89.65:59012). Aug 19 08:14:25.982828 systemd-logind[1500]: Removed session 16. Aug 19 08:14:26.074327 sshd[4194]: Accepted publickey for core from 139.178.89.65 port 59012 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:26.076170 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:26.083338 systemd-logind[1500]: New session 17 of user core. Aug 19 08:14:26.087935 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 19 08:14:26.427153 sshd[4197]: Connection closed by 139.178.89.65 port 59012 Aug 19 08:14:26.428276 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:26.446370 systemd[1]: sshd@16-143.198.65.59:22-139.178.89.65:59012.service: Deactivated successfully. Aug 19 08:14:26.449935 systemd[1]: session-17.scope: Deactivated successfully. Aug 19 08:14:26.452305 systemd-logind[1500]: Session 17 logged out. Waiting for processes to exit. Aug 19 08:14:26.457274 systemd[1]: Started sshd@17-143.198.65.59:22-139.178.89.65:59016.service - OpenSSH per-connection server daemon (139.178.89.65:59016). Aug 19 08:14:26.458341 systemd-logind[1500]: Removed session 17. Aug 19 08:14:26.538942 sshd[4207]: Accepted publickey for core from 139.178.89.65 port 59016 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:26.541095 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:26.547911 systemd-logind[1500]: New session 18 of user core. Aug 19 08:14:26.552918 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 19 08:14:27.357603 sshd[4210]: Connection closed by 139.178.89.65 port 59016 Aug 19 08:14:27.358294 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:27.378710 systemd[1]: sshd@17-143.198.65.59:22-139.178.89.65:59016.service: Deactivated successfully. Aug 19 08:14:27.383179 systemd[1]: session-18.scope: Deactivated successfully. Aug 19 08:14:27.385724 systemd-logind[1500]: Session 18 logged out. Waiting for processes to exit. Aug 19 08:14:27.391150 systemd-logind[1500]: Removed session 18. Aug 19 08:14:27.394433 systemd[1]: Started sshd@18-143.198.65.59:22-139.178.89.65:59020.service - OpenSSH per-connection server daemon (139.178.89.65:59020). 
Aug 19 08:14:27.478871 kubelet[2709]: E0819 08:14:27.478828 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:14:27.480041 kubelet[2709]: E0819 08:14:27.479481 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:14:27.510286 sshd[4224]: Accepted publickey for core from 139.178.89.65 port 59020 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:27.513168 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:27.524271 systemd-logind[1500]: New session 19 of user core. Aug 19 08:14:27.531082 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 19 08:14:27.875731 sshd[4230]: Connection closed by 139.178.89.65 port 59020 Aug 19 08:14:27.878884 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:27.888774 systemd[1]: sshd@18-143.198.65.59:22-139.178.89.65:59020.service: Deactivated successfully. Aug 19 08:14:27.893548 systemd[1]: session-19.scope: Deactivated successfully. Aug 19 08:14:27.895095 systemd-logind[1500]: Session 19 logged out. Waiting for processes to exit. Aug 19 08:14:27.900905 systemd[1]: Started sshd@19-143.198.65.59:22-139.178.89.65:59024.service - OpenSSH per-connection server daemon (139.178.89.65:59024). Aug 19 08:14:27.903018 systemd-logind[1500]: Removed session 19. Aug 19 08:14:27.971466 sshd[4240]: Accepted publickey for core from 139.178.89.65 port 59024 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:27.973515 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:27.979421 systemd-logind[1500]: New session 20 of user core. Aug 19 08:14:27.994058 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 19 08:14:28.138092 sshd[4243]: Connection closed by 139.178.89.65 port 59024 Aug 19 08:14:28.138743 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:28.143299 systemd[1]: sshd@19-143.198.65.59:22-139.178.89.65:59024.service: Deactivated successfully. Aug 19 08:14:28.147331 systemd[1]: session-20.scope: Deactivated successfully. Aug 19 08:14:28.150473 systemd-logind[1500]: Session 20 logged out. Waiting for processes to exit. Aug 19 08:14:28.152200 systemd-logind[1500]: Removed session 20. Aug 19 08:14:33.156206 systemd[1]: Started sshd@20-143.198.65.59:22-139.178.89.65:50492.service - OpenSSH per-connection server daemon (139.178.89.65:50492). Aug 19 08:14:33.228290 sshd[4255]: Accepted publickey for core from 139.178.89.65 port 50492 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:33.230204 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:33.239200 systemd-logind[1500]: New session 21 of user core. Aug 19 08:14:33.245973 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 19 08:14:33.407414 sshd[4258]: Connection closed by 139.178.89.65 port 50492 Aug 19 08:14:33.408279 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:33.418064 systemd[1]: sshd@20-143.198.65.59:22-139.178.89.65:50492.service: Deactivated successfully. 
Aug 19 08:14:33.420972 systemd[1]: session-21.scope: Deactivated successfully. Aug 19 08:14:33.423331 systemd-logind[1500]: Session 21 logged out. Waiting for processes to exit. Aug 19 08:14:33.427488 systemd-logind[1500]: Removed session 21. Aug 19 08:14:38.422553 systemd[1]: Started sshd@21-143.198.65.59:22-139.178.89.65:50506.service - OpenSSH per-connection server daemon (139.178.89.65:50506). Aug 19 08:14:38.505103 sshd[4272]: Accepted publickey for core from 139.178.89.65 port 50506 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:38.506843 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:38.514893 systemd-logind[1500]: New session 22 of user core. Aug 19 08:14:38.519845 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 19 08:14:38.683881 sshd[4275]: Connection closed by 139.178.89.65 port 50506 Aug 19 08:14:38.690145 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:38.698807 systemd[1]: sshd@21-143.198.65.59:22-139.178.89.65:50506.service: Deactivated successfully. Aug 19 08:14:38.702003 systemd[1]: session-22.scope: Deactivated successfully. Aug 19 08:14:38.704119 systemd-logind[1500]: Session 22 logged out. Waiting for processes to exit. Aug 19 08:14:38.706346 systemd-logind[1500]: Removed session 22. Aug 19 08:14:39.477424 kubelet[2709]: E0819 08:14:39.477332 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:14:43.477503 kubelet[2709]: E0819 08:14:43.477461 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 19 08:14:43.705167 systemd[1]: Started sshd@22-143.198.65.59:22-139.178.89.65:47046.service - OpenSSH per-connection server daemon (139.178.89.65:47046). Aug 19 08:14:43.796976 sshd[4289]: Accepted publickey for core from 139.178.89.65 port 47046 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:43.799380 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:43.808550 systemd-logind[1500]: New session 23 of user core. Aug 19 08:14:43.812928 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 19 08:14:43.981723 sshd[4292]: Connection closed by 139.178.89.65 port 47046 Aug 19 08:14:43.980388 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:43.992423 systemd[1]: sshd@22-143.198.65.59:22-139.178.89.65:47046.service: Deactivated successfully. Aug 19 08:14:43.996489 systemd[1]: session-23.scope: Deactivated successfully. Aug 19 08:14:43.998701 systemd-logind[1500]: Session 23 logged out. Waiting for processes to exit. Aug 19 08:14:44.005322 systemd[1]: Started sshd@23-143.198.65.59:22-139.178.89.65:47056.service - OpenSSH per-connection server daemon (139.178.89.65:47056). Aug 19 08:14:44.007017 systemd-logind[1500]: Removed session 23. Aug 19 08:14:44.081985 sshd[4303]: Accepted publickey for core from 139.178.89.65 port 47056 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:44.085272 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:44.094342 systemd-logind[1500]: New session 24 of user core. 
Aug 19 08:14:44.097928 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 19 08:14:45.782307 containerd[1522]: time="2025-08-19T08:14:45.779569391Z" level=info msg="StopContainer for \"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\" with timeout 30 (s)" Aug 19 08:14:45.794220 containerd[1522]: time="2025-08-19T08:14:45.794126650Z" level=info msg="Stop container \"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\" with signal terminated" Aug 19 08:14:45.829772 systemd[1]: cri-containerd-8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39.scope: Deactivated successfully. Aug 19 08:14:45.832883 containerd[1522]: time="2025-08-19T08:14:45.832788257Z" level=info msg="received exit event container_id:\"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\" id:\"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\" pid:3296 exited_at:{seconds:1755591285 nanos:832237479}" Aug 19 08:14:45.833341 containerd[1522]: time="2025-08-19T08:14:45.833280396Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\" id:\"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\" pid:3296 exited_at:{seconds:1755591285 nanos:832237479}" Aug 19 08:14:45.834437 containerd[1522]: time="2025-08-19T08:14:45.834409063Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:14:45.843451 containerd[1522]: time="2025-08-19T08:14:45.843387709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" id:\"cf2c4bfa736d542cd7b8bf3b063038bde9762370586c348ecbec0654cb7771f5\" pid:4332 exited_at:{seconds:1755591285 nanos:842728606}" Aug 19 08:14:45.848677 containerd[1522]: time="2025-08-19T08:14:45.848567890Z" level=info msg="StopContainer for \"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" with timeout 2 (s)" Aug 19 08:14:45.849714 containerd[1522]: time="2025-08-19T08:14:45.849003442Z" level=info msg="Stop container \"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" with signal terminated" Aug 19 08:14:45.864697 systemd-networkd[1456]: lxc_health: Link DOWN Aug 19 08:14:45.864708 systemd-networkd[1456]: lxc_health: Lost carrier Aug 19 08:14:45.905940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39-rootfs.mount: Deactivated successfully. Aug 19 08:14:45.907525 systemd[1]: cri-containerd-04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a.scope: Deactivated successfully. Aug 19 08:14:45.908901 systemd[1]: cri-containerd-04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a.scope: Consumed 9.887s CPU time, 189M memory peak, 66M read from disk, 13.3M written to disk. 
Aug 19 08:14:45.912341 containerd[1522]: time="2025-08-19T08:14:45.911869678Z" level=info msg="received exit event container_id:\"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" id:\"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" pid:3364 exited_at:{seconds:1755591285 nanos:910440260}" Aug 19 08:14:45.912572 containerd[1522]: time="2025-08-19T08:14:45.912489717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" id:\"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" pid:3364 exited_at:{seconds:1755591285 nanos:910440260}" Aug 19 08:14:45.928195 containerd[1522]: time="2025-08-19T08:14:45.928010647Z" level=info msg="StopContainer for \"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\" returns successfully" Aug 19 08:14:45.928948 containerd[1522]: time="2025-08-19T08:14:45.928850485Z" level=info msg="StopPodSandbox for \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\"" Aug 19 08:14:45.929128 containerd[1522]: time="2025-08-19T08:14:45.929087729Z" level=info msg="Container to stop \"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:14:45.939695 systemd[1]: cri-containerd-8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d.scope: Deactivated successfully. Aug 19 08:14:45.947156 containerd[1522]: time="2025-08-19T08:14:45.946736460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" id:\"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" pid:2932 exit_status:137 exited_at:{seconds:1755591285 nanos:945032717}" Aug 19 08:14:45.950356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a-rootfs.mount: Deactivated successfully. 
Aug 19 08:14:45.965450 containerd[1522]: time="2025-08-19T08:14:45.965394223Z" level=info msg="StopContainer for \"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" returns successfully" Aug 19 08:14:45.967342 containerd[1522]: time="2025-08-19T08:14:45.966951768Z" level=info msg="StopPodSandbox for \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\"" Aug 19 08:14:45.967342 containerd[1522]: time="2025-08-19T08:14:45.967047471Z" level=info msg="Container to stop \"f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:14:45.967342 containerd[1522]: time="2025-08-19T08:14:45.967063875Z" level=info msg="Container to stop \"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:14:45.967342 containerd[1522]: time="2025-08-19T08:14:45.967077173Z" level=info msg="Container to stop \"f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:14:45.967342 containerd[1522]: time="2025-08-19T08:14:45.967090430Z" level=info msg="Container to stop \"83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:14:45.967342 containerd[1522]: time="2025-08-19T08:14:45.967103889Z" level=info msg="Container to stop \"864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:14:45.979573 systemd[1]: cri-containerd-d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979.scope: Deactivated successfully. Aug 19 08:14:46.043919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d-rootfs.mount: Deactivated successfully. Aug 19 08:14:46.050814 containerd[1522]: time="2025-08-19T08:14:46.050658372Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" id:\"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" pid:2853 exit_status:137 exited_at:{seconds:1755591285 nanos:982452943}" Aug 19 08:14:46.057070 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d-shm.mount: Deactivated successfully. Aug 19 08:14:46.064848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979-rootfs.mount: Deactivated successfully. 
Aug 19 08:14:46.067631 containerd[1522]: time="2025-08-19T08:14:46.067546251Z" level=info msg="TearDown network for sandbox \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" successfully" Aug 19 08:14:46.067853 containerd[1522]: time="2025-08-19T08:14:46.067830318Z" level=info msg="StopPodSandbox for \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" returns successfully" Aug 19 08:14:46.068799 containerd[1522]: time="2025-08-19T08:14:46.068763985Z" level=info msg="shim disconnected" id=8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d namespace=k8s.io Aug 19 08:14:46.069032 containerd[1522]: time="2025-08-19T08:14:46.068994619Z" level=warning msg="cleaning up after shim disconnected" id=8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d namespace=k8s.io Aug 19 08:14:46.070290 containerd[1522]: time="2025-08-19T08:14:46.070230731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 08:14:46.077496 containerd[1522]: time="2025-08-19T08:14:46.077302850Z" level=info msg="received exit event sandbox_id:\"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" exit_status:137 exited_at:{seconds:1755591285 nanos:982452943}" Aug 19 08:14:46.079053 containerd[1522]: time="2025-08-19T08:14:46.078723176Z" level=info msg="received exit event sandbox_id:\"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" exit_status:137 exited_at:{seconds:1755591285 nanos:945032717}" Aug 19 08:14:46.079408 containerd[1522]: time="2025-08-19T08:14:46.079357448Z" level=info msg="shim disconnected" id=d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979 namespace=k8s.io Aug 19 08:14:46.079480 containerd[1522]: time="2025-08-19T08:14:46.079411946Z" level=warning msg="cleaning up after shim disconnected" id=d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979 namespace=k8s.io Aug 19 08:14:46.079480 containerd[1522]: time="2025-08-19T08:14:46.079423675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 08:14:46.085630 containerd[1522]: time="2025-08-19T08:14:46.083830640Z" level=info msg="TearDown network for sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" successfully" Aug 19 08:14:46.085630 containerd[1522]: time="2025-08-19T08:14:46.083872082Z" level=info msg="StopPodSandbox for \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" returns successfully" Aug 19 08:14:46.195579 kubelet[2709]: I0819 08:14:46.195513 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84c58333-6ada-4bef-9203-c687c293258f-clustermesh-secrets\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.196544 kubelet[2709]: I0819 08:14:46.196498 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-hostproc\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.196683 kubelet[2709]: I0819 08:14:46.196669 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-host-proc-sys-net\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.196914 kubelet[2709]: I0819 
08:14:46.196898 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cilium-cgroup\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.196993 kubelet[2709]: I0819 08:14:46.196979 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-xtables-lock\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.197071 kubelet[2709]: I0819 08:14:46.196906 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-hostproc" (OuterVolumeSpecName: "hostproc") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:14:46.197143 kubelet[2709]: I0819 08:14:46.196831 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:14:46.197288 kubelet[2709]: I0819 08:14:46.196959 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:14:46.197288 kubelet[2709]: I0819 08:14:46.197128 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:14:46.197288 kubelet[2709]: I0819 08:14:46.197057 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-host-proc-sys-kernel\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.197288 kubelet[2709]: I0819 08:14:46.197226 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-bpf-maps\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.197288 kubelet[2709]: I0819 08:14:46.197260 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-etc-cni-netd\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.197627 kubelet[2709]: I0819 08:14:46.197403 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a268a9e-c269-4baa-bc2c-583894c939b6-cilium-config-path\") pod \"5a268a9e-c269-4baa-bc2c-583894c939b6\" (UID: \"5a268a9e-c269-4baa-bc2c-583894c939b6\") " Aug 19 08:14:46.197627 kubelet[2709]: I0819 08:14:46.197432 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb2th\" (UniqueName: \"kubernetes.io/projected/5a268a9e-c269-4baa-bc2c-583894c939b6-kube-api-access-vb2th\") pod \"5a268a9e-c269-4baa-bc2c-583894c939b6\" (UID: \"5a268a9e-c269-4baa-bc2c-583894c939b6\") " Aug 19 08:14:46.197627 kubelet[2709]: I0819 08:14:46.197472 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84c58333-6ada-4bef-9203-c687c293258f-cilium-config-path\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.197627 kubelet[2709]: I0819 08:14:46.197504 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84c58333-6ada-4bef-9203-c687c293258f-hubble-tls\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.197627 kubelet[2709]: I0819 08:14:46.197533 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cilium-run\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.197627 kubelet[2709]: I0819 08:14:46.197562 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6sr2\" (UniqueName: \"kubernetes.io/projected/84c58333-6ada-4bef-9203-c687c293258f-kube-api-access-f6sr2\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.198306 kubelet[2709]: I0819 08:14:46.197586 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-lib-modules\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: 
\"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.198306 kubelet[2709]: I0819 08:14:46.197647 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cni-path\") pod \"84c58333-6ada-4bef-9203-c687c293258f\" (UID: \"84c58333-6ada-4bef-9203-c687c293258f\") " Aug 19 08:14:46.198306 kubelet[2709]: I0819 08:14:46.197725 2709 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-host-proc-sys-kernel\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.198306 kubelet[2709]: I0819 08:14:46.197744 2709 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-hostproc\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.198306 kubelet[2709]: I0819 08:14:46.197763 2709 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-host-proc-sys-net\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.198306 kubelet[2709]: I0819 08:14:46.197781 2709 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cilium-cgroup\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.200941 kubelet[2709]: I0819 08:14:46.197533 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:14:46.201070 kubelet[2709]: I0819 08:14:46.197826 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cni-path" (OuterVolumeSpecName: "cni-path") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:14:46.201070 kubelet[2709]: I0819 08:14:46.200884 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:14:46.201617 kubelet[2709]: I0819 08:14:46.201418 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:14:46.201913 kubelet[2709]: I0819 08:14:46.201894 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:14:46.202008 kubelet[2709]: I0819 08:14:46.201996 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:14:46.209313 kubelet[2709]: I0819 08:14:46.209249 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c58333-6ada-4bef-9203-c687c293258f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 08:14:46.210808 kubelet[2709]: I0819 08:14:46.210748 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a268a9e-c269-4baa-bc2c-583894c939b6-kube-api-access-vb2th" (OuterVolumeSpecName: "kube-api-access-vb2th") pod "5a268a9e-c269-4baa-bc2c-583894c939b6" (UID: "5a268a9e-c269-4baa-bc2c-583894c939b6"). InnerVolumeSpecName "kube-api-access-vb2th". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 08:14:46.211360 kubelet[2709]: I0819 08:14:46.211309 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c58333-6ada-4bef-9203-c687c293258f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 19 08:14:46.211495 kubelet[2709]: I0819 08:14:46.211456 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a268a9e-c269-4baa-bc2c-583894c939b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5a268a9e-c269-4baa-bc2c-583894c939b6" (UID: "5a268a9e-c269-4baa-bc2c-583894c939b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 08:14:46.212002 kubelet[2709]: I0819 08:14:46.211962 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c58333-6ada-4bef-9203-c687c293258f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 08:14:46.213714 kubelet[2709]: I0819 08:14:46.213638 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c58333-6ada-4bef-9203-c687c293258f-kube-api-access-f6sr2" (OuterVolumeSpecName: "kube-api-access-f6sr2") pod "84c58333-6ada-4bef-9203-c687c293258f" (UID: "84c58333-6ada-4bef-9203-c687c293258f"). InnerVolumeSpecName "kube-api-access-f6sr2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 08:14:46.299245 kubelet[2709]: I0819 08:14:46.299066 2709 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-xtables-lock\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299245 kubelet[2709]: I0819 08:14:46.299121 2709 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-bpf-maps\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299245 kubelet[2709]: I0819 08:14:46.299138 2709 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-etc-cni-netd\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299245 kubelet[2709]: I0819 08:14:46.299152 2709 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a268a9e-c269-4baa-bc2c-583894c939b6-cilium-config-path\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299245 kubelet[2709]: I0819 08:14:46.299166 2709 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vb2th\" (UniqueName: \"kubernetes.io/projected/5a268a9e-c269-4baa-bc2c-583894c939b6-kube-api-access-vb2th\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299245 kubelet[2709]: I0819 08:14:46.299193 2709 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84c58333-6ada-4bef-9203-c687c293258f-cilium-config-path\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299245 kubelet[2709]: I0819 08:14:46.299205 2709 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84c58333-6ada-4bef-9203-c687c293258f-hubble-tls\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299245 kubelet[2709]: I0819 08:14:46.299219 2709 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cilium-run\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299752 kubelet[2709]: I0819 08:14:46.299240 2709 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f6sr2\" (UniqueName: \"kubernetes.io/projected/84c58333-6ada-4bef-9203-c687c293258f-kube-api-access-f6sr2\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299950 kubelet[2709]: I0819 08:14:46.299858 2709 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-lib-modules\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299950 kubelet[2709]: I0819 08:14:46.299908 2709 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84c58333-6ada-4bef-9203-c687c293258f-cni-path\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.299950 kubelet[2709]: I0819 08:14:46.299922 2709 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84c58333-6ada-4bef-9203-c687c293258f-clustermesh-secrets\") on node \"ci-4426.0.0-a-0a67852594\" DevicePath \"\"" Aug 19 08:14:46.491926 systemd[1]: Removed slice 
kubepods-besteffort-pod5a268a9e_c269_4baa_bc2c_583894c939b6.slice - libcontainer container kubepods-besteffort-pod5a268a9e_c269_4baa_bc2c_583894c939b6.slice. Aug 19 08:14:46.495479 systemd[1]: Removed slice kubepods-burstable-pod84c58333_6ada_4bef_9203_c687c293258f.slice - libcontainer container kubepods-burstable-pod84c58333_6ada_4bef_9203_c687c293258f.slice. Aug 19 08:14:46.495982 systemd[1]: kubepods-burstable-pod84c58333_6ada_4bef_9203_c687c293258f.slice: Consumed 10.020s CPU time, 189.3M memory peak, 66M read from disk, 16.6M written to disk. Aug 19 08:14:46.905041 systemd[1]: var-lib-kubelet-pods-5a268a9e\x2dc269\x2d4baa\x2dbc2c\x2d583894c939b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvb2th.mount: Deactivated successfully. Aug 19 08:14:46.905721 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979-shm.mount: Deactivated successfully. Aug 19 08:14:46.905805 systemd[1]: var-lib-kubelet-pods-84c58333\x2d6ada\x2d4bef\x2d9203\x2dc687c293258f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df6sr2.mount: Deactivated successfully. Aug 19 08:14:46.905891 systemd[1]: var-lib-kubelet-pods-84c58333\x2d6ada\x2d4bef\x2d9203\x2dc687c293258f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 19 08:14:46.905988 systemd[1]: var-lib-kubelet-pods-84c58333\x2d6ada\x2d4bef\x2d9203\x2dc687c293258f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 19 08:14:47.135618 kubelet[2709]: I0819 08:14:47.135552 2709 scope.go:117] "RemoveContainer" containerID="8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39" Aug 19 08:14:47.141332 containerd[1522]: time="2025-08-19T08:14:47.141049670Z" level=info msg="RemoveContainer for \"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\"" Aug 19 08:14:47.147396 containerd[1522]: time="2025-08-19T08:14:47.147303011Z" level=info msg="RemoveContainer for \"8ba8e786891cbc0ebe012b76f7b94fa39beddb33eee0d023b84bc209e022bc39\" returns successfully" Aug 19 08:14:47.152366 kubelet[2709]: I0819 08:14:47.151641 2709 scope.go:117] "RemoveContainer" containerID="04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a" Aug 19 08:14:47.159218 containerd[1522]: time="2025-08-19T08:14:47.159019315Z" level=info msg="RemoveContainer for \"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\"" Aug 19 08:14:47.163879 containerd[1522]: time="2025-08-19T08:14:47.163828655Z" level=info msg="RemoveContainer for \"04b2e3a857cccf3e940a507a39a2bc5e440e3fe3e1288fc0e7c4e7e03eeed53a\" returns successfully" Aug 19 08:14:47.164523 kubelet[2709]: I0819 08:14:47.164496 2709 scope.go:117] "RemoveContainer" containerID="f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b" Aug 19 08:14:47.169664 containerd[1522]: time="2025-08-19T08:14:47.168917523Z" level=info msg="RemoveContainer for \"f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b\"" Aug 19 08:14:47.176760 containerd[1522]: time="2025-08-19T08:14:47.176711462Z" level=info msg="RemoveContainer for \"f988c26bca02a3d136b8b4b0fbaff08b5fde350ddaf4fd50085a7ea84c84fe0b\" returns successfully" Aug 19 08:14:47.177293 kubelet[2709]: I0819 08:14:47.177171 2709 scope.go:117] "RemoveContainer" containerID="864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851" Aug 19 08:14:47.184492 containerd[1522]: time="2025-08-19T08:14:47.183522495Z" level=info msg="RemoveContainer for 
\"864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851\"" Aug 19 08:14:47.191285 containerd[1522]: time="2025-08-19T08:14:47.191169132Z" level=info msg="RemoveContainer for \"864bcdd77f3aaf02e2528bcfae57f7f018509e496a10393db7f16ec1739cc851\" returns successfully" Aug 19 08:14:47.191739 kubelet[2709]: I0819 08:14:47.191666 2709 scope.go:117] "RemoveContainer" containerID="83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7" Aug 19 08:14:47.194131 containerd[1522]: time="2025-08-19T08:14:47.194084684Z" level=info msg="RemoveContainer for \"83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7\"" Aug 19 08:14:47.197895 containerd[1522]: time="2025-08-19T08:14:47.197693359Z" level=info msg="RemoveContainer for \"83fdd16f5af6ba24fbea6f6fda3028fc25f743879058521aef0104ff8f23d8d7\" returns successfully" Aug 19 08:14:47.198183 kubelet[2709]: I0819 08:14:47.198137 2709 scope.go:117] "RemoveContainer" containerID="f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230" Aug 19 08:14:47.200871 containerd[1522]: time="2025-08-19T08:14:47.200816149Z" level=info msg="RemoveContainer for \"f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230\"" Aug 19 08:14:47.204798 containerd[1522]: time="2025-08-19T08:14:47.204744487Z" level=info msg="RemoveContainer for \"f9fdbe1c2e59106a8b670aa628a0b88d70f45d4f05eb8f91aef9d0a55a048230\" returns successfully" Aug 19 08:14:47.616575 kubelet[2709]: E0819 08:14:47.616476 2709 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 19 08:14:47.685642 sshd[4306]: Connection closed by 139.178.89.65 port 47056 Aug 19 08:14:47.686865 sshd-session[4303]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:47.697158 systemd[1]: sshd@23-143.198.65.59:22-139.178.89.65:47056.service: Deactivated successfully. Aug 19 08:14:47.700234 systemd[1]: session-24.scope: Deactivated successfully. Aug 19 08:14:47.702100 systemd-logind[1500]: Session 24 logged out. Waiting for processes to exit. Aug 19 08:14:47.707429 systemd[1]: Started sshd@24-143.198.65.59:22-139.178.89.65:47062.service - OpenSSH per-connection server daemon (139.178.89.65:47062). Aug 19 08:14:47.708709 systemd-logind[1500]: Removed session 24. Aug 19 08:14:47.810438 sshd[4460]: Accepted publickey for core from 139.178.89.65 port 47062 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw Aug 19 08:14:47.813417 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:14:47.821832 systemd-logind[1500]: New session 25 of user core. Aug 19 08:14:47.825827 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 19 08:14:48.438614 sshd[4463]: Connection closed by 139.178.89.65 port 47062 Aug 19 08:14:48.443262 sshd-session[4460]: pam_unix(sshd:session): session closed for user core Aug 19 08:14:48.462036 systemd[1]: sshd@24-143.198.65.59:22-139.178.89.65:47062.service: Deactivated successfully. Aug 19 08:14:48.467121 systemd[1]: session-25.scope: Deactivated successfully. Aug 19 08:14:48.469667 systemd-logind[1500]: Session 25 logged out. Waiting for processes to exit. Aug 19 08:14:48.482072 systemd[1]: Started sshd@25-143.198.65.59:22-139.178.89.65:47066.service - OpenSSH per-connection server daemon (139.178.89.65:47066). Aug 19 08:14:48.485557 systemd-logind[1500]: Removed session 25. 
Aug 19 08:14:48.492616 kubelet[2709]: I0819 08:14:48.491932 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a268a9e-c269-4baa-bc2c-583894c939b6" path="/var/lib/kubelet/pods/5a268a9e-c269-4baa-bc2c-583894c939b6/volumes"
Aug 19 08:14:48.495654 kubelet[2709]: I0819 08:14:48.495302 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c58333-6ada-4bef-9203-c687c293258f" path="/var/lib/kubelet/pods/84c58333-6ada-4bef-9203-c687c293258f/volumes"
Aug 19 08:14:48.525734 systemd[1]: Created slice kubepods-burstable-podeb12a7ae_d392_4356_a3c6_5c10ae2cb1d8.slice - libcontainer container kubepods-burstable-podeb12a7ae_d392_4356_a3c6_5c10ae2cb1d8.slice.
Aug 19 08:14:48.595133 sshd[4474]: Accepted publickey for core from 139.178.89.65 port 47066 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw
Aug 19 08:14:48.599121 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:14:48.610882 systemd-logind[1500]: New session 26 of user core.
Aug 19 08:14:48.617948 kubelet[2709]: I0819 08:14:48.617894 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-etc-cni-netd\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.618245 kubelet[2709]: I0819 08:14:48.618215 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-host-proc-sys-net\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.618398 kubelet[2709]: I0819 08:14:48.618371 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-hubble-tls\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.618674 kubelet[2709]: I0819 08:14:48.618651 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mld\" (UniqueName: \"kubernetes.io/projected/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-kube-api-access-98mld\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.618821 kubelet[2709]: I0819 08:14:48.618803 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-bpf-maps\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.618939 kubelet[2709]: I0819 08:14:48.618923 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-cni-path\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.619035 kubelet[2709]: I0819 08:14:48.619020 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-xtables-lock\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.619159 kubelet[2709]: I0819 08:14:48.619120 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-host-proc-sys-kernel\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.619294 kubelet[2709]: I0819 08:14:48.619271 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-cilium-run\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.619421 kubelet[2709]: I0819 08:14:48.619393 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-lib-modules\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.619523 kubelet[2709]: I0819 08:14:48.619508 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-hostproc\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.619642 kubelet[2709]: I0819 08:14:48.619624 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-clustermesh-secrets\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.619736 kubelet[2709]: I0819 08:14:48.619721 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-cilium-cgroup\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.619857 kubelet[2709]: I0819 08:14:48.619837 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-cilium-config-path\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.620058 kubelet[2709]: I0819 08:14:48.619953 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8-cilium-ipsec-secrets\") pod \"cilium-w2bfg\" (UID: \"eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8\") " pod="kube-system/cilium-w2bfg"
Aug 19 08:14:48.620386 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 19 08:14:48.688892 sshd[4477]: Connection closed by 139.178.89.65 port 47066
Aug 19 08:14:48.689751 sshd-session[4474]: pam_unix(sshd:session): session closed for user core
Aug 19 08:14:48.706892 systemd[1]: sshd@25-143.198.65.59:22-139.178.89.65:47066.service: Deactivated successfully.
Aug 19 08:14:48.713112 systemd[1]: session-26.scope: Deactivated successfully.
Aug 19 08:14:48.715142 systemd-logind[1500]: Session 26 logged out. Waiting for processes to exit.
Aug 19 08:14:48.723112 systemd-logind[1500]: Removed session 26.
Aug 19 08:14:48.724006 systemd[1]: Started sshd@26-143.198.65.59:22-139.178.89.65:47080.service - OpenSSH per-connection server daemon (139.178.89.65:47080).
Aug 19 08:14:48.834863 kubelet[2709]: E0819 08:14:48.834813 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:48.841742 containerd[1522]: time="2025-08-19T08:14:48.841670214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w2bfg,Uid:eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8,Namespace:kube-system,Attempt:0,}"
Aug 19 08:14:48.861494 sshd[4484]: Accepted publickey for core from 139.178.89.65 port 47080 ssh2: RSA SHA256:6ZPtP37Szh8vywX1QAhVHChkhw7gRGk7Yy96jCNh4bw
Aug 19 08:14:48.866136 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:14:48.873833 containerd[1522]: time="2025-08-19T08:14:48.873779123Z" level=info msg="connecting to shim 9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916" address="unix:///run/containerd/s/bc6d53229ca8bd21127b45bf99ae312fef50b666093dfc5cc400b35271bfe3ba" namespace=k8s.io protocol=ttrpc version=3
Aug 19 08:14:48.876219 systemd-logind[1500]: New session 27 of user core.
Aug 19 08:14:48.880860 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 19 08:14:48.906314 systemd[1]: Started cri-containerd-9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916.scope - libcontainer container 9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916.
Aug 19 08:14:48.945826 containerd[1522]: time="2025-08-19T08:14:48.945580087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w2bfg,Uid:eb12a7ae-d392-4356-a3c6-5c10ae2cb1d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\""
Aug 19 08:14:48.949382 kubelet[2709]: E0819 08:14:48.949344 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:48.958493 containerd[1522]: time="2025-08-19T08:14:48.958214798Z" level=info msg="CreateContainer within sandbox \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 19 08:14:48.968579 containerd[1522]: time="2025-08-19T08:14:48.968520973Z" level=info msg="Container ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:14:48.977702 containerd[1522]: time="2025-08-19T08:14:48.977629689Z" level=info msg="CreateContainer within sandbox \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad\""
Aug 19 08:14:48.979684 containerd[1522]: time="2025-08-19T08:14:48.979624806Z" level=info msg="StartContainer for \"ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad\""
Aug 19 08:14:48.982622 containerd[1522]: time="2025-08-19T08:14:48.982502269Z" level=info msg="connecting to shim ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad" address="unix:///run/containerd/s/bc6d53229ca8bd21127b45bf99ae312fef50b666093dfc5cc400b35271bfe3ba" protocol=ttrpc version=3
Aug 19 08:14:49.015217 systemd[1]: Started cri-containerd-ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad.scope - libcontainer container ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad.
Aug 19 08:14:49.066229 containerd[1522]: time="2025-08-19T08:14:49.066075443Z" level=info msg="StartContainer for \"ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad\" returns successfully"
Aug 19 08:14:49.100019 systemd[1]: cri-containerd-ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad.scope: Deactivated successfully.
Aug 19 08:14:49.100896 systemd[1]: cri-containerd-ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad.scope: Consumed 29ms CPU time, 9.6M memory peak, 2.9M read from disk.
Aug 19 08:14:49.103451 containerd[1522]: time="2025-08-19T08:14:49.103389311Z" level=info msg="received exit event container_id:\"ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad\" id:\"ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad\" pid:4556 exited_at:{seconds:1755591289 nanos:102539142}"
Aug 19 08:14:49.103848 containerd[1522]: time="2025-08-19T08:14:49.103452571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad\" id:\"ebe4329e4da2fe0c3374396ddf78d919a98f57b0cb2e7dab2bdc639b8334dcad\" pid:4556 exited_at:{seconds:1755591289 nanos:102539142}"
Aug 19 08:14:49.165068 kubelet[2709]: E0819 08:14:49.164988 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:49.174470 containerd[1522]: time="2025-08-19T08:14:49.174389951Z" level=info msg="CreateContainer within sandbox \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 19 08:14:49.203717 containerd[1522]: time="2025-08-19T08:14:49.202923354Z" level=info msg="Container 0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:14:49.210460 containerd[1522]: time="2025-08-19T08:14:49.210386411Z" level=info msg="CreateContainer within sandbox \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99\""
Aug 19 08:14:49.212130 containerd[1522]: time="2025-08-19T08:14:49.211430285Z" level=info msg="StartContainer for \"0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99\""
Aug 19 08:14:49.214522 containerd[1522]: time="2025-08-19T08:14:49.214462406Z" level=info msg="connecting to shim 0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99" address="unix:///run/containerd/s/bc6d53229ca8bd21127b45bf99ae312fef50b666093dfc5cc400b35271bfe3ba" protocol=ttrpc version=3
Aug 19 08:14:49.239896 systemd[1]: Started cri-containerd-0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99.scope - libcontainer container 0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99.
Aug 19 08:14:49.283619 containerd[1522]: time="2025-08-19T08:14:49.283147481Z" level=info msg="StartContainer for \"0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99\" returns successfully"
Aug 19 08:14:49.298642 systemd[1]: cri-containerd-0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99.scope: Deactivated successfully.
Aug 19 08:14:49.299464 systemd[1]: cri-containerd-0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99.scope: Consumed 27ms CPU time, 7.5M memory peak, 2.1M read from disk.
Aug 19 08:14:49.300348 containerd[1522]: time="2025-08-19T08:14:49.300291031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99\" id:\"0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99\" pid:4601 exited_at:{seconds:1755591289 nanos:298462070}"
Aug 19 08:14:49.300766 containerd[1522]: time="2025-08-19T08:14:49.300730323Z" level=info msg="received exit event container_id:\"0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99\" id:\"0b883f2ee7eb7a36a066233f2875f2cf20372734c4f84019bc23ec4826f11b99\" pid:4601 exited_at:{seconds:1755591289 nanos:298462070}"
Aug 19 08:14:50.169024 kubelet[2709]: E0819 08:14:50.168968 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:50.181633 containerd[1522]: time="2025-08-19T08:14:50.178882886Z" level=info msg="CreateContainer within sandbox \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 19 08:14:50.198515 containerd[1522]: time="2025-08-19T08:14:50.198468827Z" level=info msg="Container 40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:14:50.212080 containerd[1522]: time="2025-08-19T08:14:50.211914300Z" level=info msg="CreateContainer within sandbox \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78\""
Aug 19 08:14:50.213069 containerd[1522]: time="2025-08-19T08:14:50.213031264Z" level=info msg="StartContainer for \"40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78\""
Aug 19 08:14:50.217288 containerd[1522]: time="2025-08-19T08:14:50.217037938Z" level=info msg="connecting to shim 40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78" address="unix:///run/containerd/s/bc6d53229ca8bd21127b45bf99ae312fef50b666093dfc5cc400b35271bfe3ba" protocol=ttrpc version=3
Aug 19 08:14:50.265967 systemd[1]: Started cri-containerd-40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78.scope - libcontainer container 40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78.
Aug 19 08:14:50.335547 containerd[1522]: time="2025-08-19T08:14:50.335297735Z" level=info msg="StartContainer for \"40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78\" returns successfully"
Aug 19 08:14:50.341339 systemd[1]: cri-containerd-40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78.scope: Deactivated successfully.
Aug 19 08:14:50.347443 containerd[1522]: time="2025-08-19T08:14:50.347217974Z" level=info msg="received exit event container_id:\"40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78\" id:\"40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78\" pid:4644 exited_at:{seconds:1755591290 nanos:346815735}"
Aug 19 08:14:50.347997 containerd[1522]: time="2025-08-19T08:14:50.347921957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78\" id:\"40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78\" pid:4644 exited_at:{seconds:1755591290 nanos:346815735}"
Aug 19 08:14:50.387076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40fbbc7a340c79ce0c786c82ffe2c8e659fd5ee233cfdfca734c1381af10be78-rootfs.mount: Deactivated successfully.
Aug 19 08:14:51.176843 kubelet[2709]: E0819 08:14:51.176317 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:51.187808 containerd[1522]: time="2025-08-19T08:14:51.186884383Z" level=info msg="CreateContainer within sandbox \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 19 08:14:51.227943 containerd[1522]: time="2025-08-19T08:14:51.227870033Z" level=info msg="Container f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:14:51.236210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103424732.mount: Deactivated successfully.
Aug 19 08:14:51.254404 containerd[1522]: time="2025-08-19T08:14:51.253949025Z" level=info msg="CreateContainer within sandbox \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e\""
Aug 19 08:14:51.256279 containerd[1522]: time="2025-08-19T08:14:51.255795331Z" level=info msg="StartContainer for \"f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e\""
Aug 19 08:14:51.257650 containerd[1522]: time="2025-08-19T08:14:51.257562372Z" level=info msg="connecting to shim f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e" address="unix:///run/containerd/s/bc6d53229ca8bd21127b45bf99ae312fef50b666093dfc5cc400b35271bfe3ba" protocol=ttrpc version=3
Aug 19 08:14:51.294027 systemd[1]: Started cri-containerd-f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e.scope - libcontainer container f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e.
Aug 19 08:14:51.346439 systemd[1]: cri-containerd-f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e.scope: Deactivated successfully.
Aug 19 08:14:51.348355 containerd[1522]: time="2025-08-19T08:14:51.347543963Z" level=info msg="received exit event container_id:\"f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e\" id:\"f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e\" pid:4686 exited_at:{seconds:1755591291 nanos:347042786}"
Aug 19 08:14:51.349792 containerd[1522]: time="2025-08-19T08:14:51.349680265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e\" id:\"f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e\" pid:4686 exited_at:{seconds:1755591291 nanos:347042786}"
Aug 19 08:14:51.360481 containerd[1522]: time="2025-08-19T08:14:51.360250318Z" level=info msg="StartContainer for \"f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e\" returns successfully"
Aug 19 08:14:51.385401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f03836917dc36a6eb66618e70744d2610e364dc3af6d08e9a13ec4df0ecd837e-rootfs.mount: Deactivated successfully.
Aug 19 08:14:52.184376 kubelet[2709]: E0819 08:14:52.182857 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:52.188617 containerd[1522]: time="2025-08-19T08:14:52.188467208Z" level=info msg="CreateContainer within sandbox \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 19 08:14:52.203682 containerd[1522]: time="2025-08-19T08:14:52.201223855Z" level=info msg="Container 469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:14:52.219843 containerd[1522]: time="2025-08-19T08:14:52.219759065Z" level=info msg="CreateContainer within sandbox \"9a96b0453d0025278478055eeafe2487d839bd93093a916d3122a1839a50b916\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef\""
Aug 19 08:14:52.221614 containerd[1522]: time="2025-08-19T08:14:52.221137611Z" level=info msg="StartContainer for \"469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef\""
Aug 19 08:14:52.222792 containerd[1522]: time="2025-08-19T08:14:52.222751524Z" level=info msg="connecting to shim 469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef" address="unix:///run/containerd/s/bc6d53229ca8bd21127b45bf99ae312fef50b666093dfc5cc400b35271bfe3ba" protocol=ttrpc version=3
Aug 19 08:14:52.253027 systemd[1]: Started cri-containerd-469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef.scope - libcontainer container 469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef.
Aug 19 08:14:52.310181 containerd[1522]: time="2025-08-19T08:14:52.310122869Z" level=info msg="StartContainer for \"469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef\" returns successfully"
Aug 19 08:14:52.448905 containerd[1522]: time="2025-08-19T08:14:52.448286883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef\" id:\"e7fea35bf6789c360001edcc7c0f87e87a9a6624d304e4522c1ec5eeacc64324\" pid:4754 exited_at:{seconds:1755591292 nanos:446541258}"
Aug 19 08:14:52.912688 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Aug 19 08:14:53.192272 kubelet[2709]: E0819 08:14:53.192112 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:53.223635 kubelet[2709]: I0819 08:14:53.220945 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w2bfg" podStartSLOduration=5.220920701 podStartE2EDuration="5.220920701s" podCreationTimestamp="2025-08-19 08:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:14:53.220147379 +0000 UTC m=+111.025777866" watchObservedRunningTime="2025-08-19 08:14:53.220920701 +0000 UTC m=+111.026551191"
Aug 19 08:14:53.499331 containerd[1522]: time="2025-08-19T08:14:53.499267417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef\" id:\"019272c1f19661bda792c079290dd0ad0afc1bf531d32f475552480a3cfad83f\" pid:4832 exit_status:1 exited_at:{seconds:1755591293 nanos:498014473}"
Aug 19 08:14:54.479122 kubelet[2709]: E0819 08:14:54.479058 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:54.836975 kubelet[2709]: E0819 08:14:54.836707 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:55.680085 containerd[1522]: time="2025-08-19T08:14:55.679964807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef\" id:\"42683b6a114eb40ac6f00f0074428d61857402bae69b9e73fa7e40230b71def7\" pid:5080 exit_status:1 exited_at:{seconds:1755591295 nanos:679097753}"
Aug 19 08:14:56.419328 systemd-networkd[1456]: lxc_health: Link UP
Aug 19 08:14:56.424993 systemd-networkd[1456]: lxc_health: Gained carrier
Aug 19 08:14:56.837850 kubelet[2709]: E0819 08:14:56.837127 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:57.205620 kubelet[2709]: E0819 08:14:57.205355 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:58.005842 systemd-networkd[1456]: lxc_health: Gained IPv6LL
Aug 19 08:14:58.208677 kubelet[2709]: E0819 08:14:58.208547 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 19 08:14:58.258785 containerd[1522]: time="2025-08-19T08:14:58.257735917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef\" id:\"7d9b2a5983716cad951b63e70eec2db779fee6c59346ef4b456df5e2b679eaa5\" pid:5301 exited_at:{seconds:1755591298 nanos:256161251}"
Aug 19 08:15:00.500072 containerd[1522]: time="2025-08-19T08:15:00.500008090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef\" id:\"90ed0faa6e37c3e39c28f4c41c826bfaaec60e61f7c2693835cfb9d262bdca0e\" pid:5328 exited_at:{seconds:1755591300 nanos:499580143}"
Aug 19 08:15:02.420526 containerd[1522]: time="2025-08-19T08:15:02.419119856Z" level=info msg="StopPodSandbox for \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\""
Aug 19 08:15:02.420526 containerd[1522]: time="2025-08-19T08:15:02.419335679Z" level=info msg="TearDown network for sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" successfully"
Aug 19 08:15:02.420526 containerd[1522]: time="2025-08-19T08:15:02.419362209Z" level=info msg="StopPodSandbox for \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" returns successfully"
Aug 19 08:15:02.421293 containerd[1522]: time="2025-08-19T08:15:02.420703940Z" level=info msg="RemovePodSandbox for \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\""
Aug 19 08:15:02.421293 containerd[1522]: time="2025-08-19T08:15:02.420786533Z" level=info msg="Forcibly stopping sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\""
Aug 19 08:15:02.421293 containerd[1522]: time="2025-08-19T08:15:02.420937867Z" level=info msg="TearDown network for sandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" successfully"
Aug 19 08:15:02.422792 containerd[1522]: time="2025-08-19T08:15:02.422733472Z" level=info msg="Ensure that sandbox d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979 in task-service has been cleanup successfully"
Aug 19 08:15:02.427511 containerd[1522]: time="2025-08-19T08:15:02.427448616Z" level=info msg="RemovePodSandbox \"d23a17b849496f7900fe8bd469cb29dc9841024375d1bd0169dc030646fec979\" returns successfully"
Aug 19 08:15:02.430696 containerd[1522]: time="2025-08-19T08:15:02.429973701Z" level=info msg="StopPodSandbox for \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\""
Aug 19 08:15:02.430696 containerd[1522]: time="2025-08-19T08:15:02.430296901Z" level=info msg="TearDown network for sandbox \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" successfully"
Aug 19 08:15:02.430696 containerd[1522]: time="2025-08-19T08:15:02.430324057Z" level=info msg="StopPodSandbox for \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" returns successfully"
Aug 19 08:15:02.431414 containerd[1522]: time="2025-08-19T08:15:02.431175703Z" level=info msg="RemovePodSandbox for \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\""
Aug 19 08:15:02.431414 containerd[1522]: time="2025-08-19T08:15:02.431225760Z" level=info msg="Forcibly stopping sandbox \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\""
Aug 19 08:15:02.431414 containerd[1522]: time="2025-08-19T08:15:02.431367793Z" level=info msg="TearDown network for sandbox \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" successfully"
Aug 19 08:15:02.435009 containerd[1522]: time="2025-08-19T08:15:02.434934759Z" level=info msg="Ensure that sandbox 8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d in task-service has been cleanup successfully"
Aug 19 08:15:02.438946 containerd[1522]: time="2025-08-19T08:15:02.438848692Z" level=info msg="RemovePodSandbox \"8dff0fac8ab0b2d6273d5eee6f3126b830ab21021933c71a00dd8669d8472d5d\" returns successfully"
Aug 19 08:15:02.835110 containerd[1522]: time="2025-08-19T08:15:02.834898947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"469116d24cad9f0bad558620d8d2c1a9e909cf1aeb3106f1ff2156088728a6ef\" id:\"68d7c13c6f3fe38d1e3f8e6d83bcca6d5ee90c661c17f7327a068803a91a3229\" pid:5356 exited_at:{seconds:1755591302 nanos:834089229}"
Aug 19 08:15:02.878736 sshd[4518]: Connection closed by 139.178.89.65 port 47080
Aug 19 08:15:02.880006 sshd-session[4484]: pam_unix(sshd:session): session closed for user core
Aug 19 08:15:02.889919 systemd-logind[1500]: Session 27 logged out. Waiting for processes to exit.
Aug 19 08:15:02.890135 systemd[1]: sshd@26-143.198.65.59:22-139.178.89.65:47080.service: Deactivated successfully.
Aug 19 08:15:02.897291 systemd[1]: session-27.scope: Deactivated successfully.
Aug 19 08:15:02.908538 systemd-logind[1500]: Removed session 27.
Aug 19 08:15:05.479752 kubelet[2709]: E0819 08:15:05.479104 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"