Sep 14 12:15:37.870897 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 20:38:35 -00 2025
Sep 14 12:15:37.870929 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 14 12:15:37.870947 kernel: BIOS-provided physical RAM map:
Sep 14 12:15:37.870954 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 14 12:15:37.870960 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 14 12:15:37.870967 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 14 12:15:37.870975 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 14 12:15:37.870985 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 14 12:15:37.870995 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 14 12:15:37.871002 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 14 12:15:37.871009 kernel: NX (Execute Disable) protection: active
Sep 14 12:15:37.871016 kernel: APIC: Static calls initialized
Sep 14 12:15:37.871023 kernel: SMBIOS 2.8 present.
Sep 14 12:15:37.871030 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 14 12:15:37.871046 kernel: DMI: Memory slots populated: 1/1
Sep 14 12:15:37.871054 kernel: Hypervisor detected: KVM
Sep 14 12:15:37.871064 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 14 12:15:37.871072 kernel: kvm-clock: using sched offset of 4552176155 cycles
Sep 14 12:15:37.871081 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 14 12:15:37.871089 kernel: tsc: Detected 2494.140 MHz processor
Sep 14 12:15:37.871097 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 14 12:15:37.871105 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 14 12:15:37.871113 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 14 12:15:37.871125 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 14 12:15:37.871133 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 14 12:15:37.871141 kernel: ACPI: Early table checksum verification disabled
Sep 14 12:15:37.871149 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 14 12:15:37.871157 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 14 12:15:37.871165 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 14 12:15:37.871173 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 14 12:15:37.871180 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 14 12:15:37.871188 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 14 12:15:37.871199 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 14 12:15:37.871207 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 14 12:15:37.871214 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 14 12:15:37.871222 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 14 12:15:37.871230 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 14 12:15:37.871238 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 14 12:15:37.871246 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 14 12:15:37.871254 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 14 12:15:37.871269 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 14 12:15:37.871277 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 14 12:15:37.871285 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 14 12:15:37.871294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 14 12:15:37.871302 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Sep 14 12:15:37.871313 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Sep 14 12:15:37.871321 kernel: Zone ranges:
Sep 14 12:15:37.871329 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 14 12:15:37.871338 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 14 12:15:37.871346 kernel: Normal empty
Sep 14 12:15:37.871354 kernel: Device empty
Sep 14 12:15:37.871362 kernel: Movable zone start for each node
Sep 14 12:15:37.871370 kernel: Early memory node ranges
Sep 14 12:15:37.871378 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 14 12:15:37.871386 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 14 12:15:37.871398 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 14 12:15:37.871406 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 14 12:15:37.871414 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 14 12:15:37.871422 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 14 12:15:37.871430 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 14 12:15:37.871439 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 14 12:15:37.871450 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 14 12:15:37.871458 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 14 12:15:37.871469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 14 12:15:37.871480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 14 12:15:37.871488 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 14 12:15:37.871498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 14 12:15:37.871506 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 14 12:15:37.871514 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 14 12:15:37.871523 kernel: TSC deadline timer available
Sep 14 12:15:37.871531 kernel: CPU topo: Max. logical packages: 1
Sep 14 12:15:37.871539 kernel: CPU topo: Max. logical dies: 1
Sep 14 12:15:37.871547 kernel: CPU topo: Max. dies per package: 1
Sep 14 12:15:37.871558 kernel: CPU topo: Max. threads per core: 1
Sep 14 12:15:37.871566 kernel: CPU topo: Num. cores per package: 2
Sep 14 12:15:37.871574 kernel: CPU topo: Num. threads per package: 2
Sep 14 12:15:37.871582 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 14 12:15:37.871599 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 14 12:15:37.871608 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 14 12:15:37.871617 kernel: Booting paravirtualized kernel on KVM
Sep 14 12:15:37.871630 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 14 12:15:37.871641 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 14 12:15:37.871652 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 14 12:15:37.871668 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 14 12:15:37.871679 kernel: pcpu-alloc: [0] 0 1
Sep 14 12:15:37.871689 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 14 12:15:37.871702 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 14 12:15:37.871715 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 14 12:15:37.871726 kernel: random: crng init done
Sep 14 12:15:37.871734 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 14 12:15:37.871742 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 14 12:15:37.871754 kernel: Fallback order for Node 0: 0
Sep 14 12:15:37.871763 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Sep 14 12:15:37.871771 kernel: Policy zone: DMA32
Sep 14 12:15:37.871779 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 14 12:15:37.871788 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 14 12:15:37.871796 kernel: Kernel/User page tables isolation: enabled
Sep 14 12:15:37.871804 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 14 12:15:37.871812 kernel: ftrace: allocated 157 pages with 5 groups
Sep 14 12:15:37.871820 kernel: Dynamic Preempt: voluntary
Sep 14 12:15:37.871832 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 14 12:15:37.871846 kernel: rcu: RCU event tracing is enabled.
Sep 14 12:15:37.871854 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 14 12:15:37.871863 kernel: Trampoline variant of Tasks RCU enabled.
Sep 14 12:15:37.871871 kernel: Rude variant of Tasks RCU enabled.
Sep 14 12:15:37.871879 kernel: Tracing variant of Tasks RCU enabled.
Sep 14 12:15:37.871887 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 14 12:15:37.871895 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 14 12:15:37.871904 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 14 12:15:37.871918 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 14 12:15:37.871927 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 14 12:15:37.871935 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 14 12:15:37.871943 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 14 12:15:37.871951 kernel: Console: colour VGA+ 80x25
Sep 14 12:15:37.871959 kernel: printk: legacy console [tty0] enabled
Sep 14 12:15:37.871967 kernel: printk: legacy console [ttyS0] enabled
Sep 14 12:15:37.871975 kernel: ACPI: Core revision 20240827
Sep 14 12:15:37.871984 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 14 12:15:37.872010 kernel: APIC: Switch to symmetric I/O mode setup
Sep 14 12:15:37.872019 kernel: x2apic enabled
Sep 14 12:15:37.872028 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 14 12:15:37.872040 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 14 12:15:37.872051 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 14 12:15:37.872060 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep 14 12:15:37.872069 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 14 12:15:37.872077 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 14 12:15:37.872086 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 14 12:15:37.872098 kernel: Spectre V2 : Mitigation: Retpolines
Sep 14 12:15:37.872107 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 14 12:15:37.872116 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 14 12:15:37.872125 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 14 12:15:37.872134 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 14 12:15:37.872142 kernel: MDS: Mitigation: Clear CPU buffers
Sep 14 12:15:37.872151 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 14 12:15:37.872163 kernel: active return thunk: its_return_thunk
Sep 14 12:15:37.872172 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 14 12:15:37.872180 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 14 12:15:37.872189 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 14 12:15:37.872197 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 14 12:15:37.872206 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 14 12:15:37.872219 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 14 12:15:37.872228 kernel: Freeing SMP alternatives memory: 32K
Sep 14 12:15:37.872236 kernel: pid_max: default: 32768 minimum: 301
Sep 14 12:15:37.872248 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 14 12:15:37.872257 kernel: landlock: Up and running.
Sep 14 12:15:37.872266 kernel: SELinux: Initializing.
Sep 14 12:15:37.872274 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 14 12:15:37.872283 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 14 12:15:37.872292 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 14 12:15:37.872301 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 14 12:15:37.872309 kernel: signal: max sigframe size: 1776
Sep 14 12:15:37.872318 kernel: rcu: Hierarchical SRCU implementation.
Sep 14 12:15:37.872330 kernel: rcu: Max phase no-delay instances is 400.
Sep 14 12:15:37.872339 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 14 12:15:37.872348 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 14 12:15:37.872356 kernel: smp: Bringing up secondary CPUs ...
Sep 14 12:15:37.872367 kernel: smpboot: x86: Booting SMP configuration:
Sep 14 12:15:37.872376 kernel: .... node #0, CPUs: #1
Sep 14 12:15:37.872384 kernel: smp: Brought up 1 node, 2 CPUs
Sep 14 12:15:37.872393 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep 14 12:15:37.872405 kernel: Memory: 1966916K/2096612K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54084K init, 2880K bss, 125140K reserved, 0K cma-reserved)
Sep 14 12:15:37.872418 kernel: devtmpfs: initialized
Sep 14 12:15:37.872426 kernel: x86/mm: Memory block size: 128MB
Sep 14 12:15:37.872435 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 14 12:15:37.872444 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 14 12:15:37.872453 kernel: pinctrl core: initialized pinctrl subsystem
Sep 14 12:15:37.872462 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 14 12:15:37.872471 kernel: audit: initializing netlink subsys (disabled)
Sep 14 12:15:37.872484 kernel: audit: type=2000 audit(1757852134.351:1): state=initialized audit_enabled=0 res=1
Sep 14 12:15:37.872493 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 14 12:15:37.872505 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 14 12:15:37.872514 kernel: cpuidle: using governor menu
Sep 14 12:15:37.872522 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 14 12:15:37.872531 kernel: dca service started, version 1.12.1
Sep 14 12:15:37.872539 kernel: PCI: Using configuration type 1 for base access
Sep 14 12:15:37.872548 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 14 12:15:37.872557 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 14 12:15:37.872565 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 14 12:15:37.872574 kernel: ACPI: Added _OSI(Module Device)
Sep 14 12:15:37.872586 kernel: ACPI: Added _OSI(Processor Device)
Sep 14 12:15:37.872605 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 14 12:15:37.872614 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 14 12:15:37.872623 kernel: ACPI: Interpreter enabled
Sep 14 12:15:37.872631 kernel: ACPI: PM: (supports S0 S5)
Sep 14 12:15:37.872640 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 14 12:15:37.872649 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 14 12:15:37.872658 kernel: PCI: Using E820 reservations for host bridge windows
Sep 14 12:15:37.872666 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 14 12:15:37.872678 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 14 12:15:37.872918 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 14 12:15:37.873068 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 14 12:15:37.873165 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 14 12:15:37.873177 kernel: acpiphp: Slot [3] registered
Sep 14 12:15:37.873186 kernel: acpiphp: Slot [4] registered
Sep 14 12:15:37.873195 kernel: acpiphp: Slot [5] registered
Sep 14 12:15:37.873212 kernel: acpiphp: Slot [6] registered
Sep 14 12:15:37.873221 kernel: acpiphp: Slot [7] registered
Sep 14 12:15:37.873237 kernel: acpiphp: Slot [8] registered
Sep 14 12:15:37.873246 kernel: acpiphp: Slot [9] registered
Sep 14 12:15:37.873254 kernel: acpiphp: Slot [10] registered
Sep 14 12:15:37.873263 kernel: acpiphp: Slot [11] registered
Sep 14 12:15:37.873272 kernel: acpiphp: Slot [12] registered
Sep 14 12:15:37.873281 kernel: acpiphp: Slot [13] registered
Sep 14 12:15:37.873290 kernel: acpiphp: Slot [14] registered
Sep 14 12:15:37.873299 kernel: acpiphp: Slot [15] registered
Sep 14 12:15:37.873311 kernel: acpiphp: Slot [16] registered
Sep 14 12:15:37.873320 kernel: acpiphp: Slot [17] registered
Sep 14 12:15:37.873329 kernel: acpiphp: Slot [18] registered
Sep 14 12:15:37.873338 kernel: acpiphp: Slot [19] registered
Sep 14 12:15:37.873347 kernel: acpiphp: Slot [20] registered
Sep 14 12:15:37.873355 kernel: acpiphp: Slot [21] registered
Sep 14 12:15:37.873364 kernel: acpiphp: Slot [22] registered
Sep 14 12:15:37.873372 kernel: acpiphp: Slot [23] registered
Sep 14 12:15:37.873381 kernel: acpiphp: Slot [24] registered
Sep 14 12:15:37.873393 kernel: acpiphp: Slot [25] registered
Sep 14 12:15:37.873402 kernel: acpiphp: Slot [26] registered
Sep 14 12:15:37.873410 kernel: acpiphp: Slot [27] registered
Sep 14 12:15:37.873419 kernel: acpiphp: Slot [28] registered
Sep 14 12:15:37.873428 kernel: acpiphp: Slot [29] registered
Sep 14 12:15:37.873437 kernel: acpiphp: Slot [30] registered
Sep 14 12:15:37.873451 kernel: acpiphp: Slot [31] registered
Sep 14 12:15:37.873464 kernel: PCI host bridge to bus 0000:00
Sep 14 12:15:37.873617 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 14 12:15:37.873721 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 14 12:15:37.873804 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 14 12:15:37.873887 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 14 12:15:37.873991 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 14 12:15:37.874080 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 14 12:15:37.874212 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 14 12:15:37.874330 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 14 12:15:37.874443 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Sep 14 12:15:37.874548 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Sep 14 12:15:37.874686 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Sep 14 12:15:37.876789 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Sep 14 12:15:37.876897 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Sep 14 12:15:37.876993 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Sep 14 12:15:37.877110 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Sep 14 12:15:37.877204 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Sep 14 12:15:37.877337 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 14 12:15:37.877431 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 14 12:15:37.877523 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 14 12:15:37.877658 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Sep 14 12:15:37.877758 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Sep 14 12:15:37.877856 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 14 12:15:37.877963 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Sep 14 12:15:37.878107 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Sep 14 12:15:37.878252 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 14 12:15:37.878391 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 14 12:15:37.878492 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Sep 14 12:15:37.879691 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Sep 14 12:15:37.879841 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 14 12:15:37.879989 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 14 12:15:37.880128 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Sep 14 12:15:37.880261 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Sep 14 12:15:37.880358 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 14 12:15:37.880470 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Sep 14 12:15:37.880574 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Sep 14 12:15:37.880678 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Sep 14 12:15:37.880769 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 14 12:15:37.880919 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 14 12:15:37.881027 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Sep 14 12:15:37.881117 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Sep 14 12:15:37.881244 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 14 12:15:37.881369 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 14 12:15:37.881464 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Sep 14 12:15:37.881555 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Sep 14 12:15:37.883758 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 14 12:15:37.883942 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Sep 14 12:15:37.884074 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Sep 14 12:15:37.884179 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 14 12:15:37.884192 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 14 12:15:37.884201 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 14 12:15:37.884211 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 14 12:15:37.884220 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 14 12:15:37.884229 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 14 12:15:37.884239 kernel: iommu: Default domain type: Translated
Sep 14 12:15:37.884248 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 14 12:15:37.884261 kernel: PCI: Using ACPI for IRQ routing
Sep 14 12:15:37.884270 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 14 12:15:37.884279 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 14 12:15:37.884288 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 14 12:15:37.884385 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 14 12:15:37.884478 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 14 12:15:37.884576 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 14 12:15:37.884604 kernel: vgaarb: loaded
Sep 14 12:15:37.884614 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 14 12:15:37.884627 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 14 12:15:37.884636 kernel: clocksource: Switched to clocksource kvm-clock
Sep 14 12:15:37.884645 kernel: VFS: Disk quotas dquot_6.6.0
Sep 14 12:15:37.884655 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 14 12:15:37.884668 kernel: pnp: PnP ACPI init
Sep 14 12:15:37.884683 kernel: pnp: PnP ACPI: found 4 devices
Sep 14 12:15:37.884695 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 14 12:15:37.884708 kernel: NET: Registered PF_INET protocol family
Sep 14 12:15:37.884722 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 14 12:15:37.884739 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 14 12:15:37.884751 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 14 12:15:37.884766 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 14 12:15:37.884778 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 14 12:15:37.884788 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 14 12:15:37.884797 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 14 12:15:37.884806 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 14 12:15:37.884815 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 14 12:15:37.884824 kernel: NET: Registered PF_XDP protocol family
Sep 14 12:15:37.884937 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 14 12:15:37.885022 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 14 12:15:37.885113 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 14 12:15:37.885195 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 14 12:15:37.885278 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 14 12:15:37.885384 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 14 12:15:37.885482 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 14 12:15:37.885496 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 14 12:15:37.889825 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 28850 usecs
Sep 14 12:15:37.889870 kernel: PCI: CLS 0 bytes, default 64
Sep 14 12:15:37.889881 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 14 12:15:37.889892 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 14 12:15:37.889901 kernel: Initialise system trusted keyrings
Sep 14 12:15:37.889910 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 14 12:15:37.889920 kernel: Key type asymmetric registered
Sep 14 12:15:37.889928 kernel: Asymmetric key parser 'x509' registered
Sep 14 12:15:37.889961 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 14 12:15:37.889974 kernel: io scheduler mq-deadline registered
Sep 14 12:15:37.889987 kernel: io scheduler kyber registered
Sep 14 12:15:37.889998 kernel: io scheduler bfq registered
Sep 14 12:15:37.890007 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 14 12:15:37.890018 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 14 12:15:37.890033 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 14 12:15:37.890046 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 14 12:15:37.890058 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 14 12:15:37.890071 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 14 12:15:37.890089 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 14 12:15:37.890102 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 14 12:15:37.890113 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 14 12:15:37.890299 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 14 12:15:37.890321 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 14 12:15:37.890420 kernel: rtc_cmos 00:03: registered as rtc0
Sep 14 12:15:37.890508 kernel: rtc_cmos 00:03: setting system clock to 2025-09-14T12:15:37 UTC (1757852137)
Sep 14 12:15:37.890612 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 14 12:15:37.890627 kernel: intel_pstate: CPU model not supported
Sep 14 12:15:37.890636 kernel: NET: Registered PF_INET6 protocol family
Sep 14 12:15:37.890646 kernel: Segment Routing with IPv6
Sep 14 12:15:37.890655 kernel: In-situ OAM (IOAM) with IPv6
Sep 14 12:15:37.890664 kernel: NET: Registered PF_PACKET protocol family
Sep 14 12:15:37.890673 kernel: Key type dns_resolver registered
Sep 14 12:15:37.890682 kernel: IPI shorthand broadcast: enabled
Sep 14 12:15:37.890691 kernel: sched_clock: Marking stable (3248004180, 86698616)->(3350310493, -15607697)
Sep 14 12:15:37.890704 kernel: registered taskstats version 1
Sep 14 12:15:37.890713 kernel: Loading compiled-in X.509 certificates
Sep 14 12:15:37.890722 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: c3297a5801573420030c321362a802da1fd49c4e'
Sep 14 12:15:37.890732 kernel: Demotion targets for Node 0: null
Sep 14 12:15:37.890740 kernel: Key type .fscrypt registered
Sep 14 12:15:37.890749 kernel: Key type fscrypt-provisioning registered
Sep 14 12:15:37.890782 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 14 12:15:37.890794 kernel: ima: Allocated hash algorithm: sha1
Sep 14 12:15:37.890804 kernel: ima: No architecture policies found
Sep 14 12:15:37.890817 kernel: clk: Disabling unused clocks
Sep 14 12:15:37.890826 kernel: Warning: unable to open an initial console.
Sep 14 12:15:37.890836 kernel: Freeing unused kernel image (initmem) memory: 54084K
Sep 14 12:15:37.890846 kernel: Write protecting the kernel read-only data: 24576k
Sep 14 12:15:37.890855 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 14 12:15:37.890864 kernel: Run /init as init process
Sep 14 12:15:37.890874 kernel: with arguments:
Sep 14 12:15:37.890883 kernel: /init
Sep 14 12:15:37.890893 kernel: with environment:
Sep 14 12:15:37.890905 kernel: HOME=/
Sep 14 12:15:37.890914 kernel: TERM=linux
Sep 14 12:15:37.890923 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 14 12:15:37.890935 systemd[1]: Successfully made /usr/ read-only.
Sep 14 12:15:37.890948 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 14 12:15:37.890959 systemd[1]: Detected virtualization kvm.
Sep 14 12:15:37.890968 systemd[1]: Detected architecture x86-64.
Sep 14 12:15:37.890981 systemd[1]: Running in initrd.
Sep 14 12:15:37.890990 systemd[1]: No hostname configured, using default hostname.
Sep 14 12:15:37.891007 systemd[1]: Hostname set to .
Sep 14 12:15:37.891017 systemd[1]: Initializing machine ID from VM UUID.
Sep 14 12:15:37.891027 systemd[1]: Queued start job for default target initrd.target.
Sep 14 12:15:37.891036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 14 12:15:37.891049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 14 12:15:37.891060 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 14 12:15:37.891073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 14 12:15:37.891083 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 14 12:15:37.891097 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 14 12:15:37.891108 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 14 12:15:37.891121 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 14 12:15:37.891131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 14 12:15:37.891141 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 14 12:15:37.891151 systemd[1]: Reached target paths.target - Path Units.
Sep 14 12:15:37.891161 systemd[1]: Reached target slices.target - Slice Units.
Sep 14 12:15:37.891175 systemd[1]: Reached target swap.target - Swaps.
Sep 14 12:15:37.891185 systemd[1]: Reached target timers.target - Timer Units.
Sep 14 12:15:37.891195 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 14 12:15:37.891205 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 14 12:15:37.891218 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 14 12:15:37.891228 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 14 12:15:37.891238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 14 12:15:37.891248 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 14 12:15:37.891258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 14 12:15:37.891268 systemd[1]: Reached target sockets.target - Socket Units.
Sep 14 12:15:37.891277 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 14 12:15:37.891287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 14 12:15:37.891300 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 14 12:15:37.891311 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 14 12:15:37.891321 systemd[1]: Starting systemd-fsck-usr.service...
Sep 14 12:15:37.891331 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 14 12:15:37.891341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 14 12:15:37.891351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 14 12:15:37.891361 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 14 12:15:37.891375 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 14 12:15:37.891417 systemd-journald[211]: Collecting audit messages is disabled.
Sep 14 12:15:37.891446 systemd[1]: Finished systemd-fsck-usr.service.
Sep 14 12:15:37.891456 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 14 12:15:37.891469 systemd-journald[211]: Journal started
Sep 14 12:15:37.891491 systemd-journald[211]: Runtime Journal (/run/log/journal/cb00746de3654b08b8b8ffc3eb56d95e) is 4.9M, max 39.5M, 34.6M free.
Sep 14 12:15:37.896611 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 14 12:15:37.901999 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 14 12:15:37.907935 systemd-modules-load[212]: Inserted module 'overlay'
Sep 14 12:15:37.934855 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 14 12:15:37.964244 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 14 12:15:37.964279 kernel: Bridge firewalling registered
Sep 14 12:15:37.946715 systemd-modules-load[212]: Inserted module 'br_netfilter'
Sep 14 12:15:37.965629 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 14 12:15:37.969784 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 14 12:15:37.973551 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 14 12:15:37.977067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 14 12:15:37.980747 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 14 12:15:37.981015 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 14 12:15:37.991691 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 14 12:15:38.009734 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 14 12:15:38.013841 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 14 12:15:38.016929 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 14 12:15:38.021448 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 14 12:15:38.023779 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 14 12:15:38.058482 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 14 12:15:38.085759 systemd-resolved[248]: Positive Trust Anchors:
Sep 14 12:15:38.085780 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 14 12:15:38.085823 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 14 12:15:38.093800 systemd-resolved[248]: Defaulting to hostname 'linux'.
Sep 14 12:15:38.096405 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 14 12:15:38.097455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 14 12:15:38.184675 kernel: SCSI subsystem initialized
Sep 14 12:15:38.209634 kernel: Loading iSCSI transport class v2.0-870.
Sep 14 12:15:38.221634 kernel: iscsi: registered transport (tcp)
Sep 14 12:15:38.243733 kernel: iscsi: registered transport (qla4xxx)
Sep 14 12:15:38.243841 kernel: QLogic iSCSI HBA Driver
Sep 14 12:15:38.269776 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 14 12:15:38.300449 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 14 12:15:38.303298 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 14 12:15:38.366239 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 14 12:15:38.368516 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 14 12:15:38.430634 kernel: raid6: avx2x4 gen() 17989 MB/s
Sep 14 12:15:38.447651 kernel: raid6: avx2x2 gen() 17879 MB/s
Sep 14 12:15:38.465008 kernel: raid6: avx2x1 gen() 13360 MB/s
Sep 14 12:15:38.465132 kernel: raid6: using algorithm avx2x4 gen() 17989 MB/s
Sep 14 12:15:38.482995 kernel: raid6: .... xor() 7507 MB/s, rmw enabled
Sep 14 12:15:38.483132 kernel: raid6: using avx2x2 recovery algorithm
Sep 14 12:15:38.509668 kernel: xor: automatically using best checksumming function avx
Sep 14 12:15:38.730639 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 14 12:15:38.741157 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 14 12:15:38.744066 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 14 12:15:38.780685 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Sep 14 12:15:38.789799 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 14 12:15:38.793616 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 14 12:15:38.829271 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Sep 14 12:15:38.864895 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 14 12:15:38.867523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 14 12:15:38.943083 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 14 12:15:38.946299 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 14 12:15:39.026920 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 14 12:15:39.027160 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 14 12:15:39.055083 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 14 12:15:39.055162 kernel: GPT:9289727 != 125829119
Sep 14 12:15:39.055175 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 14 12:15:39.055690 kernel: GPT:9289727 != 125829119
Sep 14 12:15:39.057182 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 14 12:15:39.057250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 14 12:15:39.058808 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Sep 14 12:15:39.061624 kernel: cryptd: max_cpu_qlen set to 1000
Sep 14 12:15:39.067897 kernel: scsi host0: Virtio SCSI HBA
Sep 14 12:15:39.086715 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 14 12:15:39.089645 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 14 12:15:39.112349 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Sep 14 12:15:39.117619 kernel: AES CTR mode by8 optimization enabled
Sep 14 12:15:39.139653 kernel: ACPI: bus type USB registered
Sep 14 12:15:39.141888 kernel: usbcore: registered new interface driver usbfs
Sep 14 12:15:39.141991 kernel: usbcore: registered new interface driver hub
Sep 14 12:15:39.142011 kernel: usbcore: registered new device driver usb
Sep 14 12:15:39.167687 kernel: libata version 3.00 loaded.
Sep 14 12:15:39.173651 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 14 12:15:39.178668 kernel: scsi host1: ata_piix
Sep 14 12:15:39.178934 kernel: scsi host2: ata_piix
Sep 14 12:15:39.179082 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Sep 14 12:15:39.179097 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Sep 14 12:15:39.183280 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 14 12:15:39.183418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 14 12:15:39.185873 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 14 12:15:39.189829 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 14 12:15:39.192090 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 14 12:15:39.236627 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 14 12:15:39.253363 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 14 12:15:39.286626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 14 12:15:39.303276 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 14 12:15:39.316009 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 14 12:15:39.316739 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 14 12:15:39.319499 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 14 12:15:39.362747 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 14 12:15:39.363231 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 14 12:15:39.363448 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 14 12:15:39.364642 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 14 12:15:39.365539 disk-uuid[608]: Primary Header is updated.
Sep 14 12:15:39.365539 disk-uuid[608]: Secondary Entries is updated.
Sep 14 12:15:39.365539 disk-uuid[608]: Secondary Header is updated.
Sep 14 12:15:39.369770 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 14 12:15:39.370107 kernel: hub 1-0:1.0: USB hub found
Sep 14 12:15:39.370354 kernel: hub 1-0:1.0: 2 ports detected
Sep 14 12:15:39.505508 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 14 12:15:39.506736 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 14 12:15:39.507309 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 14 12:15:39.508112 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 14 12:15:39.510389 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 14 12:15:39.540834 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 14 12:15:40.385860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 14 12:15:40.385985 disk-uuid[609]: The operation has completed successfully.
Sep 14 12:15:40.446727 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 14 12:15:40.446897 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 14 12:15:40.510820 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 14 12:15:40.544578 sh[633]: Success
Sep 14 12:15:40.572969 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 14 12:15:40.573085 kernel: device-mapper: uevent: version 1.0.3
Sep 14 12:15:40.573109 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 14 12:15:40.588643 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 14 12:15:40.652392 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 14 12:15:40.660775 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 14 12:15:40.665010 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 14 12:15:40.696650 kernel: BTRFS: device fsid 5d2ab445-1154-4e47-9d7e-ff4b81d84474 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (645)
Sep 14 12:15:40.699627 kernel: BTRFS info (device dm-0): first mount of filesystem 5d2ab445-1154-4e47-9d7e-ff4b81d84474
Sep 14 12:15:40.699723 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 14 12:15:40.707293 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 14 12:15:40.707398 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 14 12:15:40.709789 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 14 12:15:40.711112 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 14 12:15:40.711670 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 14 12:15:40.712633 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 14 12:15:40.715772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 14 12:15:40.749616 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (676)
Sep 14 12:15:40.753710 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 14 12:15:40.753814 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 14 12:15:40.759654 kernel: BTRFS info (device vda6): turning on async discard
Sep 14 12:15:40.759776 kernel: BTRFS info (device vda6): enabling free space tree
Sep 14 12:15:40.766655 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 14 12:15:40.768896 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 14 12:15:40.771208 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 14 12:15:40.892521 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 14 12:15:40.895799 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 14 12:15:40.950027 systemd-networkd[817]: lo: Link UP
Sep 14 12:15:40.950038 systemd-networkd[817]: lo: Gained carrier
Sep 14 12:15:40.953127 systemd-networkd[817]: Enumeration completed
Sep 14 12:15:40.953309 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 14 12:15:40.954433 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 14 12:15:40.954440 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 14 12:15:40.955377 systemd[1]: Reached target network.target - Network.
Sep 14 12:15:40.956012 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 14 12:15:40.956019 systemd-networkd[817]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 14 12:15:40.956974 systemd-networkd[817]: eth0: Link UP
Sep 14 12:15:40.957271 systemd-networkd[817]: eth1: Link UP
Sep 14 12:15:40.957605 systemd-networkd[817]: eth0: Gained carrier
Sep 14 12:15:40.957626 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 14 12:15:40.963027 systemd-networkd[817]: eth1: Gained carrier
Sep 14 12:15:40.963488 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 14 12:15:40.978834 systemd-networkd[817]: eth0: DHCPv4 address 143.198.142.64/20, gateway 143.198.128.1 acquired from 169.254.169.253
Sep 14 12:15:40.986706 systemd-networkd[817]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253
Sep 14 12:15:41.001348 ignition[723]: Ignition 2.22.0
Sep 14 12:15:41.001378 ignition[723]: Stage: fetch-offline
Sep 14 12:15:41.001442 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Sep 14 12:15:41.001455 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 14 12:15:41.001612 ignition[723]: parsed url from cmdline: ""
Sep 14 12:15:41.001617 ignition[723]: no config URL provided
Sep 14 12:15:41.001622 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
Sep 14 12:15:41.005151 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 14 12:15:41.001631 ignition[723]: no config at "/usr/lib/ignition/user.ign"
Sep 14 12:15:41.001638 ignition[723]: failed to fetch config: resource requires networking
Sep 14 12:15:41.002129 ignition[723]: Ignition finished successfully
Sep 14 12:15:41.007761 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 14 12:15:41.063706 ignition[828]: Ignition 2.22.0
Sep 14 12:15:41.063721 ignition[828]: Stage: fetch
Sep 14 12:15:41.063890 ignition[828]: no configs at "/usr/lib/ignition/base.d"
Sep 14 12:15:41.063905 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 14 12:15:41.064034 ignition[828]: parsed url from cmdline: ""
Sep 14 12:15:41.064039 ignition[828]: no config URL provided
Sep 14 12:15:41.064046 ignition[828]: reading system config file "/usr/lib/ignition/user.ign"
Sep 14 12:15:41.064055 ignition[828]: no config at "/usr/lib/ignition/user.ign"
Sep 14 12:15:41.064084 ignition[828]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 14 12:15:41.081640 ignition[828]: GET result: OK
Sep 14 12:15:41.081849 ignition[828]: parsing config with SHA512: 2dd30d202c339f4fcc5994cd4097fc214ddd194928f37c046cbd20eea67b4ea34bdd08aa309c46c516d7069ec99fa6d73a03100f28a1e18afc6a88cc65816c78
Sep 14 12:15:41.088322 unknown[828]: fetched base config from "system"
Sep 14 12:15:41.089193 ignition[828]: fetch: fetch complete
Sep 14 12:15:41.088348 unknown[828]: fetched base config from "system"
Sep 14 12:15:41.089207 ignition[828]: fetch: fetch passed
Sep 14 12:15:41.088360 unknown[828]: fetched user config from "digitalocean"
Sep 14 12:15:41.089303 ignition[828]: Ignition finished successfully
Sep 14 12:15:41.094150 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 14 12:15:41.096097 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 14 12:15:41.144808 ignition[835]: Ignition 2.22.0
Sep 14 12:15:41.145491 ignition[835]: Stage: kargs
Sep 14 12:15:41.146146 ignition[835]: no configs at "/usr/lib/ignition/base.d"
Sep 14 12:15:41.146158 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 14 12:15:41.147313 ignition[835]: kargs: kargs passed
Sep 14 12:15:41.147368 ignition[835]: Ignition finished successfully
Sep 14 12:15:41.149126 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 14 12:15:41.151641 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 14 12:15:41.192555 ignition[841]: Ignition 2.22.0
Sep 14 12:15:41.192574 ignition[841]: Stage: disks
Sep 14 12:15:41.192865 ignition[841]: no configs at "/usr/lib/ignition/base.d"
Sep 14 12:15:41.192881 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 14 12:15:41.194499 ignition[841]: disks: disks passed
Sep 14 12:15:41.194584 ignition[841]: Ignition finished successfully
Sep 14 12:15:41.196447 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 14 12:15:41.198100 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 14 12:15:41.198579 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 14 12:15:41.199242 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 14 12:15:41.200079 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 14 12:15:41.200714 systemd[1]: Reached target basic.target - Basic System.
Sep 14 12:15:41.202979 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 14 12:15:41.235868 systemd-fsck[849]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 14 12:15:41.239747 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 14 12:15:41.242614 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 14 12:15:41.381662 kernel: EXT4-fs (vda9): mounted filesystem d027afc5-396a-49bf-a5be-60ddd42cb089 r/w with ordered data mode. Quota mode: none.
Sep 14 12:15:41.383332 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 14 12:15:41.385000 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 14 12:15:41.387719 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 14 12:15:41.390688 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 14 12:15:41.392324 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Sep 14 12:15:41.396936 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 14 12:15:41.398698 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 14 12:15:41.399546 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 14 12:15:41.411018 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 14 12:15:41.416760 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 14 12:15:41.423611 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (857)
Sep 14 12:15:41.429624 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 14 12:15:41.429698 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 14 12:15:41.459380 kernel: BTRFS info (device vda6): turning on async discard
Sep 14 12:15:41.459484 kernel: BTRFS info (device vda6): enabling free space tree
Sep 14 12:15:41.461582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 14 12:15:41.497539 coreos-metadata[859]: Sep 14 12:15:41.497 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 14 12:15:41.508480 coreos-metadata[860]: Sep 14 12:15:41.507 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 14 12:15:41.509800 initrd-setup-root[887]: cut: /sysroot/etc/passwd: No such file or directory
Sep 14 12:15:41.513691 coreos-metadata[859]: Sep 14 12:15:41.512 INFO Fetch successful
Sep 14 12:15:41.518621 initrd-setup-root[894]: cut: /sysroot/etc/group: No such file or directory
Sep 14 12:15:41.521954 coreos-metadata[860]: Sep 14 12:15:41.521 INFO Fetch successful
Sep 14 12:15:41.522630 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Sep 14 12:15:41.523281 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Sep 14 12:15:41.531167 coreos-metadata[860]: Sep 14 12:15:41.531 INFO wrote hostname ci-4459.0.0-9-e5fa973bfc to /sysroot/etc/hostname
Sep 14 12:15:41.532332 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory
Sep 14 12:15:41.533819 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 14 12:15:41.541339 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 14 12:15:41.654877 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 14 12:15:41.656818 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 14 12:15:41.659751 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 14 12:15:41.680627 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 14 12:15:41.695968 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 14 12:15:41.709785 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 14 12:15:41.730484 ignition[978]: INFO : Ignition 2.22.0
Sep 14 12:15:41.732622 ignition[978]: INFO : Stage: mount
Sep 14 12:15:41.732622 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 14 12:15:41.732622 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 14 12:15:41.734702 ignition[978]: INFO : mount: mount passed
Sep 14 12:15:41.735341 ignition[978]: INFO : Ignition finished successfully
Sep 14 12:15:41.738324 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 14 12:15:41.740464 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 14 12:15:41.764996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 14 12:15:41.798832 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (989)
Sep 14 12:15:41.798916 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 14 12:15:41.800802 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 14 12:15:41.805677 kernel: BTRFS info (device vda6): turning on async discard
Sep 14 12:15:41.805789 kernel: BTRFS info (device vda6): enabling free space tree
Sep 14 12:15:41.809276 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 14 12:15:41.855787 ignition[1006]: INFO : Ignition 2.22.0
Sep 14 12:15:41.855787 ignition[1006]: INFO : Stage: files
Sep 14 12:15:41.856972 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 14 12:15:41.856972 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 14 12:15:41.858414 ignition[1006]: DEBUG : files: compiled without relabeling support, skipping
Sep 14 12:15:41.859646 ignition[1006]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 14 12:15:41.859646 ignition[1006]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 14 12:15:41.862429 ignition[1006]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 14 12:15:41.863169 ignition[1006]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 14 12:15:41.863169 ignition[1006]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 14 12:15:41.863060 unknown[1006]: wrote ssh authorized keys file for user: core
Sep 14 12:15:41.865040 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 14 12:15:41.865660 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 14 12:15:42.008399 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 14 12:15:42.067790 systemd-networkd[817]: eth1: Gained IPv6LL
Sep 14 12:15:42.108652 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 14 12:15:42.108652 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 14 12:15:42.108652 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 14 12:15:42.130967 systemd-networkd[817]: eth0: Gained IPv6LL
Sep 14 12:15:42.390391 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 14 12:15:42.898114 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 14 12:15:42.898114 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 14 12:15:42.900940 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 14 12:15:42.900940 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 14 12:15:42.900940 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 14 12:15:42.900940 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 14 12:15:42.900940 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 14 12:15:42.900940 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 14 12:15:42.900940 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 14 12:15:42.900940 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 14 12:15:42.900940 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 14 12:15:42.900940 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 14 12:15:42.906711 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 14 12:15:42.906711 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 14 12:15:42.906711 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 14 12:15:43.431167 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 14 12:15:43.688834 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 14 12:15:43.688834 ignition[1006]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 14 12:15:43.690819 ignition[1006]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 14 12:15:43.691684 ignition[1006]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 14 12:15:43.691684 ignition[1006]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 14 12:15:43.691684 ignition[1006]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 14 12:15:43.691684 ignition[1006]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 14 12:15:43.691684 ignition[1006]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 14 12:15:43.691684 ignition[1006]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 14 12:15:43.691684 ignition[1006]: INFO : files: files passed
Sep 14 12:15:43.691684 ignition[1006]: INFO : Ignition finished successfully
Sep 14 12:15:43.694447 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 14 12:15:43.696434 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 14 12:15:43.699734 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 14 12:15:43.716948 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 14 12:15:43.717660 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 14 12:15:43.725465 initrd-setup-root-after-ignition[1035]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 14 12:15:43.726428 initrd-setup-root-after-ignition[1035]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 14 12:15:43.728601 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 14 12:15:43.729955 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 14 12:15:43.731059 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 14 12:15:43.732759 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 14 12:15:43.788116 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 14 12:15:43.788254 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 14 12:15:43.789339 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 14 12:15:43.789827 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 14 12:15:43.790672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 14 12:15:43.791896 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 14 12:15:43.819657 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 14 12:15:43.823575 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 14 12:15:43.846450 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 14 12:15:43.847051 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 14 12:15:43.847924 systemd[1]: Stopped target timers.target - Timer Units.
Sep 14 12:15:43.848641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 14 12:15:43.848811 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 14 12:15:43.850313 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 14 12:15:43.850856 systemd[1]: Stopped target basic.target - Basic System.
Sep 14 12:15:43.851466 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 14 12:15:43.852085 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 14 12:15:43.852820 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 14 12:15:43.853906 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 14 12:15:43.854715 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 14 12:15:43.855552 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 14 12:15:43.856389 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 14 12:15:43.857106 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 14 12:15:43.857752 systemd[1]: Stopped target swap.target - Swaps.
Sep 14 12:15:43.858463 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 14 12:15:43.858680 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 14 12:15:43.859795 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 14 12:15:43.860693 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 14 12:15:43.861155 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 14 12:15:43.861255 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 14 12:15:43.862068 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 14 12:15:43.862228 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 14 12:15:43.863508 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 14 12:15:43.863683 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 14 12:15:43.864315 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 14 12:15:43.864448 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 14 12:15:43.865032 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 14 12:15:43.865161 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 14 12:15:43.866792 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 14 12:15:43.867191 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 14 12:15:43.867336 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 14 12:15:43.874842 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 14 12:15:43.875249 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 14 12:15:43.875422 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 14 12:15:43.876175 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 14 12:15:43.876307 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 14 12:15:43.882695 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 14 12:15:43.882853 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 14 12:15:43.908384 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 14 12:15:43.915289 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 14 12:15:43.916685 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 14 12:15:43.918167 ignition[1059]: INFO : Ignition 2.22.0
Sep 14 12:15:43.918167 ignition[1059]: INFO : Stage: umount
Sep 14 12:15:43.919453 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 14 12:15:43.919453 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 14 12:15:43.920777 ignition[1059]: INFO : umount: umount passed
Sep 14 12:15:43.920777 ignition[1059]: INFO : Ignition finished successfully
Sep 14 12:15:43.923350 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 14 12:15:43.923491 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 14 12:15:43.924999 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 14 12:15:43.925122 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 14 12:15:43.925560 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 14 12:15:43.925661 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 14 12:15:43.926381 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 14 12:15:43.926426 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 14 12:15:43.927113 systemd[1]: Stopped target network.target - Network.
Sep 14 12:15:43.927831 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 14 12:15:43.927880 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 14 12:15:43.928623 systemd[1]: Stopped target paths.target - Path Units.
Sep 14 12:15:43.929401 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 14 12:15:43.932693 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 14 12:15:43.933304 systemd[1]: Stopped target slices.target - Slice Units.
Sep 14 12:15:43.934405 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 14 12:15:43.935218 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 14 12:15:43.935283 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 14 12:15:43.936087 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 14 12:15:43.936129 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 14 12:15:43.936916 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 14 12:15:43.936994 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 14 12:15:43.937749 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 14 12:15:43.937805 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 14 12:15:43.938487 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 14 12:15:43.938541 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 14 12:15:43.939448 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 14 12:15:43.940249 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 14 12:15:43.947890 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 14 12:15:43.948082 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 14 12:15:43.954614 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 14 12:15:43.954972 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 14 12:15:43.955120 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 14 12:15:43.957008 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 14 12:15:43.958109 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 14 12:15:43.958828 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 14 12:15:43.958893 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 14 12:15:43.960497 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 14 12:15:43.960883 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 14 12:15:43.960942 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 14 12:15:43.961368 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 14 12:15:43.961426 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 14 12:15:43.964865 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 14 12:15:43.964945 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 14 12:15:43.965702 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 14 12:15:43.966300 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 14 12:15:43.967757 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 14 12:15:43.969960 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 14 12:15:43.970046 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 14 12:15:43.988031 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 14 12:15:43.988240 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 14 12:15:43.989387 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 14 12:15:43.989470 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 14 12:15:43.990030 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 14 12:15:43.990081 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 14 12:15:43.990898 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 14 12:15:43.990952 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 14 12:15:43.991948 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 14 12:15:43.991998 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 14 12:15:43.992846 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 14 12:15:43.992896 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 14 12:15:43.994495 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 14 12:15:43.996154 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 14 12:15:43.996219 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 14 12:15:43.997795 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 14 12:15:43.997877 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 14 12:15:43.999707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 14 12:15:43.999769 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 14 12:15:44.003519 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 14 12:15:44.004034 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 14 12:15:44.004079 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 14 12:15:44.004530 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 14 12:15:44.008571 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 14 12:15:44.015314 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 14 12:15:44.016006 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 14 12:15:44.017766 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 14 12:15:44.020138 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 14 12:15:44.042274 systemd[1]: Switching root.
Sep 14 12:15:44.081309 systemd-journald[211]: Journal stopped
Sep 14 12:15:45.363274 systemd-journald[211]: Received SIGTERM from PID 1 (systemd).
Sep 14 12:15:45.363390 kernel: SELinux: policy capability network_peer_controls=1
Sep 14 12:15:45.363415 kernel: SELinux: policy capability open_perms=1
Sep 14 12:15:45.363444 kernel: SELinux: policy capability extended_socket_class=1
Sep 14 12:15:45.363464 kernel: SELinux: policy capability always_check_network=0
Sep 14 12:15:45.363483 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 14 12:15:45.363503 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 14 12:15:45.363525 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 14 12:15:45.363546 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 14 12:15:45.363566 kernel: SELinux: policy capability userspace_initial_context=0
Sep 14 12:15:45.363586 kernel: audit: type=1403 audit(1757852144.294:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 14 12:15:45.363649 systemd[1]: Successfully loaded SELinux policy in 71.270ms.
Sep 14 12:15:45.363700 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.807ms.
Sep 14 12:15:45.363724 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 14 12:15:45.363748 systemd[1]: Detected virtualization kvm.
Sep 14 12:15:45.363769 systemd[1]: Detected architecture x86-64.
Sep 14 12:15:45.363790 systemd[1]: Detected first boot.
Sep 14 12:15:45.363822 systemd[1]: Hostname set to .
Sep 14 12:15:45.363843 systemd[1]: Initializing machine ID from VM UUID.
Sep 14 12:15:45.363864 zram_generator::config[1103]: No configuration found.
Sep 14 12:15:45.363903 kernel: Guest personality initialized and is inactive
Sep 14 12:15:45.363923 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 14 12:15:45.363943 kernel: Initialized host personality
Sep 14 12:15:45.363964 kernel: NET: Registered PF_VSOCK protocol family
Sep 14 12:15:45.363984 systemd[1]: Populated /etc with preset unit settings.
Sep 14 12:15:45.364009 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 14 12:15:45.364030 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 14 12:15:45.364052 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 14 12:15:45.364075 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 14 12:15:45.364101 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 14 12:15:45.364124 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 14 12:15:45.364146 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 14 12:15:45.364167 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 14 12:15:45.364188 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 14 12:15:45.364211 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 14 12:15:45.364232 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 14 12:15:45.364253 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 14 12:15:45.364277 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 14 12:15:45.364299 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 14 12:15:45.364320 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 14 12:15:45.364343 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 14 12:15:45.364367 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 14 12:15:45.364387 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 14 12:15:45.364412 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 14 12:15:45.364431 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 14 12:15:45.368678 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 14 12:15:45.368737 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 14 12:15:45.368762 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 14 12:15:45.368781 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 14 12:15:45.368801 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 14 12:15:45.368819 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 14 12:15:45.368836 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 14 12:15:45.368871 systemd[1]: Reached target slices.target - Slice Units.
Sep 14 12:15:45.368892 systemd[1]: Reached target swap.target - Swaps.
Sep 14 12:15:45.368913 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 14 12:15:45.368934 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 14 12:15:45.368955 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 14 12:15:45.368975 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 14 12:15:45.368994 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 14 12:15:45.369012 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 14 12:15:45.369032 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 14 12:15:45.369053 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 14 12:15:45.369081 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 14 12:15:45.369101 systemd[1]: Mounting media.mount - External Media Directory...
Sep 14 12:15:45.369123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 14 12:15:45.369142 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 14 12:15:45.369160 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 14 12:15:45.369178 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 14 12:15:45.369200 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 14 12:15:45.369223 systemd[1]: Reached target machines.target - Containers.
Sep 14 12:15:45.369250 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 14 12:15:45.369270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 14 12:15:45.369289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 14 12:15:45.369308 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 14 12:15:45.369327 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 14 12:15:45.369346 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 14 12:15:45.369366 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 14 12:15:45.369387 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 14 12:15:45.369409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 14 12:15:45.369438 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 14 12:15:45.369468 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 14 12:15:45.369491 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 14 12:15:45.369512 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 14 12:15:45.369532 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 14 12:15:45.369554 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 14 12:15:45.369576 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 14 12:15:45.369627 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 14 12:15:45.369651 kernel: fuse: init (API version 7.41)
Sep 14 12:15:45.369676 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 14 12:15:45.369697 kernel: loop: module loaded
Sep 14 12:15:45.369718 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 14 12:15:45.369740 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 14 12:15:45.369760 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 14 12:15:45.369784 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 14 12:15:45.369802 systemd[1]: Stopped verity-setup.service.
Sep 14 12:15:45.369824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 14 12:15:45.369846 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 14 12:15:45.369873 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 14 12:15:45.369895 systemd[1]: Mounted media.mount - External Media Directory.
Sep 14 12:15:45.369916 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 14 12:15:45.369950 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 14 12:15:45.369971 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 14 12:15:45.369992 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 14 12:15:45.370013 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 14 12:15:45.370036 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 14 12:15:45.370057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 14 12:15:45.370083 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 14 12:15:45.370108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 14 12:15:45.370130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 14 12:15:45.370151 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 14 12:15:45.370177 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 14 12:15:45.370200 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 14 12:15:45.370221 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 14 12:15:45.370264 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 14 12:15:45.370289 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 14 12:15:45.370316 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 14 12:15:45.370338 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 14 12:15:45.370359 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 14 12:15:45.370382 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 14 12:15:45.370406 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 14 12:15:45.370425 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 14 12:15:45.370445 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 14 12:15:45.370465 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 14 12:15:45.370486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 14 12:15:45.370512 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 14 12:15:45.370533 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 14 12:15:45.370552 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 14 12:15:45.370571 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 14 12:15:45.370678 systemd-journald[1175]: Collecting audit messages is disabled.
Sep 14 12:15:45.370723 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 14 12:15:45.370743 kernel: ACPI: bus type drm_connector registered
Sep 14 12:15:45.370769 systemd-journald[1175]: Journal started
Sep 14 12:15:45.370808 systemd-journald[1175]: Runtime Journal (/run/log/journal/cb00746de3654b08b8b8ffc3eb56d95e) is 4.9M, max 39.5M, 34.6M free.
Sep 14 12:15:45.381411 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 14 12:15:45.381517 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 14 12:15:45.381549 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 14 12:15:45.383193 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 14 12:15:44.939021 systemd[1]: Queued start job for default target multi-user.target.
Sep 14 12:15:44.963838 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 14 12:15:44.964565 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 14 12:15:45.395651 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 14 12:15:45.394217 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 14 12:15:45.396921 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 14 12:15:45.397920 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 14 12:15:45.413451 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 14 12:15:45.440404 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 14 12:15:45.448000 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 14 12:15:45.454235 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 14 12:15:45.464966 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 14 12:15:45.491086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 14 12:15:45.494707 kernel: loop0: detected capacity change from 0 to 110984
Sep 14 12:15:45.505443 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 14 12:15:45.549643 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 14 12:15:45.557278 systemd-journald[1175]: Time spent on flushing to /var/log/journal/cb00746de3654b08b8b8ffc3eb56d95e is 63.101ms for 1022 entries.
Sep 14 12:15:45.557278 systemd-journald[1175]: System Journal (/var/log/journal/cb00746de3654b08b8b8ffc3eb56d95e) is 8M, max 195.6M, 187.6M free.
Sep 14 12:15:45.640095 systemd-journald[1175]: Received client request to flush runtime journal.
Sep 14 12:15:45.640170 kernel: loop1: detected capacity change from 0 to 128016
Sep 14 12:15:45.594905 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 14 12:15:45.624123 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 14 12:15:45.632933 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 14 12:15:45.646629 kernel: loop2: detected capacity change from 0 to 224512
Sep 14 12:15:45.646140 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 14 12:15:45.729374 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Sep 14 12:15:45.730477 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Sep 14 12:15:45.738685 kernel: loop3: detected capacity change from 0 to 8
Sep 14 12:15:45.750530 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 14 12:15:45.774336 kernel: loop4: detected capacity change from 0 to 110984
Sep 14 12:15:45.800709 kernel: loop5: detected capacity change from 0 to 128016
Sep 14 12:15:45.821631 kernel: loop6: detected capacity change from 0 to 224512
Sep 14 12:15:45.844643 kernel: loop7: detected capacity change from 0 to 8
Sep 14 12:15:45.848298 (sd-merge)[1253]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 14 12:15:45.851119 (sd-merge)[1253]: Merged extensions into '/usr'.
Sep 14 12:15:45.859728 systemd[1]: Reload requested from client PID 1210 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 14 12:15:45.861916 systemd[1]: Reloading...
Sep 14 12:15:46.055625 zram_generator::config[1279]: No configuration found.
Sep 14 12:15:46.152637 ldconfig[1199]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 14 12:15:46.349706 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 14 12:15:46.350472 systemd[1]: Reloading finished in 487 ms.
Sep 14 12:15:46.385113 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 14 12:15:46.390424 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 14 12:15:46.412662 systemd[1]: Starting ensure-sysext.service...
Sep 14 12:15:46.416869 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 14 12:15:46.452281 systemd[1]: Reload requested from client PID 1323 ('systemctl') (unit ensure-sysext.service)...
Sep 14 12:15:46.452465 systemd[1]: Reloading...
Sep 14 12:15:46.477295 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 14 12:15:46.478081 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 14 12:15:46.478565 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 14 12:15:46.479815 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 14 12:15:46.483070 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 14 12:15:46.483538 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Sep 14 12:15:46.483737 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Sep 14 12:15:46.490010 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot.
Sep 14 12:15:46.490026 systemd-tmpfiles[1324]: Skipping /boot
Sep 14 12:15:46.510759 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot.
Sep 14 12:15:46.512644 systemd-tmpfiles[1324]: Skipping /boot
Sep 14 12:15:46.580649 zram_generator::config[1351]: No configuration found.
Sep 14 12:15:46.825706 systemd[1]: Reloading finished in 372 ms.
Sep 14 12:15:46.849510 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 14 12:15:46.863042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 14 12:15:46.875328 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 14 12:15:46.879967 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 14 12:15:46.887111 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 14 12:15:46.893051 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 14 12:15:46.902140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 14 12:15:46.907042 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 14 12:15:46.916039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 14 12:15:46.916365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 14 12:15:46.920040 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 14 12:15:46.925137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 14 12:15:46.931082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 14 12:15:46.931844 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 14 12:15:46.932087 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 14 12:15:46.942245 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 14 12:15:46.942789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 14 12:15:46.948071 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 14 12:15:46.948351 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 14 12:15:46.948729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 14 12:15:46.948850 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 14 12:15:46.948971 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 14 12:15:46.956222 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 14 12:15:46.956569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 14 12:15:46.962538 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 14 12:15:46.964919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 14 12:15:46.965213 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 14 12:15:46.965416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 14 12:15:46.966632 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 14 12:15:46.967769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 14 12:15:46.983464 systemd[1]: Finished ensure-sysext.service.
Sep 14 12:15:46.995037 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 14 12:15:47.000999 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 14 12:15:47.003259 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 14 12:15:47.006121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 14 12:15:47.006542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 14 12:15:47.007962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 14 12:15:47.017689 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 14 12:15:47.018075 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 14 12:15:47.025116 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 14 12:15:47.040406 systemd-udevd[1401]: Using default interface naming scheme 'v255'.
Sep 14 12:15:47.057825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 14 12:15:47.058130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 14 12:15:47.060309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 14 12:15:47.068669 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 14 12:15:47.075791 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 14 12:15:47.087766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 14 12:15:47.094928 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 14 12:15:47.097480 augenrules[1437]: No rules
Sep 14 12:15:47.100085 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 14 12:15:47.100435 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 14 12:15:47.108477 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 14 12:15:47.125566 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 14 12:15:47.283194 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Sep 14 12:15:47.287668 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep 14 12:15:47.288105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 14 12:15:47.288267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 14 12:15:47.291850 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 14 12:15:47.297609 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 14 12:15:47.303456 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 14 12:15:47.304194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 14 12:15:47.304264 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 14 12:15:47.304308 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 14 12:15:47.304335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 14 12:15:47.325015 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 14 12:15:47.326011 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 14 12:15:47.352690 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 14 12:15:47.357632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 14 12:15:47.358949 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 14 12:15:47.360065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 14 12:15:47.360860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 14 12:15:47.361992 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 14 12:15:47.362044 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 14 12:15:47.370448 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep 14 12:15:47.437118 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 14 12:15:47.440785 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 14 12:15:47.451190 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 14 12:15:47.471764 systemd-networkd[1441]: lo: Link UP
Sep 14 12:15:47.471775 systemd-networkd[1441]: lo: Gained carrier
Sep 14 12:15:47.472864 systemd-networkd[1441]: Enumeration completed
Sep 14 12:15:47.473024 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 14 12:15:47.477894 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 14 12:15:47.479572 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 14 12:15:47.494009 systemd-networkd[1441]: eth0: Configuring with /run/systemd/network/10-8a:40:e6:a1:81:94.network.
Sep 14 12:15:47.496145 systemd-networkd[1441]: eth0: Link UP
Sep 14 12:15:47.496283 systemd-networkd[1441]: eth0: Gained carrier
Sep 14 12:15:47.514436 systemd-resolved[1399]: Positive Trust Anchors:
Sep 14 12:15:47.514906 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 14 12:15:47.515064 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 14 12:15:47.516995 systemd-networkd[1441]: eth1: Configuring with /run/systemd/network/10-a2:41:0d:4b:d9:1d.network.
Sep 14 12:15:47.518981 systemd-networkd[1441]: eth1: Link UP
Sep 14 12:15:47.521084 systemd-networkd[1441]: eth1: Gained carrier
Sep 14 12:15:47.526724 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 14 12:15:47.528957 systemd-resolved[1399]: Using system hostname 'ci-4459.0.0-9-e5fa973bfc'.
Sep 14 12:15:47.537473 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 14 12:15:47.538359 systemd[1]: Reached target network.target - Network.
Sep 14 12:15:47.539695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 14 12:15:47.542105 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 14 12:15:47.554129 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 14 12:15:47.554739 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 14 12:15:47.555482 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 14 12:15:47.557735 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 14 12:15:47.558140 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 14 12:15:47.558497 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 14 12:15:47.558856 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 14 12:15:47.558892 systemd[1]: Reached target paths.target - Path Units.
Sep 14 12:15:47.559170 systemd[1]: Reached target time-set.target - System Time Set.
Sep 14 12:15:47.559715 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 14 12:15:47.560109 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 14 12:15:47.560449 systemd[1]: Reached target timers.target - Timer Units.
Sep 14 12:15:47.562367 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 14 12:15:47.565434 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 14 12:15:47.571203 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 14 12:15:47.573446 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 14 12:15:47.575163 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 14 12:15:47.583050 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 14 12:15:47.584433 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 14 12:15:47.586564 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 14 12:15:47.589287 systemd[1]: Reached target sockets.target - Socket Units.
Sep 14 12:15:47.590338 systemd[1]: Reached target basic.target - Basic System.
Sep 14 12:15:47.590979 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 14 12:15:47.591008 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 14 12:15:47.592897 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 14 12:15:47.598843 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 14 12:15:47.602024 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 14 12:15:47.607471 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 14 12:15:47.613705 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 14 12:15:47.617802 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 14 12:15:47.618303 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 14 12:15:47.620427 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 14 12:15:47.627411 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 14 12:15:47.636122 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 14 12:15:47.640875 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 14 12:15:47.650306 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 14 12:15:47.655626 kernel: mousedev: PS/2 mouse device common for all mice
Sep 14 12:15:47.655712 jq[1511]: false
Sep 14 12:15:47.657008 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing passwd entry cache
Sep 14 12:15:47.656983 oslogin_cache_refresh[1513]: Refreshing passwd entry cache
Sep 14 12:15:47.665966 oslogin_cache_refresh[1513]: Failure getting users, quitting
Sep 14 12:15:47.666301 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting users, quitting
Sep 14 12:15:47.666301 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 14 12:15:47.666301 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing group entry cache
Sep 14 12:15:47.665994 oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 14 12:15:47.666063 oslogin_cache_refresh[1513]: Refreshing group entry cache
Sep 14 12:15:47.666954 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 14 12:15:47.669841 oslogin_cache_refresh[1513]: Failure getting groups, quitting
Sep 14 12:15:47.671786 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting groups, quitting
Sep 14 12:15:47.671786 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 14 12:15:47.668487 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 14 12:15:47.669856 oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 14 12:15:47.669118 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 14 12:15:47.671178 systemd[1]: Starting update-engine.service - Update Engine...
Sep 14 12:15:47.676304 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 14 12:15:47.693654 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 14 12:15:47.695052 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 14 12:15:47.695329 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 14 12:15:47.695671 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 14 12:15:47.695889 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 14 12:15:47.704963 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 14 12:15:47.705344 systemd-timesyncd[1416]: Contacted time server 73.185.182.209:123 (0.flatcar.pool.ntp.org).
Sep 14 12:15:47.705403 systemd-timesyncd[1416]: Initial clock synchronization to Sun 2025-09-14 12:15:47.801360 UTC.
Sep 14 12:15:47.705789 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 14 12:15:47.730623 jq[1525]: true
Sep 14 12:15:47.758533 extend-filesystems[1512]: Found /dev/vda6
Sep 14 12:15:47.768951 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 14 12:15:47.769035 update_engine[1521]: I20250914 12:15:47.763926 1521 main.cc:92] Flatcar Update Engine starting
Sep 14 12:15:47.771522 tar[1529]: linux-amd64/LICENSE
Sep 14 12:15:47.771522 tar[1529]: linux-amd64/helm
Sep 14 12:15:47.779740 kernel: ACPI: button: Power Button [PWRF]
Sep 14 12:15:47.782843 extend-filesystems[1512]: Found /dev/vda9
Sep 14 12:15:47.782835 systemd[1]: motdgen.service: Deactivated successfully.
Sep 14 12:15:47.784190 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 14 12:15:47.793586 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 14 12:15:47.794516 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 14 12:15:47.803641 jq[1542]: true
Sep 14 12:15:47.805126 dbus-daemon[1509]: [system] SELinux support is enabled
Sep 14 12:15:47.807405 coreos-metadata[1508]: Sep 14 12:15:47.804 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 14 12:15:47.808691 extend-filesystems[1512]: Checking size of /dev/vda9
Sep 14 12:15:47.805360 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 14 12:15:47.810338 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 14 12:15:47.810372 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 14 12:15:47.810889 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 14 12:15:47.810970 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Sep 14 12:15:47.810987 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 14 12:15:47.822631 coreos-metadata[1508]: Sep 14 12:15:47.822 INFO Fetch successful
Sep 14 12:15:47.833754 systemd[1]: Started update-engine.service - Update Engine.
Sep 14 12:15:47.834140 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 14 12:15:47.849419 update_engine[1521]: I20250914 12:15:47.836150 1521 update_check_scheduler.cc:74] Next update check in 11m50s
Sep 14 12:15:47.837298 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 14 12:15:47.887049 extend-filesystems[1512]: Resized partition /dev/vda9
Sep 14 12:15:47.907980 extend-filesystems[1574]: resize2fs 1.47.3 (8-Jul-2025)
Sep 14 12:15:47.913685 bash[1573]: Updated "/home/core/.ssh/authorized_keys"
Sep 14 12:15:47.919900 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 14 12:15:47.929641 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 14 12:15:47.927008 systemd[1]: Starting sshkeys.service...
Sep 14 12:15:47.980785 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 14 12:15:47.981400 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 14 12:15:48.023308 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 14 12:15:48.024263 systemd-logind[1520]: New seat seat0.
Sep 14 12:15:48.033751 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 14 12:15:48.044612 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 14 12:15:48.069693 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 14 12:15:48.069781 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 14 12:15:48.073636 kernel: Console: switching to colour dummy device 80x25
Sep 14 12:15:48.073754 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 14 12:15:48.073774 kernel: [drm] features: -context_init
Sep 14 12:15:48.075777 kernel: [drm] number of scanouts: 1
Sep 14 12:15:48.075859 kernel: [drm] number of cap sets: 0
Sep 14 12:15:48.075880 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Sep 14 12:15:48.078647 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 14 12:15:48.080910 kernel: Console: switching to colour frame buffer device 128x48
Sep 14 12:15:48.084642 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 14 12:15:48.142203 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 14 12:15:48.158696 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 14 12:15:48.158696 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 14 12:15:48.158696 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 14 12:15:48.161072 extend-filesystems[1512]: Resized filesystem in /dev/vda9
Sep 14 12:15:48.160458 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 14 12:15:48.160760 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 14 12:15:48.259521 coreos-metadata[1581]: Sep 14 12:15:48.257 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 14 12:15:48.277621 coreos-metadata[1581]: Sep 14 12:15:48.276 INFO Fetch successful
Sep 14 12:15:48.292686 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 14 12:15:48.292656 unknown[1581]: wrote ssh authorized keys file for user: core
Sep 14 12:15:48.337672 update-ssh-keys[1607]: Updated "/home/core/.ssh/authorized_keys"
Sep 14 12:15:48.338564 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 14 12:15:48.350891 systemd[1]: Finished sshkeys.service.
Sep 14 12:15:48.373649 containerd[1546]: time="2025-09-14T12:15:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 14 12:15:48.373649 containerd[1546]: time="2025-09-14T12:15:48.372825138Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391048880Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.206µs"
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391095805Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391121037Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391320610Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391347316Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391393631Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391482562Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391501816Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391761648Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391778061Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391794986Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 14 12:15:48.392908 containerd[1546]: time="2025-09-14T12:15:48.391806992Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 14 12:15:48.393447 containerd[1546]: time="2025-09-14T12:15:48.391891788Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 14 12:15:48.393447 containerd[1546]: time="2025-09-14T12:15:48.392111108Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 14 12:15:48.393447 containerd[1546]: time="2025-09-14T12:15:48.392144805Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 14 12:15:48.393447 containerd[1546]: time="2025-09-14T12:15:48.392156596Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 14 12:15:48.393447 containerd[1546]: time="2025-09-14T12:15:48.392190350Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 14 12:15:48.393447 containerd[1546]: time="2025-09-14T12:15:48.392495866Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 14 12:15:48.393447 containerd[1546]: time="2025-09-14T12:15:48.392585231Z" level=info msg="metadata content store policy set" policy=shared
Sep 14 12:15:48.402389 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.405971710Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406056908Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406074773Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406087358Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406123053Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406134613Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406146239Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406157982Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406181169Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406417535Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406437771Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.406454335Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.407717456Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 14 12:15:48.409727 containerd[1546]: time="2025-09-14T12:15:48.407759195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.407792913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.407815023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.407826397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.407851994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.407867503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.407878480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.407902796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.407915290Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.407927905Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.408028930Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.408053034Z" level=info msg="Start snapshots syncer"
Sep 14 12:15:48.410238 containerd[1546]: time="2025-09-14T12:15:48.408089858Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 14 12:15:48.410641 containerd[1546]: time="2025-09-14T12:15:48.408372404Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 14 12:15:48.410641 containerd[1546]: time="2025-09-14T12:15:48.408445546Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 14 12:15:48.410899 containerd[1546]: time="2025-09-14T12:15:48.408541277Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414106677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414187738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414202578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414227487Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414252827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414266037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414276961Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414311002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414326961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414337130Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414407068Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414445382Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 14 12:15:48.416354 containerd[1546]: time="2025-09-14T12:15:48.414456803Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 14 12:15:48.416785 containerd[1546]: time="2025-09-14T12:15:48.414466835Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 14 12:15:48.416785 containerd[1546]: time="2025-09-14T12:15:48.414474859Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 14 12:15:48.416785 containerd[1546]: time="2025-09-14T12:15:48.414483880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 14 12:15:48.416785 containerd[1546]: time="2025-09-14T12:15:48.414494412Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 14 12:15:48.416785 containerd[1546]: time="2025-09-14T12:15:48.414514500Z" level=info msg="runtime interface created" Sep 14 12:15:48.416785 containerd[1546]: time="2025-09-14T12:15:48.414520820Z" level=info msg="created NRI interface" Sep 14 12:15:48.416785 containerd[1546]: time="2025-09-14T12:15:48.414531539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 14 12:15:48.416785 containerd[1546]: time="2025-09-14T12:15:48.414547964Z" level=info msg="Connect containerd service" Sep 14 12:15:48.416785 containerd[1546]: time="2025-09-14T12:15:48.414599057Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 14 12:15:48.420827 systemd[1]: 
Finished sshd-keygen.service - Generate sshd host keys. Sep 14 12:15:48.429664 containerd[1546]: time="2025-09-14T12:15:48.427196647Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 14 12:15:48.428277 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 14 12:15:48.524490 systemd[1]: issuegen.service: Deactivated successfully. Sep 14 12:15:48.527108 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 14 12:15:48.534973 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 14 12:15:48.596034 systemd-networkd[1441]: eth1: Gained IPv6LL Sep 14 12:15:48.607429 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 14 12:15:48.611300 systemd[1]: Reached target network-online.target - Network is Online. Sep 14 12:15:48.616959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 14 12:15:48.623417 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 14 12:15:48.668367 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 14 12:15:48.679142 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 14 12:15:48.684214 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 14 12:15:48.685724 systemd[1]: Reached target getty.target - Login Prompts. Sep 14 12:15:48.735418 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 14 12:15:48.822792 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 14 12:15:48.857868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.877373324Z" level=info msg="Start subscribing containerd event" Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.877465276Z" level=info msg="Start recovering state" Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.877672804Z" level=info msg="Start event monitor" Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.877700137Z" level=info msg="Start cni network conf syncer for default" Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.877741496Z" level=info msg="Start streaming server" Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.877760657Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.877771970Z" level=info msg="runtime interface starting up..." Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.877780331Z" level=info msg="starting plugins..." Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.877819596Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.880880518Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 14 12:15:48.881846 containerd[1546]: time="2025-09-14T12:15:48.881021102Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 14 12:15:48.882919 containerd[1546]: time="2025-09-14T12:15:48.882553927Z" level=info msg="containerd successfully booted in 0.510664s" Sep 14 12:15:48.882739 systemd[1]: Started containerd.service - containerd container runtime. Sep 14 12:15:48.930181 systemd-logind[1520]: Watching system buttons on /dev/input/event2 (Power Button) Sep 14 12:15:49.021516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 14 12:15:49.021957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 14 12:15:49.030020 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 14 12:15:49.037155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 14 12:15:49.082074 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 14 12:15:49.082988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 14 12:15:49.090234 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 14 12:15:49.095330 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 14 12:15:49.234851 systemd-networkd[1441]: eth0: Gained IPv6LL Sep 14 12:15:49.299453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 14 12:15:49.369562 kernel: EDAC MC: Ver: 3.0.0 Sep 14 12:15:49.469499 tar[1529]: linux-amd64/README.md Sep 14 12:15:49.497811 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 14 12:15:50.116175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 14 12:15:50.117840 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 14 12:15:50.122204 systemd[1]: Startup finished in 3.319s (kernel) + 6.646s (initrd) + 5.897s (userspace) = 15.862s. 
Sep 14 12:15:50.126228 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 14 12:15:50.810014 kubelet[1679]: E0914 12:15:50.809952 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 14 12:15:50.814248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 14 12:15:50.814420 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 14 12:15:50.814887 systemd[1]: kubelet.service: Consumed 1.339s CPU time, 264.1M memory peak. Sep 14 12:15:51.634293 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 14 12:15:51.635787 systemd[1]: Started sshd@0-143.198.142.64:22-139.178.89.65:55616.service - OpenSSH per-connection server daemon (139.178.89.65:55616). Sep 14 12:15:51.746395 sshd[1690]: Accepted publickey for core from 139.178.89.65 port 55616 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:15:51.748882 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:15:51.757590 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 14 12:15:51.758902 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 14 12:15:51.768871 systemd-logind[1520]: New session 1 of user core. Sep 14 12:15:51.792281 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 14 12:15:51.795996 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 14 12:15:51.827692 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 14 12:15:51.832643 systemd-logind[1520]: New session c1 of user core. Sep 14 12:15:52.268942 systemd[1695]: Queued start job for default target default.target. Sep 14 12:15:52.293361 systemd[1695]: Created slice app.slice - User Application Slice. Sep 14 12:15:52.293728 systemd[1695]: Reached target paths.target - Paths. Sep 14 12:15:52.293818 systemd[1695]: Reached target timers.target - Timers. Sep 14 12:15:52.296137 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 14 12:15:52.321217 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 14 12:15:52.321366 systemd[1695]: Reached target sockets.target - Sockets. Sep 14 12:15:52.321422 systemd[1695]: Reached target basic.target - Basic System. Sep 14 12:15:52.321466 systemd[1695]: Reached target default.target - Main User Target. Sep 14 12:15:52.321510 systemd[1695]: Startup finished in 478ms. Sep 14 12:15:52.321986 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 14 12:15:52.332014 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 14 12:15:52.402951 systemd[1]: Started sshd@1-143.198.142.64:22-139.178.89.65:55630.service - OpenSSH per-connection server daemon (139.178.89.65:55630). Sep 14 12:15:52.474151 sshd[1706]: Accepted publickey for core from 139.178.89.65 port 55630 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:15:52.476391 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:15:52.484691 systemd-logind[1520]: New session 2 of user core. Sep 14 12:15:52.491884 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 14 12:15:52.554080 sshd[1709]: Connection closed by 139.178.89.65 port 55630 Sep 14 12:15:52.554683 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Sep 14 12:15:52.563722 systemd[1]: sshd@1-143.198.142.64:22-139.178.89.65:55630.service: Deactivated successfully. Sep 14 12:15:52.566260 systemd[1]: session-2.scope: Deactivated successfully. Sep 14 12:15:52.568294 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit. Sep 14 12:15:52.572292 systemd[1]: Started sshd@2-143.198.142.64:22-139.178.89.65:55640.service - OpenSSH per-connection server daemon (139.178.89.65:55640). Sep 14 12:15:52.574244 systemd-logind[1520]: Removed session 2. Sep 14 12:15:52.645010 sshd[1715]: Accepted publickey for core from 139.178.89.65 port 55640 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:15:52.646737 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:15:52.652905 systemd-logind[1520]: New session 3 of user core. Sep 14 12:15:52.662962 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 14 12:15:52.719633 sshd[1718]: Connection closed by 139.178.89.65 port 55640 Sep 14 12:15:52.720299 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Sep 14 12:15:52.735192 systemd[1]: sshd@2-143.198.142.64:22-139.178.89.65:55640.service: Deactivated successfully. Sep 14 12:15:52.737775 systemd[1]: session-3.scope: Deactivated successfully. Sep 14 12:15:52.738831 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit. Sep 14 12:15:52.743229 systemd[1]: Started sshd@3-143.198.142.64:22-139.178.89.65:55652.service - OpenSSH per-connection server daemon (139.178.89.65:55652). Sep 14 12:15:52.744437 systemd-logind[1520]: Removed session 3. 
Sep 14 12:15:52.812175 sshd[1724]: Accepted publickey for core from 139.178.89.65 port 55652 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:15:52.814251 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:15:52.820355 systemd-logind[1520]: New session 4 of user core. Sep 14 12:15:52.827959 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 14 12:15:52.892084 sshd[1727]: Connection closed by 139.178.89.65 port 55652 Sep 14 12:15:52.892881 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Sep 14 12:15:52.904877 systemd[1]: sshd@3-143.198.142.64:22-139.178.89.65:55652.service: Deactivated successfully. Sep 14 12:15:52.907166 systemd[1]: session-4.scope: Deactivated successfully. Sep 14 12:15:52.909194 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit. Sep 14 12:15:52.912017 systemd[1]: Started sshd@4-143.198.142.64:22-139.178.89.65:55656.service - OpenSSH per-connection server daemon (139.178.89.65:55656). Sep 14 12:15:52.914902 systemd-logind[1520]: Removed session 4. Sep 14 12:15:52.986588 sshd[1733]: Accepted publickey for core from 139.178.89.65 port 55656 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:15:52.988334 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:15:52.993704 systemd-logind[1520]: New session 5 of user core. Sep 14 12:15:53.005006 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 14 12:15:53.077752 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 14 12:15:53.079483 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 14 12:15:53.097215 sudo[1737]: pam_unix(sudo:session): session closed for user root Sep 14 12:15:53.100919 sshd[1736]: Connection closed by 139.178.89.65 port 55656 Sep 14 12:15:53.101781 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Sep 14 12:15:53.118276 systemd[1]: sshd@4-143.198.142.64:22-139.178.89.65:55656.service: Deactivated successfully. Sep 14 12:15:53.120359 systemd[1]: session-5.scope: Deactivated successfully. Sep 14 12:15:53.121484 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit. Sep 14 12:15:53.126067 systemd[1]: Started sshd@5-143.198.142.64:22-139.178.89.65:55668.service - OpenSSH per-connection server daemon (139.178.89.65:55668). Sep 14 12:15:53.127960 systemd-logind[1520]: Removed session 5. Sep 14 12:15:53.194084 sshd[1743]: Accepted publickey for core from 139.178.89.65 port 55668 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:15:53.195646 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:15:53.202760 systemd-logind[1520]: New session 6 of user core. Sep 14 12:15:53.210051 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 14 12:15:53.270703 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 14 12:15:53.271109 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 14 12:15:53.277375 sudo[1748]: pam_unix(sudo:session): session closed for user root Sep 14 12:15:53.285734 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 14 12:15:53.286675 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 14 12:15:53.299827 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 14 12:15:53.359733 augenrules[1770]: No rules Sep 14 12:15:53.361531 systemd[1]: audit-rules.service: Deactivated successfully. Sep 14 12:15:53.362270 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 14 12:15:53.364228 sudo[1747]: pam_unix(sudo:session): session closed for user root Sep 14 12:15:53.369122 sshd[1746]: Connection closed by 139.178.89.65 port 55668 Sep 14 12:15:53.368345 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Sep 14 12:15:53.381680 systemd[1]: sshd@5-143.198.142.64:22-139.178.89.65:55668.service: Deactivated successfully. Sep 14 12:15:53.383867 systemd[1]: session-6.scope: Deactivated successfully. Sep 14 12:15:53.385673 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit. Sep 14 12:15:53.388144 systemd[1]: Started sshd@6-143.198.142.64:22-139.178.89.65:55676.service - OpenSSH per-connection server daemon (139.178.89.65:55676). Sep 14 12:15:53.389973 systemd-logind[1520]: Removed session 6. 
Sep 14 12:15:53.454497 sshd[1779]: Accepted publickey for core from 139.178.89.65 port 55676 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:15:53.456258 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:15:53.463128 systemd-logind[1520]: New session 7 of user core. Sep 14 12:15:53.477936 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 14 12:15:53.538000 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 14 12:15:53.538337 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 14 12:15:54.024482 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 14 12:15:54.039409 (dockerd)[1801]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 14 12:15:54.419972 dockerd[1801]: time="2025-09-14T12:15:54.419661537Z" level=info msg="Starting up" Sep 14 12:15:54.421621 dockerd[1801]: time="2025-09-14T12:15:54.421084111Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 14 12:15:54.442872 dockerd[1801]: time="2025-09-14T12:15:54.442784330Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 14 12:15:54.464666 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport850735862-merged.mount: Deactivated successfully. Sep 14 12:15:54.469153 systemd[1]: var-lib-docker-metacopy\x2dcheck939975600-merged.mount: Deactivated successfully. Sep 14 12:15:54.489285 dockerd[1801]: time="2025-09-14T12:15:54.489004547Z" level=info msg="Loading containers: start." 
Sep 14 12:15:54.500695 kernel: Initializing XFRM netlink socket Sep 14 12:15:54.789918 systemd-networkd[1441]: docker0: Link UP Sep 14 12:15:54.793651 dockerd[1801]: time="2025-09-14T12:15:54.793490276Z" level=info msg="Loading containers: done." Sep 14 12:15:54.812509 dockerd[1801]: time="2025-09-14T12:15:54.812107793Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 14 12:15:54.812509 dockerd[1801]: time="2025-09-14T12:15:54.812223742Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 14 12:15:54.812509 dockerd[1801]: time="2025-09-14T12:15:54.812335345Z" level=info msg="Initializing buildkit" Sep 14 12:15:54.814521 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1898651596-merged.mount: Deactivated successfully. Sep 14 12:15:54.832343 dockerd[1801]: time="2025-09-14T12:15:54.832294728Z" level=info msg="Completed buildkit initialization" Sep 14 12:15:54.840313 dockerd[1801]: time="2025-09-14T12:15:54.840252038Z" level=info msg="Daemon has completed initialization" Sep 14 12:15:54.840438 dockerd[1801]: time="2025-09-14T12:15:54.840348317Z" level=info msg="API listen on /run/docker.sock" Sep 14 12:15:54.841369 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 14 12:15:55.726645 containerd[1546]: time="2025-09-14T12:15:55.726517910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 14 12:15:56.294045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807794687.mount: Deactivated successfully. 
Sep 14 12:15:57.452715 containerd[1546]: time="2025-09-14T12:15:57.452654005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:15:57.453719 containerd[1546]: time="2025-09-14T12:15:57.453676441Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 14 12:15:57.454621 containerd[1546]: time="2025-09-14T12:15:57.454230268Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:15:57.456852 containerd[1546]: time="2025-09-14T12:15:57.456802384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:15:57.458203 containerd[1546]: time="2025-09-14T12:15:57.457739473Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.731181122s" Sep 14 12:15:57.458203 containerd[1546]: time="2025-09-14T12:15:57.457784487Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 14 12:15:57.458525 containerd[1546]: time="2025-09-14T12:15:57.458500031Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 14 12:15:58.848002 containerd[1546]: time="2025-09-14T12:15:58.847931210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:15:58.849182 containerd[1546]: time="2025-09-14T12:15:58.849152765Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 14 12:15:58.849819 containerd[1546]: time="2025-09-14T12:15:58.849792733Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:15:58.852665 containerd[1546]: time="2025-09-14T12:15:58.852606875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:15:58.853502 containerd[1546]: time="2025-09-14T12:15:58.853457231Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.394926064s" Sep 14 12:15:58.853502 containerd[1546]: time="2025-09-14T12:15:58.853492064Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 14 12:15:58.854198 containerd[1546]: time="2025-09-14T12:15:58.853951021Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 14 12:16:00.205641 containerd[1546]: time="2025-09-14T12:16:00.204606945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:00.206205 containerd[1546]: time="2025-09-14T12:16:00.206153945Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 14 12:16:00.207077 containerd[1546]: time="2025-09-14T12:16:00.207025310Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:00.213568 containerd[1546]: time="2025-09-14T12:16:00.213499999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:00.218046 containerd[1546]: time="2025-09-14T12:16:00.217981543Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.363989603s" Sep 14 12:16:00.218298 containerd[1546]: time="2025-09-14T12:16:00.218268576Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 14 12:16:00.219487 containerd[1546]: time="2025-09-14T12:16:00.219430944Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 14 12:16:00.850573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 14 12:16:00.854124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 14 12:16:01.127450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 14 12:16:01.145328 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 14 12:16:01.246240 kubelet[2095]: E0914 12:16:01.246160 2095 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 14 12:16:01.254711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 14 12:16:01.254971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 14 12:16:01.258573 systemd[1]: kubelet.service: Consumed 307ms CPU time, 108.3M memory peak. Sep 14 12:16:01.518936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777123539.mount: Deactivated successfully. Sep 14 12:16:02.185044 containerd[1546]: time="2025-09-14T12:16:02.184983333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:02.186605 containerd[1546]: time="2025-09-14T12:16:02.186550245Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 14 12:16:02.187612 containerd[1546]: time="2025-09-14T12:16:02.187435664Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:02.189602 containerd[1546]: time="2025-09-14T12:16:02.189554241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:02.191073 containerd[1546]: time="2025-09-14T12:16:02.190915935Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.971445587s" Sep 14 12:16:02.191073 containerd[1546]: time="2025-09-14T12:16:02.190953283Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 14 12:16:02.191582 containerd[1546]: time="2025-09-14T12:16:02.191561713Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 14 12:16:02.193063 systemd-resolved[1399]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Sep 14 12:16:02.768692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179980293.mount: Deactivated successfully. Sep 14 12:16:03.668627 containerd[1546]: time="2025-09-14T12:16:03.667811971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:03.669933 containerd[1546]: time="2025-09-14T12:16:03.669896953Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 14 12:16:03.671012 containerd[1546]: time="2025-09-14T12:16:03.670980987Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:03.673227 containerd[1546]: time="2025-09-14T12:16:03.673192102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:03.674711 containerd[1546]: 
time="2025-09-14T12:16:03.674405278Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.482676247s" Sep 14 12:16:03.674711 containerd[1546]: time="2025-09-14T12:16:03.674443311Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 14 12:16:03.675153 containerd[1546]: time="2025-09-14T12:16:03.675124296Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 14 12:16:04.205015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819649355.mount: Deactivated successfully. Sep 14 12:16:04.217330 containerd[1546]: time="2025-09-14T12:16:04.217243683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 14 12:16:04.219665 containerd[1546]: time="2025-09-14T12:16:04.219310340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 14 12:16:04.222407 containerd[1546]: time="2025-09-14T12:16:04.222354184Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 14 12:16:04.225626 containerd[1546]: time="2025-09-14T12:16:04.224821435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 14 12:16:04.226121 containerd[1546]: time="2025-09-14T12:16:04.226074012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 550.800507ms" Sep 14 12:16:04.226285 containerd[1546]: time="2025-09-14T12:16:04.226265857Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 14 12:16:04.227023 containerd[1546]: time="2025-09-14T12:16:04.226992039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 14 12:16:04.812398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232015208.mount: Deactivated successfully. Sep 14 12:16:05.298837 systemd-resolved[1399]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Sep 14 12:16:06.652511 containerd[1546]: time="2025-09-14T12:16:06.650983122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:06.652511 containerd[1546]: time="2025-09-14T12:16:06.652057466Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 14 12:16:06.652511 containerd[1546]: time="2025-09-14T12:16:06.652432592Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:06.656400 containerd[1546]: time="2025-09-14T12:16:06.656347536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 14 12:16:06.657975 containerd[1546]: time="2025-09-14T12:16:06.657921229Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.430725856s" Sep 14 12:16:06.657975 containerd[1546]: time="2025-09-14T12:16:06.657973267Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 14 12:16:09.593408 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 14 12:16:09.593666 systemd[1]: kubelet.service: Consumed 307ms CPU time, 108.3M memory peak. Sep 14 12:16:09.597571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 14 12:16:09.635977 systemd[1]: Reload requested from client PID 2244 ('systemctl') (unit session-7.scope)... 
Sep 14 12:16:09.636001 systemd[1]: Reloading... Sep 14 12:16:09.787671 zram_generator::config[2287]: No configuration found. Sep 14 12:16:10.084082 systemd[1]: Reloading finished in 447 ms. Sep 14 12:16:10.156634 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 14 12:16:10.156774 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 14 12:16:10.157141 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 14 12:16:10.157203 systemd[1]: kubelet.service: Consumed 126ms CPU time, 98.1M memory peak. Sep 14 12:16:10.159634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 14 12:16:10.333151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 14 12:16:10.345276 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 14 12:16:10.404132 kubelet[2341]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 14 12:16:10.404132 kubelet[2341]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 14 12:16:10.404132 kubelet[2341]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 14 12:16:10.404737 kubelet[2341]: I0914 12:16:10.404211 2341 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 14 12:16:10.827401 kubelet[2341]: I0914 12:16:10.827337 2341 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 14 12:16:10.827723 kubelet[2341]: I0914 12:16:10.827708 2341 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 14 12:16:10.828336 kubelet[2341]: I0914 12:16:10.828300 2341 server.go:954] "Client rotation is on, will bootstrap in background" Sep 14 12:16:10.867191 kubelet[2341]: E0914 12:16:10.867126 2341 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.198.142.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.142.64:6443: connect: connection refused" logger="UnhandledError" Sep 14 12:16:10.870104 kubelet[2341]: I0914 12:16:10.869028 2341 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 14 12:16:10.881773 kubelet[2341]: I0914 12:16:10.881734 2341 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 14 12:16:10.892095 kubelet[2341]: I0914 12:16:10.891631 2341 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 14 12:16:10.894206 kubelet[2341]: I0914 12:16:10.894101 2341 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 14 12:16:10.894414 kubelet[2341]: I0914 12:16:10.894185 2341 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-9-e5fa973bfc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 14 12:16:10.896024 kubelet[2341]: I0914 12:16:10.895975 2341 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 14 12:16:10.896024 kubelet[2341]: I0914 12:16:10.896002 2341 container_manager_linux.go:304] "Creating device plugin manager" Sep 14 12:16:10.897315 kubelet[2341]: I0914 12:16:10.897265 2341 state_mem.go:36] "Initialized new in-memory state store" Sep 14 12:16:10.901118 kubelet[2341]: I0914 12:16:10.901061 2341 kubelet.go:446] "Attempting to sync node with API server" Sep 14 12:16:10.901118 kubelet[2341]: I0914 12:16:10.901115 2341 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 14 12:16:10.901287 kubelet[2341]: I0914 12:16:10.901166 2341 kubelet.go:352] "Adding apiserver pod source" Sep 14 12:16:10.901287 kubelet[2341]: I0914 12:16:10.901186 2341 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 14 12:16:10.907921 kubelet[2341]: I0914 12:16:10.907876 2341 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 14 12:16:10.912482 kubelet[2341]: I0914 12:16:10.911500 2341 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 14 12:16:10.912482 kubelet[2341]: W0914 12:16:10.912077 2341 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 14 12:16:10.913451 kubelet[2341]: I0914 12:16:10.912801 2341 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 14 12:16:10.913451 kubelet[2341]: I0914 12:16:10.912844 2341 server.go:1287] "Started kubelet" Sep 14 12:16:10.913451 kubelet[2341]: W0914 12:16:10.913037 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.142.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.142.64:6443: connect: connection refused Sep 14 12:16:10.913451 kubelet[2341]: E0914 12:16:10.913110 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.142.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.142.64:6443: connect: connection refused" logger="UnhandledError" Sep 14 12:16:10.913451 kubelet[2341]: W0914 12:16:10.913196 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.142.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-9-e5fa973bfc&limit=500&resourceVersion=0": dial tcp 143.198.142.64:6443: connect: connection refused Sep 14 12:16:10.913451 kubelet[2341]: E0914 12:16:10.913235 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.142.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-9-e5fa973bfc&limit=500&resourceVersion=0\": dial tcp 143.198.142.64:6443: connect: connection refused" logger="UnhandledError" Sep 14 12:16:10.917640 kubelet[2341]: I0914 12:16:10.917600 2341 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 14 12:16:10.918870 kubelet[2341]: I0914 12:16:10.918841 2341 server.go:479] "Adding debug handlers to kubelet server" Sep 14 12:16:10.919996 
kubelet[2341]: I0914 12:16:10.919905 2341 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 14 12:16:10.920328 kubelet[2341]: I0914 12:16:10.920304 2341 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 14 12:16:10.923603 kubelet[2341]: I0914 12:16:10.923045 2341 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 14 12:16:10.924287 kubelet[2341]: E0914 12:16:10.921864 2341 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.142.64:6443/api/v1/namespaces/default/events\": dial tcp 143.198.142.64:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.0.0-9-e5fa973bfc.1865253c66e86cf3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.0.0-9-e5fa973bfc,UID:ci-4459.0.0-9-e5fa973bfc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.0.0-9-e5fa973bfc,},FirstTimestamp:2025-09-14 12:16:10.912820467 +0000 UTC m=+0.562749431,LastTimestamp:2025-09-14 12:16:10.912820467 +0000 UTC m=+0.562749431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.0.0-9-e5fa973bfc,}" Sep 14 12:16:10.925994 kubelet[2341]: I0914 12:16:10.925649 2341 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 14 12:16:10.931930 kubelet[2341]: E0914 12:16:10.931881 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" Sep 14 12:16:10.932063 kubelet[2341]: I0914 12:16:10.931947 2341 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 14 12:16:10.932158 kubelet[2341]: I0914 12:16:10.932145 
2341 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 14 12:16:10.932219 kubelet[2341]: I0914 12:16:10.932207 2341 reconciler.go:26] "Reconciler: start to sync state" Sep 14 12:16:10.934679 kubelet[2341]: W0914 12:16:10.934467 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.142.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.142.64:6443: connect: connection refused Sep 14 12:16:10.934679 kubelet[2341]: E0914 12:16:10.934524 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.142.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.142.64:6443: connect: connection refused" logger="UnhandledError" Sep 14 12:16:10.934679 kubelet[2341]: E0914 12:16:10.934582 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.142.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-9-e5fa973bfc?timeout=10s\": dial tcp 143.198.142.64:6443: connect: connection refused" interval="200ms" Sep 14 12:16:10.934954 kubelet[2341]: I0914 12:16:10.934851 2341 factory.go:221] Registration of the systemd container factory successfully Sep 14 12:16:10.934954 kubelet[2341]: I0914 12:16:10.934936 2341 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 14 12:16:10.936749 kubelet[2341]: E0914 12:16:10.936719 2341 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 14 12:16:10.937299 kubelet[2341]: I0914 12:16:10.937276 2341 factory.go:221] Registration of the containerd container factory successfully Sep 14 12:16:10.959327 kubelet[2341]: I0914 12:16:10.959082 2341 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 14 12:16:10.961555 kubelet[2341]: I0914 12:16:10.961508 2341 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 14 12:16:10.961795 kubelet[2341]: I0914 12:16:10.961775 2341 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 14 12:16:10.961924 kubelet[2341]: I0914 12:16:10.961904 2341 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 14 12:16:10.962035 kubelet[2341]: I0914 12:16:10.962015 2341 kubelet.go:2382] "Starting kubelet main sync loop" Sep 14 12:16:10.962353 kubelet[2341]: E0914 12:16:10.962247 2341 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 14 12:16:10.969138 kubelet[2341]: W0914 12:16:10.968846 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.142.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.142.64:6443: connect: connection refused Sep 14 12:16:10.969138 kubelet[2341]: E0914 12:16:10.968911 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.142.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.142.64:6443: connect: connection refused" logger="UnhandledError" Sep 14 12:16:10.976970 kubelet[2341]: I0914 12:16:10.976934 2341 cpu_manager.go:221] "Starting CPU 
manager" policy="none" Sep 14 12:16:10.976970 kubelet[2341]: I0914 12:16:10.976958 2341 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 14 12:16:10.976970 kubelet[2341]: I0914 12:16:10.976982 2341 state_mem.go:36] "Initialized new in-memory state store" Sep 14 12:16:10.980080 kubelet[2341]: I0914 12:16:10.980050 2341 policy_none.go:49] "None policy: Start" Sep 14 12:16:10.980274 kubelet[2341]: I0914 12:16:10.980252 2341 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 14 12:16:10.980347 kubelet[2341]: I0914 12:16:10.980290 2341 state_mem.go:35] "Initializing new in-memory state store" Sep 14 12:16:10.987609 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 14 12:16:10.999712 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 14 12:16:11.004737 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 14 12:16:11.024815 kubelet[2341]: I0914 12:16:11.024781 2341 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 14 12:16:11.025327 kubelet[2341]: I0914 12:16:11.025299 2341 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 14 12:16:11.025666 kubelet[2341]: I0914 12:16:11.025607 2341 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 14 12:16:11.026500 kubelet[2341]: I0914 12:16:11.026306 2341 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 14 12:16:11.029327 kubelet[2341]: E0914 12:16:11.028981 2341 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 14 12:16:11.029484 kubelet[2341]: E0914 12:16:11.029465 2341 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.0.0-9-e5fa973bfc\" not found" Sep 14 12:16:11.074799 systemd[1]: Created slice kubepods-burstable-podcae6de4071450f3dc70eefcbdf9d614c.slice - libcontainer container kubepods-burstable-podcae6de4071450f3dc70eefcbdf9d614c.slice. Sep 14 12:16:11.104389 kubelet[2341]: E0914 12:16:11.104086 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.110103 systemd[1]: Created slice kubepods-burstable-pod08eaa25b29e028acc32b51b02c147b0d.slice - libcontainer container kubepods-burstable-pod08eaa25b29e028acc32b51b02c147b0d.slice. Sep 14 12:16:11.124145 kubelet[2341]: E0914 12:16:11.123943 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.127467 kubelet[2341]: I0914 12:16:11.127437 2341 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.128127 systemd[1]: Created slice kubepods-burstable-pod1dcb8daff976de85da52a7663ab67b9f.slice - libcontainer container kubepods-burstable-pod1dcb8daff976de85da52a7663ab67b9f.slice. 
Sep 14 12:16:11.129523 kubelet[2341]: E0914 12:16:11.128981 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.142.64:6443/api/v1/nodes\": dial tcp 143.198.142.64:6443: connect: connection refused" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.131151 kubelet[2341]: E0914 12:16:11.131118 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.136184 kubelet[2341]: E0914 12:16:11.136123 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.142.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-9-e5fa973bfc?timeout=10s\": dial tcp 143.198.142.64:6443: connect: connection refused" interval="400ms" Sep 14 12:16:11.233786 kubelet[2341]: I0914 12:16:11.233723 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dcb8daff976de85da52a7663ab67b9f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" (UID: \"1dcb8daff976de85da52a7663ab67b9f\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.233786 kubelet[2341]: I0914 12:16:11.233772 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08eaa25b29e028acc32b51b02c147b0d-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-9-e5fa973bfc\" (UID: \"08eaa25b29e028acc32b51b02c147b0d\") " pod="kube-system/kube-scheduler-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.233786 kubelet[2341]: I0914 12:16:11.233793 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/1dcb8daff976de85da52a7663ab67b9f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" (UID: \"1dcb8daff976de85da52a7663ab67b9f\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.233786 kubelet[2341]: I0914 12:16:11.233810 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dcb8daff976de85da52a7663ab67b9f-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" (UID: \"1dcb8daff976de85da52a7663ab67b9f\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.234124 kubelet[2341]: I0914 12:16:11.233842 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1dcb8daff976de85da52a7663ab67b9f-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" (UID: \"1dcb8daff976de85da52a7663ab67b9f\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.234124 kubelet[2341]: I0914 12:16:11.233859 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dcb8daff976de85da52a7663ab67b9f-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" (UID: \"1dcb8daff976de85da52a7663ab67b9f\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.234124 kubelet[2341]: I0914 12:16:11.233874 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cae6de4071450f3dc70eefcbdf9d614c-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-9-e5fa973bfc\" (UID: \"cae6de4071450f3dc70eefcbdf9d614c\") " pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.234124 kubelet[2341]: I0914 
12:16:11.233890 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cae6de4071450f3dc70eefcbdf9d614c-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-9-e5fa973bfc\" (UID: \"cae6de4071450f3dc70eefcbdf9d614c\") " pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.234124 kubelet[2341]: I0914 12:16:11.233911 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cae6de4071450f3dc70eefcbdf9d614c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-9-e5fa973bfc\" (UID: \"cae6de4071450f3dc70eefcbdf9d614c\") " pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.331261 kubelet[2341]: I0914 12:16:11.331209 2341 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.331739 kubelet[2341]: E0914 12:16:11.331692 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.142.64:6443/api/v1/nodes\": dial tcp 143.198.142.64:6443: connect: connection refused" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.406100 kubelet[2341]: E0914 12:16:11.405663 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:11.409429 containerd[1546]: time="2025-09-14T12:16:11.409251114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-9-e5fa973bfc,Uid:cae6de4071450f3dc70eefcbdf9d614c,Namespace:kube-system,Attempt:0,}" Sep 14 12:16:11.424882 kubelet[2341]: E0914 12:16:11.424831 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 
14 12:16:11.425547 containerd[1546]: time="2025-09-14T12:16:11.425404397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-9-e5fa973bfc,Uid:08eaa25b29e028acc32b51b02c147b0d,Namespace:kube-system,Attempt:0,}" Sep 14 12:16:11.433088 kubelet[2341]: E0914 12:16:11.432253 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:11.437834 containerd[1546]: time="2025-09-14T12:16:11.437781857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-9-e5fa973bfc,Uid:1dcb8daff976de85da52a7663ab67b9f,Namespace:kube-system,Attempt:0,}" Sep 14 12:16:11.538304 containerd[1546]: time="2025-09-14T12:16:11.538176420Z" level=info msg="connecting to shim f37682f811e6b7feb9a7279911e35429e9141ea81472789266aba05f36ab77b8" address="unix:///run/containerd/s/a180684318bf0557e9606393373c46c2535ca5170f3f388866f0a894dffe37b5" namespace=k8s.io protocol=ttrpc version=3 Sep 14 12:16:11.539095 kubelet[2341]: E0914 12:16:11.538998 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.142.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-9-e5fa973bfc?timeout=10s\": dial tcp 143.198.142.64:6443: connect: connection refused" interval="800ms" Sep 14 12:16:11.540392 containerd[1546]: time="2025-09-14T12:16:11.540342309Z" level=info msg="connecting to shim d746fb7e579a086aa9678f784212c9e334f221c58801aeebabc44f71bec30da4" address="unix:///run/containerd/s/6894a66b5e6c309505511df14f180670454f8eab18359ed90c92b1a8dfee2a68" namespace=k8s.io protocol=ttrpc version=3 Sep 14 12:16:11.585013 containerd[1546]: time="2025-09-14T12:16:11.584821669Z" level=info msg="connecting to shim 40f8b2ee26601cbf8e3099f594d7881c638d2a88fa00e7d0aab1553ec0259c5b" 
address="unix:///run/containerd/s/a942389274ca73d9ae8da790222e6c7b67dd01132b62565e527f7f61afc7a1aa" namespace=k8s.io protocol=ttrpc version=3 Sep 14 12:16:11.670865 systemd[1]: Started cri-containerd-40f8b2ee26601cbf8e3099f594d7881c638d2a88fa00e7d0aab1553ec0259c5b.scope - libcontainer container 40f8b2ee26601cbf8e3099f594d7881c638d2a88fa00e7d0aab1553ec0259c5b. Sep 14 12:16:11.672612 systemd[1]: Started cri-containerd-d746fb7e579a086aa9678f784212c9e334f221c58801aeebabc44f71bec30da4.scope - libcontainer container d746fb7e579a086aa9678f784212c9e334f221c58801aeebabc44f71bec30da4. Sep 14 12:16:11.674031 systemd[1]: Started cri-containerd-f37682f811e6b7feb9a7279911e35429e9141ea81472789266aba05f36ab77b8.scope - libcontainer container f37682f811e6b7feb9a7279911e35429e9141ea81472789266aba05f36ab77b8. Sep 14 12:16:11.735619 kubelet[2341]: I0914 12:16:11.734469 2341 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.735619 kubelet[2341]: E0914 12:16:11.735087 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.142.64:6443/api/v1/nodes\": dial tcp 143.198.142.64:6443: connect: connection refused" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:11.769415 kubelet[2341]: W0914 12:16:11.769269 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.142.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.142.64:6443: connect: connection refused Sep 14 12:16:11.769906 kubelet[2341]: E0914 12:16:11.769711 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.142.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.142.64:6443: connect: connection refused" logger="UnhandledError" Sep 14 12:16:11.802930 kubelet[2341]: W0914 
12:16:11.802856 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.142.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.142.64:6443: connect: connection refused Sep 14 12:16:11.803211 kubelet[2341]: E0914 12:16:11.802989 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.142.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.142.64:6443: connect: connection refused" logger="UnhandledError" Sep 14 12:16:11.812890 containerd[1546]: time="2025-09-14T12:16:11.812722081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-9-e5fa973bfc,Uid:08eaa25b29e028acc32b51b02c147b0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d746fb7e579a086aa9678f784212c9e334f221c58801aeebabc44f71bec30da4\"" Sep 14 12:16:11.816857 kubelet[2341]: E0914 12:16:11.816817 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:11.821148 containerd[1546]: time="2025-09-14T12:16:11.821095535Z" level=info msg="CreateContainer within sandbox \"d746fb7e579a086aa9678f784212c9e334f221c58801aeebabc44f71bec30da4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 14 12:16:11.833224 containerd[1546]: time="2025-09-14T12:16:11.833163858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-9-e5fa973bfc,Uid:cae6de4071450f3dc70eefcbdf9d614c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f37682f811e6b7feb9a7279911e35429e9141ea81472789266aba05f36ab77b8\"" Sep 14 12:16:11.833783 containerd[1546]: time="2025-09-14T12:16:11.833324077Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-9-e5fa973bfc,Uid:1dcb8daff976de85da52a7663ab67b9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"40f8b2ee26601cbf8e3099f594d7881c638d2a88fa00e7d0aab1553ec0259c5b\"" Sep 14 12:16:11.835021 kubelet[2341]: E0914 12:16:11.834987 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:11.835411 kubelet[2341]: E0914 12:16:11.835377 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:11.837423 containerd[1546]: time="2025-09-14T12:16:11.837366952Z" level=info msg="CreateContainer within sandbox \"40f8b2ee26601cbf8e3099f594d7881c638d2a88fa00e7d0aab1553ec0259c5b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 14 12:16:11.840614 containerd[1546]: time="2025-09-14T12:16:11.840555303Z" level=info msg="CreateContainer within sandbox \"f37682f811e6b7feb9a7279911e35429e9141ea81472789266aba05f36ab77b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 14 12:16:11.847357 containerd[1546]: time="2025-09-14T12:16:11.847308037Z" level=info msg="Container 2826fbdb7932715b242129a067cb1b99cd0b6fa9c7bc85746af693a6c591a459: CDI devices from CRI Config.CDIDevices: []" Sep 14 12:16:11.850285 containerd[1546]: time="2025-09-14T12:16:11.850231979Z" level=info msg="Container e0a0fa6e066e5722cea954c2a1124c18f825d02cfba7c5e8d0c8e1a7eb795c03: CDI devices from CRI Config.CDIDevices: []" Sep 14 12:16:11.859445 containerd[1546]: time="2025-09-14T12:16:11.859386356Z" level=info msg="Container 001c28799ccefc15b1457495e942afd425e92398d4e17b153fac60c5fd79d28a: CDI devices from CRI Config.CDIDevices: []" Sep 14 12:16:11.863509 containerd[1546]: time="2025-09-14T12:16:11.863446325Z" level=info 
msg="CreateContainer within sandbox \"d746fb7e579a086aa9678f784212c9e334f221c58801aeebabc44f71bec30da4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2826fbdb7932715b242129a067cb1b99cd0b6fa9c7bc85746af693a6c591a459\"" Sep 14 12:16:11.864991 containerd[1546]: time="2025-09-14T12:16:11.864948741Z" level=info msg="StartContainer for \"2826fbdb7932715b242129a067cb1b99cd0b6fa9c7bc85746af693a6c591a459\"" Sep 14 12:16:11.868090 containerd[1546]: time="2025-09-14T12:16:11.868038977Z" level=info msg="CreateContainer within sandbox \"40f8b2ee26601cbf8e3099f594d7881c638d2a88fa00e7d0aab1553ec0259c5b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e0a0fa6e066e5722cea954c2a1124c18f825d02cfba7c5e8d0c8e1a7eb795c03\"" Sep 14 12:16:11.868702 containerd[1546]: time="2025-09-14T12:16:11.868584275Z" level=info msg="StartContainer for \"e0a0fa6e066e5722cea954c2a1124c18f825d02cfba7c5e8d0c8e1a7eb795c03\"" Sep 14 12:16:11.868702 containerd[1546]: time="2025-09-14T12:16:11.868690058Z" level=info msg="connecting to shim 2826fbdb7932715b242129a067cb1b99cd0b6fa9c7bc85746af693a6c591a459" address="unix:///run/containerd/s/6894a66b5e6c309505511df14f180670454f8eab18359ed90c92b1a8dfee2a68" protocol=ttrpc version=3 Sep 14 12:16:11.870222 containerd[1546]: time="2025-09-14T12:16:11.870145016Z" level=info msg="connecting to shim e0a0fa6e066e5722cea954c2a1124c18f825d02cfba7c5e8d0c8e1a7eb795c03" address="unix:///run/containerd/s/a942389274ca73d9ae8da790222e6c7b67dd01132b62565e527f7f61afc7a1aa" protocol=ttrpc version=3 Sep 14 12:16:11.873605 containerd[1546]: time="2025-09-14T12:16:11.873537727Z" level=info msg="CreateContainer within sandbox \"f37682f811e6b7feb9a7279911e35429e9141ea81472789266aba05f36ab77b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"001c28799ccefc15b1457495e942afd425e92398d4e17b153fac60c5fd79d28a\"" Sep 14 12:16:11.874399 containerd[1546]: time="2025-09-14T12:16:11.874322210Z" 
level=info msg="StartContainer for \"001c28799ccefc15b1457495e942afd425e92398d4e17b153fac60c5fd79d28a\"" Sep 14 12:16:11.876100 containerd[1546]: time="2025-09-14T12:16:11.876031289Z" level=info msg="connecting to shim 001c28799ccefc15b1457495e942afd425e92398d4e17b153fac60c5fd79d28a" address="unix:///run/containerd/s/a180684318bf0557e9606393373c46c2535ca5170f3f388866f0a894dffe37b5" protocol=ttrpc version=3 Sep 14 12:16:11.914155 systemd[1]: Started cri-containerd-2826fbdb7932715b242129a067cb1b99cd0b6fa9c7bc85746af693a6c591a459.scope - libcontainer container 2826fbdb7932715b242129a067cb1b99cd0b6fa9c7bc85746af693a6c591a459. Sep 14 12:16:11.932902 systemd[1]: Started cri-containerd-e0a0fa6e066e5722cea954c2a1124c18f825d02cfba7c5e8d0c8e1a7eb795c03.scope - libcontainer container e0a0fa6e066e5722cea954c2a1124c18f825d02cfba7c5e8d0c8e1a7eb795c03. Sep 14 12:16:11.952884 systemd[1]: Started cri-containerd-001c28799ccefc15b1457495e942afd425e92398d4e17b153fac60c5fd79d28a.scope - libcontainer container 001c28799ccefc15b1457495e942afd425e92398d4e17b153fac60c5fd79d28a. 
Sep 14 12:16:12.060250 containerd[1546]: time="2025-09-14T12:16:12.060074451Z" level=info msg="StartContainer for \"2826fbdb7932715b242129a067cb1b99cd0b6fa9c7bc85746af693a6c591a459\" returns successfully" Sep 14 12:16:12.076035 containerd[1546]: time="2025-09-14T12:16:12.075839131Z" level=info msg="StartContainer for \"e0a0fa6e066e5722cea954c2a1124c18f825d02cfba7c5e8d0c8e1a7eb795c03\" returns successfully" Sep 14 12:16:12.082267 containerd[1546]: time="2025-09-14T12:16:12.082220197Z" level=info msg="StartContainer for \"001c28799ccefc15b1457495e942afd425e92398d4e17b153fac60c5fd79d28a\" returns successfully" Sep 14 12:16:12.217418 kubelet[2341]: W0914 12:16:12.217252 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.142.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-9-e5fa973bfc&limit=500&resourceVersion=0": dial tcp 143.198.142.64:6443: connect: connection refused Sep 14 12:16:12.217418 kubelet[2341]: E0914 12:16:12.217348 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.142.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-9-e5fa973bfc&limit=500&resourceVersion=0\": dial tcp 143.198.142.64:6443: connect: connection refused" logger="UnhandledError" Sep 14 12:16:12.539339 kubelet[2341]: I0914 12:16:12.539194 2341 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:13.008319 kubelet[2341]: E0914 12:16:13.008276 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:13.008517 kubelet[2341]: E0914 12:16:13.008498 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" Sep 14 12:16:13.016052 kubelet[2341]: E0914 12:16:13.016006 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:13.016211 kubelet[2341]: E0914 12:16:13.016192 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:13.022037 kubelet[2341]: E0914 12:16:13.021998 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:13.023950 kubelet[2341]: E0914 12:16:13.022233 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:14.037206 kubelet[2341]: E0914 12:16:14.037165 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.037659 kubelet[2341]: E0914 12:16:14.037338 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:14.038673 kubelet[2341]: E0914 12:16:14.038631 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.038851 kubelet[2341]: E0914 12:16:14.038831 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Sep 14 12:16:14.039494 kubelet[2341]: E0914 12:16:14.039277 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.039494 kubelet[2341]: E0914 12:16:14.039409 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:14.145205 kubelet[2341]: E0914 12:16:14.145154 2341 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.0.0-9-e5fa973bfc\" not found" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.258559 kubelet[2341]: I0914 12:16:14.258458 2341 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.258559 kubelet[2341]: E0914 12:16:14.258518 2341 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.0.0-9-e5fa973bfc\": node \"ci-4459.0.0-9-e5fa973bfc\" not found" Sep 14 12:16:14.279825 kubelet[2341]: E0914 12:16:14.279785 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" Sep 14 12:16:14.380259 kubelet[2341]: E0914 12:16:14.380196 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" Sep 14 12:16:14.480489 kubelet[2341]: E0914 12:16:14.480435 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" Sep 14 12:16:14.581587 kubelet[2341]: E0914 12:16:14.581528 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" Sep 14 12:16:14.682901 kubelet[2341]: E0914 12:16:14.682580 2341 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" Sep 14 12:16:14.782998 kubelet[2341]: E0914 12:16:14.782939 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" Sep 14 12:16:14.835670 kubelet[2341]: I0914 12:16:14.834892 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.843353 kubelet[2341]: E0914 12:16:14.843289 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-9-e5fa973bfc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.843901 kubelet[2341]: I0914 12:16:14.843577 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.846243 kubelet[2341]: E0914 12:16:14.846199 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-9-e5fa973bfc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.846430 kubelet[2341]: I0914 12:16:14.846329 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.848918 kubelet[2341]: E0914 12:16:14.848887 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:14.906149 kubelet[2341]: I0914 12:16:14.906095 2341 apiserver.go:52] "Watching apiserver" Sep 14 12:16:14.932524 kubelet[2341]: I0914 12:16:14.932373 2341 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 14 12:16:15.038730 
kubelet[2341]: I0914 12:16:15.038496 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:15.040855 kubelet[2341]: I0914 12:16:15.040830 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:15.041114 kubelet[2341]: I0914 12:16:15.041099 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-9-e5fa973bfc" Sep 14 12:16:15.056813 kubelet[2341]: W0914 12:16:15.056768 2341 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 14 12:16:15.057155 kubelet[2341]: E0914 12:16:15.057126 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:15.057271 kubelet[2341]: W0914 12:16:15.057246 2341 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 14 12:16:15.057351 kubelet[2341]: W0914 12:16:15.057331 2341 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 14 12:16:15.057804 kubelet[2341]: E0914 12:16:15.057773 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:15.058066 kubelet[2341]: E0914 12:16:15.058036 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:16.041692 kubelet[2341]: E0914 
12:16:16.041327 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:16.041692 kubelet[2341]: E0914 12:16:16.041541 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:16.041692 kubelet[2341]: E0914 12:16:16.041642 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:16.310413 systemd[1]: Reload requested from client PID 2612 ('systemctl') (unit session-7.scope)... Sep 14 12:16:16.310439 systemd[1]: Reloading... Sep 14 12:16:16.425674 zram_generator::config[2652]: No configuration found. Sep 14 12:16:16.789936 systemd[1]: Reloading finished in 478 ms. Sep 14 12:16:16.825340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 14 12:16:16.842753 systemd[1]: kubelet.service: Deactivated successfully. Sep 14 12:16:16.843172 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 14 12:16:16.843262 systemd[1]: kubelet.service: Consumed 1.007s CPU time, 126.2M memory peak. Sep 14 12:16:16.847631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 14 12:16:17.050927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 14 12:16:17.070829 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 14 12:16:17.161520 kubelet[2705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 14 12:16:17.161520 kubelet[2705]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 14 12:16:17.161520 kubelet[2705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 14 12:16:17.162641 kubelet[2705]: I0914 12:16:17.162533 2705 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 14 12:16:17.174658 kubelet[2705]: I0914 12:16:17.173873 2705 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 14 12:16:17.174658 kubelet[2705]: I0914 12:16:17.173915 2705 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 14 12:16:17.174658 kubelet[2705]: I0914 12:16:17.174218 2705 server.go:954] "Client rotation is on, will bootstrap in background" Sep 14 12:16:17.177696 kubelet[2705]: I0914 12:16:17.177649 2705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 14 12:16:17.182452 kubelet[2705]: I0914 12:16:17.182420 2705 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 14 12:16:17.187826 kubelet[2705]: I0914 12:16:17.187780 2705 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 14 12:16:17.196774 kubelet[2705]: I0914 12:16:17.196731 2705 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 14 12:16:17.197586 kubelet[2705]: I0914 12:16:17.197154 2705 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 14 12:16:17.197586 kubelet[2705]: I0914 12:16:17.197203 2705 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-9-e5fa973bfc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 14 12:16:17.197586 kubelet[2705]: I0914 12:16:17.197572 2705 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 14 12:16:17.197586 kubelet[2705]: I0914 12:16:17.197585 2705 container_manager_linux.go:304] "Creating device plugin manager" Sep 14 12:16:17.197899 kubelet[2705]: I0914 12:16:17.197694 2705 state_mem.go:36] "Initialized new in-memory state store" Sep 14 12:16:17.198295 kubelet[2705]: I0914 12:16:17.197961 2705 kubelet.go:446] "Attempting to sync node with API server" Sep 14 12:16:17.198295 kubelet[2705]: I0914 12:16:17.198008 2705 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 14 12:16:17.198295 kubelet[2705]: I0914 12:16:17.198066 2705 kubelet.go:352] "Adding apiserver pod source" Sep 14 12:16:17.198295 kubelet[2705]: I0914 12:16:17.198080 2705 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 14 12:16:17.206626 kubelet[2705]: I0914 12:16:17.205765 2705 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 14 12:16:17.206626 kubelet[2705]: I0914 12:16:17.206458 2705 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 14 12:16:17.212623 kubelet[2705]: I0914 12:16:17.212540 2705 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 14 12:16:17.212773 kubelet[2705]: I0914 12:16:17.212733 2705 server.go:1287] "Started kubelet" Sep 14 12:16:17.223951 kubelet[2705]: I0914 12:16:17.223273 2705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 14 12:16:17.225626 kubelet[2705]: I0914 12:16:17.225107 2705 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 14 12:16:17.225626 kubelet[2705]: I0914 12:16:17.225218 2705 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 14 12:16:17.226677 kubelet[2705]: I0914 12:16:17.226648 2705 server.go:479] "Adding debug handlers to kubelet server" Sep 14 12:16:17.228569 kubelet[2705]: I0914 
12:16:17.228497 2705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 14 12:16:17.229008 kubelet[2705]: I0914 12:16:17.228982 2705 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 14 12:16:17.233762 kubelet[2705]: I0914 12:16:17.233717 2705 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 14 12:16:17.234158 kubelet[2705]: E0914 12:16:17.234072 2705 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-9-e5fa973bfc\" not found" Sep 14 12:16:17.234378 kubelet[2705]: I0914 12:16:17.234362 2705 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 14 12:16:17.234560 kubelet[2705]: I0914 12:16:17.234542 2705 reconciler.go:26] "Reconciler: start to sync state" Sep 14 12:16:17.245039 kubelet[2705]: I0914 12:16:17.243129 2705 factory.go:221] Registration of the systemd container factory successfully Sep 14 12:16:17.245712 kubelet[2705]: I0914 12:16:17.245385 2705 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 14 12:16:17.248144 kubelet[2705]: I0914 12:16:17.247968 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 14 12:16:17.249975 kubelet[2705]: I0914 12:16:17.249473 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 14 12:16:17.249975 kubelet[2705]: I0914 12:16:17.249514 2705 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 14 12:16:17.249975 kubelet[2705]: I0914 12:16:17.249540 2705 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 14 12:16:17.249975 kubelet[2705]: I0914 12:16:17.249547 2705 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 14 12:16:17.249975 kubelet[2705]: E0914 12:16:17.249610 2705 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 14 12:16:17.260283 kubelet[2705]: I0914 12:16:17.260077 2705 factory.go:221] Registration of the containerd container factory successfully
Sep 14 12:16:17.327895 sudo[2736]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 14 12:16:17.328631 sudo[2736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 14 12:16:17.350310 kubelet[2705]: E0914 12:16:17.349825 2705 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 14 12:16:17.361554 kubelet[2705]: I0914 12:16:17.361476 2705 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 14 12:16:17.362659 kubelet[2705]: I0914 12:16:17.361738 2705 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 14 12:16:17.362659 kubelet[2705]: I0914 12:16:17.361778 2705 state_mem.go:36] "Initialized new in-memory state store"
Sep 14 12:16:17.362659 kubelet[2705]: I0914 12:16:17.362090 2705 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 14 12:16:17.362659 kubelet[2705]: I0914 12:16:17.362107 2705 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 14 12:16:17.362659 kubelet[2705]: I0914 12:16:17.362134 2705 policy_none.go:49] "None policy: Start"
Sep 14 12:16:17.362659 kubelet[2705]: I0914 12:16:17.362156 2705 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 14 12:16:17.362659 kubelet[2705]: I0914 12:16:17.362170 2705 state_mem.go:35] "Initializing new in-memory state store"
Sep 14 12:16:17.362659 kubelet[2705]: I0914 12:16:17.362355 2705 state_mem.go:75] "Updated machine memory state"
Sep 14 12:16:17.376003 kubelet[2705]: I0914 12:16:17.375973 2705 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 14 12:16:17.376885 kubelet[2705]: I0914 12:16:17.376862 2705 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 14 12:16:17.377055 kubelet[2705]: I0914 12:16:17.377010 2705 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 14 12:16:17.377794 kubelet[2705]: I0914 12:16:17.377657 2705 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 14 12:16:17.381900 kubelet[2705]: E0914 12:16:17.381872 2705 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 14 12:16:17.494174 kubelet[2705]: I0914 12:16:17.494137 2705 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.515649 kubelet[2705]: I0914 12:16:17.515464 2705 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.515649 kubelet[2705]: I0914 12:16:17.515568 2705 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.552605 kubelet[2705]: I0914 12:16:17.551236 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.554324 kubelet[2705]: I0914 12:16:17.554209 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.557099 kubelet[2705]: I0914 12:16:17.554746 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.565114 kubelet[2705]: W0914 12:16:17.564994 2705 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 14 12:16:17.565114 kubelet[2705]: E0914 12:16:17.565060 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-9-e5fa973bfc\" already exists" pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.565492 kubelet[2705]: W0914 12:16:17.565472 2705 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 14 12:16:17.565652 kubelet[2705]: E0914 12:16:17.565636 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" already exists" pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.565757 kubelet[2705]: W0914 12:16:17.565498 2705 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 14 12:16:17.565884 kubelet[2705]: E0914 12:16:17.565789 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-9-e5fa973bfc\" already exists" pod="kube-system/kube-scheduler-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.637065 kubelet[2705]: I0914 12:16:17.636997 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cae6de4071450f3dc70eefcbdf9d614c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-9-e5fa973bfc\" (UID: \"cae6de4071450f3dc70eefcbdf9d614c\") " pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.637237 kubelet[2705]: I0914 12:16:17.637102 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dcb8daff976de85da52a7663ab67b9f-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" (UID: \"1dcb8daff976de85da52a7663ab67b9f\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.637237 kubelet[2705]: I0914 12:16:17.637175 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1dcb8daff976de85da52a7663ab67b9f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" (UID: \"1dcb8daff976de85da52a7663ab67b9f\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.637237 kubelet[2705]: I0914 12:16:17.637207 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dcb8daff976de85da52a7663ab67b9f-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" (UID: \"1dcb8daff976de85da52a7663ab67b9f\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.637237 kubelet[2705]: I0914 12:16:17.637235 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1dcb8daff976de85da52a7663ab67b9f-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" (UID: \"1dcb8daff976de85da52a7663ab67b9f\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.637370 kubelet[2705]: I0914 12:16:17.637264 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dcb8daff976de85da52a7663ab67b9f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-9-e5fa973bfc\" (UID: \"1dcb8daff976de85da52a7663ab67b9f\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.637370 kubelet[2705]: I0914 12:16:17.637328 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08eaa25b29e028acc32b51b02c147b0d-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-9-e5fa973bfc\" (UID: \"08eaa25b29e028acc32b51b02c147b0d\") " pod="kube-system/kube-scheduler-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.637370 kubelet[2705]: I0914 12:16:17.637351 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cae6de4071450f3dc70eefcbdf9d614c-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-9-e5fa973bfc\" (UID: \"cae6de4071450f3dc70eefcbdf9d614c\") " pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.637443 kubelet[2705]: I0914 12:16:17.637375 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cae6de4071450f3dc70eefcbdf9d614c-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-9-e5fa973bfc\" (UID: \"cae6de4071450f3dc70eefcbdf9d614c\") " pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:17.867582 kubelet[2705]: E0914 12:16:17.866649 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:17.868778 kubelet[2705]: E0914 12:16:17.868740 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:17.869417 kubelet[2705]: E0914 12:16:17.869362 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:17.900712 sudo[2736]: pam_unix(sudo:session): session closed for user root
Sep 14 12:16:18.206722 kubelet[2705]: I0914 12:16:18.206442 2705 apiserver.go:52] "Watching apiserver"
Sep 14 12:16:18.235351 kubelet[2705]: I0914 12:16:18.235297 2705 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 14 12:16:18.293622 kubelet[2705]: I0914 12:16:18.292019 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:18.293622 kubelet[2705]: E0914 12:16:18.292119 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:18.293622 kubelet[2705]: I0914 12:16:18.292580 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:18.313448 kubelet[2705]: W0914 12:16:18.313381 2705 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 14 12:16:18.315771 kubelet[2705]: E0914 12:16:18.315741 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-9-e5fa973bfc\" already exists" pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:18.316186 kubelet[2705]: E0914 12:16:18.316141 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:18.316642 kubelet[2705]: W0914 12:16:18.316607 2705 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 14 12:16:18.316739 kubelet[2705]: E0914 12:16:18.316701 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-9-e5fa973bfc\" already exists" pod="kube-system/kube-scheduler-ci-4459.0.0-9-e5fa973bfc"
Sep 14 12:16:18.316841 kubelet[2705]: E0914 12:16:18.316826 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:18.365818 kubelet[2705]: I0914 12:16:18.365743 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.0.0-9-e5fa973bfc" podStartSLOduration=3.3657206840000002 podStartE2EDuration="3.365720684s" podCreationTimestamp="2025-09-14 12:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-14 12:16:18.338776251 +0000 UTC m=+1.255338688" watchObservedRunningTime="2025-09-14 12:16:18.365720684 +0000 UTC m=+1.282283112"
Sep 14 12:16:18.379490 kubelet[2705]: I0914 12:16:18.379422 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.0.0-9-e5fa973bfc" podStartSLOduration=3.379399452 podStartE2EDuration="3.379399452s" podCreationTimestamp="2025-09-14 12:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-14 12:16:18.366425381 +0000 UTC m=+1.282987818" watchObservedRunningTime="2025-09-14 12:16:18.379399452 +0000 UTC m=+1.295961887"
Sep 14 12:16:18.391732 kubelet[2705]: I0914 12:16:18.391673 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.0.0-9-e5fa973bfc" podStartSLOduration=3.391653985 podStartE2EDuration="3.391653985s" podCreationTimestamp="2025-09-14 12:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-14 12:16:18.380982717 +0000 UTC m=+1.297545148" watchObservedRunningTime="2025-09-14 12:16:18.391653985 +0000 UTC m=+1.308216412"
Sep 14 12:16:19.300488 kubelet[2705]: E0914 12:16:19.300206 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:19.302166 kubelet[2705]: E0914 12:16:19.300887 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:19.874535 sudo[1783]: pam_unix(sudo:session): session closed for user root
Sep 14 12:16:19.877479 sshd[1782]: Connection closed by 139.178.89.65 port 55676
Sep 14 12:16:19.878199 sshd-session[1779]: pam_unix(sshd:session): session closed for user core
Sep 14 12:16:19.882628 systemd[1]: sshd@6-143.198.142.64:22-139.178.89.65:55676.service: Deactivated successfully.
Sep 14 12:16:19.886327 systemd[1]: session-7.scope: Deactivated successfully.
Sep 14 12:16:19.886520 systemd[1]: session-7.scope: Consumed 5.405s CPU time, 224.2M memory peak.
Sep 14 12:16:19.889577 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit.
Sep 14 12:16:19.891156 systemd-logind[1520]: Removed session 7.
Sep 14 12:16:20.299207 kubelet[2705]: E0914 12:16:20.299033 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:21.568720 kubelet[2705]: I0914 12:16:21.568678 2705 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 14 12:16:21.570144 containerd[1546]: time="2025-09-14T12:16:21.570025130Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 14 12:16:21.571229 kubelet[2705]: I0914 12:16:21.571134 2705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 14 12:16:22.495473 systemd[1]: Created slice kubepods-besteffort-pod443a9211_c5a0_4486_ade5_9b50d75653fa.slice - libcontainer container kubepods-besteffort-pod443a9211_c5a0_4486_ade5_9b50d75653fa.slice.
Sep 14 12:16:22.511626 kubelet[2705]: W0914 12:16:22.510706 2705 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4459.0.0-9-e5fa973bfc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object
Sep 14 12:16:22.511626 kubelet[2705]: E0914 12:16:22.510763 2705 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4459.0.0-9-e5fa973bfc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object" logger="UnhandledError"
Sep 14 12:16:22.511626 kubelet[2705]: I0914 12:16:22.510809 2705 status_manager.go:890] "Failed to get status for pod" podUID="443a9211-c5a0-4486-ade5-9b50d75653fa" pod="kube-system/kube-proxy-7xvsg" err="pods \"kube-proxy-7xvsg\" is forbidden: User \"system:node:ci-4459.0.0-9-e5fa973bfc\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object"
Sep 14 12:16:22.511626 kubelet[2705]: W0914 12:16:22.510971 2705 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4459.0.0-9-e5fa973bfc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object
Sep 14 12:16:22.511902 kubelet[2705]: E0914 12:16:22.510987 2705 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4459.0.0-9-e5fa973bfc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object" logger="UnhandledError"
Sep 14 12:16:22.511902 kubelet[2705]: W0914 12:16:22.511024 2705 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4459.0.0-9-e5fa973bfc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object
Sep 14 12:16:22.511902 kubelet[2705]: E0914 12:16:22.511034 2705 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4459.0.0-9-e5fa973bfc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object" logger="UnhandledError"
Sep 14 12:16:22.511902 kubelet[2705]: W0914 12:16:22.511070 2705 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4459.0.0-9-e5fa973bfc" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object
Sep 14 12:16:22.512047 kubelet[2705]: E0914 12:16:22.511078 2705 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4459.0.0-9-e5fa973bfc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object" logger="UnhandledError"
Sep 14 12:16:22.512047 kubelet[2705]: W0914 12:16:22.511107 2705 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4459.0.0-9-e5fa973bfc" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object
Sep 14 12:16:22.512047 kubelet[2705]: E0914 12:16:22.511115 2705 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4459.0.0-9-e5fa973bfc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.0.0-9-e5fa973bfc' and this object" logger="UnhandledError"
Sep 14 12:16:22.517087 systemd[1]: Created slice kubepods-burstable-pod72eb0686_8c02_4409_82ed_73a28b7875c4.slice - libcontainer container kubepods-burstable-pod72eb0686_8c02_4409_82ed_73a28b7875c4.slice.
Sep 14 12:16:22.570188 kubelet[2705]: I0914 12:16:22.569033 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-hostproc\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570188 kubelet[2705]: I0914 12:16:22.569070 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-etc-cni-netd\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570188 kubelet[2705]: I0914 12:16:22.569089 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72eb0686-8c02-4409-82ed-73a28b7875c4-clustermesh-secrets\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570188 kubelet[2705]: I0914 12:16:22.569104 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-host-proc-sys-net\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570188 kubelet[2705]: I0914 12:16:22.569119 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-host-proc-sys-kernel\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570749 kubelet[2705]: I0914 12:16:22.569135 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qpkz\" (UniqueName: \"kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-kube-api-access-5qpkz\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570749 kubelet[2705]: I0914 12:16:22.569151 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/443a9211-c5a0-4486-ade5-9b50d75653fa-kube-proxy\") pod \"kube-proxy-7xvsg\" (UID: \"443a9211-c5a0-4486-ade5-9b50d75653fa\") " pod="kube-system/kube-proxy-7xvsg"
Sep 14 12:16:22.570749 kubelet[2705]: I0914 12:16:22.569169 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-cgroup\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570749 kubelet[2705]: I0914 12:16:22.569185 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/443a9211-c5a0-4486-ade5-9b50d75653fa-xtables-lock\") pod \"kube-proxy-7xvsg\" (UID: \"443a9211-c5a0-4486-ade5-9b50d75653fa\") " pod="kube-system/kube-proxy-7xvsg"
Sep 14 12:16:22.570749 kubelet[2705]: I0914 12:16:22.569198 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cni-path\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570749 kubelet[2705]: I0914 12:16:22.569215 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-config-path\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570911 kubelet[2705]: I0914 12:16:22.569229 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-xtables-lock\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570911 kubelet[2705]: I0914 12:16:22.569243 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pplh\" (UniqueName: \"kubernetes.io/projected/443a9211-c5a0-4486-ade5-9b50d75653fa-kube-api-access-9pplh\") pod \"kube-proxy-7xvsg\" (UID: \"443a9211-c5a0-4486-ade5-9b50d75653fa\") " pod="kube-system/kube-proxy-7xvsg"
Sep 14 12:16:22.570911 kubelet[2705]: I0914 12:16:22.569263 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/443a9211-c5a0-4486-ade5-9b50d75653fa-lib-modules\") pod \"kube-proxy-7xvsg\" (UID: \"443a9211-c5a0-4486-ade5-9b50d75653fa\") " pod="kube-system/kube-proxy-7xvsg"
Sep 14 12:16:22.570911 kubelet[2705]: I0914 12:16:22.569286 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-run\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570911 kubelet[2705]: I0914 12:16:22.569306 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-lib-modules\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.570911 kubelet[2705]: I0914 12:16:22.569321 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-hubble-tls\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.571056 kubelet[2705]: I0914 12:16:22.569341 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-bpf-maps\") pod \"cilium-59kpz\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") " pod="kube-system/cilium-59kpz"
Sep 14 12:16:22.631359 systemd[1]: Created slice kubepods-besteffort-pod5b12b163_b15d_4748_910b_1a345da53ed8.slice - libcontainer container kubepods-besteffort-pod5b12b163_b15d_4748_910b_1a345da53ed8.slice.
Sep 14 12:16:22.670657 kubelet[2705]: I0914 12:16:22.669638 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq7tj\" (UniqueName: \"kubernetes.io/projected/5b12b163-b15d-4748-910b-1a345da53ed8-kube-api-access-nq7tj\") pod \"cilium-operator-6c4d7847fc-lklln\" (UID: \"5b12b163-b15d-4748-910b-1a345da53ed8\") " pod="kube-system/cilium-operator-6c4d7847fc-lklln"
Sep 14 12:16:22.670657 kubelet[2705]: I0914 12:16:22.669687 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b12b163-b15d-4748-910b-1a345da53ed8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lklln\" (UID: \"5b12b163-b15d-4748-910b-1a345da53ed8\") " pod="kube-system/cilium-operator-6c4d7847fc-lklln"
Sep 14 12:16:23.670836 kubelet[2705]: E0914 12:16:23.670420 2705 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Sep 14 12:16:23.670836 kubelet[2705]: E0914 12:16:23.670464 2705 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Sep 14 12:16:23.670836 kubelet[2705]: E0914 12:16:23.670433 2705 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Sep 14 12:16:23.670836 kubelet[2705]: E0914 12:16:23.670529 2705 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-59kpz: failed to sync secret cache: timed out waiting for the condition
Sep 14 12:16:23.670836 kubelet[2705]: E0914 12:16:23.670619 2705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-config-path podName:72eb0686-8c02-4409-82ed-73a28b7875c4 nodeName:}" failed. No retries permitted until 2025-09-14 12:16:24.170536561 +0000 UTC m=+7.087098996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-config-path") pod "cilium-59kpz" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4") : failed to sync configmap cache: timed out waiting for the condition
Sep 14 12:16:23.670836 kubelet[2705]: E0914 12:16:23.670658 2705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-hubble-tls podName:72eb0686-8c02-4409-82ed-73a28b7875c4 nodeName:}" failed. No retries permitted until 2025-09-14 12:16:24.170643321 +0000 UTC m=+7.087205759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-hubble-tls") pod "cilium-59kpz" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4") : failed to sync secret cache: timed out waiting for the condition
Sep 14 12:16:23.671738 kubelet[2705]: E0914 12:16:23.670685 2705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72eb0686-8c02-4409-82ed-73a28b7875c4-clustermesh-secrets podName:72eb0686-8c02-4409-82ed-73a28b7875c4 nodeName:}" failed. No retries permitted until 2025-09-14 12:16:24.170676603 +0000 UTC m=+7.087239034 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/72eb0686-8c02-4409-82ed-73a28b7875c4-clustermesh-secrets") pod "cilium-59kpz" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4") : failed to sync secret cache: timed out waiting for the condition
Sep 14 12:16:23.683891 kubelet[2705]: E0914 12:16:23.683748 2705 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 14 12:16:23.683891 kubelet[2705]: E0914 12:16:23.683805 2705 projected.go:194] Error preparing data for projected volume kube-api-access-9pplh for pod kube-system/kube-proxy-7xvsg: failed to sync configmap cache: timed out waiting for the condition
Sep 14 12:16:23.684414 kubelet[2705]: E0914 12:16:23.684146 2705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/443a9211-c5a0-4486-ade5-9b50d75653fa-kube-api-access-9pplh podName:443a9211-c5a0-4486-ade5-9b50d75653fa nodeName:}" failed. No retries permitted until 2025-09-14 12:16:24.184112827 +0000 UTC m=+7.100675261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9pplh" (UniqueName: "kubernetes.io/projected/443a9211-c5a0-4486-ade5-9b50d75653fa-kube-api-access-9pplh") pod "kube-proxy-7xvsg" (UID: "443a9211-c5a0-4486-ade5-9b50d75653fa") : failed to sync configmap cache: timed out waiting for the condition
Sep 14 12:16:23.687627 kubelet[2705]: E0914 12:16:23.687531 2705 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 14 12:16:23.687627 kubelet[2705]: E0914 12:16:23.687568 2705 projected.go:194] Error preparing data for projected volume kube-api-access-5qpkz for pod kube-system/cilium-59kpz: failed to sync configmap cache: timed out waiting for the condition
Sep 14 12:16:23.687770 kubelet[2705]: E0914 12:16:23.687646 2705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-kube-api-access-5qpkz podName:72eb0686-8c02-4409-82ed-73a28b7875c4 nodeName:}" failed. No retries permitted until 2025-09-14 12:16:24.187628503 +0000 UTC m=+7.104190918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5qpkz" (UniqueName: "kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-kube-api-access-5qpkz") pod "cilium-59kpz" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4") : failed to sync configmap cache: timed out waiting for the condition
Sep 14 12:16:23.771474 kubelet[2705]: E0914 12:16:23.771047 2705 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Sep 14 12:16:23.771474 kubelet[2705]: E0914 12:16:23.771155 2705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b12b163-b15d-4748-910b-1a345da53ed8-cilium-config-path podName:5b12b163-b15d-4748-910b-1a345da53ed8 nodeName:}" failed. No retries permitted until 2025-09-14 12:16:24.271134911 +0000 UTC m=+7.187697379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5b12b163-b15d-4748-910b-1a345da53ed8-cilium-config-path") pod "cilium-operator-6c4d7847fc-lklln" (UID: "5b12b163-b15d-4748-910b-1a345da53ed8") : failed to sync configmap cache: timed out waiting for the condition
Sep 14 12:16:23.802163 kubelet[2705]: E0914 12:16:23.802101 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:24.308680 kubelet[2705]: E0914 12:16:24.308105 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:24.308680 kubelet[2705]: E0914 12:16:24.308174 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:24.309220 containerd[1546]: time="2025-09-14T12:16:24.309178807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7xvsg,Uid:443a9211-c5a0-4486-ade5-9b50d75653fa,Namespace:kube-system,Attempt:0,}"
Sep 14 12:16:24.322619 kubelet[2705]: E0914 12:16:24.322572 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:24.325935 containerd[1546]: time="2025-09-14T12:16:24.325880255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-59kpz,Uid:72eb0686-8c02-4409-82ed-73a28b7875c4,Namespace:kube-system,Attempt:0,}"
Sep 14 12:16:24.352909 containerd[1546]: time="2025-09-14T12:16:24.352855562Z" level=info msg="connecting to shim 7ec8370a54c01e006234a521bea091e5907f8d64161033bc3d8c382df307e71b" address="unix:///run/containerd/s/745a910be0622cfa693ba99001d92cbba8958c244447efc4aea25788f372a12d" namespace=k8s.io protocol=ttrpc version=3
Sep 14 12:16:24.368207 containerd[1546]: time="2025-09-14T12:16:24.367757797Z" level=info msg="connecting to shim 567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667" address="unix:///run/containerd/s/8072a27432a95cc0bb991f86d6edbdd8084730685bb67d937550585dd51c66f0" namespace=k8s.io protocol=ttrpc version=3
Sep 14 12:16:24.387909 systemd[1]: Started cri-containerd-7ec8370a54c01e006234a521bea091e5907f8d64161033bc3d8c382df307e71b.scope - libcontainer container 7ec8370a54c01e006234a521bea091e5907f8d64161033bc3d8c382df307e71b.
Sep 14 12:16:24.417092 systemd[1]: Started cri-containerd-567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667.scope - libcontainer container 567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667.
Sep 14 12:16:24.437762 kubelet[2705]: E0914 12:16:24.437686 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:24.440853 containerd[1546]: time="2025-09-14T12:16:24.440815778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lklln,Uid:5b12b163-b15d-4748-910b-1a345da53ed8,Namespace:kube-system,Attempt:0,}"
Sep 14 12:16:24.463882 containerd[1546]: time="2025-09-14T12:16:24.463832002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7xvsg,Uid:443a9211-c5a0-4486-ade5-9b50d75653fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ec8370a54c01e006234a521bea091e5907f8d64161033bc3d8c382df307e71b\""
Sep 14 12:16:24.465042 kubelet[2705]: E0914 12:16:24.464912 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:24.472285 containerd[1546]: time="2025-09-14T12:16:24.472233304Z" level=info msg="CreateContainer within sandbox \"7ec8370a54c01e006234a521bea091e5907f8d64161033bc3d8c382df307e71b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 14 12:16:24.480587 containerd[1546]: time="2025-09-14T12:16:24.480068563Z" level=info msg="connecting to shim b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3" address="unix:///run/containerd/s/c37c5ab14f2f5261f21b6b290f229c899e04cd66617c298dcc4f45f5c2a6e2f8" namespace=k8s.io protocol=ttrpc version=3 Sep 14 12:16:24.486898 containerd[1546]: time="2025-09-14T12:16:24.486787679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-59kpz,Uid:72eb0686-8c02-4409-82ed-73a28b7875c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\"" Sep 14 12:16:24.489069 kubelet[2705]: E0914 12:16:24.487847 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:24.496236 containerd[1546]: time="2025-09-14T12:16:24.496195968Z" level=info msg="Container 17b5c98ead0730d2a4a4ffeb5da7a71fefdc23b3213fd9bda22f8662b88aa9ca: CDI devices from CRI Config.CDIDevices: []" Sep 14 12:16:24.505079 containerd[1546]: time="2025-09-14T12:16:24.505044233Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 14 12:16:24.507357 systemd-resolved[1399]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Sep 14 12:16:24.512933 containerd[1546]: time="2025-09-14T12:16:24.512880519Z" level=info msg="CreateContainer within sandbox \"7ec8370a54c01e006234a521bea091e5907f8d64161033bc3d8c382df307e71b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17b5c98ead0730d2a4a4ffeb5da7a71fefdc23b3213fd9bda22f8662b88aa9ca\""
Sep 14 12:16:24.516786 containerd[1546]: time="2025-09-14T12:16:24.515794304Z" level=info msg="StartContainer for \"17b5c98ead0730d2a4a4ffeb5da7a71fefdc23b3213fd9bda22f8662b88aa9ca\""
Sep 14 12:16:24.518292 containerd[1546]: time="2025-09-14T12:16:24.518188674Z" level=info msg="connecting to shim 17b5c98ead0730d2a4a4ffeb5da7a71fefdc23b3213fd9bda22f8662b88aa9ca" address="unix:///run/containerd/s/745a910be0622cfa693ba99001d92cbba8958c244447efc4aea25788f372a12d" protocol=ttrpc version=3
Sep 14 12:16:24.526796 systemd[1]: Started cri-containerd-b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3.scope - libcontainer container b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3.
Sep 14 12:16:24.547837 systemd[1]: Started cri-containerd-17b5c98ead0730d2a4a4ffeb5da7a71fefdc23b3213fd9bda22f8662b88aa9ca.scope - libcontainer container 17b5c98ead0730d2a4a4ffeb5da7a71fefdc23b3213fd9bda22f8662b88aa9ca.
Sep 14 12:16:24.610707 containerd[1546]: time="2025-09-14T12:16:24.610569275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lklln,Uid:5b12b163-b15d-4748-910b-1a345da53ed8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\""
Sep 14 12:16:24.612177 kubelet[2705]: E0914 12:16:24.612066 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:24.621562 containerd[1546]: time="2025-09-14T12:16:24.621104148Z" level=info msg="StartContainer for \"17b5c98ead0730d2a4a4ffeb5da7a71fefdc23b3213fd9bda22f8662b88aa9ca\" returns successfully"
Sep 14 12:16:25.316108 kubelet[2705]: E0914 12:16:25.315888 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:27.712623 kubelet[2705]: E0914 12:16:27.712551 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:27.733645 kubelet[2705]: I0914 12:16:27.733345 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7xvsg" podStartSLOduration=5.733317909 podStartE2EDuration="5.733317909s" podCreationTimestamp="2025-09-14 12:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-14 12:16:25.331411522 +0000 UTC m=+8.247973961" watchObservedRunningTime="2025-09-14 12:16:27.733317909 +0000 UTC m=+10.649880348"
Sep 14 12:16:28.322713 kubelet[2705]: E0914 12:16:28.322581 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:29.205629 kubelet[2705]: E0914 12:16:29.205585 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:29.325489 kubelet[2705]: E0914 12:16:29.324588 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:31.894770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869890568.mount: Deactivated successfully.
Sep 14 12:16:33.268483 update_engine[1521]: I20250914 12:16:33.267683 1521 update_attempter.cc:509] Updating boot flags...
Sep 14 12:16:34.604201 containerd[1546]: time="2025-09-14T12:16:34.604146458Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 14 12:16:34.606094 containerd[1546]: time="2025-09-14T12:16:34.606047360Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 14 12:16:34.607406 containerd[1546]: time="2025-09-14T12:16:34.607105785Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 14 12:16:34.609409 containerd[1546]: time="2025-09-14T12:16:34.609352110Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.104061951s"
Sep 14 12:16:34.609551 containerd[1546]: time="2025-09-14T12:16:34.609536508Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 14 12:16:34.611922 containerd[1546]: time="2025-09-14T12:16:34.611635443Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 14 12:16:34.615015 containerd[1546]: time="2025-09-14T12:16:34.614983137Z" level=info msg="CreateContainer within sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 14 12:16:34.679044 containerd[1546]: time="2025-09-14T12:16:34.678394875Z" level=info msg="Container e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:16:34.689368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699414853.mount: Deactivated successfully.
Sep 14 12:16:34.691814 containerd[1546]: time="2025-09-14T12:16:34.691633215Z" level=info msg="CreateContainer within sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\""
Sep 14 12:16:34.693288 containerd[1546]: time="2025-09-14T12:16:34.693237080Z" level=info msg="StartContainer for \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\""
Sep 14 12:16:34.694837 containerd[1546]: time="2025-09-14T12:16:34.694763196Z" level=info msg="connecting to shim e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049" address="unix:///run/containerd/s/8072a27432a95cc0bb991f86d6edbdd8084730685bb67d937550585dd51c66f0" protocol=ttrpc version=3
Sep 14 12:16:34.724956 systemd[1]: Started cri-containerd-e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049.scope - libcontainer container e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049.
Sep 14 12:16:34.769896 containerd[1546]: time="2025-09-14T12:16:34.769855060Z" level=info msg="StartContainer for \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\" returns successfully"
Sep 14 12:16:34.785373 systemd[1]: cri-containerd-e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049.scope: Deactivated successfully.
Sep 14 12:16:34.821433 containerd[1546]: time="2025-09-14T12:16:34.821363813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\" id:\"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\" pid:3132 exited_at:{seconds:1757852194 nanos:787273275}"
Sep 14 12:16:34.830748 containerd[1546]: time="2025-09-14T12:16:34.830676122Z" level=info msg="received exit event container_id:\"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\" id:\"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\" pid:3132 exited_at:{seconds:1757852194 nanos:787273275}"
Sep 14 12:16:34.865139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049-rootfs.mount: Deactivated successfully.
Sep 14 12:16:35.342428 kubelet[2705]: E0914 12:16:35.342309 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:35.346727 containerd[1546]: time="2025-09-14T12:16:35.346653778Z" level=info msg="CreateContainer within sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 14 12:16:35.361676 containerd[1546]: time="2025-09-14T12:16:35.361620806Z" level=info msg="Container d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:16:35.368481 containerd[1546]: time="2025-09-14T12:16:35.368280186Z" level=info msg="CreateContainer within sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\""
Sep 14 12:16:35.370753 containerd[1546]: time="2025-09-14T12:16:35.370688211Z" level=info msg="StartContainer for \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\""
Sep 14 12:16:35.377042 containerd[1546]: time="2025-09-14T12:16:35.376990284Z" level=info msg="connecting to shim d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7" address="unix:///run/containerd/s/8072a27432a95cc0bb991f86d6edbdd8084730685bb67d937550585dd51c66f0" protocol=ttrpc version=3
Sep 14 12:16:35.404939 systemd[1]: Started cri-containerd-d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7.scope - libcontainer container d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7.
Sep 14 12:16:35.448455 containerd[1546]: time="2025-09-14T12:16:35.448416439Z" level=info msg="StartContainer for \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\" returns successfully"
Sep 14 12:16:35.463572 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 14 12:16:35.463832 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 14 12:16:35.464005 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 14 12:16:35.467003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 14 12:16:35.472646 systemd[1]: cri-containerd-d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7.scope: Deactivated successfully.
Sep 14 12:16:35.473664 containerd[1546]: time="2025-09-14T12:16:35.473627460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\" id:\"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\" pid:3178 exited_at:{seconds:1757852195 nanos:473199304}"
Sep 14 12:16:35.473664 containerd[1546]: time="2025-09-14T12:16:35.473646930Z" level=info msg="received exit event container_id:\"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\" id:\"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\" pid:3178 exited_at:{seconds:1757852195 nanos:473199304}"
Sep 14 12:16:35.506958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 14 12:16:36.349457 kubelet[2705]: E0914 12:16:36.349176 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:36.353628 containerd[1546]: time="2025-09-14T12:16:36.352958004Z" level=info msg="CreateContainer within sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 14 12:16:36.371630 containerd[1546]: time="2025-09-14T12:16:36.371561690Z" level=info msg="Container 692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:16:36.410821 containerd[1546]: time="2025-09-14T12:16:36.410765072Z" level=info msg="CreateContainer within sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\""
Sep 14 12:16:36.411669 containerd[1546]: time="2025-09-14T12:16:36.411638964Z" level=info msg="StartContainer for \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\""
Sep 14 12:16:36.414584 containerd[1546]: time="2025-09-14T12:16:36.414543957Z" level=info msg="connecting to shim 692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17" address="unix:///run/containerd/s/8072a27432a95cc0bb991f86d6edbdd8084730685bb67d937550585dd51c66f0" protocol=ttrpc version=3
Sep 14 12:16:36.453937 systemd[1]: Started cri-containerd-692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17.scope - libcontainer container 692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17.
Sep 14 12:16:36.527825 systemd[1]: cri-containerd-692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17.scope: Deactivated successfully.
Sep 14 12:16:36.529441 containerd[1546]: time="2025-09-14T12:16:36.528362488Z" level=info msg="received exit event container_id:\"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\" id:\"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\" pid:3225 exited_at:{seconds:1757852196 nanos:525932943}"
Sep 14 12:16:36.540623 containerd[1546]: time="2025-09-14T12:16:36.540386556Z" level=info msg="StartContainer for \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\" returns successfully"
Sep 14 12:16:36.555774 containerd[1546]: time="2025-09-14T12:16:36.555719156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\" id:\"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\" pid:3225 exited_at:{seconds:1757852196 nanos:525932943}"
Sep 14 12:16:36.577530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17-rootfs.mount: Deactivated successfully.
Sep 14 12:16:37.019417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255317225.mount: Deactivated successfully.
Sep 14 12:16:37.361390 kubelet[2705]: E0914 12:16:37.361352 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:37.379117 containerd[1546]: time="2025-09-14T12:16:37.379052982Z" level=info msg="CreateContainer within sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 14 12:16:37.454393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500711472.mount: Deactivated successfully.
Sep 14 12:16:37.479026 containerd[1546]: time="2025-09-14T12:16:37.477983494Z" level=info msg="Container 678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:16:37.489148 containerd[1546]: time="2025-09-14T12:16:37.489067462Z" level=info msg="CreateContainer within sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\""
Sep 14 12:16:37.490242 containerd[1546]: time="2025-09-14T12:16:37.490196798Z" level=info msg="StartContainer for \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\""
Sep 14 12:16:37.491676 containerd[1546]: time="2025-09-14T12:16:37.491637838Z" level=info msg="connecting to shim 678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed" address="unix:///run/containerd/s/8072a27432a95cc0bb991f86d6edbdd8084730685bb67d937550585dd51c66f0" protocol=ttrpc version=3
Sep 14 12:16:37.519877 systemd[1]: Started cri-containerd-678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed.scope - libcontainer container 678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed.
Sep 14 12:16:37.566096 systemd[1]: cri-containerd-678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed.scope: Deactivated successfully.
Sep 14 12:16:37.568190 containerd[1546]: time="2025-09-14T12:16:37.568138185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\" id:\"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\" pid:3275 exited_at:{seconds:1757852197 nanos:567241053}"
Sep 14 12:16:37.568519 containerd[1546]: time="2025-09-14T12:16:37.568457088Z" level=info msg="received exit event container_id:\"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\" id:\"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\" pid:3275 exited_at:{seconds:1757852197 nanos:567241053}"
Sep 14 12:16:37.583493 containerd[1546]: time="2025-09-14T12:16:37.583441799Z" level=info msg="StartContainer for \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\" returns successfully"
Sep 14 12:16:38.008760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469946353.mount: Deactivated successfully.
Sep 14 12:16:38.369113 kubelet[2705]: E0914 12:16:38.369076 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:38.375736 containerd[1546]: time="2025-09-14T12:16:38.375207480Z" level=info msg="CreateContainer within sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 14 12:16:38.428618 containerd[1546]: time="2025-09-14T12:16:38.427271666Z" level=info msg="Container 451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:16:38.431407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4136159177.mount: Deactivated successfully.
Sep 14 12:16:38.446690 containerd[1546]: time="2025-09-14T12:16:38.446651195Z" level=info msg="CreateContainer within sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\""
Sep 14 12:16:38.449900 containerd[1546]: time="2025-09-14T12:16:38.449858154Z" level=info msg="StartContainer for \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\""
Sep 14 12:16:38.452537 containerd[1546]: time="2025-09-14T12:16:38.452488090Z" level=info msg="connecting to shim 451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97" address="unix:///run/containerd/s/8072a27432a95cc0bb991f86d6edbdd8084730685bb67d937550585dd51c66f0" protocol=ttrpc version=3
Sep 14 12:16:38.489892 systemd[1]: Started cri-containerd-451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97.scope - libcontainer container 451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97.
Sep 14 12:16:38.563873 containerd[1546]: time="2025-09-14T12:16:38.563806142Z" level=info msg="StartContainer for \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" returns successfully"
Sep 14 12:16:38.755345 containerd[1546]: time="2025-09-14T12:16:38.755199795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" id:\"97e58ddc6e71c5b6bf9f0740c40d989463b131a303d15285e84d7fff7b108032\" pid:3346 exited_at:{seconds:1757852198 nanos:751480365}"
Sep 14 12:16:38.759256 kubelet[2705]: I0914 12:16:38.759221 2705 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 14 12:16:38.836236 systemd[1]: Created slice kubepods-burstable-pod6c94075f_d407_4865_b88d_91df20358854.slice - libcontainer container kubepods-burstable-pod6c94075f_d407_4865_b88d_91df20358854.slice.
Sep 14 12:16:38.846846 systemd[1]: Created slice kubepods-burstable-podc12f7bb8_d45f_4cd0_a591_68a2403e41b5.slice - libcontainer container kubepods-burstable-podc12f7bb8_d45f_4cd0_a591_68a2403e41b5.slice.
Sep 14 12:16:38.898332 kubelet[2705]: I0914 12:16:38.898278 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c12f7bb8-d45f-4cd0-a591-68a2403e41b5-config-volume\") pod \"coredns-668d6bf9bc-hwm4g\" (UID: \"c12f7bb8-d45f-4cd0-a591-68a2403e41b5\") " pod="kube-system/coredns-668d6bf9bc-hwm4g"
Sep 14 12:16:38.898332 kubelet[2705]: I0914 12:16:38.898332 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c94075f-d407-4865-b88d-91df20358854-config-volume\") pod \"coredns-668d6bf9bc-xfddv\" (UID: \"6c94075f-d407-4865-b88d-91df20358854\") " pod="kube-system/coredns-668d6bf9bc-xfddv"
Sep 14 12:16:38.898574 kubelet[2705]: I0914 12:16:38.898360 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wlsc\" (UniqueName: \"kubernetes.io/projected/c12f7bb8-d45f-4cd0-a591-68a2403e41b5-kube-api-access-4wlsc\") pod \"coredns-668d6bf9bc-hwm4g\" (UID: \"c12f7bb8-d45f-4cd0-a591-68a2403e41b5\") " pod="kube-system/coredns-668d6bf9bc-hwm4g"
Sep 14 12:16:38.898574 kubelet[2705]: I0914 12:16:38.898380 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znwqz\" (UniqueName: \"kubernetes.io/projected/6c94075f-d407-4865-b88d-91df20358854-kube-api-access-znwqz\") pod \"coredns-668d6bf9bc-xfddv\" (UID: \"6c94075f-d407-4865-b88d-91df20358854\") " pod="kube-system/coredns-668d6bf9bc-xfddv"
Sep 14 12:16:39.142294 kubelet[2705]: E0914 12:16:39.142232 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:39.143991 containerd[1546]: time="2025-09-14T12:16:39.143951153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfddv,Uid:6c94075f-d407-4865-b88d-91df20358854,Namespace:kube-system,Attempt:0,}"
Sep 14 12:16:39.158892 kubelet[2705]: E0914 12:16:39.158658 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:39.160380 containerd[1546]: time="2025-09-14T12:16:39.159871179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hwm4g,Uid:c12f7bb8-d45f-4cd0-a591-68a2403e41b5,Namespace:kube-system,Attempt:0,}"
Sep 14 12:16:39.380795 kubelet[2705]: E0914 12:16:39.380727 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:39.411032 containerd[1546]: time="2025-09-14T12:16:39.410701907Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 14 12:16:39.414224 containerd[1546]: time="2025-09-14T12:16:39.411696245Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 14 12:16:39.415883 containerd[1546]: time="2025-09-14T12:16:39.415803014Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 14 12:16:39.424035 containerd[1546]: time="2025-09-14T12:16:39.423971193Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.812287323s"
Sep 14 12:16:39.424836 containerd[1546]: time="2025-09-14T12:16:39.424138020Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 14 12:16:39.425360 kubelet[2705]: I0914 12:16:39.424517 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-59kpz" podStartSLOduration=7.312763913 podStartE2EDuration="17.424497571s" podCreationTimestamp="2025-09-14 12:16:22 +0000 UTC" firstStartedPulling="2025-09-14 12:16:24.498926115 +0000 UTC m=+7.415488533" lastFinishedPulling="2025-09-14 12:16:34.610659763 +0000 UTC m=+17.527222191" observedRunningTime="2025-09-14 12:16:39.41825534 +0000 UTC m=+22.334817777" watchObservedRunningTime="2025-09-14 12:16:39.424497571 +0000 UTC m=+22.341060008"
Sep 14 12:16:39.430923 containerd[1546]: time="2025-09-14T12:16:39.430880815Z" level=info msg="CreateContainer within sandbox \"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 14 12:16:39.439865 containerd[1546]: time="2025-09-14T12:16:39.439813159Z" level=info msg="Container 8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:16:39.448549 containerd[1546]: time="2025-09-14T12:16:39.448502369Z" level=info msg="CreateContainer within sandbox \"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\""
Sep 14 12:16:39.449386 containerd[1546]: time="2025-09-14T12:16:39.449329036Z" level=info msg="StartContainer for \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\""
Sep 14 12:16:39.451808 containerd[1546]: time="2025-09-14T12:16:39.451766408Z" level=info msg="connecting to shim 8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3" address="unix:///run/containerd/s/c37c5ab14f2f5261f21b6b290f229c899e04cd66617c298dcc4f45f5c2a6e2f8" protocol=ttrpc version=3
Sep 14 12:16:39.480203 systemd[1]: Started cri-containerd-8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3.scope - libcontainer container 8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3.
Sep 14 12:16:39.542898 containerd[1546]: time="2025-09-14T12:16:39.542857772Z" level=info msg="StartContainer for \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" returns successfully"
Sep 14 12:16:40.385611 kubelet[2705]: E0914 12:16:40.385097 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:40.385611 kubelet[2705]: E0914 12:16:40.385337 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:41.387917 kubelet[2705]: E0914 12:16:41.387850 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:41.388940 kubelet[2705]: E0914 12:16:41.388022 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:43.078835 systemd-networkd[1441]: cilium_host: Link UP
Sep 14 12:16:43.079043 systemd-networkd[1441]: cilium_net: Link UP
Sep 14 12:16:43.080426 systemd-networkd[1441]: cilium_net: Gained carrier
Sep 14 12:16:43.081063 systemd-networkd[1441]: cilium_host: Gained carrier
Sep 14 12:16:43.236521 systemd-networkd[1441]: cilium_vxlan: Link UP
Sep 14 12:16:43.236530 systemd-networkd[1441]: cilium_vxlan: Gained carrier
Sep 14 12:16:43.426947 systemd-networkd[1441]: cilium_host: Gained IPv6LL
Sep 14 12:16:43.498804 systemd-networkd[1441]: cilium_net: Gained IPv6LL
Sep 14 12:16:43.725116 kernel: NET: Registered PF_ALG protocol family
Sep 14 12:16:44.638379 systemd-networkd[1441]: lxc_health: Link UP
Sep 14 12:16:44.654913 systemd-networkd[1441]: lxc_health: Gained carrier
Sep 14 12:16:45.106923 systemd-networkd[1441]: cilium_vxlan: Gained IPv6LL
Sep 14 12:16:45.261985 kernel: eth0: renamed from tmp5f878
Sep 14 12:16:45.263566 systemd-networkd[1441]: lxc26fa254b8839: Link UP
Sep 14 12:16:45.266942 systemd-networkd[1441]: lxc26fa254b8839: Gained carrier
Sep 14 12:16:45.288314 kernel: eth0: renamed from tmp87409
Sep 14 12:16:45.289709 systemd-networkd[1441]: lxc932e1975108f: Link UP
Sep 14 12:16:45.293357 systemd-networkd[1441]: lxc932e1975108f: Gained carrier
Sep 14 12:16:46.322875 systemd-networkd[1441]: lxc26fa254b8839: Gained IPv6LL
Sep 14 12:16:46.336103 kubelet[2705]: E0914 12:16:46.335752 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:16:46.371847 kubelet[2705]: I0914 12:16:46.371757 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lklln" podStartSLOduration=9.561854832 podStartE2EDuration="24.371630517s" podCreationTimestamp="2025-09-14 12:16:22 +0000 UTC" 
firstStartedPulling="2025-09-14 12:16:24.616748235 +0000 UTC m=+7.533310664" lastFinishedPulling="2025-09-14 12:16:39.426523921 +0000 UTC m=+22.343086349" observedRunningTime="2025-09-14 12:16:40.46385329 +0000 UTC m=+23.380415728" watchObservedRunningTime="2025-09-14 12:16:46.371630517 +0000 UTC m=+29.288192954" Sep 14 12:16:46.421683 kubelet[2705]: E0914 12:16:46.421368 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:46.450772 systemd-networkd[1441]: lxc_health: Gained IPv6LL Sep 14 12:16:46.706785 systemd-networkd[1441]: lxc932e1975108f: Gained IPv6LL Sep 14 12:16:47.423449 kubelet[2705]: E0914 12:16:47.423405 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:50.197386 containerd[1546]: time="2025-09-14T12:16:50.197288114Z" level=info msg="connecting to shim 87409853b08872b6ebda9ab03c9cc800443d918bca968123208ed7d5abc92ff2" address="unix:///run/containerd/s/97277f65b1e2ba54d0ae475d559ecfa6e603df167730215ab81e70be22c19bfa" namespace=k8s.io protocol=ttrpc version=3 Sep 14 12:16:50.250881 systemd[1]: Started cri-containerd-87409853b08872b6ebda9ab03c9cc800443d918bca968123208ed7d5abc92ff2.scope - libcontainer container 87409853b08872b6ebda9ab03c9cc800443d918bca968123208ed7d5abc92ff2. 
Sep 14 12:16:50.273938 containerd[1546]: time="2025-09-14T12:16:50.273874862Z" level=info msg="connecting to shim 5f8783802e705ed0555ba67f18f76fe412e27ee7683c64ef80dce5e7df92f4d6" address="unix:///run/containerd/s/27fec6f046aade4ac0e8bb0593c19cafc63bd797ff14c9f9927f8cddc2d00701" namespace=k8s.io protocol=ttrpc version=3 Sep 14 12:16:50.325965 systemd[1]: Started cri-containerd-5f8783802e705ed0555ba67f18f76fe412e27ee7683c64ef80dce5e7df92f4d6.scope - libcontainer container 5f8783802e705ed0555ba67f18f76fe412e27ee7683c64ef80dce5e7df92f4d6. Sep 14 12:16:50.397227 containerd[1546]: time="2025-09-14T12:16:50.397169964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hwm4g,Uid:c12f7bb8-d45f-4cd0-a591-68a2403e41b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"87409853b08872b6ebda9ab03c9cc800443d918bca968123208ed7d5abc92ff2\"" Sep 14 12:16:50.398647 kubelet[2705]: E0914 12:16:50.398519 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:50.412228 containerd[1546]: time="2025-09-14T12:16:50.412020783Z" level=info msg="CreateContainer within sandbox \"87409853b08872b6ebda9ab03c9cc800443d918bca968123208ed7d5abc92ff2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 14 12:16:50.432166 containerd[1546]: time="2025-09-14T12:16:50.431787489Z" level=info msg="Container 6fdfea3cdaedbd02126555c72dab01e28ebdfabc27f05e919cf27fa0ca344567: CDI devices from CRI Config.CDIDevices: []" Sep 14 12:16:50.437224 containerd[1546]: time="2025-09-14T12:16:50.437098551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xfddv,Uid:6c94075f-d407-4865-b88d-91df20358854,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f8783802e705ed0555ba67f18f76fe412e27ee7683c64ef80dce5e7df92f4d6\"" Sep 14 12:16:50.438745 kubelet[2705]: E0914 12:16:50.438712 2705 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:50.442930 containerd[1546]: time="2025-09-14T12:16:50.441484167Z" level=info msg="CreateContainer within sandbox \"87409853b08872b6ebda9ab03c9cc800443d918bca968123208ed7d5abc92ff2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fdfea3cdaedbd02126555c72dab01e28ebdfabc27f05e919cf27fa0ca344567\"" Sep 14 12:16:50.445258 containerd[1546]: time="2025-09-14T12:16:50.445201607Z" level=info msg="CreateContainer within sandbox \"5f8783802e705ed0555ba67f18f76fe412e27ee7683c64ef80dce5e7df92f4d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 14 12:16:50.453034 containerd[1546]: time="2025-09-14T12:16:50.452902849Z" level=info msg="Container 13234dd41a3a54519bb5f6665a3c9df26fb8736ff742af38dc462ee8da706251: CDI devices from CRI Config.CDIDevices: []" Sep 14 12:16:50.466409 containerd[1546]: time="2025-09-14T12:16:50.465606628Z" level=info msg="StartContainer for \"6fdfea3cdaedbd02126555c72dab01e28ebdfabc27f05e919cf27fa0ca344567\"" Sep 14 12:16:50.467099 containerd[1546]: time="2025-09-14T12:16:50.467002431Z" level=info msg="connecting to shim 6fdfea3cdaedbd02126555c72dab01e28ebdfabc27f05e919cf27fa0ca344567" address="unix:///run/containerd/s/97277f65b1e2ba54d0ae475d559ecfa6e603df167730215ab81e70be22c19bfa" protocol=ttrpc version=3 Sep 14 12:16:50.471051 containerd[1546]: time="2025-09-14T12:16:50.471006601Z" level=info msg="CreateContainer within sandbox \"5f8783802e705ed0555ba67f18f76fe412e27ee7683c64ef80dce5e7df92f4d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13234dd41a3a54519bb5f6665a3c9df26fb8736ff742af38dc462ee8da706251\"" Sep 14 12:16:50.474124 containerd[1546]: time="2025-09-14T12:16:50.474073029Z" level=info msg="StartContainer for \"13234dd41a3a54519bb5f6665a3c9df26fb8736ff742af38dc462ee8da706251\"" Sep 14 
12:16:50.475258 containerd[1546]: time="2025-09-14T12:16:50.475222716Z" level=info msg="connecting to shim 13234dd41a3a54519bb5f6665a3c9df26fb8736ff742af38dc462ee8da706251" address="unix:///run/containerd/s/27fec6f046aade4ac0e8bb0593c19cafc63bd797ff14c9f9927f8cddc2d00701" protocol=ttrpc version=3 Sep 14 12:16:50.497860 systemd[1]: Started cri-containerd-6fdfea3cdaedbd02126555c72dab01e28ebdfabc27f05e919cf27fa0ca344567.scope - libcontainer container 6fdfea3cdaedbd02126555c72dab01e28ebdfabc27f05e919cf27fa0ca344567. Sep 14 12:16:50.509852 systemd[1]: Started cri-containerd-13234dd41a3a54519bb5f6665a3c9df26fb8736ff742af38dc462ee8da706251.scope - libcontainer container 13234dd41a3a54519bb5f6665a3c9df26fb8736ff742af38dc462ee8da706251. Sep 14 12:16:50.557929 containerd[1546]: time="2025-09-14T12:16:50.557843813Z" level=info msg="StartContainer for \"6fdfea3cdaedbd02126555c72dab01e28ebdfabc27f05e919cf27fa0ca344567\" returns successfully" Sep 14 12:16:50.571670 containerd[1546]: time="2025-09-14T12:16:50.571546342Z" level=info msg="StartContainer for \"13234dd41a3a54519bb5f6665a3c9df26fb8736ff742af38dc462ee8da706251\" returns successfully" Sep 14 12:16:51.185832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301180837.mount: Deactivated successfully. 
Sep 14 12:16:51.443183 kubelet[2705]: E0914 12:16:51.443019 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:51.447262 kubelet[2705]: E0914 12:16:51.447209 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:51.463681 kubelet[2705]: I0914 12:16:51.462666 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hwm4g" podStartSLOduration=29.462640696 podStartE2EDuration="29.462640696s" podCreationTimestamp="2025-09-14 12:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-14 12:16:51.46068884 +0000 UTC m=+34.377251280" watchObservedRunningTime="2025-09-14 12:16:51.462640696 +0000 UTC m=+34.379203131" Sep 14 12:16:52.451111 kubelet[2705]: E0914 12:16:52.450108 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:52.451111 kubelet[2705]: E0914 12:16:52.450976 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:52.470658 kubelet[2705]: I0914 12:16:52.470460 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xfddv" podStartSLOduration=30.470421406 podStartE2EDuration="30.470421406s" podCreationTimestamp="2025-09-14 12:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-14 
12:16:51.481815423 +0000 UTC m=+34.398377859" watchObservedRunningTime="2025-09-14 12:16:52.470421406 +0000 UTC m=+35.386983843" Sep 14 12:16:53.451970 kubelet[2705]: E0914 12:16:53.451847 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:53.451970 kubelet[2705]: E0914 12:16:53.451896 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:54.453961 kubelet[2705]: E0914 12:16:54.453931 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:16:58.725499 systemd[1]: Started sshd@7-143.198.142.64:22-139.178.89.65:55822.service - OpenSSH per-connection server daemon (139.178.89.65:55822). Sep 14 12:16:58.824202 sshd[4036]: Accepted publickey for core from 139.178.89.65 port 55822 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:16:58.825756 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:16:58.833131 systemd-logind[1520]: New session 8 of user core. Sep 14 12:16:58.841913 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 14 12:16:59.408284 sshd[4039]: Connection closed by 139.178.89.65 port 55822 Sep 14 12:16:59.409250 sshd-session[4036]: pam_unix(sshd:session): session closed for user core Sep 14 12:16:59.415625 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit. Sep 14 12:16:59.416206 systemd[1]: sshd@7-143.198.142.64:22-139.178.89.65:55822.service: Deactivated successfully. Sep 14 12:16:59.420543 systemd[1]: session-8.scope: Deactivated successfully. Sep 14 12:16:59.425158 systemd-logind[1520]: Removed session 8. 
Sep 14 12:17:04.424772 systemd[1]: Started sshd@8-143.198.142.64:22-139.178.89.65:54498.service - OpenSSH per-connection server daemon (139.178.89.65:54498). Sep 14 12:17:04.511515 sshd[4052]: Accepted publickey for core from 139.178.89.65 port 54498 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:04.513511 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:04.520707 systemd-logind[1520]: New session 9 of user core. Sep 14 12:17:04.524937 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 14 12:17:04.663252 sshd[4055]: Connection closed by 139.178.89.65 port 54498 Sep 14 12:17:04.662411 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:04.667668 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit. Sep 14 12:17:04.667994 systemd[1]: sshd@8-143.198.142.64:22-139.178.89.65:54498.service: Deactivated successfully. Sep 14 12:17:04.670838 systemd[1]: session-9.scope: Deactivated successfully. Sep 14 12:17:04.673648 systemd-logind[1520]: Removed session 9. Sep 14 12:17:09.683661 systemd[1]: Started sshd@9-143.198.142.64:22-139.178.89.65:54500.service - OpenSSH per-connection server daemon (139.178.89.65:54500). Sep 14 12:17:09.765116 sshd[4067]: Accepted publickey for core from 139.178.89.65 port 54500 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:09.766776 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:09.774585 systemd-logind[1520]: New session 10 of user core. Sep 14 12:17:09.777889 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 14 12:17:09.922358 sshd[4070]: Connection closed by 139.178.89.65 port 54500 Sep 14 12:17:09.923050 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:09.929127 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit. 
Sep 14 12:17:09.929512 systemd[1]: sshd@9-143.198.142.64:22-139.178.89.65:54500.service: Deactivated successfully. Sep 14 12:17:09.932507 systemd[1]: session-10.scope: Deactivated successfully. Sep 14 12:17:09.934525 systemd-logind[1520]: Removed session 10. Sep 14 12:17:14.943142 systemd[1]: Started sshd@10-143.198.142.64:22-139.178.89.65:58422.service - OpenSSH per-connection server daemon (139.178.89.65:58422). Sep 14 12:17:15.014168 sshd[4082]: Accepted publickey for core from 139.178.89.65 port 58422 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:15.015808 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:15.021296 systemd-logind[1520]: New session 11 of user core. Sep 14 12:17:15.026847 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 14 12:17:15.169835 sshd[4085]: Connection closed by 139.178.89.65 port 58422 Sep 14 12:17:15.170560 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:15.187228 systemd[1]: sshd@10-143.198.142.64:22-139.178.89.65:58422.service: Deactivated successfully. Sep 14 12:17:15.191256 systemd[1]: session-11.scope: Deactivated successfully. Sep 14 12:17:15.192746 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit. Sep 14 12:17:15.198568 systemd[1]: Started sshd@11-143.198.142.64:22-139.178.89.65:58428.service - OpenSSH per-connection server daemon (139.178.89.65:58428). Sep 14 12:17:15.199762 systemd-logind[1520]: Removed session 11. Sep 14 12:17:15.275427 sshd[4098]: Accepted publickey for core from 139.178.89.65 port 58428 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:15.277324 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:15.285281 systemd-logind[1520]: New session 12 of user core. Sep 14 12:17:15.296027 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 14 12:17:15.527757 sshd[4101]: Connection closed by 139.178.89.65 port 58428 Sep 14 12:17:15.532370 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:15.544157 systemd[1]: sshd@11-143.198.142.64:22-139.178.89.65:58428.service: Deactivated successfully. Sep 14 12:17:15.548453 systemd[1]: session-12.scope: Deactivated successfully. Sep 14 12:17:15.551090 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit. Sep 14 12:17:15.556999 systemd[1]: Started sshd@12-143.198.142.64:22-139.178.89.65:58432.service - OpenSSH per-connection server daemon (139.178.89.65:58432). Sep 14 12:17:15.560631 systemd-logind[1520]: Removed session 12. Sep 14 12:17:15.700135 sshd[4111]: Accepted publickey for core from 139.178.89.65 port 58432 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:15.702915 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:15.715134 systemd-logind[1520]: New session 13 of user core. Sep 14 12:17:15.725946 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 14 12:17:15.892108 sshd[4114]: Connection closed by 139.178.89.65 port 58432 Sep 14 12:17:15.892723 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:15.898426 systemd[1]: sshd@12-143.198.142.64:22-139.178.89.65:58432.service: Deactivated successfully. Sep 14 12:17:15.901265 systemd[1]: session-13.scope: Deactivated successfully. Sep 14 12:17:15.903695 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit. Sep 14 12:17:15.905413 systemd-logind[1520]: Removed session 13. Sep 14 12:17:20.914036 systemd[1]: Started sshd@13-143.198.142.64:22-139.178.89.65:57328.service - OpenSSH per-connection server daemon (139.178.89.65:57328). 
Sep 14 12:17:20.993120 sshd[4128]: Accepted publickey for core from 139.178.89.65 port 57328 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:20.994997 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:21.002937 systemd-logind[1520]: New session 14 of user core. Sep 14 12:17:21.012906 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 14 12:17:21.155143 sshd[4131]: Connection closed by 139.178.89.65 port 57328 Sep 14 12:17:21.156073 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:21.160536 systemd[1]: sshd@13-143.198.142.64:22-139.178.89.65:57328.service: Deactivated successfully. Sep 14 12:17:21.164270 systemd[1]: session-14.scope: Deactivated successfully. Sep 14 12:17:21.166688 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit. Sep 14 12:17:21.168428 systemd-logind[1520]: Removed session 14. Sep 14 12:17:26.175335 systemd[1]: Started sshd@14-143.198.142.64:22-139.178.89.65:57330.service - OpenSSH per-connection server daemon (139.178.89.65:57330). Sep 14 12:17:26.254448 sshd[4145]: Accepted publickey for core from 139.178.89.65 port 57330 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:26.254217 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:26.260219 systemd-logind[1520]: New session 15 of user core. Sep 14 12:17:26.267920 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 14 12:17:26.428742 sshd[4148]: Connection closed by 139.178.89.65 port 57330 Sep 14 12:17:26.429444 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:26.439170 systemd[1]: sshd@14-143.198.142.64:22-139.178.89.65:57330.service: Deactivated successfully. Sep 14 12:17:26.441817 systemd[1]: session-15.scope: Deactivated successfully. 
Sep 14 12:17:26.443137 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit. Sep 14 12:17:26.447560 systemd[1]: Started sshd@15-143.198.142.64:22-139.178.89.65:57334.service - OpenSSH per-connection server daemon (139.178.89.65:57334). Sep 14 12:17:26.450988 systemd-logind[1520]: Removed session 15. Sep 14 12:17:26.527497 sshd[4160]: Accepted publickey for core from 139.178.89.65 port 57334 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:26.529185 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:26.535232 systemd-logind[1520]: New session 16 of user core. Sep 14 12:17:26.544932 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 14 12:17:26.861421 sshd[4163]: Connection closed by 139.178.89.65 port 57334 Sep 14 12:17:26.862775 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:26.876346 systemd[1]: sshd@15-143.198.142.64:22-139.178.89.65:57334.service: Deactivated successfully. Sep 14 12:17:26.880642 systemd[1]: session-16.scope: Deactivated successfully. Sep 14 12:17:26.882223 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit. Sep 14 12:17:26.887455 systemd[1]: Started sshd@16-143.198.142.64:22-139.178.89.65:57338.service - OpenSSH per-connection server daemon (139.178.89.65:57338). Sep 14 12:17:26.888465 systemd-logind[1520]: Removed session 16. Sep 14 12:17:26.975780 sshd[4173]: Accepted publickey for core from 139.178.89.65 port 57338 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:26.977733 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:26.985403 systemd-logind[1520]: New session 17 of user core. Sep 14 12:17:27.000930 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 14 12:17:27.739347 sshd[4176]: Connection closed by 139.178.89.65 port 57338 Sep 14 12:17:27.740126 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:27.755617 systemd[1]: sshd@16-143.198.142.64:22-139.178.89.65:57338.service: Deactivated successfully. Sep 14 12:17:27.758238 systemd[1]: session-17.scope: Deactivated successfully. Sep 14 12:17:27.760405 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit. Sep 14 12:17:27.765077 systemd[1]: Started sshd@17-143.198.142.64:22-139.178.89.65:57352.service - OpenSSH per-connection server daemon (139.178.89.65:57352). Sep 14 12:17:27.768569 systemd-logind[1520]: Removed session 17. Sep 14 12:17:27.860148 sshd[4193]: Accepted publickey for core from 139.178.89.65 port 57352 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:27.862106 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:27.870482 systemd-logind[1520]: New session 18 of user core. Sep 14 12:17:27.879983 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 14 12:17:28.267555 sshd[4196]: Connection closed by 139.178.89.65 port 57352 Sep 14 12:17:28.268468 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:28.284618 systemd[1]: sshd@17-143.198.142.64:22-139.178.89.65:57352.service: Deactivated successfully. Sep 14 12:17:28.289511 systemd[1]: session-18.scope: Deactivated successfully. Sep 14 12:17:28.291805 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit. Sep 14 12:17:28.298696 systemd[1]: Started sshd@18-143.198.142.64:22-139.178.89.65:57356.service - OpenSSH per-connection server daemon (139.178.89.65:57356). Sep 14 12:17:28.301073 systemd-logind[1520]: Removed session 18. 
Sep 14 12:17:28.373858 sshd[4206]: Accepted publickey for core from 139.178.89.65 port 57356 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:28.376127 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:28.383875 systemd-logind[1520]: New session 19 of user core. Sep 14 12:17:28.393968 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 14 12:17:28.543433 sshd[4209]: Connection closed by 139.178.89.65 port 57356 Sep 14 12:17:28.544155 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:28.551184 systemd[1]: sshd@18-143.198.142.64:22-139.178.89.65:57356.service: Deactivated successfully. Sep 14 12:17:28.553525 systemd[1]: session-19.scope: Deactivated successfully. Sep 14 12:17:28.554870 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit. Sep 14 12:17:28.557108 systemd-logind[1520]: Removed session 19. Sep 14 12:17:33.253240 kubelet[2705]: E0914 12:17:33.253183 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:17:33.559823 systemd[1]: Started sshd@19-143.198.142.64:22-139.178.89.65:54954.service - OpenSSH per-connection server daemon (139.178.89.65:54954). Sep 14 12:17:33.636194 sshd[4223]: Accepted publickey for core from 139.178.89.65 port 54954 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:33.637848 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:33.643956 systemd-logind[1520]: New session 20 of user core. Sep 14 12:17:33.650933 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 14 12:17:33.791073 sshd[4226]: Connection closed by 139.178.89.65 port 54954 Sep 14 12:17:33.791731 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:33.796355 systemd[1]: sshd@19-143.198.142.64:22-139.178.89.65:54954.service: Deactivated successfully. Sep 14 12:17:33.799414 systemd[1]: session-20.scope: Deactivated successfully. Sep 14 12:17:33.802644 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit. Sep 14 12:17:33.803973 systemd-logind[1520]: Removed session 20. Sep 14 12:17:34.250333 kubelet[2705]: E0914 12:17:34.250212 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:17:37.251551 kubelet[2705]: E0914 12:17:37.251451 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:17:38.811181 systemd[1]: Started sshd@20-143.198.142.64:22-139.178.89.65:54960.service - OpenSSH per-connection server daemon (139.178.89.65:54960). Sep 14 12:17:38.880340 sshd[4237]: Accepted publickey for core from 139.178.89.65 port 54960 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:38.881863 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:38.887247 systemd-logind[1520]: New session 21 of user core. Sep 14 12:17:38.895882 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 14 12:17:39.034094 sshd[4240]: Connection closed by 139.178.89.65 port 54960 Sep 14 12:17:39.035012 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:39.039494 systemd[1]: sshd@20-143.198.142.64:22-139.178.89.65:54960.service: Deactivated successfully. Sep 14 12:17:39.042168 systemd[1]: session-21.scope: Deactivated successfully. 
Sep 14 12:17:39.043538 systemd-logind[1520]: Session 21 logged out. Waiting for processes to exit. Sep 14 12:17:39.045564 systemd-logind[1520]: Removed session 21. Sep 14 12:17:44.049348 systemd[1]: Started sshd@21-143.198.142.64:22-139.178.89.65:42920.service - OpenSSH per-connection server daemon (139.178.89.65:42920). Sep 14 12:17:44.124433 sshd[4252]: Accepted publickey for core from 139.178.89.65 port 42920 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:44.126270 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:44.132348 systemd-logind[1520]: New session 22 of user core. Sep 14 12:17:44.140898 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 14 12:17:44.250502 kubelet[2705]: E0914 12:17:44.250460 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 14 12:17:44.293935 sshd[4255]: Connection closed by 139.178.89.65 port 42920 Sep 14 12:17:44.294718 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Sep 14 12:17:44.306496 systemd[1]: sshd@21-143.198.142.64:22-139.178.89.65:42920.service: Deactivated successfully. Sep 14 12:17:44.309083 systemd[1]: session-22.scope: Deactivated successfully. Sep 14 12:17:44.310712 systemd-logind[1520]: Session 22 logged out. Waiting for processes to exit. Sep 14 12:17:44.314393 systemd[1]: Started sshd@22-143.198.142.64:22-139.178.89.65:42936.service - OpenSSH per-connection server daemon (139.178.89.65:42936). Sep 14 12:17:44.317001 systemd-logind[1520]: Removed session 22. 
Sep 14 12:17:44.381561 sshd[4267]: Accepted publickey for core from 139.178.89.65 port 42936 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU Sep 14 12:17:44.383972 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 14 12:17:44.390606 systemd-logind[1520]: New session 23 of user core. Sep 14 12:17:44.399935 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 14 12:17:46.564853 containerd[1546]: time="2025-09-14T12:17:46.564095382Z" level=info msg="StopContainer for \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" with timeout 30 (s)" Sep 14 12:17:46.581722 containerd[1546]: time="2025-09-14T12:17:46.581375425Z" level=info msg="Stop container \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" with signal terminated" Sep 14 12:17:46.598995 containerd[1546]: time="2025-09-14T12:17:46.598733020Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 14 12:17:46.604626 systemd[1]: cri-containerd-8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3.scope: Deactivated successfully. 
Sep 14 12:17:46.609170 containerd[1546]: time="2025-09-14T12:17:46.608544011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" id:\"ba49235d49f545b7bac8322846c4b985784e6599a15dacdfded2dd771bf2e8d6\" pid:4290 exited_at:{seconds:1757852266 nanos:606412265}"
Sep 14 12:17:46.612165 containerd[1546]: time="2025-09-14T12:17:46.611996202Z" level=info msg="StopContainer for \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" with timeout 2 (s)"
Sep 14 12:17:46.612493 containerd[1546]: time="2025-09-14T12:17:46.612406010Z" level=info msg="received exit event container_id:\"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" id:\"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" pid:3458 exited_at:{seconds:1757852266 nanos:610958124}"
Sep 14 12:17:46.612976 containerd[1546]: time="2025-09-14T12:17:46.612950629Z" level=info msg="Stop container \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" with signal terminated"
Sep 14 12:17:46.613570 containerd[1546]: time="2025-09-14T12:17:46.613539792Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" id:\"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" pid:3458 exited_at:{seconds:1757852266 nanos:610958124}"
Sep 14 12:17:46.626940 systemd-networkd[1441]: lxc_health: Link DOWN
Sep 14 12:17:46.626949 systemd-networkd[1441]: lxc_health: Lost carrier
Sep 14 12:17:46.662730 containerd[1546]: time="2025-09-14T12:17:46.662564841Z" level=info msg="received exit event container_id:\"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" id:\"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" pid:3316 exited_at:{seconds:1757852266 nanos:662373807}"
Sep 14 12:17:46.663216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3-rootfs.mount: Deactivated successfully.
Sep 14 12:17:46.665027 containerd[1546]: time="2025-09-14T12:17:46.664915711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" id:\"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" pid:3316 exited_at:{seconds:1757852266 nanos:662373807}"
Sep 14 12:17:46.665083 systemd[1]: cri-containerd-451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97.scope: Deactivated successfully.
Sep 14 12:17:46.665452 systemd[1]: cri-containerd-451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97.scope: Consumed 8.574s CPU time, 193M memory peak, 70M read from disk, 13.3M written to disk.
Sep 14 12:17:46.671504 containerd[1546]: time="2025-09-14T12:17:46.671398218Z" level=info msg="StopContainer for \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" returns successfully"
Sep 14 12:17:46.672242 containerd[1546]: time="2025-09-14T12:17:46.672150901Z" level=info msg="StopPodSandbox for \"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\""
Sep 14 12:17:46.672608 containerd[1546]: time="2025-09-14T12:17:46.672491341Z" level=info msg="Container to stop \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 14 12:17:46.684493 systemd[1]: cri-containerd-b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3.scope: Deactivated successfully.
Sep 14 12:17:46.686913 containerd[1546]: time="2025-09-14T12:17:46.686820487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" id:\"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" pid:2903 exit_status:137 exited_at:{seconds:1757852266 nanos:686226903}"
Sep 14 12:17:46.713401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97-rootfs.mount: Deactivated successfully.
Sep 14 12:17:46.724084 containerd[1546]: time="2025-09-14T12:17:46.724037485Z" level=info msg="StopContainer for \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" returns successfully"
Sep 14 12:17:46.725035 containerd[1546]: time="2025-09-14T12:17:46.724936006Z" level=info msg="StopPodSandbox for \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\""
Sep 14 12:17:46.725035 containerd[1546]: time="2025-09-14T12:17:46.725031698Z" level=info msg="Container to stop \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 14 12:17:46.725180 containerd[1546]: time="2025-09-14T12:17:46.725049353Z" level=info msg="Container to stop \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 14 12:17:46.725180 containerd[1546]: time="2025-09-14T12:17:46.725062495Z" level=info msg="Container to stop \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 14 12:17:46.725180 containerd[1546]: time="2025-09-14T12:17:46.725074859Z" level=info msg="Container to stop \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 14 12:17:46.725180 containerd[1546]: time="2025-09-14T12:17:46.725088984Z" level=info msg="Container to stop \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 14 12:17:46.736912 systemd[1]: cri-containerd-567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667.scope: Deactivated successfully.
Sep 14 12:17:46.750961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3-rootfs.mount: Deactivated successfully.
Sep 14 12:17:46.756949 containerd[1546]: time="2025-09-14T12:17:46.756905964Z" level=info msg="shim disconnected" id=b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3 namespace=k8s.io
Sep 14 12:17:46.756949 containerd[1546]: time="2025-09-14T12:17:46.756939530Z" level=warning msg="cleaning up after shim disconnected" id=b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3 namespace=k8s.io
Sep 14 12:17:46.757229 containerd[1546]: time="2025-09-14T12:17:46.756946841Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 14 12:17:46.781056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667-rootfs.mount: Deactivated successfully.
Sep 14 12:17:46.791028 containerd[1546]: time="2025-09-14T12:17:46.790728215Z" level=info msg="shim disconnected" id=567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667 namespace=k8s.io
Sep 14 12:17:46.791028 containerd[1546]: time="2025-09-14T12:17:46.790784092Z" level=warning msg="cleaning up after shim disconnected" id=567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667 namespace=k8s.io
Sep 14 12:17:46.791028 containerd[1546]: time="2025-09-14T12:17:46.790793392Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 14 12:17:46.804620 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3-shm.mount: Deactivated successfully.
Sep 14 12:17:46.806229 containerd[1546]: time="2025-09-14T12:17:46.802671577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" id:\"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" pid:2849 exit_status:137 exited_at:{seconds:1757852266 nanos:737106177}"
Sep 14 12:17:46.806439 containerd[1546]: time="2025-09-14T12:17:46.806332232Z" level=info msg="Events for \"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" is in backoff, enqueue event container_id:\"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" id:\"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" pid:2903 exit_status:137 exited_at:{seconds:1757852266 nanos:775829299}"
Sep 14 12:17:46.806860 containerd[1546]: time="2025-09-14T12:17:46.806771071Z" level=info msg="received exit event sandbox_id:\"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" exit_status:137 exited_at:{seconds:1757852266 nanos:737106177}"
Sep 14 12:17:46.811578 containerd[1546]: time="2025-09-14T12:17:46.811305246Z" level=info msg="TearDown network for sandbox \"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" successfully"
Sep 14 12:17:46.811578 containerd[1546]: time="2025-09-14T12:17:46.811340072Z" level=info msg="StopPodSandbox for \"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" returns successfully"
Sep 14 12:17:46.811578 containerd[1546]: time="2025-09-14T12:17:46.811465037Z" level=info msg="TearDown network for sandbox \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" successfully"
Sep 14 12:17:46.811578 containerd[1546]: time="2025-09-14T12:17:46.811475862Z" level=info msg="StopPodSandbox for \"567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667\" returns successfully"
Sep 14 12:17:46.815274 containerd[1546]: time="2025-09-14T12:17:46.815053693Z" level=info msg="received exit event sandbox_id:\"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" exit_status:137 exited_at:{seconds:1757852266 nanos:686226903}"
Sep 14 12:17:46.878818 kubelet[2705]: I0914 12:17:46.878769 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-hostproc\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.878818 kubelet[2705]: I0914 12:17:46.878823 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-host-proc-sys-kernel\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.879997 kubelet[2705]: I0914 12:17:46.878853 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-hubble-tls\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.879997 kubelet[2705]: I0914 12:17:46.878871 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-etc-cni-netd\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.879997 kubelet[2705]: I0914 12:17:46.878884 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-lib-modules\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.879997 kubelet[2705]: I0914 12:17:46.878901 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-host-proc-sys-net\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.879997 kubelet[2705]: I0914 12:17:46.878919 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq7tj\" (UniqueName: \"kubernetes.io/projected/5b12b163-b15d-4748-910b-1a345da53ed8-kube-api-access-nq7tj\") pod \"5b12b163-b15d-4748-910b-1a345da53ed8\" (UID: \"5b12b163-b15d-4748-910b-1a345da53ed8\") "
Sep 14 12:17:46.879997 kubelet[2705]: I0914 12:17:46.878937 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-config-path\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.880239 kubelet[2705]: I0914 12:17:46.878952 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-bpf-maps\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.880239 kubelet[2705]: I0914 12:17:46.878952 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 14 12:17:46.880239 kubelet[2705]: I0914 12:17:46.878971 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qpkz\" (UniqueName: \"kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-kube-api-access-5qpkz\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.880239 kubelet[2705]: I0914 12:17:46.879035 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cni-path\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.880239 kubelet[2705]: I0914 12:17:46.879053 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-xtables-lock\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.880239 kubelet[2705]: I0914 12:17:46.879071 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-cgroup\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.880410 kubelet[2705]: I0914 12:17:46.879095 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72eb0686-8c02-4409-82ed-73a28b7875c4-clustermesh-secrets\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.880410 kubelet[2705]: I0914 12:17:46.879113 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b12b163-b15d-4748-910b-1a345da53ed8-cilium-config-path\") pod \"5b12b163-b15d-4748-910b-1a345da53ed8\" (UID: \"5b12b163-b15d-4748-910b-1a345da53ed8\") "
Sep 14 12:17:46.880410 kubelet[2705]: I0914 12:17:46.879129 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-run\") pod \"72eb0686-8c02-4409-82ed-73a28b7875c4\" (UID: \"72eb0686-8c02-4409-82ed-73a28b7875c4\") "
Sep 14 12:17:46.880410 kubelet[2705]: I0914 12:17:46.879172 2705 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-host-proc-sys-kernel\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.880410 kubelet[2705]: I0914 12:17:46.879191 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 14 12:17:46.880410 kubelet[2705]: I0914 12:17:46.879209 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 14 12:17:46.880566 kubelet[2705]: I0914 12:17:46.879221 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 14 12:17:46.880566 kubelet[2705]: I0914 12:17:46.879233 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 14 12:17:46.880566 kubelet[2705]: I0914 12:17:46.879246 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 14 12:17:46.882973 kubelet[2705]: I0914 12:17:46.882908 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 14 12:17:46.883077 kubelet[2705]: I0914 12:17:46.882988 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 14 12:17:46.883077 kubelet[2705]: I0914 12:17:46.883004 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 14 12:17:46.886531 kubelet[2705]: I0914 12:17:46.886482 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 14 12:17:46.887405 kubelet[2705]: I0914 12:17:46.886687 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-kube-api-access-5qpkz" (OuterVolumeSpecName: "kube-api-access-5qpkz") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "kube-api-access-5qpkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 14 12:17:46.889321 kubelet[2705]: I0914 12:17:46.889272 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eb0686-8c02-4409-82ed-73a28b7875c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 14 12:17:46.891006 kubelet[2705]: I0914 12:17:46.890972 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 14 12:17:46.891577 kubelet[2705]: I0914 12:17:46.891541 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b12b163-b15d-4748-910b-1a345da53ed8-kube-api-access-nq7tj" (OuterVolumeSpecName: "kube-api-access-nq7tj") pod "5b12b163-b15d-4748-910b-1a345da53ed8" (UID: "5b12b163-b15d-4748-910b-1a345da53ed8"). InnerVolumeSpecName "kube-api-access-nq7tj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 14 12:17:46.892771 kubelet[2705]: I0914 12:17:46.892733 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b12b163-b15d-4748-910b-1a345da53ed8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5b12b163-b15d-4748-910b-1a345da53ed8" (UID: "5b12b163-b15d-4748-910b-1a345da53ed8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 14 12:17:46.894231 kubelet[2705]: I0914 12:17:46.894187 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "72eb0686-8c02-4409-82ed-73a28b7875c4" (UID: "72eb0686-8c02-4409-82ed-73a28b7875c4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 14 12:17:46.980072 kubelet[2705]: I0914 12:17:46.979856 2705 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-etc-cni-netd\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980072 kubelet[2705]: I0914 12:17:46.979906 2705 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-hubble-tls\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980072 kubelet[2705]: I0914 12:17:46.979922 2705 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-host-proc-sys-net\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980072 kubelet[2705]: I0914 12:17:46.979934 2705 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-lib-modules\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980072 kubelet[2705]: I0914 12:17:46.979948 2705 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nq7tj\" (UniqueName: \"kubernetes.io/projected/5b12b163-b15d-4748-910b-1a345da53ed8-kube-api-access-nq7tj\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980072 kubelet[2705]: I0914 12:17:46.979956 2705 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-bpf-maps\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980072 kubelet[2705]: I0914 12:17:46.979965 2705 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5qpkz\" (UniqueName: \"kubernetes.io/projected/72eb0686-8c02-4409-82ed-73a28b7875c4-kube-api-access-5qpkz\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980072 kubelet[2705]: I0914 12:17:46.979976 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-config-path\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980443 kubelet[2705]: I0914 12:17:46.979987 2705 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cni-path\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980443 kubelet[2705]: I0914 12:17:46.979996 2705 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-xtables-lock\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980443 kubelet[2705]: I0914 12:17:46.980004 2705 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72eb0686-8c02-4409-82ed-73a28b7875c4-clustermesh-secrets\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980443 kubelet[2705]: I0914 12:17:46.980011 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-cgroup\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980443 kubelet[2705]: I0914 12:17:46.980023 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-cilium-run\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980443 kubelet[2705]: I0914 12:17:46.980032 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b12b163-b15d-4748-910b-1a345da53ed8-cilium-config-path\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:46.980443 kubelet[2705]: I0914 12:17:46.980044 2705 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72eb0686-8c02-4409-82ed-73a28b7875c4-hostproc\") on node \"ci-4459.0.0-9-e5fa973bfc\" DevicePath \"\""
Sep 14 12:17:47.269290 systemd[1]: Removed slice kubepods-burstable-pod72eb0686_8c02_4409_82ed_73a28b7875c4.slice - libcontainer container kubepods-burstable-pod72eb0686_8c02_4409_82ed_73a28b7875c4.slice.
Sep 14 12:17:47.269445 systemd[1]: kubepods-burstable-pod72eb0686_8c02_4409_82ed_73a28b7875c4.slice: Consumed 8.686s CPU time, 193.4M memory peak, 70.1M read from disk, 13.3M written to disk.
Sep 14 12:17:47.272217 systemd[1]: Removed slice kubepods-besteffort-pod5b12b163_b15d_4748_910b_1a345da53ed8.slice - libcontainer container kubepods-besteffort-pod5b12b163_b15d_4748_910b_1a345da53ed8.slice.
Sep 14 12:17:47.431844 kubelet[2705]: E0914 12:17:47.431786 2705 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 14 12:17:47.585174 kubelet[2705]: I0914 12:17:47.584950 2705 scope.go:117] "RemoveContainer" containerID="8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3"
Sep 14 12:17:47.594500 containerd[1546]: time="2025-09-14T12:17:47.594450090Z" level=info msg="RemoveContainer for \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\""
Sep 14 12:17:47.616305 containerd[1546]: time="2025-09-14T12:17:47.616066940Z" level=info msg="RemoveContainer for \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" returns successfully"
Sep 14 12:17:47.617219 kubelet[2705]: I0914 12:17:47.617129 2705 scope.go:117] "RemoveContainer" containerID="8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3"
Sep 14 12:17:47.618517 containerd[1546]: time="2025-09-14T12:17:47.617924130Z" level=error msg="ContainerStatus for \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\": not found"
Sep 14 12:17:47.618657 kubelet[2705]: E0914 12:17:47.618509 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\": not found" containerID="8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3"
Sep 14 12:17:47.622570 kubelet[2705]: I0914 12:17:47.618560 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3"} err="failed to get container status \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e036c60a566c3b604ba80d2c4cbeaea6893ea596066d1d2f6a338e328bd2ac3\": not found"
Sep 14 12:17:47.622570 kubelet[2705]: I0914 12:17:47.621717 2705 scope.go:117] "RemoveContainer" containerID="451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97"
Sep 14 12:17:47.633058 containerd[1546]: time="2025-09-14T12:17:47.633013993Z" level=info msg="RemoveContainer for \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\""
Sep 14 12:17:47.655377 containerd[1546]: time="2025-09-14T12:17:47.655162867Z" level=info msg="RemoveContainer for \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" returns successfully"
Sep 14 12:17:47.658019 kubelet[2705]: I0914 12:17:47.657959 2705 scope.go:117] "RemoveContainer" containerID="678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed"
Sep 14 12:17:47.662960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-567bd99f49e99f614599e0e1d7f2710c0e3397cc325870bf77ee098617979667-shm.mount: Deactivated successfully.
Sep 14 12:17:47.663127 systemd[1]: var-lib-kubelet-pods-72eb0686\x2d8c02\x2d4409\x2d82ed\x2d73a28b7875c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5qpkz.mount: Deactivated successfully.
Sep 14 12:17:47.663228 systemd[1]: var-lib-kubelet-pods-72eb0686\x2d8c02\x2d4409\x2d82ed\x2d73a28b7875c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 14 12:17:47.663327 systemd[1]: var-lib-kubelet-pods-72eb0686\x2d8c02\x2d4409\x2d82ed\x2d73a28b7875c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 14 12:17:47.663412 systemd[1]: var-lib-kubelet-pods-5b12b163\x2db15d\x2d4748\x2d910b\x2d1a345da53ed8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnq7tj.mount: Deactivated successfully.
Sep 14 12:17:47.669644 containerd[1546]: time="2025-09-14T12:17:47.667566468Z" level=info msg="RemoveContainer for \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\""
Sep 14 12:17:47.679008 containerd[1546]: time="2025-09-14T12:17:47.678955743Z" level=info msg="RemoveContainer for \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\" returns successfully"
Sep 14 12:17:47.680635 kubelet[2705]: I0914 12:17:47.680447 2705 scope.go:117] "RemoveContainer" containerID="692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17"
Sep 14 12:17:47.691045 containerd[1546]: time="2025-09-14T12:17:47.690883209Z" level=info msg="RemoveContainer for \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\""
Sep 14 12:17:47.697870 containerd[1546]: time="2025-09-14T12:17:47.697812329Z" level=info msg="RemoveContainer for \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\" returns successfully"
Sep 14 12:17:47.698134 kubelet[2705]: I0914 12:17:47.698103 2705 scope.go:117] "RemoveContainer" containerID="d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7"
Sep 14 12:17:47.700620 containerd[1546]: time="2025-09-14T12:17:47.700503014Z" level=info msg="RemoveContainer for \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\""
Sep 14 12:17:47.708170 containerd[1546]: time="2025-09-14T12:17:47.708091581Z" level=info msg="RemoveContainer for \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\" returns successfully"
Sep 14 12:17:47.708661 kubelet[2705]: I0914 12:17:47.708440 2705 scope.go:117] "RemoveContainer" containerID="e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049"
Sep 14 12:17:47.711808 containerd[1546]: time="2025-09-14T12:17:47.711537662Z" level=info msg="RemoveContainer for \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\""
Sep 14 12:17:47.717640 containerd[1546]: time="2025-09-14T12:17:47.717480374Z" level=info msg="RemoveContainer for \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\" returns successfully"
Sep 14 12:17:47.718072 kubelet[2705]: I0914 12:17:47.718038 2705 scope.go:117] "RemoveContainer" containerID="451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97"
Sep 14 12:17:47.718747 containerd[1546]: time="2025-09-14T12:17:47.718576813Z" level=error msg="ContainerStatus for \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\": not found"
Sep 14 12:17:47.719033 kubelet[2705]: E0914 12:17:47.718983 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\": not found" containerID="451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97"
Sep 14 12:17:47.719128 kubelet[2705]: I0914 12:17:47.719102 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97"} err="failed to get container status \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\": rpc error: code = NotFound desc = an error occurred when try to find container \"451ddb1a44404004dda947fedd21f431b9febbfe1bfc37f465de1e4d65711d97\": not found"
Sep 14 12:17:47.719320 kubelet[2705]: I0914 12:17:47.719192 2705 scope.go:117] "RemoveContainer" containerID="678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed"
Sep 14 12:17:47.719882 containerd[1546]: time="2025-09-14T12:17:47.719837226Z" level=error msg="ContainerStatus for \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\": not found"
Sep 14 12:17:47.720103 kubelet[2705]: E0914 12:17:47.720062 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\": not found" containerID="678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed"
Sep 14 12:17:47.720162 kubelet[2705]: I0914 12:17:47.720120 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed"} err="failed to get container status \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\": rpc error: code = NotFound desc = an error occurred when try to find container \"678846a03814d5457feb19daedd6478e032759792b9730d2fe2e938257740eed\": not found"
Sep 14 12:17:47.720162 kubelet[2705]: I0914 12:17:47.720147 2705 scope.go:117] "RemoveContainer" containerID="692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17"
Sep 14 12:17:47.720430 containerd[1546]: time="2025-09-14T12:17:47.720388764Z" level=error msg="ContainerStatus for \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\": not found"
Sep 14 12:17:47.720622 kubelet[2705]: E0914 12:17:47.720567 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\": not found" containerID="692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17"
Sep 14 12:17:47.720734 kubelet[2705]: I0914 12:17:47.720638 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17"} err="failed to get container status \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\": rpc error: code = NotFound desc = an error occurred when try to find container \"692fb13cc84342ba080e5b47eb22455ec30ddf818aaa239780e3406ea6b1cc17\": not found"
Sep 14 12:17:47.720734 kubelet[2705]: I0914 12:17:47.720665 2705 scope.go:117] "RemoveContainer" containerID="d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7"
Sep 14 12:17:47.721029 containerd[1546]: time="2025-09-14T12:17:47.720890930Z" level=error msg="ContainerStatus for \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\": not found"
Sep 14 12:17:47.721190 kubelet[2705]: E0914 12:17:47.721168 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\": not found" containerID="d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7"
Sep 14 12:17:47.721482 kubelet[2705]: I0914 12:17:47.721348 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7"} err="failed to get container status \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d36f312b0052886339aef8cb3fb492855930fe72c57650782b90fea6cf212ff7\": not found"
Sep 14 12:17:47.721482 kubelet[2705]: I0914 12:17:47.721379 2705 scope.go:117] "RemoveContainer" containerID="e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049"
Sep 14 12:17:47.721949 containerd[1546]: time="2025-09-14T12:17:47.721891528Z" level=error msg="ContainerStatus for \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\": not found"
Sep 14 12:17:47.722219 kubelet[2705]: E0914 12:17:47.722191 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\": not found" containerID="e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049"
Sep 14 12:17:47.722329 kubelet[2705]: I0914 12:17:47.722300 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049"} err="failed to get container status \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\": rpc error: code = NotFound desc = an error occurred when try to find container \"e047a3bfa57e15e9ad75640e265eb677dacbcbd71668d39d9ba2cfd735a09049\": not found"
Sep 14 12:17:48.409466 containerd[1546]: time="2025-09-14T12:17:48.409288131Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1757852266 nanos:686226903}"
Sep 14 12:17:48.409466 containerd[1546]: time="2025-09-14T12:17:48.409355689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" id:\"b661a634fa5c8407397ce07c2ab2301ef9bb0332adb3778dee26d63f00b605f3\" pid:2903 exit_status:137 exited_at:{seconds:1757852266 nanos:775829299}"
Sep 14 12:17:48.496984 sshd[4270]: Connection closed by 139.178.89.65 port 42936
Sep 14 12:17:48.497824 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
Sep 14 12:17:48.510663 systemd[1]: sshd@22-143.198.142.64:22-139.178.89.65:42936.service: Deactivated successfully.
Sep 14 12:17:48.513965 systemd[1]: session-23.scope: Deactivated successfully.
Sep 14 12:17:48.514358 systemd[1]: session-23.scope: Consumed 1.446s CPU time, 28.3M memory peak.
Sep 14 12:17:48.515626 systemd-logind[1520]: Session 23 logged out. Waiting for processes to exit.
Sep 14 12:17:48.521923 systemd[1]: Started sshd@23-143.198.142.64:22-139.178.89.65:42952.service - OpenSSH per-connection server daemon (139.178.89.65:42952).
Sep 14 12:17:48.525788 systemd-logind[1520]: Removed session 23.
Sep 14 12:17:48.618556 sshd[4423]: Accepted publickey for core from 139.178.89.65 port 42952 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU
Sep 14 12:17:48.620141 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 14 12:17:48.628202 systemd-logind[1520]: New session 24 of user core.
Sep 14 12:17:48.634911 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 14 12:17:49.254241 kubelet[2705]: I0914 12:17:49.253835 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b12b163-b15d-4748-910b-1a345da53ed8" path="/var/lib/kubelet/pods/5b12b163-b15d-4748-910b-1a345da53ed8/volumes"
Sep 14 12:17:49.255328 kubelet[2705]: I0914 12:17:49.255300 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72eb0686-8c02-4409-82ed-73a28b7875c4" path="/var/lib/kubelet/pods/72eb0686-8c02-4409-82ed-73a28b7875c4/volumes"
Sep 14 12:17:49.399571 sshd[4426]: Connection closed by 139.178.89.65 port 42952
Sep 14 12:17:49.400509 sshd-session[4423]: pam_unix(sshd:session): session closed for user core
Sep 14 12:17:49.416225 systemd[1]: sshd@23-143.198.142.64:22-139.178.89.65:42952.service: Deactivated successfully.
Sep 14 12:17:49.422483 systemd[1]: session-24.scope: Deactivated successfully.
Sep 14 12:17:49.424255 systemd-logind[1520]: Session 24 logged out. Waiting for processes to exit.
Sep 14 12:17:49.431425 systemd[1]: Started sshd@24-143.198.142.64:22-139.178.89.65:42960.service - OpenSSH per-connection server daemon (139.178.89.65:42960).
Sep 14 12:17:49.436448 systemd-logind[1520]: Removed session 24.
Sep 14 12:17:49.439540 kubelet[2705]: I0914 12:17:49.439485 2705 memory_manager.go:355] "RemoveStaleState removing state" podUID="5b12b163-b15d-4748-910b-1a345da53ed8" containerName="cilium-operator"
Sep 14 12:17:49.439540 kubelet[2705]: I0914 12:17:49.439525 2705 memory_manager.go:355] "RemoveStaleState removing state" podUID="72eb0686-8c02-4409-82ed-73a28b7875c4" containerName="cilium-agent"
Sep 14 12:17:49.462119 systemd[1]: Created slice kubepods-burstable-poddc69f403_34c4_425b_b170_6fe1e1b37483.slice - libcontainer container kubepods-burstable-poddc69f403_34c4_425b_b170_6fe1e1b37483.slice.
Sep 14 12:17:49.500496 kubelet[2705]: I0914 12:17:49.498927 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc69f403-34c4-425b-b170-6fe1e1b37483-cilium-config-path\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500496 kubelet[2705]: I0914 12:17:49.498985 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl7jf\" (UniqueName: \"kubernetes.io/projected/dc69f403-34c4-425b-b170-6fe1e1b37483-kube-api-access-kl7jf\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500496 kubelet[2705]: I0914 12:17:49.499020 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc69f403-34c4-425b-b170-6fe1e1b37483-etc-cni-netd\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500496 kubelet[2705]: I0914 12:17:49.499057 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc69f403-34c4-425b-b170-6fe1e1b37483-host-proc-sys-net\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500496 kubelet[2705]: I0914 12:17:49.499080 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc69f403-34c4-425b-b170-6fe1e1b37483-hostproc\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500496 kubelet[2705]: I0914 12:17:49.499104 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc69f403-34c4-425b-b170-6fe1e1b37483-xtables-lock\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500831 kubelet[2705]: I0914 12:17:49.499131 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc69f403-34c4-425b-b170-6fe1e1b37483-cilium-run\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500831 kubelet[2705]: I0914 12:17:49.499156 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc69f403-34c4-425b-b170-6fe1e1b37483-host-proc-sys-kernel\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500831 kubelet[2705]: I0914 12:17:49.499182 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc69f403-34c4-425b-b170-6fe1e1b37483-lib-modules\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500831 kubelet[2705]: I0914 12:17:49.499207 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc69f403-34c4-425b-b170-6fe1e1b37483-cilium-cgroup\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500831 kubelet[2705]: I0914 12:17:49.499245 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc69f403-34c4-425b-b170-6fe1e1b37483-cni-path\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500831 kubelet[2705]: I0914 12:17:49.499269 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dc69f403-34c4-425b-b170-6fe1e1b37483-cilium-ipsec-secrets\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500983 kubelet[2705]: I0914 12:17:49.499295 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc69f403-34c4-425b-b170-6fe1e1b37483-hubble-tls\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500983 kubelet[2705]: I0914 12:17:49.499322 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc69f403-34c4-425b-b170-6fe1e1b37483-bpf-maps\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.500983 kubelet[2705]: I0914 12:17:49.499346 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc69f403-34c4-425b-b170-6fe1e1b37483-clustermesh-secrets\") pod \"cilium-dzdpw\" (UID: \"dc69f403-34c4-425b-b170-6fe1e1b37483\") " pod="kube-system/cilium-dzdpw"
Sep 14 12:17:49.503211 kubelet[2705]: I0914 12:17:49.503042 2705 setters.go:602] "Node became not ready" node="ci-4459.0.0-9-e5fa973bfc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-14T12:17:49Z","lastTransitionTime":"2025-09-14T12:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 14 12:17:49.560492 sshd[4438]: Accepted publickey for core from 139.178.89.65 port 42960 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU
Sep 14 12:17:49.563497 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 14 12:17:49.575216 systemd-logind[1520]: New session 25 of user core.
Sep 14 12:17:49.579858 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 14 12:17:49.659749 sshd[4441]: Connection closed by 139.178.89.65 port 42960
Sep 14 12:17:49.655044 sshd-session[4438]: pam_unix(sshd:session): session closed for user core
Sep 14 12:17:49.680082 systemd[1]: sshd@24-143.198.142.64:22-139.178.89.65:42960.service: Deactivated successfully.
Sep 14 12:17:49.683536 systemd[1]: session-25.scope: Deactivated successfully.
Sep 14 12:17:49.684761 systemd-logind[1520]: Session 25 logged out. Waiting for processes to exit.
Sep 14 12:17:49.688957 systemd[1]: Started sshd@25-143.198.142.64:22-139.178.89.65:42964.service - OpenSSH per-connection server daemon (139.178.89.65:42964).
Sep 14 12:17:49.690153 systemd-logind[1520]: Removed session 25.
Sep 14 12:17:49.757324 sshd[4452]: Accepted publickey for core from 139.178.89.65 port 42964 ssh2: RSA SHA256:KDZMV9+ReDenPGiv1QjO8ktejqlv9SCNv3ZZszU5bsU
Sep 14 12:17:49.758987 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 14 12:17:49.765372 systemd-logind[1520]: New session 26 of user core.
Sep 14 12:17:49.769889 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 14 12:17:49.774683 kubelet[2705]: E0914 12:17:49.773450 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:17:49.775246 containerd[1546]: time="2025-09-14T12:17:49.775184598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dzdpw,Uid:dc69f403-34c4-425b-b170-6fe1e1b37483,Namespace:kube-system,Attempt:0,}"
Sep 14 12:17:49.800638 containerd[1546]: time="2025-09-14T12:17:49.800572930Z" level=info msg="connecting to shim 7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927" address="unix:///run/containerd/s/2c9448d549c88e1cb55fa00d1a669db4c96edf671f7618d38e1a4103093d2c9c" namespace=k8s.io protocol=ttrpc version=3
Sep 14 12:17:49.832835 systemd[1]: Started cri-containerd-7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927.scope - libcontainer container 7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927.
Sep 14 12:17:49.884948 containerd[1546]: time="2025-09-14T12:17:49.884686557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dzdpw,Uid:dc69f403-34c4-425b-b170-6fe1e1b37483,Namespace:kube-system,Attempt:0,} returns sandbox id \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\""
Sep 14 12:17:49.887075 kubelet[2705]: E0914 12:17:49.886942 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:17:49.893411 containerd[1546]: time="2025-09-14T12:17:49.892755873Z" level=info msg="CreateContainer within sandbox \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 14 12:17:49.903284 containerd[1546]: time="2025-09-14T12:17:49.903113749Z" level=info msg="Container 8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:17:49.910869 containerd[1546]: time="2025-09-14T12:17:49.910815011Z" level=info msg="CreateContainer within sandbox \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe\""
Sep 14 12:17:49.912618 containerd[1546]: time="2025-09-14T12:17:49.912090304Z" level=info msg="StartContainer for \"8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe\""
Sep 14 12:17:49.913205 containerd[1546]: time="2025-09-14T12:17:49.913169749Z" level=info msg="connecting to shim 8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe" address="unix:///run/containerd/s/2c9448d549c88e1cb55fa00d1a669db4c96edf671f7618d38e1a4103093d2c9c" protocol=ttrpc version=3
Sep 14 12:17:49.946325 systemd[1]: Started cri-containerd-8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe.scope - libcontainer container 8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe.
Sep 14 12:17:49.999390 containerd[1546]: time="2025-09-14T12:17:49.999340894Z" level=info msg="StartContainer for \"8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe\" returns successfully"
Sep 14 12:17:50.014724 systemd[1]: cri-containerd-8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe.scope: Deactivated successfully.
Sep 14 12:17:50.015287 systemd[1]: cri-containerd-8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe.scope: Consumed 28ms CPU time, 9.8M memory peak, 3.2M read from disk.
Sep 14 12:17:50.018020 containerd[1546]: time="2025-09-14T12:17:50.017932346Z" level=info msg="received exit event container_id:\"8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe\" id:\"8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe\" pid:4522 exited_at:{seconds:1757852270 nanos:17321169}"
Sep 14 12:17:50.018387 containerd[1546]: time="2025-09-14T12:17:50.017975521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe\" id:\"8742bf15fe933fa647f44b737abde09f6a11237a9370cd32d9397935c50dcefe\" pid:4522 exited_at:{seconds:1757852270 nanos:17321169}"
Sep 14 12:17:50.640341 kubelet[2705]: E0914 12:17:50.638793 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:17:50.643466 containerd[1546]: time="2025-09-14T12:17:50.642777063Z" level=info msg="CreateContainer within sandbox \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 14 12:17:50.652737 containerd[1546]: time="2025-09-14T12:17:50.652055256Z" level=info msg="Container 35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:17:50.665714 containerd[1546]: time="2025-09-14T12:17:50.665638231Z" level=info msg="CreateContainer within sandbox \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b\""
Sep 14 12:17:50.667105 containerd[1546]: time="2025-09-14T12:17:50.667065223Z" level=info msg="StartContainer for \"35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b\""
Sep 14 12:17:50.669951 containerd[1546]: time="2025-09-14T12:17:50.669818338Z" level=info msg="connecting to shim 35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b" address="unix:///run/containerd/s/2c9448d549c88e1cb55fa00d1a669db4c96edf671f7618d38e1a4103093d2c9c" protocol=ttrpc version=3
Sep 14 12:17:50.701909 systemd[1]: Started cri-containerd-35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b.scope - libcontainer container 35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b.
Sep 14 12:17:50.750155 containerd[1546]: time="2025-09-14T12:17:50.749545631Z" level=info msg="StartContainer for \"35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b\" returns successfully"
Sep 14 12:17:50.760804 systemd[1]: cri-containerd-35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b.scope: Deactivated successfully.
Sep 14 12:17:50.761801 systemd[1]: cri-containerd-35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b.scope: Consumed 26ms CPU time, 7.5M memory peak, 2.2M read from disk.
Sep 14 12:17:50.763075 containerd[1546]: time="2025-09-14T12:17:50.761805983Z" level=info msg="received exit event container_id:\"35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b\" id:\"35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b\" pid:4568 exited_at:{seconds:1757852270 nanos:760968597}"
Sep 14 12:17:50.763314 containerd[1546]: time="2025-09-14T12:17:50.763267473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b\" id:\"35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b\" pid:4568 exited_at:{seconds:1757852270 nanos:760968597}"
Sep 14 12:17:50.790475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35835a326bfea1886b2c379bf25c895e111a2ef9105b2e209badbf7c031b294b-rootfs.mount: Deactivated successfully.
Sep 14 12:17:51.645518 kubelet[2705]: E0914 12:17:51.645229 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:17:51.651646 containerd[1546]: time="2025-09-14T12:17:51.651552225Z" level=info msg="CreateContainer within sandbox \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 14 12:17:51.666756 containerd[1546]: time="2025-09-14T12:17:51.666696625Z" level=info msg="Container ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:17:51.676948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104442306.mount: Deactivated successfully.
Sep 14 12:17:51.685653 containerd[1546]: time="2025-09-14T12:17:51.685584416Z" level=info msg="CreateContainer within sandbox \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0\""
Sep 14 12:17:51.688702 containerd[1546]: time="2025-09-14T12:17:51.688025896Z" level=info msg="StartContainer for \"ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0\""
Sep 14 12:17:51.693397 containerd[1546]: time="2025-09-14T12:17:51.693325670Z" level=info msg="connecting to shim ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0" address="unix:///run/containerd/s/2c9448d549c88e1cb55fa00d1a669db4c96edf671f7618d38e1a4103093d2c9c" protocol=ttrpc version=3
Sep 14 12:17:51.734901 systemd[1]: Started cri-containerd-ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0.scope - libcontainer container ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0.
Sep 14 12:17:51.792608 containerd[1546]: time="2025-09-14T12:17:51.792220532Z" level=info msg="StartContainer for \"ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0\" returns successfully"
Sep 14 12:17:51.797848 systemd[1]: cri-containerd-ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0.scope: Deactivated successfully.
Sep 14 12:17:51.802010 containerd[1546]: time="2025-09-14T12:17:51.801816280Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0\" id:\"ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0\" pid:4613 exited_at:{seconds:1757852271 nanos:801081872}"
Sep 14 12:17:51.802462 containerd[1546]: time="2025-09-14T12:17:51.802238801Z" level=info msg="received exit event container_id:\"ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0\" id:\"ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0\" pid:4613 exited_at:{seconds:1757852271 nanos:801081872}"
Sep 14 12:17:51.832075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff24c771230c0089df0acd8641c553fd42a871c2b39140c4cd74a3c273ec02c0-rootfs.mount: Deactivated successfully.
Sep 14 12:17:52.433951 kubelet[2705]: E0914 12:17:52.433894 2705 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 14 12:17:52.655668 kubelet[2705]: E0914 12:17:52.655579 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:17:52.660116 containerd[1546]: time="2025-09-14T12:17:52.660055615Z" level=info msg="CreateContainer within sandbox \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 14 12:17:52.681022 containerd[1546]: time="2025-09-14T12:17:52.680961976Z" level=info msg="Container cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:17:52.692965 containerd[1546]: time="2025-09-14T12:17:52.692755391Z" level=info msg="CreateContainer within sandbox \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11\""
Sep 14 12:17:52.696004 containerd[1546]: time="2025-09-14T12:17:52.695959005Z" level=info msg="StartContainer for \"cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11\""
Sep 14 12:17:52.699030 containerd[1546]: time="2025-09-14T12:17:52.698917779Z" level=info msg="connecting to shim cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11" address="unix:///run/containerd/s/2c9448d549c88e1cb55fa00d1a669db4c96edf671f7618d38e1a4103093d2c9c" protocol=ttrpc version=3
Sep 14 12:17:52.744950 systemd[1]: Started cri-containerd-cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11.scope - libcontainer container cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11.
Sep 14 12:17:52.788117 systemd[1]: cri-containerd-cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11.scope: Deactivated successfully.
Sep 14 12:17:52.790422 containerd[1546]: time="2025-09-14T12:17:52.790146236Z" level=info msg="received exit event container_id:\"cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11\" id:\"cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11\" pid:4653 exited_at:{seconds:1757852272 nanos:789942375}"
Sep 14 12:17:52.790422 containerd[1546]: time="2025-09-14T12:17:52.790385378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11\" id:\"cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11\" pid:4653 exited_at:{seconds:1757852272 nanos:789942375}"
Sep 14 12:17:52.792759 containerd[1546]: time="2025-09-14T12:17:52.792620840Z" level=info msg="StartContainer for \"cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11\" returns successfully"
Sep 14 12:17:52.831499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb6894e03684292f258e2a36419f77f83a13a6215094b17fd4621cacd74d0b11-rootfs.mount: Deactivated successfully.
Sep 14 12:17:53.662246 kubelet[2705]: E0914 12:17:53.662204 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:17:53.668092 containerd[1546]: time="2025-09-14T12:17:53.668025814Z" level=info msg="CreateContainer within sandbox \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 14 12:17:53.683879 containerd[1546]: time="2025-09-14T12:17:53.683812849Z" level=info msg="Container 939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a: CDI devices from CRI Config.CDIDevices: []"
Sep 14 12:17:53.694613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount339427666.mount: Deactivated successfully.
Sep 14 12:17:53.698966 containerd[1546]: time="2025-09-14T12:17:53.698792565Z" level=info msg="CreateContainer within sandbox \"7942b6152e680c461840a3a43c1ff3ae0e6c5d8bfe70a0d13385db8811a96927\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a\""
Sep 14 12:17:53.702312 containerd[1546]: time="2025-09-14T12:17:53.702158443Z" level=info msg="StartContainer for \"939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a\""
Sep 14 12:17:53.703766 containerd[1546]: time="2025-09-14T12:17:53.703689494Z" level=info msg="connecting to shim 939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a" address="unix:///run/containerd/s/2c9448d549c88e1cb55fa00d1a669db4c96edf671f7618d38e1a4103093d2c9c" protocol=ttrpc version=3
Sep 14 12:17:53.735072 systemd[1]: Started cri-containerd-939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a.scope - libcontainer container 939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a.
Sep 14 12:17:53.777634 containerd[1546]: time="2025-09-14T12:17:53.777565295Z" level=info msg="StartContainer for \"939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a\" returns successfully"
Sep 14 12:17:53.870430 containerd[1546]: time="2025-09-14T12:17:53.870290200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a\" id:\"201402e29f55e6fbdd7e488ffbca65bfa9cd4e9d16a5defbf67413be68c450b6\" pid:4723 exited_at:{seconds:1757852273 nanos:869963019}"
Sep 14 12:17:54.291624 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 14 12:17:54.670667 kubelet[2705]: E0914 12:17:54.670393 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:17:55.775910 kubelet[2705]: E0914 12:17:55.775869 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:17:56.293735 containerd[1546]: time="2025-09-14T12:17:56.293674815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a\" id:\"b192b82d96ce41e4f4b8ee88a7072b4bd02face6d95e422e42e0732e4cb34199\" pid:4890 exit_status:1 exited_at:{seconds:1757852276 nanos:292193099}"
Sep 14 12:17:57.669454 systemd-networkd[1441]: lxc_health: Link UP
Sep 14 12:17:57.671996 systemd-networkd[1441]: lxc_health: Gained carrier
Sep 14 12:17:57.777373 kubelet[2705]: E0914 12:17:57.775580 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:17:57.803420 kubelet[2705]: I0914 12:17:57.803277 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dzdpw" podStartSLOduration=8.803233595 podStartE2EDuration="8.803233595s" podCreationTimestamp="2025-09-14 12:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-14 12:17:54.689295936 +0000 UTC m=+97.605858373" watchObservedRunningTime="2025-09-14 12:17:57.803233595 +0000 UTC m=+100.719796032"
Sep 14 12:17:58.686579 kubelet[2705]: E0914 12:17:58.685278 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 14 12:17:58.707756 containerd[1546]: time="2025-09-14T12:17:58.707684362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a\" id:\"c0a298339dd05356b422c5698ea16f14c03a981d5dbd3f2962734e84244882ff\" pid:5255 exited_at:{seconds:1757852278 nanos:707086774}"
Sep 14 12:17:59.666893 systemd-networkd[1441]: lxc_health: Gained IPv6LL
Sep 14 12:18:00.896429 containerd[1546]: time="2025-09-14T12:18:00.896318939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a\" id:\"835eb1b6ec93122f649dc1e5f48bcee020bbcb4b7c310d012a77b9445508794d\" pid:5288 exited_at:{seconds:1757852280 nanos:895850446}"
Sep 14 12:18:03.100665 containerd[1546]: time="2025-09-14T12:18:03.100571052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"939d7b9de3624ea626cef84d5d09f5c76ccacca6ad94f280745dbca116ff2c5a\" id:\"a0409f24a3ef63a3c72fb8889e15df5c9a20a780aaab12ac83d2ae2eb124357b\" pid:5315 exited_at:{seconds:1757852283 nanos:99761795}"
Sep 14 12:18:03.112322 sshd[4455]: Connection closed by 139.178.89.65 port 42964
Sep 14 12:18:03.115915 sshd-session[4452]: pam_unix(sshd:session): session closed for user core
Sep 14 12:18:03.130523 systemd-logind[1520]: Session 26 logged out. Waiting for processes to exit.
Sep 14 12:18:03.131925 systemd[1]: sshd@25-143.198.142.64:22-139.178.89.65:42964.service: Deactivated successfully.
Sep 14 12:18:03.137994 systemd[1]: session-26.scope: Deactivated successfully.
Sep 14 12:18:03.143215 systemd-logind[1520]: Removed session 26.