Jan 17 00:17:08.954368 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:17:08.954408 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:17:08.954428 kernel: BIOS-provided physical RAM map:
Jan 17 00:17:08.954438 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 00:17:08.954448 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 00:17:08.954458 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 00:17:08.954471 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 17 00:17:08.954483 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 17 00:17:08.954493 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 00:17:08.954508 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 00:17:08.954519 kernel: NX (Execute Disable) protection: active
Jan 17 00:17:08.954529 kernel: APIC: Static calls initialized
Jan 17 00:17:08.954547 kernel: SMBIOS 2.8 present.
Jan 17 00:17:08.954559 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 17 00:17:08.954573 kernel: Hypervisor detected: KVM
Jan 17 00:17:08.954590 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:17:08.954607 kernel: kvm-clock: using sched offset of 3277402429 cycles
Jan 17 00:17:08.954620 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:17:08.954633 kernel: tsc: Detected 2494.140 MHz processor
Jan 17 00:17:08.954645 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:17:08.954658 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:17:08.954671 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 17 00:17:08.954683 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 00:17:08.954695 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:17:08.954713 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:17:08.954740 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 17 00:17:08.954753 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:17:08.954765 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:17:08.954777 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:17:08.954789 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 17 00:17:08.954801 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:17:08.954815 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:17:08.954827 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:17:08.954844 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:17:08.954856 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854]
Jan 17 00:17:08.954869 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0]
Jan 17 00:17:08.954880 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 17 00:17:08.954891 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4]
Jan 17 00:17:08.954902 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c]
Jan 17 00:17:08.954914 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4]
Jan 17 00:17:08.954932 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc]
Jan 17 00:17:08.954948 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:17:08.954961 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:17:08.954976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 00:17:08.954990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 00:17:08.955010 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 17 00:17:08.955026 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 17 00:17:08.955044 kernel: Zone ranges:
Jan 17 00:17:08.955057 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:17:08.955069 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 17 00:17:08.955083 kernel: Normal empty
Jan 17 00:17:08.955094 kernel: Movable zone start for each node
Jan 17 00:17:08.955107 kernel: Early memory node ranges
Jan 17 00:17:08.955119 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 00:17:08.955132 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 17 00:17:08.955144 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 17 00:17:08.955161 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:17:08.955173 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 00:17:08.955189 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 17 00:17:08.955202 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:17:08.955230 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:17:08.955243 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:17:08.955256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:17:08.955269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:17:08.955281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:17:08.955294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:17:08.955313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:17:08.955325 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:17:08.955340 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:17:08.955353 kernel: TSC deadline timer available
Jan 17 00:17:08.955367 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:17:08.955382 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:17:08.955396 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 17 00:17:08.955416 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:17:08.955431 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:17:08.955450 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:17:08.955463 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:17:08.955476 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:17:08.955488 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:17:08.955500 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 00:17:08.955517 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:17:08.955532 kernel: random: crng init done
Jan 17 00:17:08.955545 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:17:08.955564 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:17:08.955576 kernel: Fallback order for Node 0: 0
Jan 17 00:17:08.955589 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 17 00:17:08.955601 kernel: Policy zone: DMA32
Jan 17 00:17:08.955615 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:17:08.955629 kernel: Memory: 1971208K/2096612K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 125144K reserved, 0K cma-reserved)
Jan 17 00:17:08.955642 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:17:08.955656 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:17:08.955669 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:17:08.955687 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:17:08.955701 kernel: Dynamic Preempt: voluntary
Jan 17 00:17:08.955714 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:17:08.957014 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:17:08.957034 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:17:08.957048 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:17:08.957061 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:17:08.957075 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:17:08.957090 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:17:08.957112 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:17:08.957126 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:17:08.957139 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:17:08.957152 kernel: Console: colour VGA+ 80x25
Jan 17 00:17:08.957173 kernel: printk: console [tty0] enabled
Jan 17 00:17:08.957187 kernel: printk: console [ttyS0] enabled
Jan 17 00:17:08.957200 kernel: ACPI: Core revision 20230628
Jan 17 00:17:08.957213 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:17:08.957227 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:17:08.957244 kernel: x2apic enabled
Jan 17 00:17:08.957256 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:17:08.957269 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:17:08.957282 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 17 00:17:08.957297 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jan 17 00:17:08.957310 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 00:17:08.957323 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 00:17:08.957337 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:17:08.957369 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:17:08.957385 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:17:08.957399 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 00:17:08.957413 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:17:08.957432 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:17:08.957446 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 00:17:08.957461 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:17:08.957474 kernel: active return thunk: its_return_thunk
Jan 17 00:17:08.957493 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:17:08.957513 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:17:08.957528 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:17:08.957543 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:17:08.957557 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:17:08.957572 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 00:17:08.957586 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:17:08.957600 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:17:08.957615 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:17:08.957635 kernel: landlock: Up and running.
Jan 17 00:17:08.957649 kernel: SELinux: Initializing.
Jan 17 00:17:08.957663 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:17:08.957679 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:17:08.957694 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 17 00:17:08.957709 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:17:08.957744 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:17:08.957760 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:17:08.957775 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 17 00:17:08.957796 kernel: signal: max sigframe size: 1776
Jan 17 00:17:08.957810 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:17:08.957826 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:17:08.957842 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:17:08.957857 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:17:08.957871 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:17:08.957885 kernel: .... node #0, CPUs: #1
Jan 17 00:17:08.957900 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:17:08.957922 kernel: smpboot: Max logical packages: 1
Jan 17 00:17:08.957942 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jan 17 00:17:08.957957 kernel: devtmpfs: initialized
Jan 17 00:17:08.957971 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:17:08.957986 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:17:08.958000 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:17:08.958013 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:17:08.958026 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:17:08.958040 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:17:08.958054 kernel: audit: type=2000 audit(1768609028.398:1): state=initialized audit_enabled=0 res=1
Jan 17 00:17:08.958074 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:17:08.958089 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:17:08.958103 kernel: cpuidle: using governor menu
Jan 17 00:17:08.958118 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:17:08.958133 kernel: dca service started, version 1.12.1
Jan 17 00:17:08.958147 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:17:08.958159 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:17:08.958173 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:17:08.958186 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:17:08.958206 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:17:08.958221 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:17:08.958235 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:17:08.958249 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:17:08.958262 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:17:08.958276 kernel: ACPI: Interpreter enabled
Jan 17 00:17:08.958290 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:17:08.958304 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:17:08.958320 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:17:08.958339 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:17:08.958352 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 00:17:08.958366 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:17:08.958664 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:17:08.961963 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:17:08.962165 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:17:08.962189 kernel: acpiphp: Slot [3] registered
Jan 17 00:17:08.962215 kernel: acpiphp: Slot [4] registered
Jan 17 00:17:08.962231 kernel: acpiphp: Slot [5] registered
Jan 17 00:17:08.962246 kernel: acpiphp: Slot [6] registered
Jan 17 00:17:08.962261 kernel: acpiphp: Slot [7] registered
Jan 17 00:17:08.962275 kernel: acpiphp: Slot [8] registered
Jan 17 00:17:08.962288 kernel: acpiphp: Slot [9] registered
Jan 17 00:17:08.962302 kernel: acpiphp: Slot [10] registered
Jan 17 00:17:08.962317 kernel: acpiphp: Slot [11] registered
Jan 17 00:17:08.962331 kernel: acpiphp: Slot [12] registered
Jan 17 00:17:08.962345 kernel: acpiphp: Slot [13] registered
Jan 17 00:17:08.962366 kernel: acpiphp: Slot [14] registered
Jan 17 00:17:08.962380 kernel: acpiphp: Slot [15] registered
Jan 17 00:17:08.962395 kernel: acpiphp: Slot [16] registered
Jan 17 00:17:08.962410 kernel: acpiphp: Slot [17] registered
Jan 17 00:17:08.962423 kernel: acpiphp: Slot [18] registered
Jan 17 00:17:08.962438 kernel: acpiphp: Slot [19] registered
Jan 17 00:17:08.962452 kernel: acpiphp: Slot [20] registered
Jan 17 00:17:08.962467 kernel: acpiphp: Slot [21] registered
Jan 17 00:17:08.962482 kernel: acpiphp: Slot [22] registered
Jan 17 00:17:08.962501 kernel: acpiphp: Slot [23] registered
Jan 17 00:17:08.962515 kernel: acpiphp: Slot [24] registered
Jan 17 00:17:08.962530 kernel: acpiphp: Slot [25] registered
Jan 17 00:17:08.962544 kernel: acpiphp: Slot [26] registered
Jan 17 00:17:08.962558 kernel: acpiphp: Slot [27] registered
Jan 17 00:17:08.962572 kernel: acpiphp: Slot [28] registered
Jan 17 00:17:08.962586 kernel: acpiphp: Slot [29] registered
Jan 17 00:17:08.962601 kernel: acpiphp: Slot [30] registered
Jan 17 00:17:08.962616 kernel: acpiphp: Slot [31] registered
Jan 17 00:17:08.962630 kernel: PCI host bridge to bus 0000:00
Jan 17 00:17:08.962895 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:17:08.963051 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:17:08.963192 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:17:08.963353 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 00:17:08.963496 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 17 00:17:08.963632 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:17:08.965002 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:17:08.965213 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 00:17:08.965432 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 00:17:08.965601 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 17 00:17:08.965781 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 00:17:08.965940 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 00:17:08.966092 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 00:17:08.966246 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 00:17:08.966409 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 17 00:17:08.969378 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 17 00:17:08.969592 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 00:17:08.969792 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 00:17:08.969961 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 00:17:08.970166 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 00:17:08.970331 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 00:17:08.970485 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 17 00:17:08.970638 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 17 00:17:08.970900 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 17 00:17:08.971061 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:17:08.971256 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:17:08.971427 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 17 00:17:08.971585 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 17 00:17:08.971757 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 17 00:17:08.971927 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:17:08.972088 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 17 00:17:08.972240 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 17 00:17:08.972391 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 17 00:17:08.972575 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 17 00:17:08.973587 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 17 00:17:08.973797 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 17 00:17:08.973955 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 17 00:17:08.974212 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:17:08.974404 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 00:17:08.974560 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 17 00:17:08.974747 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 17 00:17:08.974947 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:17:08.975108 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 17 00:17:08.975286 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 17 00:17:08.975449 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 17 00:17:08.975633 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 00:17:08.975819 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 17 00:17:08.975993 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 17 00:17:08.976015 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:17:08.976028 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:17:08.976042 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:17:08.976056 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:17:08.976071 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:17:08.976086 kernel: iommu: Default domain type: Translated
Jan 17 00:17:08.976109 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:17:08.976124 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:17:08.976139 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:17:08.976154 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 00:17:08.976168 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 17 00:17:08.976349 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 00:17:08.976514 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 00:17:08.976681 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:17:08.976703 kernel: vgaarb: loaded
Jan 17 00:17:08.976742 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:17:08.976757 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:17:08.976772 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:17:08.976786 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:17:08.976800 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:17:08.976814 kernel: pnp: PnP ACPI init
Jan 17 00:17:08.976827 kernel: pnp: PnP ACPI: found 4 devices
Jan 17 00:17:08.976842 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:17:08.976858 kernel: NET: Registered PF_INET protocol family
Jan 17 00:17:08.976880 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:17:08.976892 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:17:08.976908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:17:08.976922 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:17:08.976937 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:17:08.976951 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:17:08.976965 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:17:08.976980 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:17:08.977001 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:17:08.977017 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:17:08.977189 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:17:08.977329 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:17:08.977465 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:17:08.977606 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 00:17:08.977850 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 17 00:17:08.978063 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 00:17:08.979858 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:17:08.979901 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 00:17:08.980067 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 30505 usecs
Jan 17 00:17:08.980090 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:17:08.980104 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:17:08.980119 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 17 00:17:08.980134 kernel: Initialise system trusted keyrings
Jan 17 00:17:08.980150 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 00:17:08.980166 kernel: Key type asymmetric registered
Jan 17 00:17:08.980190 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:17:08.980204 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:17:08.980219 kernel: io scheduler mq-deadline registered
Jan 17 00:17:08.980234 kernel: io scheduler kyber registered
Jan 17 00:17:08.980249 kernel: io scheduler bfq registered
Jan 17 00:17:08.980264 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:17:08.980281 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 00:17:08.980296 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 00:17:08.980312 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 00:17:08.980333 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:17:08.980350 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:17:08.980365 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:17:08.980380 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:17:08.980395 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:17:08.980596 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 00:17:08.980621 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:17:08.980801 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 00:17:08.980953 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T00:17:08 UTC (1768609028)
Jan 17 00:17:08.981095 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 17 00:17:08.981116 kernel: intel_pstate: CPU model not supported
Jan 17 00:17:08.981132 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:17:08.981147 kernel: Segment Routing with IPv6
Jan 17 00:17:08.981162 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:17:08.981177 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:17:08.981192 kernel: Key type dns_resolver registered
Jan 17 00:17:08.981208 kernel: IPI shorthand broadcast: enabled
Jan 17 00:17:08.981232 kernel: sched_clock: Marking stable (1067003905, 138366695)->(1228727365, -23356765)
Jan 17 00:17:08.981247 kernel: registered taskstats version 1
Jan 17 00:17:08.981263 kernel: Loading compiled-in X.509 certificates
Jan 17 00:17:08.981279 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:17:08.981293 kernel: Key type .fscrypt registered
Jan 17 00:17:08.981308 kernel: Key type fscrypt-provisioning registered
Jan 17 00:17:08.981324 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:17:08.981339 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:17:08.981355 kernel: ima: No architecture policies found
Jan 17 00:17:08.981376 kernel: clk: Disabling unused clocks
Jan 17 00:17:08.981392 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:17:08.981407 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:17:08.981421 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:17:08.981461 kernel: Run /init as init process
Jan 17 00:17:08.981478 kernel: with arguments:
Jan 17 00:17:08.981491 kernel: /init
Jan 17 00:17:08.981506 kernel: with environment:
Jan 17 00:17:08.981519 kernel: HOME=/
Jan 17 00:17:08.981536 kernel: TERM=linux
Jan 17 00:17:08.981553 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:17:08.981573 systemd[1]: Detected virtualization kvm.
Jan 17 00:17:08.981601 systemd[1]: Detected architecture x86-64.
Jan 17 00:17:08.981616 systemd[1]: Running in initrd.
Jan 17 00:17:08.981631 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:17:08.981647 systemd[1]: Hostname set to .
Jan 17 00:17:08.981669 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:17:08.981685 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:17:08.981703 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:17:08.983772 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:17:08.983810 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:17:08.983825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:17:08.983840 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:17:08.983855 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:17:08.983884 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:17:08.983899 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:17:08.983915 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:17:08.983932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:17:08.983949 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:17:08.983964 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:17:08.983978 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:17:08.983997 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:17:08.984013 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:17:08.984028 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:17:08.984043 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:17:08.984057 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:17:08.984072 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:17:08.984091 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:17:08.984107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:17:08.984123 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:17:08.984140 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:17:08.984156 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:17:08.984172 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:17:08.984190 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:17:08.984206 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:17:08.984228 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:17:08.984296 systemd-journald[184]: Collecting audit messages is disabled.
Jan 17 00:17:08.984337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:17:08.984354 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:17:08.984378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:17:08.984396 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:17:08.984415 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:17:08.984435 systemd-journald[184]: Journal started
Jan 17 00:17:08.984473 systemd-journald[184]: Runtime Journal (/run/log/journal/528d93b2840c4510800e3b7e86e0b205) is 4.9M, max 39.3M, 34.4M free.
Jan 17 00:17:08.987943 systemd-modules-load[186]: Inserted module 'overlay'
Jan 17 00:17:09.028757 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:17:09.028860 kernel: Bridge firewalling registered
Jan 17 00:17:09.028870 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 17 00:17:09.059006 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:17:09.059061 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:17:09.059806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:17:09.073104 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:17:09.076932 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:17:09.078223 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:17:09.087119 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:17:09.097915 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:17:09.113958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:17:09.115648 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:17:09.117079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:17:09.118376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:17:09.124996 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:17:09.128945 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:17:09.141289 dracut-cmdline[218]: dracut-dracut-053
Jan 17 00:17:09.144094 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:17:09.167361 systemd-resolved[221]: Positive Trust Anchors:
Jan 17 00:17:09.167376 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:17:09.167412 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:17:09.170437 systemd-resolved[221]: Defaulting to hostname 'linux'.
Jan 17 00:17:09.171705 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:17:09.173091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:17:09.240791 kernel: SCSI subsystem initialized
Jan 17 00:17:09.251749 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:17:09.263770 kernel: iscsi: registered transport (tcp)
Jan 17 00:17:09.287193 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:17:09.287383 kernel: QLogic iSCSI HBA Driver
Jan 17 00:17:09.339655 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:17:09.343965 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:17:09.374477 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:17:09.374556 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:17:09.376150 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:17:09.419758 kernel: raid6: avx2x4 gen() 18062 MB/s
Jan 17 00:17:09.436761 kernel: raid6: avx2x2 gen() 17868 MB/s
Jan 17 00:17:09.454930 kernel: raid6: avx2x1 gen() 13253 MB/s
Jan 17 00:17:09.455075 kernel: raid6: using algorithm avx2x4 gen() 18062 MB/s
Jan 17 00:17:09.472948 kernel: raid6: .... xor() 9312 MB/s, rmw enabled
Jan 17 00:17:09.473077 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:17:09.495784 kernel: xor: automatically using best checksumming function avx
Jan 17 00:17:09.657779 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:17:09.674276 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:17:09.681988 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:17:09.707312 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 17 00:17:09.713206 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:17:09.722366 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:17:09.744123 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jan 17 00:17:09.789202 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:17:09.803082 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:17:09.864047 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:17:09.869659 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:17:09.903281 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:17:09.906712 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:17:09.908216 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:17:09.910272 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:17:09.917321 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:17:09.940012 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:17:09.945752 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:17:09.953756 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 17 00:17:09.976852 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 17 00:17:09.987921 kernel: libata version 3.00 loaded.
Jan 17 00:17:10.003660 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:17:10.003767 kernel: GPT:9289727 != 125829119
Jan 17 00:17:10.003786 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:17:10.003806 kernel: GPT:9289727 != 125829119
Jan 17 00:17:10.003822 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:17:10.003844 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:17:10.006555 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 17 00:17:10.008749 kernel: scsi host1: ata_piix
Jan 17 00:17:10.012412 kernel: scsi host2: ata_piix
Jan 17 00:17:10.012738 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 17 00:17:10.012756 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 17 00:17:10.016753 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 17 00:17:10.019766 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jan 17 00:17:10.034761 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:17:10.047653 kernel: ACPI: bus type USB registered
Jan 17 00:17:10.047746 kernel: usbcore: registered new interface driver usbfs
Jan 17 00:17:10.048582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:17:10.050588 kernel: usbcore: registered new interface driver hub
Jan 17 00:17:10.048744 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:17:10.052321 kernel: usbcore: registered new device driver usb
Jan 17 00:17:10.052147 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:17:10.052705 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:17:10.052938 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:17:10.053882 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:17:10.071445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:17:10.145121 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:17:10.149135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:17:10.192769 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:17:10.195173 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:17:10.194864 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:17:10.237411 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Jan 17 00:17:10.245795 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (448)
Jan 17 00:17:10.256484 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 00:17:10.268588 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 00:17:10.280047 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 17 00:17:10.280376 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 17 00:17:10.281559 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 17 00:17:10.284145 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 17 00:17:10.284424 kernel: hub 1-0:1.0: USB hub found
Jan 17 00:17:10.283416 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:17:10.286984 kernel: hub 1-0:1.0: 2 ports detected
Jan 17 00:17:10.290525 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 00:17:10.292229 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 00:17:10.308143 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:17:10.316002 disk-uuid[550]: Primary Header is updated.
Jan 17 00:17:10.316002 disk-uuid[550]: Secondary Entries is updated.
Jan 17 00:17:10.316002 disk-uuid[550]: Secondary Header is updated.
Jan 17 00:17:10.325761 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:17:10.334769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:17:11.334777 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:17:11.336078 disk-uuid[551]: The operation has completed successfully.
Jan 17 00:17:11.395921 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:17:11.396080 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:17:11.403114 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:17:11.422751 sh[562]: Success
Jan 17 00:17:11.441809 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:17:11.513154 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:17:11.515897 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:17:11.520276 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:17:11.550917 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:17:11.551006 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:17:11.553793 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:17:11.553863 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:17:11.555311 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:17:11.565204 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:17:11.567338 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:17:11.571942 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:17:11.585565 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:17:11.603092 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:17:11.603202 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:17:11.603224 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:17:11.608783 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:17:11.621084 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:17:11.623525 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:17:11.631625 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:17:11.637979 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:17:11.726886 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:17:11.737014 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:17:11.781022 systemd-networkd[746]: lo: Link UP
Jan 17 00:17:11.781034 systemd-networkd[746]: lo: Gained carrier
Jan 17 00:17:11.784429 ignition[663]: Ignition 2.19.0
Jan 17 00:17:11.785170 systemd-networkd[746]: Enumeration completed
Jan 17 00:17:11.784442 ignition[663]: Stage: fetch-offline
Jan 17 00:17:11.785317 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:17:11.784495 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:17:11.786006 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 00:17:11.784506 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:17:11.786010 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 17 00:17:11.784624 ignition[663]: parsed url from cmdline: ""
Jan 17 00:17:11.787022 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:17:11.784628 ignition[663]: no config URL provided
Jan 17 00:17:11.787026 systemd-networkd[746]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:17:11.784634 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:17:11.788022 systemd[1]: Reached target network.target - Network.
Jan 17 00:17:11.784643 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:17:11.788794 systemd-networkd[746]: eth0: Link UP
Jan 17 00:17:11.784649 ignition[663]: failed to fetch config: resource requires networking
Jan 17 00:17:11.788801 systemd-networkd[746]: eth0: Gained carrier
Jan 17 00:17:11.784870 ignition[663]: Ignition finished successfully
Jan 17 00:17:11.788815 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 00:17:11.789961 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:17:11.793580 systemd-networkd[746]: eth1: Link UP
Jan 17 00:17:11.793584 systemd-networkd[746]: eth1: Gained carrier
Jan 17 00:17:11.793598 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:17:11.799980 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:17:11.807832 systemd-networkd[746]: eth1: DHCPv4 address 10.124.0.35/20 acquired from 169.254.169.253
Jan 17 00:17:11.810842 systemd-networkd[746]: eth0: DHCPv4 address 146.190.166.4/20, gateway 146.190.160.1 acquired from 169.254.169.253
Jan 17 00:17:11.819469 ignition[755]: Ignition 2.19.0
Jan 17 00:17:11.819504 ignition[755]: Stage: fetch
Jan 17 00:17:11.819837 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:17:11.819856 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:17:11.820011 ignition[755]: parsed url from cmdline: ""
Jan 17 00:17:11.820017 ignition[755]: no config URL provided
Jan 17 00:17:11.820026 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:17:11.820040 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:17:11.820083 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 17 00:17:11.837373 ignition[755]: GET result: OK
Jan 17 00:17:11.837520 ignition[755]: parsing config with SHA512: fa802fee69248640cb7836c8ea36706e7106ec4108db49e82a85d30474b6a9427a955035b712880b452fe120d8fe15c697e16162988798f1c0464691b1adb569
Jan 17 00:17:11.842004 unknown[755]: fetched base config from "system"
Jan 17 00:17:11.842015 unknown[755]: fetched base config from "system"
Jan 17 00:17:11.842454 ignition[755]: fetch: fetch complete
Jan 17 00:17:11.842022 unknown[755]: fetched user config from "digitalocean"
Jan 17 00:17:11.842459 ignition[755]: fetch: fetch passed
Jan 17 00:17:11.845085 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:17:11.842507 ignition[755]: Ignition finished successfully
Jan 17 00:17:11.850969 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:17:11.878022 ignition[762]: Ignition 2.19.0
Jan 17 00:17:11.878034 ignition[762]: Stage: kargs
Jan 17 00:17:11.878216 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:17:11.878227 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:17:11.879333 ignition[762]: kargs: kargs passed
Jan 17 00:17:11.879404 ignition[762]: Ignition finished successfully
Jan 17 00:17:11.882122 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:17:11.889029 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:17:11.907646 ignition[768]: Ignition 2.19.0
Jan 17 00:17:11.907663 ignition[768]: Stage: disks
Jan 17 00:17:11.907971 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:17:11.907989 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:17:11.909503 ignition[768]: disks: disks passed
Jan 17 00:17:11.909587 ignition[768]: Ignition finished successfully
Jan 17 00:17:11.912184 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:17:11.917374 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:17:11.917943 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:17:11.918973 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:17:11.919989 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:17:11.920864 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:17:11.926992 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:17:11.954749 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:17:11.958275 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:17:11.962959 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:17:12.071025 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:17:12.071658 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:17:12.072842 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:17:12.079933 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:17:12.084905 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:17:12.088048 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 17 00:17:12.096753 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (784)
Jan 17 00:17:12.097041 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:17:12.098997 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:17:12.101769 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:17:12.099515 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:17:12.106432 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:17:12.106515 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:17:12.106536 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:17:12.119149 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:17:12.120918 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:17:12.130093 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:17:12.202745 coreos-metadata[786]: Jan 17 00:17:12.202 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 00:17:12.207662 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:17:12.215953 coreos-metadata[786]: Jan 17 00:17:12.215 INFO Fetch successful
Jan 17 00:17:12.219794 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:17:12.220989 coreos-metadata[787]: Jan 17 00:17:12.220 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 00:17:12.224207 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 17 00:17:12.224891 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 17 00:17:12.230939 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:17:12.233483 coreos-metadata[787]: Jan 17 00:17:12.233 INFO Fetch successful
Jan 17 00:17:12.239315 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:17:12.240437 coreos-metadata[787]: Jan 17 00:17:12.239 INFO wrote hostname ci-4081.3.6-n-2808572c0d to /sysroot/etc/hostname
Jan 17 00:17:12.241738 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:17:12.347547 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:17:12.351890 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:17:12.353954 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:17:12.368791 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:17:12.386018 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:17:12.411066 ignition[906]: INFO : Ignition 2.19.0
Jan 17 00:17:12.411066 ignition[906]: INFO : Stage: mount
Jan 17 00:17:12.412433 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:17:12.412433 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:17:12.414557 ignition[906]: INFO : mount: mount passed
Jan 17 00:17:12.414557 ignition[906]: INFO : Ignition finished successfully
Jan 17 00:17:12.414823 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:17:12.429057 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:17:12.549279 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:17:12.556096 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:17:12.567954 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (917)
Jan 17 00:17:12.568014 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:17:12.570080 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:17:12.572443 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:17:12.577771 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:17:12.578401 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:17:12.603438 ignition[933]: INFO : Ignition 2.19.0
Jan 17 00:17:12.605782 ignition[933]: INFO : Stage: files
Jan 17 00:17:12.605782 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:17:12.605782 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 00:17:12.605782 ignition[933]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:17:12.608393 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:17:12.608393 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:17:12.610907 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:17:12.612020 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:17:12.613327 unknown[933]: wrote ssh authorized keys file for user: core
Jan 17 00:17:12.614194 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:17:12.615377 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:17:12.616137 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 00:17:12.665347 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:17:12.731787 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:17:12.731787 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:17:12.733898 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 17 00:17:12.933752 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 00:17:13.035425 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:17:13.036507 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:17:13.036507 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:17:13.036507 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:17:13.036507 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:17:13.036507 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:17:13.036507 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:17:13.036507 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:17:13.036507 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:17:13.047859 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:17:13.047859 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:17:13.047859 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:17:13.047859 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:17:13.047859 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:17:13.047859 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 17 00:17:13.390783 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 00:17:13.410958 systemd-networkd[746]: eth0: Gained IPv6LL
Jan 17 00:17:13.796744 systemd-networkd[746]: eth1: Gained IPv6LL
Jan 17 00:17:13.832929 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:17:13.834054 ignition[933]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 17 00:17:13.834643 ignition[933]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:17:13.835494 ignition[933]: INFO : files: op(c): op(d): [finished] writing unit
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:17:13.835494 ignition[933]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 00:17:13.835494 ignition[933]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:17:13.835494 ignition[933]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:17:13.835494 ignition[933]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:17:13.835494 ignition[933]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:17:13.835494 ignition[933]: INFO : files: files passed Jan 17 00:17:13.842522 ignition[933]: INFO : Ignition finished successfully Jan 17 00:17:13.836956 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:17:13.845971 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:17:13.848906 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:17:13.852893 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:17:13.853011 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:17:13.873104 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:17:13.873104 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:17:13.875282 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:17:13.878817 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:17:13.880144 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:17:13.889486 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:17:13.937405 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:17:13.937589 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:17:13.939507 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:17:13.940171 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:17:13.941330 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:17:13.948006 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:17:13.967605 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:17:13.977006 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:17:13.990584 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:17:13.992026 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:17:13.993359 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:17:13.993965 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:17:13.994111 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:17:13.995788 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jan 17 00:17:13.997001 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:17:13.998008 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:17:13.998960 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:17:14.000152 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:17:14.001190 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:17:14.002281 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:17:14.003426 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:17:14.004632 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:17:14.005785 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:17:14.006697 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:17:14.006899 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:17:14.008102 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:17:14.008867 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:17:14.009875 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:17:14.010158 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:17:14.011080 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:17:14.011318 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:17:14.012800 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:17:14.012991 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:17:14.014460 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:17:14.014625 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:17:14.015768 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:17:14.015989 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:17:14.023167 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:17:14.023690 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:17:14.023906 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:17:14.028940 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:17:14.029551 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:17:14.032182 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:17:14.033126 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:17:14.034912 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:17:14.046086 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:17:14.047802 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 17 00:17:14.059761 ignition[987]: INFO : Ignition 2.19.0 Jan 17 00:17:14.059761 ignition[987]: INFO : Stage: umount Jan 17 00:17:14.059761 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:17:14.059761 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 00:17:14.069821 ignition[987]: INFO : umount: umount passed Jan 17 00:17:14.069821 ignition[987]: INFO : Ignition finished successfully Jan 17 00:17:14.069001 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:17:14.070442 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:17:14.073171 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:17:14.092520 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:17:14.092665 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:17:14.097095 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:17:14.097172 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:17:14.098214 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:17:14.098267 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:17:14.103486 systemd[1]: Stopped target network.target - Network. Jan 17 00:17:14.104399 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:17:14.104515 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:17:14.105671 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:17:14.119683 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:17:14.122857 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:17:14.123614 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:17:14.125379 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:17:14.126554 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:17:14.126612 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:17:14.128017 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:17:14.128069 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:17:14.129003 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:17:14.129087 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:17:14.129966 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:17:14.130021 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:17:14.131754 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:17:14.132878 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:17:14.134405 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:17:14.134572 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:17:14.136614 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:17:14.136799 systemd-networkd[746]: eth0: DHCPv6 lease lost Jan 17 00:17:14.137305 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:17:14.140875 systemd-networkd[746]: eth1: DHCPv6 lease lost Jan 17 00:17:14.143865 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:17:14.144171 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 17 00:17:14.145772 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:17:14.145928 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:17:14.151192 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:17:14.151274 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:17:14.157922 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:17:14.158439 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:17:14.158527 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:17:14.159436 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:17:14.159522 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:17:14.160384 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:17:14.160471 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:17:14.164218 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:17:14.164317 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:17:14.166039 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:17:14.187585 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:17:14.188631 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:17:14.189979 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:17:14.190120 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:17:14.192330 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:17:14.192428 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:17:14.193741 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:17:14.193803 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:17:14.194908 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:17:14.194990 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:17:14.196145 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:17:14.196222 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:17:14.197238 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:17:14.197310 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:17:14.217120 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:17:14.219087 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:17:14.219364 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:17:14.219921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:17:14.219972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:17:14.227302 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:17:14.227470 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:17:14.229695 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:17:14.242038 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 17 00:17:14.253264 systemd[1]: Switching root. Jan 17 00:17:14.302819 systemd-journald[184]: Journal stopped Jan 17 00:17:15.617850 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 17 00:17:15.617975 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:17:15.618004 kernel: SELinux: policy capability open_perms=1 Jan 17 00:17:15.618021 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:17:15.618038 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:17:15.618068 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:17:15.618085 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:17:15.618103 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:17:15.618132 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:17:15.618152 kernel: audit: type=1403 audit(1768609034.450:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:17:15.618194 systemd[1]: Successfully loaded SELinux policy in 42.811ms. Jan 17 00:17:15.618231 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.543ms. Jan 17 00:17:15.618253 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:17:15.618273 systemd[1]: Detected virtualization kvm. Jan 17 00:17:15.618291 systemd[1]: Detected architecture x86-64. Jan 17 00:17:15.618307 systemd[1]: Detected first boot. Jan 17 00:17:15.618339 systemd[1]: Hostname set to <ci-4081.3.6-n-2808572c0d>. Jan 17 00:17:15.618365 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:17:15.618388 zram_generator::config[1030]: No configuration found. Jan 17 00:17:15.618417 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:17:15.618439 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:17:15.618458 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:17:15.618486 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:17:15.618527 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:17:15.618553 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:17:15.618574 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:17:15.622829 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:17:15.622906 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:17:15.622934 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:17:15.622959 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:17:15.622982 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:17:15.623003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:17:15.623027 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:17:15.623050 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:17:15.623085 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
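The "Initializing machine ID from VM UUID" record above refers to systemd seeding /etc/machine-id from the SMBIOS product UUID that KVM exposes to the guest. A minimal look at the value it reads; systemd's actual derivation is more involved, and the sysfs file is normally readable by root only:

```python
def vm_uuid() -> str:
    # SMBIOS product UUID as exposed by the hypervisor (requires root).
    with open("/sys/class/dmi/id/product_uuid") as f:
        return f.read().strip()
```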
Jan 17 00:17:15.623195 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:17:15.623221 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:17:15.623243 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:17:15.623266 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:17:15.623288 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:17:15.623311 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:17:15.623341 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:17:15.623380 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:17:15.623404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:17:15.623427 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:17:15.623448 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:17:15.623470 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:17:15.623492 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:17:15.623514 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:17:15.623536 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:17:15.623575 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:17:15.623601 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:17:15.623623 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:17:15.623647 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:17:15.623669 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:17:15.623688 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:17:15.623707 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:17:15.623743 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:17:15.623768 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:17:15.623785 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:17:15.623832 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:17:15.623854 systemd[1]: Reached target machines.target - Containers. Jan 17 00:17:15.623876 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:17:15.623897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:17:15.623918 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:17:15.623939 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:17:15.623961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:17:15.623983 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:17:15.624009 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 17 00:17:15.624029 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:17:15.624050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:17:15.624085 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:17:15.624104 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:17:15.624124 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:17:15.624149 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:17:15.624174 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:17:15.624200 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:17:15.624220 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:17:15.624242 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:17:15.624263 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:17:15.624282 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:17:15.624303 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:17:15.624323 systemd[1]: Stopped verity-setup.service. Jan 17 00:17:15.624360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:17:15.624380 kernel: fuse: init (API version 7.39) Jan 17 00:17:15.624407 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:17:15.624429 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:17:15.624451 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:17:15.624472 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:17:15.624492 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:17:15.624515 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:17:15.624532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:17:15.624552 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:17:15.624573 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:17:15.624595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:17:15.624637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:17:15.624662 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:17:15.624682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:17:15.624703 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:17:15.628787 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:17:15.628838 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:17:15.628862 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:17:15.628883 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:17:15.628905 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:17:15.628984 systemd-journald[1100]: Collecting audit messages is disabled. 
Jan 17 00:17:15.629026 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:17:15.629048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:17:15.629068 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:17:15.629085 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:17:15.629105 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:17:15.629142 systemd-journald[1100]: Journal started Jan 17 00:17:15.629189 systemd-journald[1100]: Runtime Journal (/run/log/journal/528d93b2840c4510800e3b7e86e0b205) is 4.9M, max 39.3M, 34.4M free. Jan 17 00:17:15.211150 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:17:15.228025 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:17:15.228553 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:17:15.637843 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:17:15.647754 kernel: loop: module loaded Jan 17 00:17:15.653013 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:17:15.653136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:17:15.661746 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:17:15.669762 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:17:15.669876 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:17:15.681871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:17:15.689851 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:17:15.689963 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:17:15.693775 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:17:15.693965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:17:15.695039 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:17:15.695949 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:17:15.696656 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:17:15.713093 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:17:15.720085 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:17:15.748216 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:17:15.762082 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:17:15.777007 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:17:15.778190 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:17:15.789689 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 17 00:17:15.806118 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 00:17:15.829341 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:17:15.835357 kernel: ACPI: bus type drm_connector registered Jan 17 00:17:15.845001 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:17:15.856232 systemd-journald[1100]: Time spent on flushing to /var/log/journal/528d93b2840c4510800e3b7e86e0b205 is 57.682ms for 992 entries. Jan 17 00:17:15.856232 systemd-journald[1100]: System Journal (/var/log/journal/528d93b2840c4510800e3b7e86e0b205) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:17:15.925597 systemd-journald[1100]: Received client request to flush runtime journal. Jan 17 00:17:15.925675 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:17:15.870423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:17:15.871947 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:17:15.872168 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:17:15.907014 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:17:15.919948 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:17:15.922975 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:17:15.937898 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:17:15.940390 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:17:15.956875 kernel: loop1: detected capacity change from 0 to 8 Jan 17 00:17:15.964059 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:17:15.994767 kernel: loop2: detected capacity change from 0 to 219144 Jan 17 00:17:16.058354 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 17 00:17:16.058377 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 17 00:17:16.077375 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 00:17:16.078059 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:17:16.128755 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 00:17:16.156757 kernel: loop5: detected capacity change from 0 to 8 Jan 17 00:17:16.160795 kernel: loop6: detected capacity change from 0 to 219144 Jan 17 00:17:16.198756 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 00:17:16.235122 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 17 00:17:16.238059 (sd-merge)[1175]: Merged extensions into '/usr'. Jan 17 00:17:16.246881 systemd[1]: Reloading requested from client PID 1132 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:17:16.246902 systemd[1]: Reloading... Jan 17 00:17:16.442820 zram_generator::config[1201]: No configuration found. Jan 17 00:17:16.595402 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:17:16.663940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:17:16.723006 systemd[1]: Reloading finished in 475 ms. 
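The sd-merge records above show systemd-sysext activating the extension images staged earlier (including the kubernetes.raw symlink Ignition wrote under /etc/extensions) and merging them into /usr. A sketch that merely enumerates candidate images in that one hierarchy; the merge itself is an overlayfs mount performed by systemd-sysext, which also consults /run/extensions and /var/lib/extensions:

```python
from pathlib import Path

def list_sysext_images(root: str = "/") -> list[str]:
    # Candidate raw extension images; symlinks like kubernetes.raw count too.
    ext_dir = Path(root, "etc/extensions")
    if not ext_dir.is_dir():
        return []
    return sorted(p.name for p in ext_dir.glob("*.raw"))
```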
Jan 17 00:17:16.754079 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:17:16.755712 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:17:16.769033 systemd[1]: Starting ensure-sysext.service... Jan 17 00:17:16.773108 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:17:16.782126 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:17:16.782144 systemd[1]: Reloading... Jan 17 00:17:16.842112 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:17:16.842464 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:17:16.843539 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:17:16.847013 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jan 17 00:17:16.847147 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jan 17 00:17:16.854705 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:17:16.854735 systemd-tmpfiles[1245]: Skipping /boot Jan 17 00:17:16.885458 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:17:16.885474 systemd-tmpfiles[1245]: Skipping /boot Jan 17 00:17:16.899763 zram_generator::config[1271]: No configuration found. Jan 17 00:17:17.098488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:17:17.184580 systemd[1]: Reloading finished in 401 ms. Jan 17 00:17:17.204398 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:17:17.210618 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:17:17.224090 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:17:17.237833 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:17:17.241131 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:17:17.252129 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:17:17.262990 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:17:17.267361 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:17:17.278753 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:17:17.279187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:17:17.285198 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:17:17.290204 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:17:17.297445 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:17:17.299092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 17 00:17:17.299323 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:17:17.301975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:17:17.302815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:17:17.312699 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:17:17.313666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:17:17.325167 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:17:17.327049 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:17:17.327312 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:17:17.341225 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:17:17.346851 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:17:17.347266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:17:17.361177 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:17:17.362096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:17:17.362349 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:17:17.369069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:17:17.369406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:17:17.377530 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Jan 17 00:17:17.378795 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:17:17.381509 systemd[1]: Finished ensure-sysext.service. Jan 17 00:17:17.398908 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:17:17.410093 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:17:17.413500 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:17:17.414845 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:17:17.418547 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:17:17.420475 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:17:17.433587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:17:17.433934 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:17:17.442647 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 17 00:17:17.442999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:17:17.444334 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:17:17.454045 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:17:17.460749 augenrules[1353]: No rules Jan 17 00:17:17.468062 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:17:17.468744 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:17:17.469684 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:17:17.480199 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:17:17.514280 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:17:17.518966 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:17:17.660895 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 17 00:17:17.661617 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:17:17.661876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:17:17.670020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:17:17.674010 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:17:17.686136 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:17:17.686974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:17:17.687053 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:17:17.687098 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:17:17.687439 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:17:17.688813 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:17:17.701662 systemd-networkd[1368]: lo: Link UP Jan 17 00:17:17.725022 kernel: ISO 9660 Extensions: RRIP_1991A Jan 17 00:17:17.701672 systemd-networkd[1368]: lo: Gained carrier Jan 17 00:17:17.707157 systemd-networkd[1368]: Enumeration completed Jan 17 00:17:17.707365 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:17:17.722680 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:17:17.727400 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 17 00:17:17.739286 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:17:17.739553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:17:17.740711 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:17:17.740901 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:17:17.742523 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 00:17:17.751716 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:17:17.752688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:17:17.754143 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:17:17.761415 systemd-resolved[1320]: Positive Trust Anchors: Jan 17 00:17:17.761438 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:17:17.761476 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:17:17.768109 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1373) Jan 17 00:17:17.769237 systemd-resolved[1320]: Using system hostname 'ci-4081.3.6-n-2808572c0d'. Jan 17 00:17:17.773893 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:17:17.774716 systemd[1]: Reached target network.target - Network. Jan 17 00:17:17.775375 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:17:17.779320 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:17:17.868762 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:17:17.875585 systemd-networkd[1368]: eth1: Configuring with /run/systemd/network/10-3a:fa:3b:9c:7c:dd.network. Jan 17 00:17:17.878350 systemd-networkd[1368]: eth1: Link UP Jan 17 00:17:17.878359 systemd-networkd[1368]: eth1: Gained carrier Jan 17 00:17:17.880793 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:17:17.882136 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 17 00:17:17.893673 systemd-networkd[1368]: eth0: Configuring with /run/systemd/network/10-e2:c9:7b:d5:33:81.network. Jan 17 00:17:17.895008 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 17 00:17:17.895986 systemd-networkd[1368]: eth0: Link UP Jan 17 00:17:17.895994 systemd-networkd[1368]: eth0: Gained carrier Jan 17 00:17:17.902141 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jan 17 00:17:17.929824 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:17:17.939029 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 00:17:17.966270 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:17:17.977234 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:17:18.021886 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:17:18.022810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:17:18.027526 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 17 00:17:18.044776 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 00:17:18.047755 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 00:17:18.053697 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:17:18.053809 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 00:17:18.053827 kernel: [drm] features: -context_init Jan 17 00:17:18.055757 kernel: [drm] number of scanouts: 1 Jan 17 00:17:18.055840 kernel: [drm] number of cap sets: 0 Jan 17 00:17:18.060886 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 00:17:18.069793 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 00:17:18.069885 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:17:18.082842 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 00:17:18.102759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:17:18.105121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:17:18.114157 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:17:18.225889 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:17:18.292171 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:17:18.318527 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:17:18.326060 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:17:18.358417 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:17:18.388303 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:17:18.389229 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:17:18.389392 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:17:18.389611 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:17:18.389710 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:17:18.392452 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:17:18.392646 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:17:18.392739 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:17:18.392804 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:17:18.392832 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:17:18.392881 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:17:18.393574 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:17:18.395751 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:17:18.403436 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:17:18.405500 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:17:18.407907 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:17:18.408528 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:17:18.410864 systemd[1]: Reached target basic.target - Basic System. 
Jan 17 00:17:18.411575 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:17:18.411607 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:17:18.418956 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:17:18.424986 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:17:18.431778 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:17:18.442001 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:17:18.446965 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:17:18.453042 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:17:18.456621 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:17:18.467171 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:17:18.470753 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:17:18.479036 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:17:18.486013 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:17:18.498164 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:17:18.499519 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:17:18.506751 jq[1436]: false Jan 17 00:17:18.502029 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:17:18.509775 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:17:18.523884 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:17:18.529473 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:17:18.539371 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found loop4
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found loop5
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found loop6
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found loop7
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found vda
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found vda1
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found vda2
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found vda3
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found usr
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found vda4
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found vda6
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found vda7
Jan 17 00:17:18.548923 extend-filesystems[1437]: Found vda9
Jan 17 00:17:18.548923 extend-filesystems[1437]: Checking size of /dev/vda9
Jan 17 00:17:18.721497 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 17 00:17:18.721577 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 17 00:17:18.721601 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1373)
Jan 17 00:17:18.540600 dbus-daemon[1435]: [system] SELinux support is enabled
Jan 17 00:17:18.539608 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 00:17:18.740408 update_engine[1445]: I20260117 00:17:18.718127 1445 main.cc:92] Flatcar Update Engine starting
Jan 17 00:17:18.740408 update_engine[1445]: I20260117 00:17:18.733788 1445 update_check_scheduler.cc:74] Next update check in 5m25s
Jan 17 00:17:18.740676 extend-filesystems[1437]: Resized partition /dev/vda9
Jan 17 00:17:18.745668 coreos-metadata[1434]: Jan 17 00:17:18.728 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 00:17:18.542657 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 00:17:18.753461 jq[1448]: true
Jan 17 00:17:18.753712 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024)
Jan 17 00:17:18.753712 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 17 00:17:18.753712 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 17 00:17:18.753712 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 17 00:17:18.561822 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 00:17:18.785884 coreos-metadata[1434]: Jan 17 00:17:18.766 INFO Fetch successful
Jan 17 00:17:18.786447 extend-filesystems[1437]: Resized filesystem in /dev/vda9
Jan 17 00:17:18.786447 extend-filesystems[1437]: Found vdb
Jan 17 00:17:18.561889 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 00:17:18.565394 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 00:17:18.565482 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 17 00:17:18.796409 tar[1450]: linux-amd64/LICENSE
Jan 17 00:17:18.796409 tar[1450]: linux-amd64/helm
Jan 17 00:17:18.565506 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 00:17:18.805338 jq[1465]: true
Jan 17 00:17:18.625213 systemd-logind[1444]: New seat seat0.
Jan 17 00:17:18.630533 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 17 00:17:18.630555 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 00:17:18.633039 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 00:17:18.646221 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 00:17:18.646454 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 00:17:18.661126 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 00:17:18.661374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 00:17:18.707254 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 00:17:18.708875 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 00:17:18.732548 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 00:17:18.740241 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 00:17:18.753310 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 00:17:18.887307 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 00:17:18.891682 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 00:17:18.943266 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 00:17:18.945369 bash[1499]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:17:18.946188 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 00:17:18.957550 systemd[1]: Starting sshkeys.service...
Jan 17 00:17:18.980925 systemd-networkd[1368]: eth0: Gained IPv6LL
Jan 17 00:17:18.981377 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Jan 17 00:17:18.985936 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:17:18.988700 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:17:19.007019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:17:19.029091 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 00:17:19.040160 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 00:17:19.054526 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 00:17:19.063889 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 00:17:19.075266 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 00:17:19.113223 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 00:17:19.131139 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 00:17:19.138536 coreos-metadata[1525]: Jan 17 00:17:19.136 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 00:17:19.139609 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 00:17:19.140924 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 00:17:19.159138 coreos-metadata[1525]: Jan 17 00:17:19.159 INFO Fetch successful
Jan 17 00:17:19.160343 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 00:17:19.185356 unknown[1525]: wrote ssh authorized keys file for user: core
Jan 17 00:17:19.231189 update-ssh-keys[1540]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:17:19.232540 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 00:17:19.240450 systemd[1]: Finished sshkeys.service.
Jan 17 00:17:19.252313 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 00:17:19.267490 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 00:17:19.280214 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 00:17:19.282782 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 00:17:19.361127 containerd[1466]: time="2026-01-17T00:17:19.360620249Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 00:17:19.433557 containerd[1466]: time="2026-01-17T00:17:19.433452802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:17:19.437042 containerd[1466]: time="2026-01-17T00:17:19.436600526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:17:19.437042 containerd[1466]: time="2026-01-17T00:17:19.436665384Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 00:17:19.437042 containerd[1466]: time="2026-01-17T00:17:19.436697277Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 00:17:19.437042 containerd[1466]: time="2026-01-17T00:17:19.436964010Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 00:17:19.437364 containerd[1466]: time="2026-01-17T00:17:19.437008586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 00:17:19.437754 containerd[1466]: time="2026-01-17T00:17:19.437594596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:17:19.437754 containerd[1466]: time="2026-01-17T00:17:19.437632007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:17:19.438714 containerd[1466]: time="2026-01-17T00:17:19.438109673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:17:19.438714 containerd[1466]: time="2026-01-17T00:17:19.438144288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 00:17:19.438714 containerd[1466]: time="2026-01-17T00:17:19.438167702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:17:19.438714 containerd[1466]: time="2026-01-17T00:17:19.438183653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 00:17:19.438714 containerd[1466]: time="2026-01-17T00:17:19.438330681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:17:19.438714 containerd[1466]: time="2026-01-17T00:17:19.438658895Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:17:19.439251 containerd[1466]: time="2026-01-17T00:17:19.439216452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:17:19.439379 containerd[1466]: time="2026-01-17T00:17:19.439354756Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 00:17:19.439590 containerd[1466]: time="2026-01-17T00:17:19.439560825Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 00:17:19.439847 containerd[1466]: time="2026-01-17T00:17:19.439785289Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 00:17:19.463390 containerd[1466]: time="2026-01-17T00:17:19.463322382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 00:17:19.464044 containerd[1466]: time="2026-01-17T00:17:19.463610792Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 00:17:19.464044 containerd[1466]: time="2026-01-17T00:17:19.463683147Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 00:17:19.464044 containerd[1466]: time="2026-01-17T00:17:19.463700370Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 00:17:19.464715 containerd[1466]: time="2026-01-17T00:17:19.464272076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 00:17:19.465008 containerd[1466]: time="2026-01-17T00:17:19.464850296Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 00:17:19.466431 containerd[1466]: time="2026-01-17T00:17:19.465346660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 00:17:19.466431 containerd[1466]: time="2026-01-17T00:17:19.465520214Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 00:17:19.466431 containerd[1466]: time="2026-01-17T00:17:19.465573401Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 00:17:19.466431 containerd[1466]: time="2026-01-17T00:17:19.465592665Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 00:17:19.466431 containerd[1466]: time="2026-01-17T00:17:19.465607069Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 00:17:19.466431 containerd[1466]: time="2026-01-17T00:17:19.465622843Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 00:17:19.466431 containerd[1466]: time="2026-01-17T00:17:19.465644251Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 00:17:19.466431 containerd[1466]: time="2026-01-17T00:17:19.465659017Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 00:17:19.466431 containerd[1466]: time="2026-01-17T00:17:19.465674224Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 00:17:19.466431 containerd[1466]: time="2026-01-17T00:17:19.465699915Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 00:17:19.467397 containerd[1466]: time="2026-01-17T00:17:19.465715843Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 00:17:19.467397 containerd[1466]: time="2026-01-17T00:17:19.467357559Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 00:17:19.467554 containerd[1466]: time="2026-01-17T00:17:19.467531872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.467660 containerd[1466]: time="2026-01-17T00:17:19.467641884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.467788 containerd[1466]: time="2026-01-17T00:17:19.467775085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.467883 containerd[1466]: time="2026-01-17T00:17:19.467866078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.468825 containerd[1466]: time="2026-01-17T00:17:19.468762035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.468825 containerd[1466]: time="2026-01-17T00:17:19.468788595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.468825 containerd[1466]: time="2026-01-17T00:17:19.468802175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.469019 containerd[1466]: time="2026-01-17T00:17:19.468907978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.469019 containerd[1466]: time="2026-01-17T00:17:19.468924521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.469378 containerd[1466]: time="2026-01-17T00:17:19.469069460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.469378 containerd[1466]: time="2026-01-17T00:17:19.469121970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.469378 containerd[1466]: time="2026-01-17T00:17:19.469164193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.469378 containerd[1466]: time="2026-01-17T00:17:19.469306911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.469378 containerd[1466]: time="2026-01-17T00:17:19.469330342Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 00:17:19.469586 containerd[1466]: time="2026-01-17T00:17:19.469361133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.469586 containerd[1466]: time="2026-01-17T00:17:19.469467098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.469586 containerd[1466]: time="2026-01-17T00:17:19.469478478Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 00:17:19.469875 containerd[1466]: time="2026-01-17T00:17:19.469632736Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 00:17:19.469875 containerd[1466]: time="2026-01-17T00:17:19.469656820Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 00:17:19.469875 containerd[1466]: time="2026-01-17T00:17:19.469668552Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 00:17:19.469875 containerd[1466]: time="2026-01-17T00:17:19.469763675Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 00:17:19.469875 containerd[1466]: time="2026-01-17T00:17:19.469780257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.469875 containerd[1466]: time="2026-01-17T00:17:19.469798727Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 00:17:19.470738 containerd[1466]: time="2026-01-17T00:17:19.470049129Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 00:17:19.470738 containerd[1466]: time="2026-01-17T00:17:19.470075952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 00:17:19.471412 containerd[1466]: time="2026-01-17T00:17:19.471251802Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 00:17:19.471412 containerd[1466]: time="2026-01-17T00:17:19.471332733Z" level=info msg="Connect containerd service"
Jan 17 00:17:19.472129 containerd[1466]: time="2026-01-17T00:17:19.471863749Z" level=info msg="using legacy CRI server"
Jan 17 00:17:19.472129 containerd[1466]: time="2026-01-17T00:17:19.471903042Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 00:17:19.472307 containerd[1466]: time="2026-01-17T00:17:19.472251343Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 00:17:19.473445 containerd[1466]: time="2026-01-17T00:17:19.473334634Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:17:19.473975 containerd[1466]: time="2026-01-17T00:17:19.473927744Z" level=info msg="Start subscribing containerd event"
Jan 17 00:17:19.474274 containerd[1466]: time="2026-01-17T00:17:19.474149254Z" level=info msg="Start recovering state"
Jan 17 00:17:19.474274 containerd[1466]: time="2026-01-17T00:17:19.474247779Z" level=info msg="Start event monitor"
Jan 17 00:17:19.474400 containerd[1466]: time="2026-01-17T00:17:19.474387117Z" level=info msg="Start snapshots syncer"
Jan 17 00:17:19.474527 containerd[1466]: time="2026-01-17T00:17:19.474453990Z" level=info msg="Start cni network conf syncer for default"
Jan 17 00:17:19.474527 containerd[1466]: time="2026-01-17T00:17:19.474464280Z" level=info msg="Start streaming server"
Jan 17 00:17:19.474657 containerd[1466]: time="2026-01-17T00:17:19.474091538Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 00:17:19.475746 containerd[1466]: time="2026-01-17T00:17:19.474810055Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 00:17:19.475746 containerd[1466]: time="2026-01-17T00:17:19.474881311Z" level=info msg="containerd successfully booted in 0.117332s"
Jan 17 00:17:19.474998 systemd[1]: Started containerd.service - containerd container runtime.
Jan 17 00:17:19.799756 tar[1450]: linux-amd64/README.md
Jan 17 00:17:19.812920 systemd-networkd[1368]: eth1: Gained IPv6LL
Jan 17 00:17:19.813874 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Jan 17 00:17:19.814769 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 17 00:17:20.517589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:17:20.519321 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 00:17:20.525236 systemd[1]: Startup finished in 1.215s (kernel) + 5.734s (initrd) + 6.116s (userspace) = 13.066s.
Jan 17 00:17:20.529924 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:17:21.281135 kubelet[1558]: E0117 00:17:21.281026 1558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:17:21.284086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:17:21.284298 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:17:21.284692 systemd[1]: kubelet.service: Consumed 1.289s CPU time.
Jan 17 00:17:23.318078 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 00:17:23.324117 systemd[1]: Started sshd@0-146.190.166.4:22-4.153.228.146:39414.service - OpenSSH per-connection server daemon (4.153.228.146:39414).
Jan 17 00:17:23.716153 sshd[1569]: Accepted publickey for core from 4.153.228.146 port 39414 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:23.718737 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:23.733120 systemd-logind[1444]: New session 1 of user core.
Jan 17 00:17:23.734822 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 17 00:17:23.741607 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 17 00:17:23.760092 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 17 00:17:23.768150 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 17 00:17:23.774563 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 17 00:17:23.904985 systemd[1573]: Queued start job for default target default.target.
Jan 17 00:17:23.915446 systemd[1573]: Created slice app.slice - User Application Slice.
Jan 17 00:17:23.915495 systemd[1573]: Reached target paths.target - Paths.
Jan 17 00:17:23.915517 systemd[1573]: Reached target timers.target - Timers.
Jan 17 00:17:23.917268 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 17 00:17:23.933759 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 17 00:17:23.933923 systemd[1573]: Reached target sockets.target - Sockets.
Jan 17 00:17:23.933942 systemd[1573]: Reached target basic.target - Basic System.
Jan 17 00:17:23.933997 systemd[1573]: Reached target default.target - Main User Target.
Jan 17 00:17:23.934036 systemd[1573]: Startup finished in 149ms.
Jan 17 00:17:23.934198 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 17 00:17:23.944998 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 17 00:17:24.263334 systemd[1]: Started sshd@1-146.190.166.4:22-4.153.228.146:37922.service - OpenSSH per-connection server daemon (4.153.228.146:37922).
Jan 17 00:17:24.672984 sshd[1585]: Accepted publickey for core from 4.153.228.146 port 37922 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:24.674616 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:24.679939 systemd-logind[1444]: New session 2 of user core.
Jan 17 00:17:24.688017 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 17 00:17:24.978666 sshd[1585]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:24.983085 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
Jan 17 00:17:24.983671 systemd[1]: sshd@1-146.190.166.4:22-4.153.228.146:37922.service: Deactivated successfully.
Jan 17 00:17:24.986047 systemd[1]: session-2.scope: Deactivated successfully.
Jan 17 00:17:24.987472 systemd-logind[1444]: Removed session 2.
Jan 17 00:17:25.057212 systemd[1]: Started sshd@2-146.190.166.4:22-4.153.228.146:37924.service - OpenSSH per-connection server daemon (4.153.228.146:37924).
Jan 17 00:17:25.475791 sshd[1592]: Accepted publickey for core from 4.153.228.146 port 37924 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:25.477596 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:25.497031 systemd-logind[1444]: New session 3 of user core.
Jan 17 00:17:25.504488 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 00:17:25.774165 sshd[1592]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:25.779291 systemd[1]: sshd@2-146.190.166.4:22-4.153.228.146:37924.service: Deactivated successfully.
Jan 17 00:17:25.781838 systemd[1]: session-3.scope: Deactivated successfully.
Jan 17 00:17:25.782661 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit.
Jan 17 00:17:25.784033 systemd-logind[1444]: Removed session 3.
Jan 17 00:17:25.851261 systemd[1]: Started sshd@3-146.190.166.4:22-4.153.228.146:37938.service - OpenSSH per-connection server daemon (4.153.228.146:37938).
Jan 17 00:17:26.279946 sshd[1599]: Accepted publickey for core from 4.153.228.146 port 37938 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:26.281935 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:26.288946 systemd-logind[1444]: New session 4 of user core.
Jan 17 00:17:26.292004 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 17 00:17:26.587337 sshd[1599]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:26.593896 systemd[1]: sshd@3-146.190.166.4:22-4.153.228.146:37938.service: Deactivated successfully.
Jan 17 00:17:26.596506 systemd[1]: session-4.scope: Deactivated successfully.
Jan 17 00:17:26.597308 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit.
Jan 17 00:17:26.598498 systemd-logind[1444]: Removed session 4.
Jan 17 00:17:26.667372 systemd[1]: Started sshd@4-146.190.166.4:22-4.153.228.146:37950.service - OpenSSH per-connection server daemon (4.153.228.146:37950).
Jan 17 00:17:27.106234 sshd[1606]: Accepted publickey for core from 4.153.228.146 port 37950 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:27.108171 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:27.114772 systemd-logind[1444]: New session 5 of user core.
Jan 17 00:17:27.121011 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 00:17:27.366964 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 00:17:27.367458 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:17:27.382604 sudo[1609]: pam_unix(sudo:session): session closed for user root
Jan 17 00:17:27.450601 sshd[1606]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:27.456396 systemd[1]: sshd@4-146.190.166.4:22-4.153.228.146:37950.service: Deactivated successfully.
Jan 17 00:17:27.458656 systemd[1]: session-5.scope: Deactivated successfully.
Jan 17 00:17:27.459919 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit.
Jan 17 00:17:27.461346 systemd-logind[1444]: Removed session 5.
Jan 17 00:17:27.535442 systemd[1]: Started sshd@5-146.190.166.4:22-4.153.228.146:37962.service - OpenSSH per-connection server daemon (4.153.228.146:37962).
Jan 17 00:17:27.925703 sshd[1614]: Accepted publickey for core from 4.153.228.146 port 37962 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:27.928126 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:27.935641 systemd-logind[1444]: New session 6 of user core.
Jan 17 00:17:27.942097 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 00:17:28.154242 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 00:17:28.155297 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:17:28.160104 sudo[1618]: pam_unix(sudo:session): session closed for user root
Jan 17 00:17:28.167735 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 00:17:28.168580 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:17:28.189414 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 00:17:28.192430 auditctl[1621]: No rules
Jan 17 00:17:28.193051 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 00:17:28.193323 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 00:17:28.197377 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:17:28.240421 augenrules[1639]: No rules
Jan 17 00:17:28.241532 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:17:28.243600 sudo[1617]: pam_unix(sudo:session): session closed for user root
Jan 17 00:17:28.305069 sshd[1614]: pam_unix(sshd:session): session closed for user core
Jan 17 00:17:28.309652 systemd[1]: sshd@5-146.190.166.4:22-4.153.228.146:37962.service: Deactivated successfully.
Jan 17 00:17:28.311943 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 00:17:28.312829 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
Jan 17 00:17:28.314136 systemd-logind[1444]: Removed session 6.
Jan 17 00:17:28.381200 systemd[1]: Started sshd@6-146.190.166.4:22-4.153.228.146:37970.service - OpenSSH per-connection server daemon (4.153.228.146:37970).
Jan 17 00:17:28.768574 sshd[1647]: Accepted publickey for core from 4.153.228.146 port 37970 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:17:28.770790 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:17:28.776215 systemd-logind[1444]: New session 7 of user core.
Jan 17 00:17:28.789532 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 00:17:28.998682 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 00:17:28.999584 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:17:29.492261 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 00:17:29.492539 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 00:17:29.976508 dockerd[1666]: time="2026-01-17T00:17:29.976418546Z" level=info msg="Starting up"
Jan 17 00:17:30.129761 dockerd[1666]: time="2026-01-17T00:17:30.128985032Z" level=info msg="Loading containers: start."
Jan 17 00:17:30.261760 kernel: Initializing XFRM netlink socket
Jan 17 00:17:30.295980 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Jan 17 00:17:30.362757 systemd-networkd[1368]: docker0: Link UP
Jan 17 00:17:30.379152 dockerd[1666]: time="2026-01-17T00:17:30.379094517Z" level=info msg="Loading containers: done."
Jan 17 00:17:30.396828 dockerd[1666]: time="2026-01-17T00:17:30.396294436Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 00:17:30.396828 dockerd[1666]: time="2026-01-17T00:17:30.396426291Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 00:17:30.396828 dockerd[1666]: time="2026-01-17T00:17:30.396547120Z" level=info msg="Daemon has completed initialization"
Jan 17 00:17:30.432967 dockerd[1666]: time="2026-01-17T00:17:30.432679422Z" level=info msg="API listen on /run/docker.sock"
Jan 17 00:17:30.433136 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 00:17:31.636825 systemd-resolved[1320]: Clock change detected. Flushing caches.
Jan 17 00:17:31.637374 systemd-timesyncd[1345]: Contacted time server 64.186.96.3:123 (2.flatcar.pool.ntp.org).
Jan 17 00:17:31.637467 systemd-timesyncd[1345]: Initial clock synchronization to Sat 2026-01-17 00:17:31.636358 UTC.
Jan 17 00:17:32.400623 containerd[1466]: time="2026-01-17T00:17:32.400565117Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 17 00:17:32.456734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:17:32.467664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:17:32.680882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:17:32.681161 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:17:32.753137 kubelet[1818]: E0117 00:17:32.753054 1818 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:17:32.758165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:17:32.758375 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:17:33.183073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641468857.mount: Deactivated successfully.
Jan 17 00:17:34.539529 containerd[1466]: time="2026-01-17T00:17:34.539446882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:34.540834 containerd[1466]: time="2026-01-17T00:17:34.540779224Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073"
Jan 17 00:17:34.541518 containerd[1466]: time="2026-01-17T00:17:34.541359137Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:34.546510 containerd[1466]: time="2026-01-17T00:17:34.545423139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:34.550172 containerd[1466]: time="2026-01-17T00:17:34.550117504Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.149507388s"
Jan 17 00:17:34.550365 containerd[1466]: time="2026-01-17T00:17:34.550348480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Jan 17 00:17:34.551245 containerd[1466]: time="2026-01-17T00:17:34.551196839Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 17 00:17:36.042626 containerd[1466]: time="2026-01-17T00:17:36.041088810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:36.042626 containerd[1466]: time="2026-01-17T00:17:36.042331858Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440"
Jan 17 00:17:36.042626 containerd[1466]: time="2026-01-17T00:17:36.042449439Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:36.046790 containerd[1466]: time="2026-01-17T00:17:36.046721894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:36.048337 containerd[1466]: time="2026-01-17T00:17:36.048276075Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.497028565s"
Jan 17 00:17:36.048452 containerd[1466]: time="2026-01-17T00:17:36.048334972Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Jan 17 00:17:36.049058 containerd[1466]: time="2026-01-17T00:17:36.048977231Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 17 00:17:37.266375 containerd[1466]: time="2026-01-17T00:17:37.266299242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:37.268087 containerd[1466]: time="2026-01-17T00:17:37.268020985Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927"
Jan 17 00:17:37.268980 containerd[1466]: time="2026-01-17T00:17:37.268931299Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:37.272708 containerd[1466]: time="2026-01-17T00:17:37.271826006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:37.273314 containerd[1466]: time="2026-01-17T00:17:37.273282370Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.224264601s"
Jan 17 00:17:37.273314 containerd[1466]: time="2026-01-17T00:17:37.273315452Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Jan 17 00:17:37.274744 containerd[1466]: time="2026-01-17T00:17:37.274713097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 17 00:17:37.472882 systemd-resolved[1320]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Jan 17 00:17:38.541300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484744044.mount: Deactivated successfully.
Jan 17 00:17:38.930381 containerd[1466]: time="2026-01-17T00:17:38.930202445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:38.931540 containerd[1466]: time="2026-01-17T00:17:38.931451601Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293"
Jan 17 00:17:38.932509 containerd[1466]: time="2026-01-17T00:17:38.932193999Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:38.935507 containerd[1466]: time="2026-01-17T00:17:38.935081924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:38.936513 containerd[1466]: time="2026-01-17T00:17:38.936206907Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.661326752s"
Jan 17 00:17:38.936513 containerd[1466]: time="2026-01-17T00:17:38.936262564Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Jan 17 00:17:38.936992 containerd[1466]: time="2026-01-17T00:17:38.936963765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 17 00:17:39.598073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208698024.mount: Deactivated successfully.
Jan 17 00:17:40.582753 systemd-resolved[1320]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Jan 17 00:17:40.718512 containerd[1466]: time="2026-01-17T00:17:40.717136235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:40.737163 containerd[1466]: time="2026-01-17T00:17:40.737085270Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Jan 17 00:17:40.738695 containerd[1466]: time="2026-01-17T00:17:40.738642950Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:40.742466 containerd[1466]: time="2026-01-17T00:17:40.742397074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:40.744864 containerd[1466]: time="2026-01-17T00:17:40.744790292Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.807786997s"
Jan 17 00:17:40.745067 containerd[1466]: time="2026-01-17T00:17:40.745042133Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Jan 17 00:17:40.745940 containerd[1466]: time="2026-01-17T00:17:40.745906825Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 17 00:17:41.362086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147575538.mount: Deactivated successfully.
Jan 17 00:17:41.367811 containerd[1466]: time="2026-01-17T00:17:41.367744763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:41.369184 containerd[1466]: time="2026-01-17T00:17:41.369116968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Jan 17 00:17:41.369738 containerd[1466]: time="2026-01-17T00:17:41.369679608Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:41.372405 containerd[1466]: time="2026-01-17T00:17:41.372319128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:41.373354 containerd[1466]: time="2026-01-17T00:17:41.373168997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 627.226627ms"
Jan 17 00:17:41.373354 containerd[1466]: time="2026-01-17T00:17:41.373209552Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Jan 17 00:17:41.374609 containerd[1466]: time="2026-01-17T00:17:41.374115920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 17 00:17:42.106368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226150021.mount: Deactivated successfully.
Jan 17 00:17:42.957279 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 00:17:42.963320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:17:43.189083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:17:43.203931 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:17:43.292231 kubelet[2015]: E0117 00:17:43.292077 2015 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:17:43.297416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:17:43.297899 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:17:44.742386 containerd[1466]: time="2026-01-17T00:17:44.742312384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:44.745103 containerd[1466]: time="2026-01-17T00:17:44.745014609Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814"
Jan 17 00:17:44.745432 containerd[1466]: time="2026-01-17T00:17:44.745238708Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:44.749292 containerd[1466]: time="2026-01-17T00:17:44.749201792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:17:44.751056 containerd[1466]: time="2026-01-17T00:17:44.750632610Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.376481741s"
Jan 17 00:17:44.751056 containerd[1466]: time="2026-01-17T00:17:44.750690024Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Jan 17 00:17:49.052496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:17:49.061966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:17:49.105171 systemd[1]: Reloading requested from client PID 2052 ('systemctl') (unit session-7.scope)...
Jan 17 00:17:49.105189 systemd[1]: Reloading...
Jan 17 00:17:49.244518 zram_generator::config[2091]: No configuration found.
Jan 17 00:17:49.403041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:17:49.489254 systemd[1]: Reloading finished in 383 ms.
Jan 17 00:17:49.574769 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:17:49.575841 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 00:17:49.576289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:17:49.585119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:17:49.735148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:17:49.747099 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:17:49.802910 kubelet[2147]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:17:49.802910 kubelet[2147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:17:49.803401 kubelet[2147]: I0117 00:17:49.802982 2147 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:17:50.165646 kubelet[2147]: I0117 00:17:50.164974 2147 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 17 00:17:50.165646 kubelet[2147]: I0117 00:17:50.165010 2147 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:17:50.165646 kubelet[2147]: I0117 00:17:50.165045 2147 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 17 00:17:50.166149 kubelet[2147]: I0117 00:17:50.166104 2147 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:17:50.166560 kubelet[2147]: I0117 00:17:50.166531 2147 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 17 00:17:50.180468 kubelet[2147]: I0117 00:17:50.179328 2147 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:17:50.182457 kubelet[2147]: E0117 00:17:50.182204 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://146.190.166.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.166.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 17 00:17:50.191117 kubelet[2147]: E0117 00:17:50.191065 2147 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:17:50.191268 kubelet[2147]: I0117 00:17:50.191151 2147 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:17:50.199578 kubelet[2147]: I0117 00:17:50.199538 2147 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 17 00:17:50.200497 kubelet[2147]: I0117 00:17:50.200417 2147 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:17:50.202015 kubelet[2147]: I0117 00:17:50.200468 2147 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2808572c0d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 00:17:50.202015 kubelet[2147]: I0117 00:17:50.202002 2147 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:17:50.202015 kubelet[2147]: I0117 00:17:50.202019 2147 container_manager_linux.go:306] "Creating device plugin manager"
Jan 17 00:17:50.202446 kubelet[2147]: I0117 00:17:50.202150 2147 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 17 00:17:50.204881 kubelet[2147]: I0117 00:17:50.204839 2147 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:17:50.206685 kubelet[2147]: I0117 00:17:50.206654 2147 kubelet.go:475] "Attempting to sync node with API server"
Jan 17 00:17:50.206685 kubelet[2147]: I0117 00:17:50.206687 2147 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:17:50.208616 kubelet[2147]: I0117 00:17:50.206714 2147 kubelet.go:387] "Adding apiserver pod source"
Jan 17 00:17:50.208616 kubelet[2147]: I0117 00:17:50.206732 2147 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:17:50.209410 kubelet[2147]: I0117 00:17:50.209383 2147 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:17:50.209921 kubelet[2147]: I0117 00:17:50.209889 2147 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 17 00:17:50.209994 kubelet[2147]: I0117 00:17:50.209940 2147 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 17 00:17:50.210037 kubelet[2147]: W0117 00:17:50.210008 2147 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 00:17:50.213707 kubelet[2147]: I0117 00:17:50.213062 2147 server.go:1262] "Started kubelet"
Jan 17 00:17:50.213707 kubelet[2147]: E0117 00:17:50.213264 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://146.190.166.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.166.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 17 00:17:50.223818 kubelet[2147]: I0117 00:17:50.223784 2147 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:17:50.226895 kubelet[2147]: I0117 00:17:50.226678 2147 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:17:50.227934 kubelet[2147]: E0117 00:17:50.224347 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.166.4:6443/api/v1/namespaces/default/events\": dial tcp 146.190.166.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-2808572c0d.188b5c943094d73a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-2808572c0d,UID:ci-4081.3.6-n-2808572c0d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-2808572c0d,},FirstTimestamp:2026-01-17 00:17:50.213027642 +0000 UTC m=+0.458922801,LastTimestamp:2026-01-17 00:17:50.213027642 +0000 UTC m=+0.458922801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-2808572c0d,}"
Jan 17 00:17:50.229604 kubelet[2147]: I0117 00:17:50.229468 2147 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:17:50.229604 kubelet[2147]: I0117 00:17:50.229570 2147 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 17 00:17:50.231594 kubelet[2147]: I0117 00:17:50.229922 2147 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:17:50.231799 kubelet[2147]: I0117 00:17:50.228142 2147 server.go:310] "Adding debug handlers to kubelet server"
Jan 17 00:17:50.233901 kubelet[2147]: E0117 00:17:50.233855 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://146.190.166.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2808572c0d&limit=500&resourceVersion=0\": dial tcp 146.190.166.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 17 00:17:50.238545 kubelet[2147]: I0117 00:17:50.238457 2147 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:17:50.239670 kubelet[2147]: I0117 00:17:50.239636 2147 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 17 00:17:50.241200 kubelet[2147]: E0117 00:17:50.240912 2147 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-2808572c0d\" not found"
Jan 17 00:17:50.243222 kubelet[2147]: I0117 00:17:50.242858 2147 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 17 00:17:50.243375 kubelet[2147]: I0117 00:17:50.243196 2147 reconciler.go:29] "Reconciler: start to sync state"
Jan 17 00:17:50.243838 kubelet[2147]: I0117 00:17:50.243778 2147 factory.go:223] Registration of the systemd container factory successfully
Jan 17 00:17:50.244191 kubelet[2147]: I0117 00:17:50.243911 2147 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:17:50.244456 kubelet[2147]: E0117 00:17:50.244426 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.166.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2808572c0d?timeout=10s\": dial tcp 146.190.166.4:6443: connect: connection refused" interval="200ms"
Jan 17 00:17:50.244868 kubelet[2147]: E0117 00:17:50.244840 2147 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:17:50.247164 kubelet[2147]: E0117 00:17:50.247123 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://146.190.166.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.166.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 17 00:17:50.247298 kubelet[2147]: I0117 00:17:50.247253 2147 factory.go:223] Registration of the containerd container factory successfully
Jan 17 00:17:50.251395 kubelet[2147]: I0117 00:17:50.250940 2147 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:17:50.271535 kubelet[2147]: I0117 00:17:50.271500 2147 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:17:50.271763 kubelet[2147]: I0117 00:17:50.271745 2147 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:17:50.271878 kubelet[2147]: I0117 00:17:50.271869 2147 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:17:50.275373 kubelet[2147]: I0117 00:17:50.275343 2147 policy_none.go:49] "None policy: Start"
Jan 17 00:17:50.275519 kubelet[2147]: I0117 00:17:50.275408 2147 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 17 00:17:50.275519 kubelet[2147]: I0117 00:17:50.275426 2147 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 17 00:17:50.278651 kubelet[2147]: I0117 00:17:50.278596 2147 policy_none.go:47] "Start"
Jan 17 00:17:50.286720 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 17 00:17:50.297534 kubelet[2147]: I0117 00:17:50.297491 2147 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:17:50.297534 kubelet[2147]: I0117 00:17:50.297533 2147 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 17 00:17:50.298812 kubelet[2147]: I0117 00:17:50.297567 2147 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 17 00:17:50.298812 kubelet[2147]: E0117 00:17:50.297638 2147 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:17:50.300179 kubelet[2147]: E0117 00:17:50.299942 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://146.190.166.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.166.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 17 00:17:50.304532 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 17 00:17:50.308916 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 17 00:17:50.318983 kubelet[2147]: E0117 00:17:50.318934 2147 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 17 00:17:50.319839 kubelet[2147]: I0117 00:17:50.319219 2147 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:17:50.319839 kubelet[2147]: I0117 00:17:50.319244 2147 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:17:50.319839 kubelet[2147]: I0117 00:17:50.319576 2147 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:17:50.322009 kubelet[2147]: E0117 00:17:50.321831 2147 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:17:50.322009 kubelet[2147]: E0117 00:17:50.321899 2147 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-2808572c0d\" not found"
Jan 17 00:17:50.413308 systemd[1]: Created slice kubepods-burstable-pod49bb737249473b69e81e359c36a07d45.slice - libcontainer container kubepods-burstable-pod49bb737249473b69e81e359c36a07d45.slice.
Jan 17 00:17:50.421174 kubelet[2147]: I0117 00:17:50.421013 2147 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.424418 kubelet[2147]: E0117 00:17:50.424343 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.166.4:6443/api/v1/nodes\": dial tcp 146.190.166.4:6443: connect: connection refused" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.426146 kubelet[2147]: E0117 00:17:50.425667 2147 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2808572c0d\" not found" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.431538 systemd[1]: Created slice kubepods-burstable-pod989772a545f9a3e568893f5614faacf8.slice - libcontainer container kubepods-burstable-pod989772a545f9a3e568893f5614faacf8.slice.
Jan 17 00:17:50.434002 kubelet[2147]: E0117 00:17:50.433965 2147 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2808572c0d\" not found" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.436188 systemd[1]: Created slice kubepods-burstable-pod5a265202bc9739a4592b55642c0e9b50.slice - libcontainer container kubepods-burstable-pod5a265202bc9739a4592b55642c0e9b50.slice.
Jan 17 00:17:50.439068 kubelet[2147]: E0117 00:17:50.438799 2147 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2808572c0d\" not found" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.445660 kubelet[2147]: I0117 00:17:50.445298 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a265202bc9739a4592b55642c0e9b50-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" (UID: \"5a265202bc9739a4592b55642c0e9b50\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.445660 kubelet[2147]: I0117 00:17:50.445380 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a265202bc9739a4592b55642c0e9b50-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" (UID: \"5a265202bc9739a4592b55642c0e9b50\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.445660 kubelet[2147]: I0117 00:17:50.445412 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a265202bc9739a4592b55642c0e9b50-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" (UID: \"5a265202bc9739a4592b55642c0e9b50\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.445660 kubelet[2147]: I0117 00:17:50.445438 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a265202bc9739a4592b55642c0e9b50-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" (UID: \"5a265202bc9739a4592b55642c0e9b50\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.445660 kubelet[2147]: I0117 00:17:50.445466 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49bb737249473b69e81e359c36a07d45-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2808572c0d\" (UID: \"49bb737249473b69e81e359c36a07d45\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.446076 kubelet[2147]: I0117 00:17:50.445519 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49bb737249473b69e81e359c36a07d45-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2808572c0d\" (UID: \"49bb737249473b69e81e359c36a07d45\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.446076 kubelet[2147]: E0117 00:17:50.445542 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.166.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2808572c0d?timeout=10s\": dial tcp 146.190.166.4:6443: connect: connection refused" interval="400ms"
Jan 17 00:17:50.446076 kubelet[2147]: I0117 00:17:50.445560 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a265202bc9739a4592b55642c0e9b50-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" (UID: \"5a265202bc9739a4592b55642c0e9b50\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.446076 kubelet[2147]: I0117 00:17:50.445587 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/989772a545f9a3e568893f5614faacf8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2808572c0d\" (UID: \"989772a545f9a3e568893f5614faacf8\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.446076 kubelet[2147]: I0117 00:17:50.445614 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49bb737249473b69e81e359c36a07d45-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2808572c0d\" (UID: \"49bb737249473b69e81e359c36a07d45\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.626472 kubelet[2147]: I0117 00:17:50.626412 2147 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.626912 kubelet[2147]: E0117 00:17:50.626846 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.166.4:6443/api/v1/nodes\": dial tcp 146.190.166.4:6443: connect: connection refused" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:50.728885 kubelet[2147]: E0117 00:17:50.728807 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:50.730443 containerd[1466]: time="2026-01-17T00:17:50.730212660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2808572c0d,Uid:49bb737249473b69e81e359c36a07d45,Namespace:kube-system,Attempt:0,}"
Jan 17 00:17:50.732705 systemd-resolved[1320]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Jan 17 00:17:50.737524 kubelet[2147]: E0117 00:17:50.737128 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:50.737852 containerd[1466]: time="2026-01-17T00:17:50.737811008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2808572c0d,Uid:989772a545f9a3e568893f5614faacf8,Namespace:kube-system,Attempt:0,}"
Jan 17 00:17:50.742097 kubelet[2147]: E0117 00:17:50.741182 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:50.742435 containerd[1466]: time="2026-01-17T00:17:50.741778420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2808572c0d,Uid:5a265202bc9739a4592b55642c0e9b50,Namespace:kube-system,Attempt:0,}"
Jan 17 00:17:50.846447 kubelet[2147]: E0117 00:17:50.846377 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.166.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2808572c0d?timeout=10s\": dial tcp 146.190.166.4:6443: connect: connection refused" interval="800ms"
Jan 17 00:17:51.029099 kubelet[2147]: I0117 00:17:51.028499 2147 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:51.029099 kubelet[2147]: E0117 00:17:51.028832 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.166.4:6443/api/v1/nodes\": dial tcp 146.190.166.4:6443: connect: connection refused" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:51.136977 kubelet[2147]: E0117 00:17:51.136909 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://146.190.166.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.166.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 17 00:17:51.295944 kubelet[2147]: E0117 00:17:51.295787 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://146.190.166.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2808572c0d&limit=500&resourceVersion=0\": dial tcp 146.190.166.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 17 00:17:51.443468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2189899876.mount: Deactivated successfully.
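The recurring dns.go:154 errors fire because the host's resolver configuration lists more nameserver entries than the limit of three that the resolver honors; kubelet applies the first three and warns about the rest (the applied line above even retains a duplicate of 67.207.67.2). A minimal sketch of that trimming; the four-entry configured list is hypothetical, only its first three entries match the logged applied line:

```python
# Mirror of the "Nameserver limits exceeded" warning seen above: at most
# 3 nameserver entries are applied, extras are omitted with a warning.
MAX_NAMESERVERS = 3

# Hypothetical resolv.conf contents; 8.8.8.8 is invented for illustration.
configured = ["67.207.67.2", "67.207.67.3", "67.207.67.2", "8.8.8.8"]

applied = configured[:MAX_NAMESERVERS]
omitted = configured[MAX_NAMESERVERS:]
if omitted:
    print("Nameserver limits were exceeded, omitted:", omitted)
print("the applied nameserver line is:", " ".join(applied))
# -> 67.207.67.2 67.207.67.3 67.207.67.2  (duplicate kept, as in the log)
```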
Jan 17 00:17:51.448616 containerd[1466]: time="2026-01-17T00:17:51.447750939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:17:51.449419 containerd[1466]: time="2026-01-17T00:17:51.449360970Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 17 00:17:51.450785 containerd[1466]: time="2026-01-17T00:17:51.450730352Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:17:51.453330 containerd[1466]: time="2026-01-17T00:17:51.453274166Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:17:51.456080 containerd[1466]: time="2026-01-17T00:17:51.456030105Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 00:17:51.458604 containerd[1466]: time="2026-01-17T00:17:51.458548722Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:17:51.460560 containerd[1466]: time="2026-01-17T00:17:51.460511971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:17:51.461737 containerd[1466]: time="2026-01-17T00:17:51.461685660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 731.165468ms"
Jan 17 00:17:51.463791 containerd[1466]: time="2026-01-17T00:17:51.463741262Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 00:17:51.467024 containerd[1466]: time="2026-01-17T00:17:51.466976470Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 729.063231ms"
Jan 17 00:17:51.467753 containerd[1466]: time="2026-01-17T00:17:51.467721735Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 725.856005ms"
Jan 17 00:17:51.610756 kubelet[2147]: E0117 00:17:51.610367 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://146.190.166.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.166.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 17 00:17:51.647133 kubelet[2147]: E0117 00:17:51.647060 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.166.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2808572c0d?timeout=10s\": dial tcp 146.190.166.4:6443: connect: connection refused" interval="1.6s"
Jan 17 00:17:51.652599 containerd[1466]: time="2026-01-17T00:17:51.652099759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:17:51.652599 containerd[1466]: time="2026-01-17T00:17:51.652169651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:17:51.652599 containerd[1466]: time="2026-01-17T00:17:51.652203052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:17:51.652599 containerd[1466]: time="2026-01-17T00:17:51.652294840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:17:51.654007 containerd[1466]: time="2026-01-17T00:17:51.653772790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:17:51.654007 containerd[1466]: time="2026-01-17T00:17:51.653830946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:17:51.654007 containerd[1466]: time="2026-01-17T00:17:51.653849345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:17:51.654007 containerd[1466]: time="2026-01-17T00:17:51.653925912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:17:51.670765 containerd[1466]: time="2026-01-17T00:17:51.670393926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:17:51.670912 containerd[1466]: time="2026-01-17T00:17:51.670785461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:17:51.670912 containerd[1466]: time="2026-01-17T00:17:51.670870147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:17:51.672591 containerd[1466]: time="2026-01-17T00:17:51.672447253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:17:51.696109 systemd[1]: Started cri-containerd-ae4cad226494fb28db82d0e86faadc729d301ba9de604528ec910abbdfbda37b.scope - libcontainer container ae4cad226494fb28db82d0e86faadc729d301ba9de604528ec910abbdfbda37b.
Jan 17 00:17:51.729048 systemd[1]: Started cri-containerd-a9b5ee776caf53d5d6c8346cb2aeb25c8f29d7a51c68625cf4ce05779b95c22d.scope - libcontainer container a9b5ee776caf53d5d6c8346cb2aeb25c8f29d7a51c68625cf4ce05779b95c22d.
Jan 17 00:17:51.739049 systemd[1]: Started cri-containerd-76901568a8d67961a11a34ba581f1199dc719613dbe133fb0ac3695e8fd6a16f.scope - libcontainer container 76901568a8d67961a11a34ba581f1199dc719613dbe133fb0ac3695e8fd6a16f.
Jan 17 00:17:51.752612 kubelet[2147]: E0117 00:17:51.752546 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://146.190.166.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.166.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 17 00:17:51.813473 containerd[1466]: time="2026-01-17T00:17:51.813112954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2808572c0d,Uid:49bb737249473b69e81e359c36a07d45,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae4cad226494fb28db82d0e86faadc729d301ba9de604528ec910abbdfbda37b\""
Jan 17 00:17:51.823200 kubelet[2147]: E0117 00:17:51.822891 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:51.834101 containerd[1466]: time="2026-01-17T00:17:51.833759859Z" level=info msg="CreateContainer within sandbox \"ae4cad226494fb28db82d0e86faadc729d301ba9de604528ec910abbdfbda37b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 17 00:17:51.836218 kubelet[2147]: I0117 00:17:51.835786 2147 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:51.836218 kubelet[2147]: E0117 00:17:51.836171 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.166.4:6443/api/v1/nodes\": dial tcp 146.190.166.4:6443: connect: connection refused" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:51.836919 containerd[1466]: time="2026-01-17T00:17:51.836626577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2808572c0d,Uid:5a265202bc9739a4592b55642c0e9b50,Namespace:kube-system,Attempt:0,} returns sandbox id \"76901568a8d67961a11a34ba581f1199dc719613dbe133fb0ac3695e8fd6a16f\""
Jan 17 00:17:51.838512 kubelet[2147]: E0117 00:17:51.837975 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:51.856055 containerd[1466]: time="2026-01-17T00:17:51.856005191Z" level=info msg="CreateContainer within sandbox \"76901568a8d67961a11a34ba581f1199dc719613dbe133fb0ac3695e8fd6a16f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 17 00:17:51.856800 containerd[1466]: time="2026-01-17T00:17:51.856763014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2808572c0d,Uid:989772a545f9a3e568893f5614faacf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9b5ee776caf53d5d6c8346cb2aeb25c8f29d7a51c68625cf4ce05779b95c22d\""
Jan 17 00:17:51.859501 kubelet[2147]: E0117 00:17:51.859208 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:51.863557 containerd[1466]: time="2026-01-17T00:17:51.863149714Z" level=info msg="CreateContainer within sandbox \"ae4cad226494fb28db82d0e86faadc729d301ba9de604528ec910abbdfbda37b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4ec865e7ea137bd7c0a43b136637548625be9b1badbc4b05c9b9bc6250ed56e\""
Jan 17 00:17:51.864485 containerd[1466]: time="2026-01-17T00:17:51.864124317Z" level=info msg="StartContainer for \"d4ec865e7ea137bd7c0a43b136637548625be9b1badbc4b05c9b9bc6250ed56e\""
Jan 17 00:17:51.868997 containerd[1466]: time="2026-01-17T00:17:51.868749045Z" level=info msg="CreateContainer within sandbox \"a9b5ee776caf53d5d6c8346cb2aeb25c8f29d7a51c68625cf4ce05779b95c22d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 17 00:17:51.880743 containerd[1466]: time="2026-01-17T00:17:51.880681551Z" level=info msg="CreateContainer within sandbox \"76901568a8d67961a11a34ba581f1199dc719613dbe133fb0ac3695e8fd6a16f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0ac999dbe6a91dea1babeed7d469c0092b57d5ab91e7c8dcf3977e4f28649276\""
Jan 17 00:17:51.881466 containerd[1466]: time="2026-01-17T00:17:51.881432784Z" level=info msg="StartContainer for \"0ac999dbe6a91dea1babeed7d469c0092b57d5ab91e7c8dcf3977e4f28649276\""
Jan 17 00:17:51.887813 containerd[1466]: time="2026-01-17T00:17:51.887746437Z" level=info msg="CreateContainer within sandbox \"a9b5ee776caf53d5d6c8346cb2aeb25c8f29d7a51c68625cf4ce05779b95c22d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f193652c29533b3e96e4b4862201bc9e1676ccea9f1360019bdf3daebcaf8cf8\""
Jan 17 00:17:51.888760 containerd[1466]: time="2026-01-17T00:17:51.888663132Z" level=info msg="StartContainer for \"f193652c29533b3e96e4b4862201bc9e1676ccea9f1360019bdf3daebcaf8cf8\""
Jan 17 00:17:51.910840 systemd[1]: Started cri-containerd-d4ec865e7ea137bd7c0a43b136637548625be9b1badbc4b05c9b9bc6250ed56e.scope - libcontainer container d4ec865e7ea137bd7c0a43b136637548625be9b1badbc4b05c9b9bc6250ed56e.
Jan 17 00:17:51.940751 systemd[1]: Started cri-containerd-0ac999dbe6a91dea1babeed7d469c0092b57d5ab91e7c8dcf3977e4f28649276.scope - libcontainer container 0ac999dbe6a91dea1babeed7d469c0092b57d5ab91e7c8dcf3977e4f28649276.
Jan 17 00:17:51.968713 systemd[1]: Started cri-containerd-f193652c29533b3e96e4b4862201bc9e1676ccea9f1360019bdf3daebcaf8cf8.scope - libcontainer container f193652c29533b3e96e4b4862201bc9e1676ccea9f1360019bdf3daebcaf8cf8.
Jan 17 00:17:52.012042 containerd[1466]: time="2026-01-17T00:17:52.011985107Z" level=info msg="StartContainer for \"d4ec865e7ea137bd7c0a43b136637548625be9b1badbc4b05c9b9bc6250ed56e\" returns successfully"
Jan 17 00:17:52.041650 containerd[1466]: time="2026-01-17T00:17:52.041596359Z" level=info msg="StartContainer for \"0ac999dbe6a91dea1babeed7d469c0092b57d5ab91e7c8dcf3977e4f28649276\" returns successfully"
Jan 17 00:17:52.096185 containerd[1466]: time="2026-01-17T00:17:52.096125353Z" level=info msg="StartContainer for \"f193652c29533b3e96e4b4862201bc9e1676ccea9f1360019bdf3daebcaf8cf8\" returns successfully"
Jan 17 00:17:52.310375 kubelet[2147]: E0117 00:17:52.310334 2147 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2808572c0d\" not found" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:52.311422 kubelet[2147]: E0117 00:17:52.310505 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:52.316683 kubelet[2147]: E0117 00:17:52.315467 2147 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2808572c0d\" not found" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:52.319061 kubelet[2147]: E0117 00:17:52.318791 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:52.321517 kubelet[2147]: E0117 00:17:52.321398 2147 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2808572c0d\" not found" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:52.321925 kubelet[2147]: E0117 00:17:52.321840 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:52.366521 kubelet[2147]: E0117 00:17:52.363764 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://146.190.166.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.166.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 17 00:17:53.326551 kubelet[2147]: E0117 00:17:53.324620 2147 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2808572c0d\" not found" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:53.326551 kubelet[2147]: E0117 00:17:53.324823 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:53.326551 kubelet[2147]: E0117 00:17:53.325245 2147 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2808572c0d\" not found" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:53.326551 kubelet[2147]: E0117 00:17:53.325382 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:53.438153 kubelet[2147]: I0117 00:17:53.438112 2147 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:54.843684 kubelet[2147]: E0117 00:17:54.843610 2147 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-2808572c0d\" not found" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:54.957132 kubelet[2147]: I0117 00:17:54.956664 2147 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:54.973054 kubelet[2147]: E0117 00:17:54.972937 2147 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-2808572c0d.188b5c943094d73a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-2808572c0d,UID:ci-4081.3.6-n-2808572c0d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-2808572c0d,},FirstTimestamp:2026-01-17 00:17:50.213027642 +0000 UTC m=+0.458922801,LastTimestamp:2026-01-17 00:17:50.213027642 +0000 UTC m=+0.458922801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-2808572c0d,}"
Jan 17 00:17:55.042946 kubelet[2147]: I0117 00:17:55.042888 2147 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:55.053801 kubelet[2147]: E0117 00:17:55.053753 2147 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2808572c0d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:55.054319 kubelet[2147]: I0117 00:17:55.054024 2147 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:55.057717 kubelet[2147]: E0117 00:17:55.057655 2147 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:55.057717 kubelet[2147]: I0117 00:17:55.057703 2147 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:55.060499 kubelet[2147]: E0117 00:17:55.060411 2147 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2808572c0d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:55.211778 kubelet[2147]: I0117 00:17:55.211723 2147 apiserver.go:52] "Watching apiserver"
Jan 17 00:17:55.243407 kubelet[2147]: I0117 00:17:55.243313 2147 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 17 00:17:57.143721 kubelet[2147]: I0117 00:17:57.143669 2147 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:57.154772 kubelet[2147]: I0117 00:17:57.153847 2147 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:17:57.154772 kubelet[2147]: E0117 00:17:57.154304 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:57.212914 systemd[1]: Reloading requested from client PID 2432 ('systemctl') (unit session-7.scope)...
Jan 17 00:17:57.213391 systemd[1]: Reloading...
Jan 17 00:17:57.332935 kubelet[2147]: E0117 00:17:57.332869 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:57.346507 zram_generator::config[2471]: No configuration found.
Jan 17 00:17:57.516922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:17:57.637763 systemd[1]: Reloading finished in 423 ms.
Jan 17 00:17:57.690766 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:17:57.706056 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 00:17:57.706459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:17:57.706552 systemd[1]: kubelet.service: Consumed 1.019s CPU time, 124.4M memory peak, 0B memory swap peak.
Jan 17 00:17:57.715012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:17:57.877571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:17:57.896020 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:17:57.995927 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:17:57.995927 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:17:57.996607 kubelet[2521]: I0117 00:17:57.995995 2521 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:17:58.009219 kubelet[2521]: I0117 00:17:58.009135 2521 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 17 00:17:58.009219 kubelet[2521]: I0117 00:17:58.009200 2521 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:17:58.012905 kubelet[2521]: I0117 00:17:58.012802 2521 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 17 00:17:58.012905 kubelet[2521]: I0117 00:17:58.012870 2521 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:17:58.013283 kubelet[2521]: I0117 00:17:58.013255 2521 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 17 00:17:58.015468 kubelet[2521]: I0117 00:17:58.015427 2521 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 17 00:17:58.022699 kubelet[2521]: I0117 00:17:58.022429 2521 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:17:58.027011 kubelet[2521]: E0117 00:17:58.026953 2521 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:17:58.027661 kubelet[2521]: I0117 00:17:58.027356 2521 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:17:58.031659 kubelet[2521]: I0117 00:17:58.031627 2521 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 17 00:17:58.033318 kubelet[2521]: I0117 00:17:58.032096 2521 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:17:58.033318 kubelet[2521]: I0117 00:17:58.032145 2521 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2808572c0d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 00:17:58.033318 kubelet[2521]: I0117 00:17:58.032327 2521 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:17:58.033318 kubelet[2521]: I0117 00:17:58.032341 2521 container_manager_linux.go:306] "Creating device plugin manager"
Jan 17 00:17:58.033663 kubelet[2521]: I0117 00:17:58.032378 2521 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 17 00:17:58.035279 kubelet[2521]: I0117 00:17:58.035243 2521 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:17:58.035677 kubelet[2521]: I0117 00:17:58.035654 2521 kubelet.go:475] "Attempting to sync node with API server"
Jan 17 00:17:58.037268 kubelet[2521]: I0117 00:17:58.037229 2521 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:17:58.037468 kubelet[2521]: I0117 00:17:58.037449 2521 kubelet.go:387] "Adding apiserver pod source"
Jan 17 00:17:58.039612 kubelet[2521]: I0117 00:17:58.039541 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:17:58.057664 kubelet[2521]: I0117 00:17:58.057612 2521 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:17:58.058493 kubelet[2521]: I0117 00:17:58.058441 2521 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 17 00:17:58.058701 kubelet[2521]: I0117 00:17:58.058682 2521 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 17 00:17:58.061423 kubelet[2521]: I0117 00:17:58.061249 2521 apiserver.go:52] "Watching apiserver"
Jan 17 00:17:58.069673 kubelet[2521]: I0117 00:17:58.067832 2521 server.go:1262] "Started kubelet"
Jan 17 00:17:58.069673 kubelet[2521]: I0117 00:17:58.068905 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:17:58.070775 kubelet[2521]: I0117 00:17:58.070730 2521 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:17:58.075012 kubelet[2521]: I0117 00:17:58.074953 2521 server.go:310] "Adding debug handlers to kubelet server"
Jan 17 00:17:58.082393 kubelet[2521]: I0117 00:17:58.082316 2521 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:17:58.082786 kubelet[2521]: I0117 00:17:58.082441 2521 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 17 00:17:58.083106 kubelet[2521]: I0117 00:17:58.083079 2521 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:17:58.088131 kubelet[2521]: I0117 00:17:58.088083 2521 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:17:58.093290 kubelet[2521]: I0117 00:17:58.093252 2521 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 17 00:17:58.095856 kubelet[2521]: I0117 00:17:58.095814 2521 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 17 00:17:58.096175 kubelet[2521]: I0117 00:17:58.096158 2521 reconciler.go:29] "Reconciler: start to sync state"
Jan 17 00:17:58.102153 kubelet[2521]: I0117 00:17:58.101320 2521 factory.go:223] Registration of the systemd container factory successfully
Jan 17 00:17:58.103559 kubelet[2521]: I0117 00:17:58.103449 2521 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:17:58.106249 kubelet[2521]: I0117 00:17:58.106192 2521 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:17:58.110263 kubelet[2521]: I0117 00:17:58.110211 2521 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:17:58.111669 kubelet[2521]: I0117 00:17:58.110405 2521 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 17 00:17:58.111669 kubelet[2521]: I0117 00:17:58.110443 2521 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 17 00:17:58.112339 kubelet[2521]: E0117 00:17:58.111949 2521 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:17:58.122993 kubelet[2521]: I0117 00:17:58.122942 2521 factory.go:223] Registration of the containerd container factory successfully
Jan 17 00:17:58.138791 kubelet[2521]: E0117 00:17:58.138645 2521 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:17:58.196787 kubelet[2521]: I0117 00:17:58.196399 2521 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:17:58.196787 kubelet[2521]: I0117 00:17:58.196469 2521 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:17:58.196787 kubelet[2521]: I0117 00:17:58.196541 2521 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:17:58.197047 kubelet[2521]: I0117 00:17:58.196801 2521 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 00:17:58.197047 kubelet[2521]: I0117 00:17:58.196822 2521 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 00:17:58.197047 kubelet[2521]: I0117 00:17:58.196865 2521 policy_none.go:49] "None policy: Start"
Jan 17 00:17:58.197047 kubelet[2521]: I0117 00:17:58.196882 2521 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 17 00:17:58.197047 kubelet[2521]: I0117 00:17:58.196900 2521 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 17 00:17:58.197216 kubelet[2521]: I0117 00:17:58.197078 2521 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 17 00:17:58.197216 kubelet[2521]: I0117 00:17:58.197095 2521 policy_none.go:47] "Start"
Jan 17 00:17:58.213121 kubelet[2521]: E0117 00:17:58.212116 2521 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 00:17:58.215133 kubelet[2521]: E0117 00:17:58.215055 2521 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 17 00:17:58.215395 kubelet[2521]: I0117 00:17:58.215365 2521 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:17:58.215448 kubelet[2521]: I0117 00:17:58.215394 2521 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:17:58.219699 kubelet[2521]: I0117 00:17:58.217528 2521 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:17:58.224521 kubelet[2521]: E0117 00:17:58.222738 2521 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:17:58.269092 sudo[2557]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 17 00:17:58.270447 sudo[2557]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 17 00:17:58.329614 kubelet[2521]: I0117 00:17:58.329538 2521 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.343443 kubelet[2521]: I0117 00:17:58.341970 2521 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.343443 kubelet[2521]: I0117 00:17:58.342067 2521 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.414588 kubelet[2521]: I0117 00:17:58.413767 2521 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.414588 kubelet[2521]: I0117 00:17:58.414033 2521 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.422631 kubelet[2521]: I0117 00:17:58.422244 2521 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.441343 kubelet[2521]: I0117 00:17:58.441304 2521 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:17:58.441615 kubelet[2521]: E0117 00:17:58.441406 2521 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2808572c0d\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.444747 kubelet[2521]: I0117 00:17:58.444699 2521 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:17:58.447046 kubelet[2521]: I0117 00:17:58.446831 2521 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:17:58.488440 kubelet[2521]: I0117 00:17:58.488371 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2808572c0d" podStartSLOduration=1.488352978 podStartE2EDuration="1.488352978s" podCreationTimestamp="2026-01-17 00:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:58.467928651 +0000 UTC m=+0.561615853" watchObservedRunningTime="2026-01-17 00:17:58.488352978 +0000 UTC m=+0.582040178"
Jan 17 00:17:58.496809 kubelet[2521]: I0117 00:17:58.496744 2521 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 17 00:17:58.501904 kubelet[2521]: I0117 00:17:58.501851 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49bb737249473b69e81e359c36a07d45-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2808572c0d\" (UID: \"49bb737249473b69e81e359c36a07d45\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.501904 kubelet[2521]: I0117 00:17:58.501905 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49bb737249473b69e81e359c36a07d45-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2808572c0d\" (UID: \"49bb737249473b69e81e359c36a07d45\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.502085 kubelet[2521]: I0117 00:17:58.501941 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a265202bc9739a4592b55642c0e9b50-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" (UID: \"5a265202bc9739a4592b55642c0e9b50\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.502085 kubelet[2521]: I0117 00:17:58.501974 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a265202bc9739a4592b55642c0e9b50-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" (UID: \"5a265202bc9739a4592b55642c0e9b50\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.502085 kubelet[2521]: I0117 00:17:58.502005 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a265202bc9739a4592b55642c0e9b50-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" (UID: \"5a265202bc9739a4592b55642c0e9b50\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.502085 kubelet[2521]: I0117 00:17:58.502036 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a265202bc9739a4592b55642c0e9b50-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" (UID: \"5a265202bc9739a4592b55642c0e9b50\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.502085 kubelet[2521]: I0117 00:17:58.502060 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49bb737249473b69e81e359c36a07d45-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2808572c0d\" (UID: \"49bb737249473b69e81e359c36a07d45\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.502260 kubelet[2521]: I0117 00:17:58.502089 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a265202bc9739a4592b55642c0e9b50-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2808572c0d\" (UID: \"5a265202bc9739a4592b55642c0e9b50\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.502260 kubelet[2521]: I0117 00:17:58.502137 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/989772a545f9a3e568893f5614faacf8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2808572c0d\" (UID: \"989772a545f9a3e568893f5614faacf8\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2808572c0d"
Jan 17 00:17:58.510492 kubelet[2521]: I0117 00:17:58.510091 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2808572c0d" podStartSLOduration=0.510069029 podStartE2EDuration="510.069029ms" podCreationTimestamp="2026-01-17 00:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:58.488859098 +0000 UTC m=+0.582546298" watchObservedRunningTime="2026-01-17 00:17:58.510069029 +0000 UTC m=+0.603756231"
Jan 17 00:17:58.743532 kubelet[2521]: E0117 00:17:58.742373 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:58.746786 kubelet[2521]: E0117 00:17:58.746561 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:58.748047 kubelet[2521]: E0117 00:17:58.748003 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:59.014452 sudo[2557]: pam_unix(sudo:session): session closed for user root
Jan 17 00:17:59.018869 kubelet[2521]: I0117 00:17:59.018604 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2808572c0d" podStartSLOduration=1.018452263 podStartE2EDuration="1.018452263s" podCreationTimestamp="2026-01-17 00:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:17:58.510971891 +0000 UTC m=+0.604659096" watchObservedRunningTime="2026-01-17 00:17:59.018452263 +0000 UTC m=+1.112139463"
Jan 17 00:17:59.177533 kubelet[2521]: E0117 00:17:59.176841 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:59.177533 kubelet[2521]: E0117 00:17:59.177062 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:17:59.177533 kubelet[2521]: E0117 00:17:59.177427 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:18:00.181545 kubelet[2521]: E0117 00:18:00.181161 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:18:00.181545 kubelet[2521]: E0117 00:18:00.181162 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:18:00.935031 sudo[1650]: pam_unix(sudo:session): session closed for user root
Jan 17 00:18:00.998298 sshd[1647]: pam_unix(sshd:session): session closed for user core
Jan 17 00:18:01.005604 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:18:01.006003 systemd[1]: sshd@6-146.190.166.4:22-4.153.228.146:37970.service: Deactivated successfully.
Jan 17 00:18:01.009316 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:18:01.009988 systemd[1]: session-7.scope: Consumed 6.840s CPU time, 147.2M memory peak, 0B memory swap peak. Jan 17 00:18:01.011585 systemd-logind[1444]: Removed session 7. Jan 17 00:18:02.112126 kubelet[2521]: I0117 00:18:02.111868 2521 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:18:02.116935 containerd[1466]: time="2026-01-17T00:18:02.116717766Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:18:02.117659 kubelet[2521]: I0117 00:18:02.117308 2521 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:18:02.968814 systemd[1]: Created slice kubepods-besteffort-pod90a81eb9_9888_4e13_937d_f5effd24e00a.slice - libcontainer container kubepods-besteffort-pod90a81eb9_9888_4e13_937d_f5effd24e00a.slice. Jan 17 00:18:02.982944 systemd[1]: Created slice kubepods-burstable-pod43e91e1a_3d13_42d9_b038_1c8cbbe61a3c.slice - libcontainer container kubepods-burstable-pod43e91e1a_3d13_42d9_b038_1c8cbbe61a3c.slice. Jan 17 00:18:03.040803 kubelet[2521]: I0117 00:18:03.040749 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/90a81eb9-9888-4e13-937d-f5effd24e00a-kube-proxy\") pod \"kube-proxy-kphbf\" (UID: \"90a81eb9-9888-4e13-937d-f5effd24e00a\") " pod="kube-system/kube-proxy-kphbf" Jan 17 00:18:03.042598 kubelet[2521]: I0117 00:18:03.041286 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90a81eb9-9888-4e13-937d-f5effd24e00a-xtables-lock\") pod \"kube-proxy-kphbf\" (UID: \"90a81eb9-9888-4e13-937d-f5effd24e00a\") " pod="kube-system/kube-proxy-kphbf" Jan 17 00:18:03.042598 kubelet[2521]: I0117 00:18:03.041381 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90a81eb9-9888-4e13-937d-f5effd24e00a-lib-modules\") pod \"kube-proxy-kphbf\" (UID: \"90a81eb9-9888-4e13-937d-f5effd24e00a\") " pod="kube-system/kube-proxy-kphbf" Jan 17 00:18:03.042598 kubelet[2521]: I0117 00:18:03.041415 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-run\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.042598 kubelet[2521]: I0117 00:18:03.041440 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-host-proc-sys-kernel\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.042598 kubelet[2521]: I0117 00:18:03.041464 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-hubble-tls\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.042598 kubelet[2521]: I0117 00:18:03.041512 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-bpf-maps\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.043046 kubelet[2521]: I0117 00:18:03.041541 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-cgroup\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.043046 kubelet[2521]: I0117 00:18:03.041567 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-xtables-lock\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.043046 kubelet[2521]: I0117 00:18:03.041592 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-hostproc\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.043046 kubelet[2521]: I0117 00:18:03.041614 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-etc-cni-netd\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.043046 kubelet[2521]: I0117 00:18:03.041654 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-lib-modules\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.043046 kubelet[2521]: I0117 00:18:03.041678 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-clustermesh-secrets\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.043337 kubelet[2521]: I0117 00:18:03.041700 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-config-path\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.043337 kubelet[2521]: I0117 00:18:03.041724 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-host-proc-sys-net\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.043337 kubelet[2521]: I0117 00:18:03.041747 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csq2q\" (UniqueName: \"kubernetes.io/projected/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-kube-api-access-csq2q\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " 
pod="kube-system/cilium-tlwts" Jan 17 00:18:03.043337 kubelet[2521]: I0117 00:18:03.041774 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55mf8\" (UniqueName: \"kubernetes.io/projected/90a81eb9-9888-4e13-937d-f5effd24e00a-kube-api-access-55mf8\") pod \"kube-proxy-kphbf\" (UID: \"90a81eb9-9888-4e13-937d-f5effd24e00a\") " pod="kube-system/kube-proxy-kphbf" Jan 17 00:18:03.043337 kubelet[2521]: I0117 00:18:03.041819 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cni-path\") pod \"cilium-tlwts\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " pod="kube-system/cilium-tlwts" Jan 17 00:18:03.281241 kubelet[2521]: E0117 00:18:03.280746 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:03.290688 containerd[1466]: time="2026-01-17T00:18:03.290594581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kphbf,Uid:90a81eb9-9888-4e13-937d-f5effd24e00a,Namespace:kube-system,Attempt:0,}" Jan 17 00:18:03.295724 kubelet[2521]: E0117 00:18:03.295676 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:03.297911 containerd[1466]: time="2026-01-17T00:18:03.297301300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tlwts,Uid:43e91e1a-3d13-42d9-b038-1c8cbbe61a3c,Namespace:kube-system,Attempt:0,}" Jan 17 00:18:03.316089 systemd[1]: Created slice kubepods-besteffort-podad726539_bb7c_427f_84e4_554e45e556be.slice - libcontainer container kubepods-besteffort-podad726539_bb7c_427f_84e4_554e45e556be.slice. Jan 17 00:18:03.373218 containerd[1466]: time="2026-01-17T00:18:03.371387708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:03.373218 containerd[1466]: time="2026-01-17T00:18:03.373058498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:03.374212 containerd[1466]: time="2026-01-17T00:18:03.373144443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:03.374985 containerd[1466]: time="2026-01-17T00:18:03.374006176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:03.414729 systemd[1]: Started cri-containerd-25f146eac4de91b200fa90f446d3c1bc0bb49e39caa432bff1c0478cc45d24ef.scope - libcontainer container 25f146eac4de91b200fa90f446d3c1bc0bb49e39caa432bff1c0478cc45d24ef. 
Jan 17 00:18:03.434230 kubelet[2521]: I0117 00:18:03.434172 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad726539-bb7c-427f-84e4-554e45e556be-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-vpfl9\" (UID: \"ad726539-bb7c-427f-84e4-554e45e556be\") " pod="kube-system/cilium-operator-6f9c7c5859-vpfl9" Jan 17 00:18:03.435878 kubelet[2521]: I0117 00:18:03.435717 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb74x\" (UniqueName: \"kubernetes.io/projected/ad726539-bb7c-427f-84e4-554e45e556be-kube-api-access-qb74x\") pod \"cilium-operator-6f9c7c5859-vpfl9\" (UID: \"ad726539-bb7c-427f-84e4-554e45e556be\") " pod="kube-system/cilium-operator-6f9c7c5859-vpfl9" Jan 17 00:18:03.469202 containerd[1466]: time="2026-01-17T00:18:03.459964540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:03.469202 containerd[1466]: time="2026-01-17T00:18:03.460116447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:03.469202 containerd[1466]: time="2026-01-17T00:18:03.460524056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:03.469202 containerd[1466]: time="2026-01-17T00:18:03.460879789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:03.505095 kubelet[2521]: E0117 00:18:03.505040 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:03.539763 systemd[1]: Started cri-containerd-fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5.scope - libcontainer container fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5. 
Jan 17 00:18:03.657718 containerd[1466]: time="2026-01-17T00:18:03.657595314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kphbf,Uid:90a81eb9-9888-4e13-937d-f5effd24e00a,Namespace:kube-system,Attempt:0,} returns sandbox id \"25f146eac4de91b200fa90f446d3c1bc0bb49e39caa432bff1c0478cc45d24ef\"" Jan 17 00:18:03.660627 kubelet[2521]: E0117 00:18:03.660312 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:03.664094 containerd[1466]: time="2026-01-17T00:18:03.663421598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tlwts,Uid:43e91e1a-3d13-42d9-b038-1c8cbbe61a3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\"" Jan 17 00:18:03.667514 kubelet[2521]: E0117 00:18:03.667148 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:03.671568 containerd[1466]: time="2026-01-17T00:18:03.671296879Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:18:03.676965 containerd[1466]: time="2026-01-17T00:18:03.676906863Z" level=info msg="CreateContainer within sandbox \"25f146eac4de91b200fa90f446d3c1bc0bb49e39caa432bff1c0478cc45d24ef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:18:03.697134 containerd[1466]: time="2026-01-17T00:18:03.697061699Z" level=info msg="CreateContainer within sandbox \"25f146eac4de91b200fa90f446d3c1bc0bb49e39caa432bff1c0478cc45d24ef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"adc193a805cd797e770d82a9ba7e29ec9827bf5f9e0fa130f94573c26029dfe9\"" Jan 17 00:18:03.699526 containerd[1466]: time="2026-01-17T00:18:03.699269821Z" level=info msg="StartContainer for \"adc193a805cd797e770d82a9ba7e29ec9827bf5f9e0fa130f94573c26029dfe9\"" Jan 17 00:18:03.742837 systemd[1]: Started cri-containerd-adc193a805cd797e770d82a9ba7e29ec9827bf5f9e0fa130f94573c26029dfe9.scope - libcontainer container adc193a805cd797e770d82a9ba7e29ec9827bf5f9e0fa130f94573c26029dfe9. Jan 17 00:18:03.784535 containerd[1466]: time="2026-01-17T00:18:03.784429319Z" level=info msg="StartContainer for \"adc193a805cd797e770d82a9ba7e29ec9827bf5f9e0fa130f94573c26029dfe9\" returns successfully" Jan 17 00:18:03.945600 kubelet[2521]: E0117 00:18:03.944746 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:03.948566 containerd[1466]: time="2026-01-17T00:18:03.946106605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-vpfl9,Uid:ad726539-bb7c-427f-84e4-554e45e556be,Namespace:kube-system,Attempt:0,}" Jan 17 00:18:03.991856 containerd[1466]: time="2026-01-17T00:18:03.989247509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:03.991856 containerd[1466]: time="2026-01-17T00:18:03.989340786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:03.991856 containerd[1466]: time="2026-01-17T00:18:03.989359652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:03.993947 containerd[1466]: time="2026-01-17T00:18:03.993380722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:04.023794 systemd[1]: Started cri-containerd-700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2.scope - libcontainer container 700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2. Jan 17 00:18:04.109198 containerd[1466]: time="2026-01-17T00:18:04.109120361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-vpfl9,Uid:ad726539-bb7c-427f-84e4-554e45e556be,Namespace:kube-system,Attempt:0,} returns sandbox id \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\"" Jan 17 00:18:04.111159 kubelet[2521]: E0117 00:18:04.111122 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:04.200036 kubelet[2521]: E0117 00:18:04.199820 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:04.200469 kubelet[2521]: E0117 00:18:04.200433 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:04.241309 kubelet[2521]: I0117 00:18:04.239580 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kphbf" podStartSLOduration=2.239560367 podStartE2EDuration="2.239560367s" podCreationTimestamp="2026-01-17 00:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:18:04.226310833 +0000 UTC m=+6.319998035" watchObservedRunningTime="2026-01-17 00:18:04.239560367 +0000 UTC m=+6.333247562" Jan 17 00:18:05.203717 kubelet[2521]: E0117 00:18:05.203461 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:05.456733 update_engine[1445]: I20260117 00:18:05.456427 1445 update_attempter.cc:509] Updating boot flags... 
Jan 17 00:18:05.511218 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2895) Jan 17 00:18:05.636015 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2792) Jan 17 00:18:06.418465 kubelet[2521]: E0117 00:18:06.418410 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:07.211561 kubelet[2521]: E0117 00:18:07.211289 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:08.729550 kubelet[2521]: E0117 00:18:08.729454 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:09.216631 kubelet[2521]: E0117 00:18:09.216430 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:09.446805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371821900.mount: Deactivated successfully. Jan 17 00:18:12.019977 containerd[1466]: time="2026-01-17T00:18:12.019912784Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:12.021427 containerd[1466]: time="2026-01-17T00:18:12.021357624Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 00:18:12.022442 containerd[1466]: time="2026-01-17T00:18:12.021848452Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:12.023954 containerd[1466]: time="2026-01-17T00:18:12.023911264Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.352567843s" Jan 17 00:18:12.023954 containerd[1466]: time="2026-01-17T00:18:12.023956647Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 00:18:12.026859 containerd[1466]: time="2026-01-17T00:18:12.026808885Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:18:12.029876 containerd[1466]: time="2026-01-17T00:18:12.029833173Z" level=info msg="CreateContainer within sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:18:12.149790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303550815.mount: Deactivated successfully. 
Jan 17 00:18:12.154695 containerd[1466]: time="2026-01-17T00:18:12.154624871Z" level=info msg="CreateContainer within sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\"" Jan 17 00:18:12.156847 containerd[1466]: time="2026-01-17T00:18:12.156798895Z" level=info msg="StartContainer for \"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\"" Jan 17 00:18:12.287795 systemd[1]: Started cri-containerd-bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3.scope - libcontainer container bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3. Jan 17 00:18:12.325940 containerd[1466]: time="2026-01-17T00:18:12.325837414Z" level=info msg="StartContainer for \"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\" returns successfully" Jan 17 00:18:12.345136 systemd[1]: cri-containerd-bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3.scope: Deactivated successfully. Jan 17 00:18:12.453096 containerd[1466]: time="2026-01-17T00:18:12.433983898Z" level=info msg="shim disconnected" id=bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3 namespace=k8s.io Jan 17 00:18:12.453096 containerd[1466]: time="2026-01-17T00:18:12.453080925Z" level=warning msg="cleaning up after shim disconnected" id=bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3 namespace=k8s.io Jan 17 00:18:12.453096 containerd[1466]: time="2026-01-17T00:18:12.453105505Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:18:12.472378 containerd[1466]: time="2026-01-17T00:18:12.472307094Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:18:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:18:13.145182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3-rootfs.mount: Deactivated successfully. Jan 17 00:18:13.238865 kubelet[2521]: E0117 00:18:13.237846 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:13.249497 containerd[1466]: time="2026-01-17T00:18:13.249423381Z" level=info msg="CreateContainer within sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:18:13.271043 containerd[1466]: time="2026-01-17T00:18:13.270891349Z" level=info msg="CreateContainer within sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\"" Jan 17 00:18:13.272447 containerd[1466]: time="2026-01-17T00:18:13.272383859Z" level=info msg="StartContainer for \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\"" Jan 17 00:18:13.321746 systemd[1]: Started cri-containerd-b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c.scope - libcontainer container b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c. 
Jan 17 00:18:13.388152 containerd[1466]: time="2026-01-17T00:18:13.386738409Z" level=info msg="StartContainer for \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\" returns successfully" Jan 17 00:18:13.409666 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:18:13.410123 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:18:13.410208 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:18:13.417115 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:18:13.421821 systemd[1]: cri-containerd-b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c.scope: Deactivated successfully. Jan 17 00:18:13.497587 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:18:13.523371 containerd[1466]: time="2026-01-17T00:18:13.523296354Z" level=info msg="shim disconnected" id=b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c namespace=k8s.io Jan 17 00:18:13.523371 containerd[1466]: time="2026-01-17T00:18:13.523362846Z" level=warning msg="cleaning up after shim disconnected" id=b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c namespace=k8s.io Jan 17 00:18:13.523371 containerd[1466]: time="2026-01-17T00:18:13.523373603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:18:14.047947 containerd[1466]: time="2026-01-17T00:18:14.047876939Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:14.048941 containerd[1466]: time="2026-01-17T00:18:14.048885979Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 17 00:18:14.050658 containerd[1466]: time="2026-01-17T00:18:14.049191068Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:18:14.051194 containerd[1466]: time="2026-01-17T00:18:14.051162283Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.023048613s" Jan 17 00:18:14.051298 containerd[1466]: time="2026-01-17T00:18:14.051283096Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 00:18:14.056387 containerd[1466]: time="2026-01-17T00:18:14.056164018Z" level=info msg="CreateContainer within sandbox \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:18:14.068982 containerd[1466]: time="2026-01-17T00:18:14.068924919Z" level=info msg="CreateContainer within sandbox \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\"" Jan 17 00:18:14.071570 containerd[1466]: time="2026-01-17T00:18:14.071508639Z" level=info msg="StartContainer for \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\"" Jan 17 00:18:14.112138 systemd[1]: Started cri-containerd-40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f.scope - libcontainer container 40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f. Jan 17 00:18:14.152066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c-rootfs.mount: Deactivated successfully. Jan 17 00:18:14.168933 containerd[1466]: time="2026-01-17T00:18:14.168866854Z" level=info msg="StartContainer for \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\" returns successfully" Jan 17 00:18:14.243271 kubelet[2521]: E0117 00:18:14.243213 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:14.252077 kubelet[2521]: E0117 00:18:14.252006 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:14.257179 containerd[1466]: time="2026-01-17T00:18:14.257114503Z" level=info msg="CreateContainer within sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:18:14.293164 containerd[1466]: time="2026-01-17T00:18:14.293098852Z" level=info msg="CreateContainer within sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\"" Jan 17 00:18:14.295232 containerd[1466]: time="2026-01-17T00:18:14.295176635Z" level=info msg="StartContainer for \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\"" Jan 17 00:18:14.342391 kubelet[2521]: I0117 00:18:14.342105 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-vpfl9" podStartSLOduration=1.404671972 podStartE2EDuration="11.342079855s" podCreationTimestamp="2026-01-17 00:18:03 +0000 UTC" firstStartedPulling="2026-01-17 00:18:04.114980252 +0000 UTC m=+6.208667436" lastFinishedPulling="2026-01-17 00:18:14.05238814 +0000 UTC m=+16.146075319" observedRunningTime="2026-01-17 00:18:14.291069248 +0000 UTC m=+16.384756446" watchObservedRunningTime="2026-01-17 00:18:14.342079855 +0000 UTC m=+16.435767058" Jan 17 00:18:14.373044 systemd[1]: Started cri-containerd-2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029.scope - libcontainer container 2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029. Jan 17 00:18:14.451963 containerd[1466]: time="2026-01-17T00:18:14.451882128Z" level=info msg="StartContainer for \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\" returns successfully" Jan 17 00:18:14.465232 systemd[1]: cri-containerd-2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029.scope: Deactivated successfully. 
Jan 17 00:18:14.543901 containerd[1466]: time="2026-01-17T00:18:14.543763233Z" level=info msg="shim disconnected" id=2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029 namespace=k8s.io Jan 17 00:18:14.543901 containerd[1466]: time="2026-01-17T00:18:14.543820374Z" level=warning msg="cleaning up after shim disconnected" id=2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029 namespace=k8s.io Jan 17 00:18:14.543901 containerd[1466]: time="2026-01-17T00:18:14.543829141Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:18:15.146625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029-rootfs.mount: Deactivated successfully. Jan 17 00:18:15.261856 kubelet[2521]: E0117 00:18:15.261502 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:15.264825 kubelet[2521]: E0117 00:18:15.264549 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:15.268936 containerd[1466]: time="2026-01-17T00:18:15.268878021Z" level=info msg="CreateContainer within sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:18:15.311526 containerd[1466]: time="2026-01-17T00:18:15.306812193Z" level=info msg="CreateContainer within sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\"" Jan 17 00:18:15.311526 containerd[1466]: time="2026-01-17T00:18:15.309070197Z" level=info msg="StartContainer for \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\"" Jan 17 00:18:15.311271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285764317.mount: Deactivated successfully. Jan 17 00:18:15.397976 systemd[1]: Started cri-containerd-a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa.scope - libcontainer container a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa. Jan 17 00:18:15.476930 containerd[1466]: time="2026-01-17T00:18:15.476875008Z" level=info msg="StartContainer for \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\" returns successfully" Jan 17 00:18:15.480558 systemd[1]: cri-containerd-a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa.scope: Deactivated successfully. Jan 17 00:18:15.516812 containerd[1466]: time="2026-01-17T00:18:15.516705873Z" level=info msg="shim disconnected" id=a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa namespace=k8s.io Jan 17 00:18:15.516812 containerd[1466]: time="2026-01-17T00:18:15.516795213Z" level=warning msg="cleaning up after shim disconnected" id=a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa namespace=k8s.io Jan 17 00:18:15.516812 containerd[1466]: time="2026-01-17T00:18:15.516809275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:18:16.145959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa-rootfs.mount: Deactivated successfully. 
Jan 17 00:18:16.268554 kubelet[2521]: E0117 00:18:16.268062 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:16.276920 containerd[1466]: time="2026-01-17T00:18:16.276496001Z" level=info msg="CreateContainer within sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:18:16.307372 containerd[1466]: time="2026-01-17T00:18:16.304910679Z" level=info msg="CreateContainer within sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\"" Jan 17 00:18:16.307372 containerd[1466]: time="2026-01-17T00:18:16.306420172Z" level=info msg="StartContainer for \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\"" Jan 17 00:18:16.375992 systemd[1]: Started cri-containerd-5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9.scope - libcontainer container 5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9. Jan 17 00:18:16.476909 containerd[1466]: time="2026-01-17T00:18:16.476863893Z" level=info msg="StartContainer for \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\" returns successfully" Jan 17 00:18:16.702314 kubelet[2521]: I0117 00:18:16.702268 2521 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 17 00:18:16.763683 systemd[1]: Created slice kubepods-burstable-podd9f78d6f_92de_4dea_a70b_611b949be750.slice - libcontainer container kubepods-burstable-podd9f78d6f_92de_4dea_a70b_611b949be750.slice. Jan 17 00:18:16.772675 systemd[1]: Created slice kubepods-burstable-podcc4a0ea5_6b09_45b8_816a_a9e71df523c1.slice - libcontainer container kubepods-burstable-podcc4a0ea5_6b09_45b8_816a_a9e71df523c1.slice. 
Jan 17 00:18:16.864633 kubelet[2521]: I0117 00:18:16.864563 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc4a0ea5-6b09-45b8-816a-a9e71df523c1-config-volume\") pod \"coredns-66bc5c9577-lwhmm\" (UID: \"cc4a0ea5-6b09-45b8-816a-a9e71df523c1\") " pod="kube-system/coredns-66bc5c9577-lwhmm" Jan 17 00:18:16.864633 kubelet[2521]: I0117 00:18:16.864622 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt8v7\" (UniqueName: \"kubernetes.io/projected/d9f78d6f-92de-4dea-a70b-611b949be750-kube-api-access-mt8v7\") pod \"coredns-66bc5c9577-w64f6\" (UID: \"d9f78d6f-92de-4dea-a70b-611b949be750\") " pod="kube-system/coredns-66bc5c9577-w64f6" Jan 17 00:18:16.864633 kubelet[2521]: I0117 00:18:16.864647 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8xc\" (UniqueName: \"kubernetes.io/projected/cc4a0ea5-6b09-45b8-816a-a9e71df523c1-kube-api-access-fg8xc\") pod \"coredns-66bc5c9577-lwhmm\" (UID: \"cc4a0ea5-6b09-45b8-816a-a9e71df523c1\") " pod="kube-system/coredns-66bc5c9577-lwhmm" Jan 17 00:18:16.865060 kubelet[2521]: I0117 00:18:16.864665 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f78d6f-92de-4dea-a70b-611b949be750-config-volume\") pod \"coredns-66bc5c9577-w64f6\" (UID: \"d9f78d6f-92de-4dea-a70b-611b949be750\") " pod="kube-system/coredns-66bc5c9577-w64f6" Jan 17 00:18:17.075013 kubelet[2521]: E0117 00:18:17.074426 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:17.077244 containerd[1466]: time="2026-01-17T00:18:17.077178982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w64f6,Uid:d9f78d6f-92de-4dea-a70b-611b949be750,Namespace:kube-system,Attempt:0,}" Jan 17 00:18:17.082966 kubelet[2521]: E0117 00:18:17.081465 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:17.086158 containerd[1466]: time="2026-01-17T00:18:17.085644531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lwhmm,Uid:cc4a0ea5-6b09-45b8-816a-a9e71df523c1,Namespace:kube-system,Attempt:0,}" Jan 17 00:18:17.279020 kubelet[2521]: E0117 00:18:17.278979 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:18.282279 kubelet[2521]: E0117 00:18:18.282230 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:18.988642 systemd-networkd[1368]: cilium_host: Link UP Jan 17 00:18:18.989438 systemd-networkd[1368]: cilium_net: Link UP Jan 17 00:18:18.989743 systemd-networkd[1368]: cilium_net: Gained carrier Jan 17 00:18:18.989902 systemd-networkd[1368]: cilium_host: Gained carrier Jan 17 00:18:19.161116 systemd-networkd[1368]: cilium_vxlan: Link UP Jan 17 00:18:19.161124 systemd-networkd[1368]: cilium_vxlan: Gained carrier Jan 17 00:18:19.284866 
kubelet[2521]: E0117 00:18:19.284714 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:19.326759 systemd-networkd[1368]: cilium_host: Gained IPv6LL Jan 17 00:18:19.589520 kernel: NET: Registered PF_ALG protocol family Jan 17 00:18:19.622715 systemd-networkd[1368]: cilium_net: Gained IPv6LL Jan 17 00:18:20.573920 systemd-networkd[1368]: lxc_health: Link UP Jan 17 00:18:20.583258 systemd-networkd[1368]: lxc_health: Gained carrier Jan 17 00:18:21.183445 systemd-networkd[1368]: lxc159f9daf4e34: Link UP Jan 17 00:18:21.187211 kernel: eth0: renamed from tmpb93a5 Jan 17 00:18:21.198223 systemd-networkd[1368]: lxc159f9daf4e34: Gained carrier Jan 17 00:18:21.216209 systemd-networkd[1368]: lxc8760b766afa5: Link UP Jan 17 00:18:21.220899 kernel: eth0: renamed from tmp617c7 Jan 17 00:18:21.224839 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL Jan 17 00:18:21.227666 systemd-networkd[1368]: lxc8760b766afa5: Gained carrier Jan 17 00:18:21.301187 kubelet[2521]: E0117 00:18:21.300814 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:21.330289 kubelet[2521]: I0117 00:18:21.330218 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tlwts" podStartSLOduration=10.974143782 podStartE2EDuration="19.330196195s" podCreationTimestamp="2026-01-17 00:18:02 +0000 UTC" firstStartedPulling="2026-01-17 00:18:03.669849197 +0000 UTC m=+5.763536396" lastFinishedPulling="2026-01-17 00:18:12.025901629 +0000 UTC m=+14.119588809" observedRunningTime="2026-01-17 00:18:17.33853206 +0000 UTC m=+19.432219270" watchObservedRunningTime="2026-01-17 00:18:21.330196195 +0000 UTC m=+23.423883399" Jan 17 00:18:22.246729 systemd-networkd[1368]: lxc_health: Gained IPv6LL Jan 17 00:18:22.304516 kubelet[2521]: E0117 00:18:22.303238 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:22.312194 systemd-networkd[1368]: lxc159f9daf4e34: Gained IPv6LL Jan 17 00:18:22.759258 systemd-networkd[1368]: lxc8760b766afa5: Gained IPv6LL Jan 17 00:18:26.260240 containerd[1466]: time="2026-01-17T00:18:26.259798532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:26.260240 containerd[1466]: time="2026-01-17T00:18:26.259973965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:26.260240 containerd[1466]: time="2026-01-17T00:18:26.260018058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:26.260240 containerd[1466]: time="2026-01-17T00:18:26.260133979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:26.307680 systemd[1]: Started cri-containerd-617c7ded015268625f65da954436f51e5b636f491a8f773c2dd16d837fd12797.scope - libcontainer container 617c7ded015268625f65da954436f51e5b636f491a8f773c2dd16d837fd12797. 
Jan 17 00:18:26.350460 containerd[1466]: time="2026-01-17T00:18:26.350327506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:18:26.351124 containerd[1466]: time="2026-01-17T00:18:26.351071626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:18:26.351274 containerd[1466]: time="2026-01-17T00:18:26.351222581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:26.352261 containerd[1466]: time="2026-01-17T00:18:26.352137403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:18:26.381955 containerd[1466]: time="2026-01-17T00:18:26.380906510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lwhmm,Uid:cc4a0ea5-6b09-45b8-816a-a9e71df523c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"617c7ded015268625f65da954436f51e5b636f491a8f773c2dd16d837fd12797\"" Jan 17 00:18:26.382101 kubelet[2521]: E0117 00:18:26.381754 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:26.389654 systemd[1]: Started cri-containerd-b93a51edc101668a55a2683ff06be7fb34807a5fc38b62b3a21d572f8a312df6.scope - libcontainer container b93a51edc101668a55a2683ff06be7fb34807a5fc38b62b3a21d572f8a312df6. Jan 17 00:18:26.403267 containerd[1466]: time="2026-01-17T00:18:26.403170012Z" level=info msg="CreateContainer within sandbox \"617c7ded015268625f65da954436f51e5b636f491a8f773c2dd16d837fd12797\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:18:26.421659 containerd[1466]: time="2026-01-17T00:18:26.421570056Z" level=info msg="CreateContainer within sandbox \"617c7ded015268625f65da954436f51e5b636f491a8f773c2dd16d837fd12797\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8678e7c749d6ccea193258b485e8cf1ff7cf523ff35196708f7e1fd4e7bce6b3\"" Jan 17 00:18:26.424244 containerd[1466]: time="2026-01-17T00:18:26.423680671Z" level=info msg="StartContainer for \"8678e7c749d6ccea193258b485e8cf1ff7cf523ff35196708f7e1fd4e7bce6b3\"" Jan 17 00:18:26.471089 systemd[1]: Started cri-containerd-8678e7c749d6ccea193258b485e8cf1ff7cf523ff35196708f7e1fd4e7bce6b3.scope - libcontainer container 8678e7c749d6ccea193258b485e8cf1ff7cf523ff35196708f7e1fd4e7bce6b3. 
Jan 17 00:18:26.501842 containerd[1466]: time="2026-01-17T00:18:26.501804156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w64f6,Uid:d9f78d6f-92de-4dea-a70b-611b949be750,Namespace:kube-system,Attempt:0,} returns sandbox id \"b93a51edc101668a55a2683ff06be7fb34807a5fc38b62b3a21d572f8a312df6\"" Jan 17 00:18:26.503434 kubelet[2521]: E0117 00:18:26.503407 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:26.511277 containerd[1466]: time="2026-01-17T00:18:26.511143237Z" level=info msg="CreateContainer within sandbox \"b93a51edc101668a55a2683ff06be7fb34807a5fc38b62b3a21d572f8a312df6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:18:26.521351 containerd[1466]: time="2026-01-17T00:18:26.520781121Z" level=info msg="StartContainer for \"8678e7c749d6ccea193258b485e8cf1ff7cf523ff35196708f7e1fd4e7bce6b3\" returns successfully" Jan 17 00:18:26.531367 containerd[1466]: time="2026-01-17T00:18:26.531226918Z" level=info msg="CreateContainer within sandbox \"b93a51edc101668a55a2683ff06be7fb34807a5fc38b62b3a21d572f8a312df6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b731460e9483d36a6cb78e664fd7de2a9fabf448f0b1e532750722ba6199d658\"" Jan 17 00:18:26.533437 containerd[1466]: time="2026-01-17T00:18:26.533387224Z" level=info msg="StartContainer for \"b731460e9483d36a6cb78e664fd7de2a9fabf448f0b1e532750722ba6199d658\"" Jan 17 00:18:26.573793 systemd[1]: Started cri-containerd-b731460e9483d36a6cb78e664fd7de2a9fabf448f0b1e532750722ba6199d658.scope - libcontainer container b731460e9483d36a6cb78e664fd7de2a9fabf448f0b1e532750722ba6199d658. Jan 17 00:18:26.616366 containerd[1466]: time="2026-01-17T00:18:26.616301573Z" level=info msg="StartContainer for \"b731460e9483d36a6cb78e664fd7de2a9fabf448f0b1e532750722ba6199d658\" returns successfully" Jan 17 00:18:27.271504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1599363909.mount: Deactivated successfully. 
Jan 17 00:18:27.320826 kubelet[2521]: E0117 00:18:27.320783 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:27.328860 kubelet[2521]: E0117 00:18:27.328809 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:27.342076 kubelet[2521]: I0117 00:18:27.341384 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lwhmm" podStartSLOduration=24.34136645 podStartE2EDuration="24.34136645s" podCreationTimestamp="2026-01-17 00:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:18:27.340157354 +0000 UTC m=+29.433844555" watchObservedRunningTime="2026-01-17 00:18:27.34136645 +0000 UTC m=+29.435053651" Jan 17 00:18:27.380275 kubelet[2521]: I0117 00:18:27.380153 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w64f6" podStartSLOduration=24.380133282 podStartE2EDuration="24.380133282s" podCreationTimestamp="2026-01-17 00:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:18:27.379791033 +0000 UTC m=+29.473478235" watchObservedRunningTime="2026-01-17 00:18:27.380133282 +0000 UTC m=+29.473820483" Jan 17 00:18:28.331456 kubelet[2521]: E0117 00:18:28.329701 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:28.331456 kubelet[2521]: E0117 00:18:28.330145 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:29.332122 kubelet[2521]: E0117 00:18:29.332072 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:29.334068 kubelet[2521]: E0117 00:18:29.334033 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:18:43.880891 systemd[1]: Started sshd@7-146.190.166.4:22-4.153.228.146:40998.service - OpenSSH per-connection server daemon (4.153.228.146:40998). Jan 17 00:18:44.365879 sshd[3923]: Accepted publickey for core from 4.153.228.146 port 40998 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:18:44.368855 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:18:44.377436 systemd-logind[1444]: New session 8 of user core. Jan 17 00:18:44.382750 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:18:45.265820 sshd[3923]: pam_unix(sshd:session): session closed for user core Jan 17 00:18:45.272245 systemd[1]: sshd@7-146.190.166.4:22-4.153.228.146:40998.service: Deactivated successfully. Jan 17 00:18:45.275946 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:18:45.278124 systemd-logind[1444]: Session 8 logged out. 
Waiting for processes to exit. Jan 17 00:18:45.279562 systemd-logind[1444]: Removed session 8. Jan 17 00:18:50.352007 systemd[1]: Started sshd@8-146.190.166.4:22-4.153.228.146:48828.service - OpenSSH per-connection server daemon (4.153.228.146:48828). Jan 17 00:18:50.776467 sshd[3937]: Accepted publickey for core from 4.153.228.146 port 48828 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:18:50.778978 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:18:50.785900 systemd-logind[1444]: New session 9 of user core. Jan 17 00:18:50.790725 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:18:51.204749 sshd[3937]: pam_unix(sshd:session): session closed for user core Jan 17 00:18:51.211781 systemd[1]: sshd@8-146.190.166.4:22-4.153.228.146:48828.service: Deactivated successfully. Jan 17 00:18:51.214503 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:18:51.215807 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:18:51.217150 systemd-logind[1444]: Removed session 9. Jan 17 00:18:56.279860 systemd[1]: Started sshd@9-146.190.166.4:22-4.153.228.146:35092.service - OpenSSH per-connection server daemon (4.153.228.146:35092). Jan 17 00:18:56.713709 sshd[3950]: Accepted publickey for core from 4.153.228.146 port 35092 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:18:56.715542 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:18:56.721716 systemd-logind[1444]: New session 10 of user core. Jan 17 00:18:56.728801 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:18:57.128831 sshd[3950]: pam_unix(sshd:session): session closed for user core Jan 17 00:18:57.134414 systemd[1]: sshd@9-146.190.166.4:22-4.153.228.146:35092.service: Deactivated successfully. Jan 17 00:18:57.136996 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:18:57.138075 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:18:57.139363 systemd-logind[1444]: Removed session 10. Jan 17 00:19:02.208071 systemd[1]: Started sshd@10-146.190.166.4:22-4.153.228.146:35106.service - OpenSSH per-connection server daemon (4.153.228.146:35106). Jan 17 00:19:02.597798 sshd[3967]: Accepted publickey for core from 4.153.228.146 port 35106 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:02.600224 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:02.607837 systemd-logind[1444]: New session 11 of user core. Jan 17 00:19:02.613903 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:19:02.964889 sshd[3967]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:02.971422 systemd[1]: sshd@10-146.190.166.4:22-4.153.228.146:35106.service: Deactivated successfully. Jan 17 00:19:02.978968 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:19:02.984380 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:19:02.990875 systemd-logind[1444]: Removed session 11. Jan 17 00:19:03.054881 systemd[1]: Started sshd@11-146.190.166.4:22-4.153.228.146:35118.service - OpenSSH per-connection server daemon (4.153.228.146:35118). 
Jan 17 00:19:03.497570 sshd[3981]: Accepted publickey for core from 4.153.228.146 port 35118 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:03.499818 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:03.504941 systemd-logind[1444]: New session 12 of user core. Jan 17 00:19:03.509805 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:19:03.946277 sshd[3981]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:03.953765 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:19:03.954752 systemd[1]: sshd@11-146.190.166.4:22-4.153.228.146:35118.service: Deactivated successfully. Jan 17 00:19:03.957999 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:19:03.960055 systemd-logind[1444]: Removed session 12. Jan 17 00:19:04.014584 systemd[1]: Started sshd@12-146.190.166.4:22-4.153.228.146:35126.service - OpenSSH per-connection server daemon (4.153.228.146:35126). Jan 17 00:19:04.415132 sshd[3992]: Accepted publickey for core from 4.153.228.146 port 35126 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:04.417469 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:04.425728 systemd-logind[1444]: New session 13 of user core. Jan 17 00:19:04.435812 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:19:04.790860 sshd[3992]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:04.795339 systemd[1]: sshd@12-146.190.166.4:22-4.153.228.146:35126.service: Deactivated successfully. Jan 17 00:19:04.798329 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:19:04.799437 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:19:04.801760 systemd-logind[1444]: Removed session 13. Jan 17 00:19:08.112597 kubelet[2521]: E0117 00:19:08.112000 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:09.867273 systemd[1]: Started sshd@13-146.190.166.4:22-4.153.228.146:50234.service - OpenSSH per-connection server daemon (4.153.228.146:50234). Jan 17 00:19:10.276528 sshd[4006]: Accepted publickey for core from 4.153.228.146 port 50234 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:10.278900 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:10.285673 systemd-logind[1444]: New session 14 of user core. Jan 17 00:19:10.289746 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:19:10.651054 sshd[4006]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:10.656373 systemd[1]: sshd@13-146.190.166.4:22-4.153.228.146:50234.service: Deactivated successfully. Jan 17 00:19:10.659091 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:19:10.660090 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:19:10.661670 systemd-logind[1444]: Removed session 14. 
Jan 17 00:19:14.113343 kubelet[2521]: E0117 00:19:14.111711 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:15.112017 kubelet[2521]: E0117 00:19:15.111879 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:15.727959 systemd[1]: Started sshd@14-146.190.166.4:22-4.153.228.146:36966.service - OpenSSH per-connection server daemon (4.153.228.146:36966). Jan 17 00:19:16.113361 sshd[4018]: Accepted publickey for core from 4.153.228.146 port 36966 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:16.117902 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:16.129137 systemd-logind[1444]: New session 15 of user core. Jan 17 00:19:16.136770 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:19:16.485937 sshd[4018]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:16.491527 systemd[1]: sshd@14-146.190.166.4:22-4.153.228.146:36966.service: Deactivated successfully. Jan 17 00:19:16.494471 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:19:16.495967 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:19:16.497094 systemd-logind[1444]: Removed session 15. Jan 17 00:19:16.562943 systemd[1]: Started sshd@15-146.190.166.4:22-4.153.228.146:36976.service - OpenSSH per-connection server daemon (4.153.228.146:36976). Jan 17 00:19:16.961818 sshd[4030]: Accepted publickey for core from 4.153.228.146 port 36976 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:16.963767 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:16.969357 systemd-logind[1444]: New session 16 of user core. Jan 17 00:19:16.974760 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:19:17.533335 sshd[4030]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:17.542338 systemd[1]: sshd@15-146.190.166.4:22-4.153.228.146:36976.service: Deactivated successfully. Jan 17 00:19:17.547584 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:19:17.550568 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:19:17.552906 systemd-logind[1444]: Removed session 16. Jan 17 00:19:17.612928 systemd[1]: Started sshd@16-146.190.166.4:22-4.153.228.146:36990.service - OpenSSH per-connection server daemon (4.153.228.146:36990). Jan 17 00:19:18.015238 sshd[4041]: Accepted publickey for core from 4.153.228.146 port 36990 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:18.017189 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:18.023453 systemd-logind[1444]: New session 17 of user core. Jan 17 00:19:18.026772 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:19:19.009889 sshd[4041]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:19.020691 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:19:19.021130 systemd[1]: sshd@16-146.190.166.4:22-4.153.228.146:36990.service: Deactivated successfully. Jan 17 00:19:19.025974 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 17 00:19:19.027535 systemd-logind[1444]: Removed session 17. Jan 17 00:19:19.087982 systemd[1]: Started sshd@17-146.190.166.4:22-4.153.228.146:36992.service - OpenSSH per-connection server daemon (4.153.228.146:36992). Jan 17 00:19:19.475842 sshd[4057]: Accepted publickey for core from 4.153.228.146 port 36992 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:19.478443 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:19.484238 systemd-logind[1444]: New session 18 of user core. Jan 17 00:19:19.489939 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:19:20.018892 sshd[4057]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:20.024101 systemd[1]: sshd@17-146.190.166.4:22-4.153.228.146:36992.service: Deactivated successfully. Jan 17 00:19:20.027295 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:19:20.028587 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:19:20.030287 systemd-logind[1444]: Removed session 18. Jan 17 00:19:20.107907 systemd[1]: Started sshd@18-146.190.166.4:22-4.153.228.146:37004.service - OpenSSH per-connection server daemon (4.153.228.146:37004). Jan 17 00:19:20.536211 sshd[4070]: Accepted publickey for core from 4.153.228.146 port 37004 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:20.538211 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:20.544587 systemd-logind[1444]: New session 19 of user core. Jan 17 00:19:20.552772 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:19:20.922393 sshd[4070]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:20.927221 systemd[1]: sshd@18-146.190.166.4:22-4.153.228.146:37004.service: Deactivated successfully. Jan 17 00:19:20.930254 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:19:20.931620 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:19:20.932964 systemd-logind[1444]: Removed session 19. Jan 17 00:19:26.012962 systemd[1]: Started sshd@19-146.190.166.4:22-4.153.228.146:40832.service - OpenSSH per-connection server daemon (4.153.228.146:40832). Jan 17 00:19:26.446077 sshd[4086]: Accepted publickey for core from 4.153.228.146 port 40832 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:26.448420 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:26.454568 systemd-logind[1444]: New session 20 of user core. Jan 17 00:19:26.461862 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:19:26.842756 sshd[4086]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:26.846920 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:19:26.847364 systemd[1]: sshd@19-146.190.166.4:22-4.153.228.146:40832.service: Deactivated successfully. Jan 17 00:19:26.850263 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:19:26.852932 systemd-logind[1444]: Removed session 20. 
Jan 17 00:19:30.113499 kubelet[2521]: E0117 00:19:30.113434 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:31.935922 systemd[1]: Started sshd@20-146.190.166.4:22-4.153.228.146:40844.service - OpenSSH per-connection server daemon (4.153.228.146:40844). Jan 17 00:19:32.388903 sshd[4098]: Accepted publickey for core from 4.153.228.146 port 40844 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:32.391010 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:32.398159 systemd-logind[1444]: New session 21 of user core. Jan 17 00:19:32.405968 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:19:32.787827 sshd[4098]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:32.792292 systemd[1]: sshd@20-146.190.166.4:22-4.153.228.146:40844.service: Deactivated successfully. Jan 17 00:19:32.794901 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:19:32.796569 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:19:32.798152 systemd-logind[1444]: Removed session 21. Jan 17 00:19:32.866898 systemd[1]: Started sshd@21-146.190.166.4:22-4.153.228.146:40860.service - OpenSSH per-connection server daemon (4.153.228.146:40860). Jan 17 00:19:33.266525 sshd[4111]: Accepted publickey for core from 4.153.228.146 port 40860 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:33.268610 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:33.275878 systemd-logind[1444]: New session 22 of user core. Jan 17 00:19:33.278700 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:19:34.112289 kubelet[2521]: E0117 00:19:34.111751 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 00:19:35.880176 systemd[1]: run-containerd-runc-k8s.io-5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9-runc.1U7iqW.mount: Deactivated successfully. Jan 17 00:19:35.897877 containerd[1466]: time="2026-01-17T00:19:35.897836565Z" level=info msg="StopContainer for \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\" with timeout 30 (s)" Jan 17 00:19:35.900813 containerd[1466]: time="2026-01-17T00:19:35.900707104Z" level=info msg="Stop container \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\" with signal terminated" Jan 17 00:19:35.912799 containerd[1466]: time="2026-01-17T00:19:35.912734759Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:19:35.930900 systemd[1]: cri-containerd-40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f.scope: Deactivated successfully. 
Jan 17 00:19:35.938510 containerd[1466]: time="2026-01-17T00:19:35.938400350Z" level=info msg="StopContainer for \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\" with timeout 2 (s)" Jan 17 00:19:35.940038 containerd[1466]: time="2026-01-17T00:19:35.939995520Z" level=info msg="Stop container \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\" with signal terminated" Jan 17 00:19:35.956953 systemd-networkd[1368]: lxc_health: Link DOWN Jan 17 00:19:35.956962 systemd-networkd[1368]: lxc_health: Lost carrier Jan 17 00:19:35.985135 systemd[1]: cri-containerd-5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9.scope: Deactivated successfully. Jan 17 00:19:35.987761 systemd[1]: cri-containerd-5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9.scope: Consumed 8.740s CPU time. Jan 17 00:19:35.997801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f-rootfs.mount: Deactivated successfully. Jan 17 00:19:36.001768 containerd[1466]: time="2026-01-17T00:19:36.001627670Z" level=info msg="shim disconnected" id=40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f namespace=k8s.io Jan 17 00:19:36.001768 containerd[1466]: time="2026-01-17T00:19:36.001739733Z" level=warning msg="cleaning up after shim disconnected" id=40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f namespace=k8s.io Jan 17 00:19:36.001768 containerd[1466]: time="2026-01-17T00:19:36.001751474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:19:36.030076 containerd[1466]: time="2026-01-17T00:19:36.030015328Z" level=info msg="StopContainer for \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\" returns successfully" Jan 17 00:19:36.030907 containerd[1466]: time="2026-01-17T00:19:36.030871816Z" level=info msg="StopPodSandbox for \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\"" Jan 17 00:19:36.031016 containerd[1466]: time="2026-01-17T00:19:36.030933060Z" level=info msg="Container to stop \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:19:36.033929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2-shm.mount: Deactivated successfully. Jan 17 00:19:36.044240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9-rootfs.mount: Deactivated successfully. Jan 17 00:19:36.051503 containerd[1466]: time="2026-01-17T00:19:36.051308934Z" level=info msg="shim disconnected" id=5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9 namespace=k8s.io Jan 17 00:19:36.051503 containerd[1466]: time="2026-01-17T00:19:36.051376893Z" level=warning msg="cleaning up after shim disconnected" id=5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9 namespace=k8s.io Jan 17 00:19:36.051503 containerd[1466]: time="2026-01-17T00:19:36.051387956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:19:36.052645 systemd[1]: cri-containerd-700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2.scope: Deactivated successfully. 
Jan 17 00:19:36.084911 containerd[1466]: time="2026-01-17T00:19:36.084845140Z" level=info msg="StopContainer for \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\" returns successfully" Jan 17 00:19:36.085719 containerd[1466]: time="2026-01-17T00:19:36.085669448Z" level=info msg="StopPodSandbox for \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\"" Jan 17 00:19:36.085845 containerd[1466]: time="2026-01-17T00:19:36.085734140Z" level=info msg="Container to stop \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:19:36.085845 containerd[1466]: time="2026-01-17T00:19:36.085747834Z" level=info msg="Container to stop \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:19:36.085845 containerd[1466]: time="2026-01-17T00:19:36.085761606Z" level=info msg="Container to stop \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:19:36.085845 containerd[1466]: time="2026-01-17T00:19:36.085775101Z" level=info msg="Container to stop \"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:19:36.085845 containerd[1466]: time="2026-01-17T00:19:36.085786132Z" level=info msg="Container to stop \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:19:36.105452 systemd[1]: cri-containerd-fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5.scope: Deactivated successfully. 
Jan 17 00:19:36.112536 containerd[1466]: time="2026-01-17T00:19:36.111416055Z" level=info msg="shim disconnected" id=700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2 namespace=k8s.io Jan 17 00:19:36.112536 containerd[1466]: time="2026-01-17T00:19:36.111950755Z" level=warning msg="cleaning up after shim disconnected" id=700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2 namespace=k8s.io Jan 17 00:19:36.112536 containerd[1466]: time="2026-01-17T00:19:36.112355483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:19:36.150637 containerd[1466]: time="2026-01-17T00:19:36.148828837Z" level=info msg="shim disconnected" id=fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5 namespace=k8s.io Jan 17 00:19:36.150637 containerd[1466]: time="2026-01-17T00:19:36.148890507Z" level=warning msg="cleaning up after shim disconnected" id=fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5 namespace=k8s.io Jan 17 00:19:36.150637 containerd[1466]: time="2026-01-17T00:19:36.148899506Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:19:36.150637 containerd[1466]: time="2026-01-17T00:19:36.150422053Z" level=info msg="TearDown network for sandbox \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\" successfully" Jan 17 00:19:36.150637 containerd[1466]: time="2026-01-17T00:19:36.150460095Z" level=info msg="StopPodSandbox for \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\" returns successfully" Jan 17 00:19:36.180656 containerd[1466]: time="2026-01-17T00:19:36.180398352Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:19:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:19:36.183185 containerd[1466]: time="2026-01-17T00:19:36.182431109Z" level=info msg="TearDown network for sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" successfully" Jan 17 00:19:36.183185 containerd[1466]: time="2026-01-17T00:19:36.182864870Z" level=info msg="StopPodSandbox for \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" returns successfully" Jan 17 00:19:36.201072 kubelet[2521]: I0117 00:19:36.200726 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb74x\" (UniqueName: \"kubernetes.io/projected/ad726539-bb7c-427f-84e4-554e45e556be-kube-api-access-qb74x\") pod \"ad726539-bb7c-427f-84e4-554e45e556be\" (UID: \"ad726539-bb7c-427f-84e4-554e45e556be\") " Jan 17 00:19:36.201072 kubelet[2521]: I0117 00:19:36.200778 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad726539-bb7c-427f-84e4-554e45e556be-cilium-config-path\") pod \"ad726539-bb7c-427f-84e4-554e45e556be\" (UID: \"ad726539-bb7c-427f-84e4-554e45e556be\") " Jan 17 00:19:36.203895 kubelet[2521]: I0117 00:19:36.203733 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad726539-bb7c-427f-84e4-554e45e556be-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad726539-bb7c-427f-84e4-554e45e556be" (UID: "ad726539-bb7c-427f-84e4-554e45e556be"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:19:36.208909 kubelet[2521]: I0117 00:19:36.208868 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad726539-bb7c-427f-84e4-554e45e556be-kube-api-access-qb74x" (OuterVolumeSpecName: "kube-api-access-qb74x") pod "ad726539-bb7c-427f-84e4-554e45e556be" (UID: "ad726539-bb7c-427f-84e4-554e45e556be"). InnerVolumeSpecName "kube-api-access-qb74x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:19:36.302268 kubelet[2521]: I0117 00:19:36.301550 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-hostproc\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302268 kubelet[2521]: I0117 00:19:36.301647 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-host-proc-sys-net\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302268 kubelet[2521]: I0117 00:19:36.301679 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-etc-cni-netd\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302268 kubelet[2521]: I0117 00:19:36.301714 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-hubble-tls\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302268 kubelet[2521]: I0117 00:19:36.301732 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-lib-modules\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302268 kubelet[2521]: I0117 00:19:36.301747 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cni-path\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302620 kubelet[2521]: I0117 00:19:36.301765 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-clustermesh-secrets\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302620 kubelet[2521]: I0117 00:19:36.301782 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csq2q\" (UniqueName: \"kubernetes.io/projected/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-kube-api-access-csq2q\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302620 kubelet[2521]: I0117 00:19:36.301797 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-host-proc-sys-kernel\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302620 kubelet[2521]: I0117 00:19:36.301811 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-bpf-maps\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302620 kubelet[2521]: I0117 00:19:36.301824 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-cgroup\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302620 kubelet[2521]: I0117 00:19:36.301836 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-xtables-lock\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302853 kubelet[2521]: I0117 00:19:36.301849 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-run\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302853 kubelet[2521]: I0117 00:19:36.301849 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:19:36.302853 kubelet[2521]: I0117 00:19:36.301876 2521 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-config-path\") pod \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\" (UID: \"43e91e1a-3d13-42d9-b038-1c8cbbe61a3c\") " Jan 17 00:19:36.302853 kubelet[2521]: I0117 00:19:36.301926 2521 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qb74x\" (UniqueName: \"kubernetes.io/projected/ad726539-bb7c-427f-84e4-554e45e556be-kube-api-access-qb74x\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.302853 kubelet[2521]: I0117 00:19:36.301931 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-hostproc" (OuterVolumeSpecName: "hostproc") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:19:36.302853 kubelet[2521]: I0117 00:19:36.301941 2521 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-etc-cni-netd\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.303109 kubelet[2521]: I0117 00:19:36.301954 2521 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad726539-bb7c-427f-84e4-554e45e556be-cilium-config-path\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.303109 kubelet[2521]: I0117 00:19:36.301957 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:19:36.303109 kubelet[2521]: I0117 00:19:36.302012 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:19:36.303109 kubelet[2521]: I0117 00:19:36.302029 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:19:36.303109 kubelet[2521]: I0117 00:19:36.302045 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:19:36.303255 kubelet[2521]: I0117 00:19:36.302060 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:19:36.303255 kubelet[2521]: I0117 00:19:36.302080 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:19:36.307525 kubelet[2521]: I0117 00:19:36.304775 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:19:36.307525 kubelet[2521]: I0117 00:19:36.304843 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:19:36.307525 kubelet[2521]: I0117 00:19:36.304862 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cni-path" (OuterVolumeSpecName: "cni-path") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:19:36.308659 kubelet[2521]: I0117 00:19:36.308568 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:19:36.310338 kubelet[2521]: I0117 00:19:36.310256 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:19:36.310338 kubelet[2521]: I0117 00:19:36.310304 2521 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-kube-api-access-csq2q" (OuterVolumeSpecName: "kube-api-access-csq2q") pod "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" (UID: "43e91e1a-3d13-42d9-b038-1c8cbbe61a3c"). InnerVolumeSpecName "kube-api-access-csq2q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:19:36.403003 kubelet[2521]: I0117 00:19:36.402619 2521 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-host-proc-sys-kernel\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403003 kubelet[2521]: I0117 00:19:36.402665 2521 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-bpf-maps\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403003 kubelet[2521]: I0117 00:19:36.402675 2521 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-cgroup\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403003 kubelet[2521]: I0117 00:19:36.402683 2521 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-xtables-lock\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403003 kubelet[2521]: I0117 00:19:36.402692 2521 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-run\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403003 kubelet[2521]: I0117 00:19:36.402700 2521 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cilium-config-path\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403003 kubelet[2521]: I0117 00:19:36.402710 2521 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-hostproc\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403003 kubelet[2521]: I0117 00:19:36.402722 2521 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-host-proc-sys-net\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403613 kubelet[2521]: I0117 00:19:36.402737 2521 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-hubble-tls\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403613 kubelet[2521]: I0117 00:19:36.402753 2521 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-lib-modules\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403613 kubelet[2521]: I0117 00:19:36.402772 2521 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-cni-path\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403613 kubelet[2521]: I0117 00:19:36.402794 2521 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-clustermesh-secrets\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.403613 kubelet[2521]: I0117 00:19:36.402822 2521 reconciler_common.go:299] 
"Volume detached for volume \"kube-api-access-csq2q\" (UniqueName: \"kubernetes.io/projected/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c-kube-api-access-csq2q\") on node \"ci-4081.3.6-n-2808572c0d\" DevicePath \"\"" Jan 17 00:19:36.537525 kubelet[2521]: I0117 00:19:36.535803 2521 scope.go:117] "RemoveContainer" containerID="40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f" Jan 17 00:19:36.545074 containerd[1466]: time="2026-01-17T00:19:36.544963564Z" level=info msg="RemoveContainer for \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\"" Jan 17 00:19:36.551054 containerd[1466]: time="2026-01-17T00:19:36.550986691Z" level=info msg="RemoveContainer for \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\" returns successfully" Jan 17 00:19:36.551832 kubelet[2521]: I0117 00:19:36.551799 2521 scope.go:117] "RemoveContainer" containerID="40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f" Jan 17 00:19:36.554500 systemd[1]: Removed slice kubepods-besteffort-podad726539_bb7c_427f_84e4_554e45e556be.slice - libcontainer container kubepods-besteffort-podad726539_bb7c_427f_84e4_554e45e556be.slice. Jan 17 00:19:36.575296 containerd[1466]: time="2026-01-17T00:19:36.558548728Z" level=error msg="ContainerStatus for \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\": not found" Jan 17 00:19:36.586266 systemd[1]: Removed slice kubepods-burstable-pod43e91e1a_3d13_42d9_b038_1c8cbbe61a3c.slice - libcontainer container kubepods-burstable-pod43e91e1a_3d13_42d9_b038_1c8cbbe61a3c.slice. Jan 17 00:19:36.586400 systemd[1]: kubepods-burstable-pod43e91e1a_3d13_42d9_b038_1c8cbbe61a3c.slice: Consumed 8.855s CPU time. 
Jan 17 00:19:36.594887 kubelet[2521]: E0117 00:19:36.594822 2521 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\": not found" containerID="40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f" Jan 17 00:19:36.606734 kubelet[2521]: I0117 00:19:36.594867 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f"} err="failed to get container status \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\": rpc error: code = NotFound desc = an error occurred when try to find container \"40319e3fbdf83f3a0eabd0783e517575c22b5310e586f185e43087534f7df93f\": not found" Jan 17 00:19:36.606734 kubelet[2521]: I0117 00:19:36.606737 2521 scope.go:117] "RemoveContainer" containerID="5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9" Jan 17 00:19:36.611001 containerd[1466]: time="2026-01-17T00:19:36.610872249Z" level=info msg="RemoveContainer for \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\"" Jan 17 00:19:36.615745 containerd[1466]: time="2026-01-17T00:19:36.615473321Z" level=info msg="RemoveContainer for \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\" returns successfully" Jan 17 00:19:36.616425 kubelet[2521]: I0117 00:19:36.616277 2521 scope.go:117] "RemoveContainer" containerID="a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa" Jan 17 00:19:36.619431 containerd[1466]: time="2026-01-17T00:19:36.619375461Z" level=info msg="RemoveContainer for \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\"" Jan 17 00:19:36.622032 containerd[1466]: time="2026-01-17T00:19:36.621993631Z" level=info msg="RemoveContainer for \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\" returns successfully" Jan 17 00:19:36.622449 kubelet[2521]: I0117 00:19:36.622390 2521 scope.go:117] "RemoveContainer" containerID="2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029" Jan 17 00:19:36.626877 containerd[1466]: time="2026-01-17T00:19:36.626835081Z" level=info msg="RemoveContainer for \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\"" Jan 17 00:19:36.629926 containerd[1466]: time="2026-01-17T00:19:36.629843710Z" level=info msg="RemoveContainer for \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\" returns successfully" Jan 17 00:19:36.631394 kubelet[2521]: I0117 00:19:36.631170 2521 scope.go:117] "RemoveContainer" containerID="b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c" Jan 17 00:19:36.634755 containerd[1466]: time="2026-01-17T00:19:36.633679107Z" level=info msg="RemoveContainer for \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\"" Jan 17 00:19:36.637517 containerd[1466]: time="2026-01-17T00:19:36.637462562Z" level=info msg="RemoveContainer for \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\" returns successfully" Jan 17 00:19:36.638191 kubelet[2521]: I0117 00:19:36.638093 2521 scope.go:117] "RemoveContainer" containerID="bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3" Jan 17 00:19:36.639819 containerd[1466]: time="2026-01-17T00:19:36.639772968Z" level=info msg="RemoveContainer for \"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\"" Jan 17 00:19:36.642268 containerd[1466]: 
time="2026-01-17T00:19:36.642199731Z" level=info msg="RemoveContainer for \"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\" returns successfully" Jan 17 00:19:36.642579 kubelet[2521]: I0117 00:19:36.642528 2521 scope.go:117] "RemoveContainer" containerID="5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9" Jan 17 00:19:36.642844 containerd[1466]: time="2026-01-17T00:19:36.642793471Z" level=error msg="ContainerStatus for \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\": not found" Jan 17 00:19:36.643157 kubelet[2521]: E0117 00:19:36.643019 2521 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\": not found" containerID="5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9" Jan 17 00:19:36.643157 kubelet[2521]: I0117 00:19:36.643048 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9"} err="failed to get container status \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5496723f06c7b5212b0f24402c74a6af67b016779539b2661a61b9d02d0057c9\": not found" Jan 17 00:19:36.643157 kubelet[2521]: I0117 00:19:36.643070 2521 scope.go:117] "RemoveContainer" containerID="a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa" Jan 17 00:19:36.643273 containerd[1466]: time="2026-01-17T00:19:36.643239331Z" level=error msg="ContainerStatus for \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\": not found" Jan 17 00:19:36.643384 kubelet[2521]: E0117 00:19:36.643350 2521 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\": not found" containerID="a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa" Jan 17 00:19:36.643422 kubelet[2521]: I0117 00:19:36.643379 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa"} err="failed to get container status \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\": rpc error: code = NotFound desc = an error occurred when try to find container \"a98c3aa94b00982ac9bda3d26d27a1ca29ccca13db64b0b8fb78a749bc879afa\": not found" Jan 17 00:19:36.643422 kubelet[2521]: I0117 00:19:36.643395 2521 scope.go:117] "RemoveContainer" containerID="2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029" Jan 17 00:19:36.643654 containerd[1466]: time="2026-01-17T00:19:36.643624192Z" level=error msg="ContainerStatus for \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\": not found" Jan 17 00:19:36.643851 kubelet[2521]: E0117 
00:19:36.643819 2521 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\": not found" containerID="2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029" Jan 17 00:19:36.643851 kubelet[2521]: I0117 00:19:36.643841 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029"} err="failed to get container status \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e52d26204f13f241a3ed89cf178413576bc6c8e3c747d95be9bdbbab8858029\": not found" Jan 17 00:19:36.644268 kubelet[2521]: I0117 00:19:36.643856 2521 scope.go:117] "RemoveContainer" containerID="b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c" Jan 17 00:19:36.644268 kubelet[2521]: E0117 00:19:36.644114 2521 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\": not found" containerID="b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c" Jan 17 00:19:36.644268 kubelet[2521]: I0117 00:19:36.644139 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c"} err="failed to get container status \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\": not found" Jan 17 00:19:36.644268 kubelet[2521]: I0117 00:19:36.644161 2521 scope.go:117] "RemoveContainer" containerID="bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3" Jan 17 00:19:36.644392 containerd[1466]: time="2026-01-17T00:19:36.644012050Z" level=error msg="ContainerStatus for \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5b3a4fedd1ac4cc8701a0faaf2636c85ec54e258691698dfc43233f32022c3c\": not found" Jan 17 00:19:36.644712 containerd[1466]: time="2026-01-17T00:19:36.644601208Z" level=error msg="ContainerStatus for \"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\": not found" Jan 17 00:19:36.644845 kubelet[2521]: E0117 00:19:36.644793 2521 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\": not found" containerID="bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3" Jan 17 00:19:36.644845 kubelet[2521]: I0117 00:19:36.644814 2521 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3"} err="failed to get container status \"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"bb8a8672e89715685df163c7006421d7b446f87607ec19ae7a2f3b9ef95dfcd3\": not found" Jan 17 00:19:36.870894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2-rootfs.mount: Deactivated successfully. Jan 17 00:19:36.871068 systemd[1]: var-lib-kubelet-pods-ad726539\x2dbb7c\x2d427f\x2d84e4\x2d554e45e556be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqb74x.mount: Deactivated successfully. Jan 17 00:19:36.871139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5-rootfs.mount: Deactivated successfully. Jan 17 00:19:36.871201 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5-shm.mount: Deactivated successfully. Jan 17 00:19:36.871274 systemd[1]: var-lib-kubelet-pods-43e91e1a\x2d3d13\x2d42d9\x2db038\x2d1c8cbbe61a3c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcsq2q.mount: Deactivated successfully. Jan 17 00:19:36.871330 systemd[1]: var-lib-kubelet-pods-43e91e1a\x2d3d13\x2d42d9\x2db038\x2d1c8cbbe61a3c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:19:36.871396 systemd[1]: var-lib-kubelet-pods-43e91e1a\x2d3d13\x2d42d9\x2db038\x2d1c8cbbe61a3c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:19:37.855145 sshd[4111]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:37.859299 systemd[1]: sshd@21-146.190.166.4:22-4.153.228.146:40860.service: Deactivated successfully. Jan 17 00:19:37.863785 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:19:37.864455 systemd[1]: session-22.scope: Consumed 1.716s CPU time. Jan 17 00:19:37.867057 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:19:37.868431 systemd-logind[1444]: Removed session 22. Jan 17 00:19:37.944999 systemd[1]: Started sshd@22-146.190.166.4:22-4.153.228.146:53646.service - OpenSSH per-connection server daemon (4.153.228.146:53646). Jan 17 00:19:38.114796 kubelet[2521]: I0117 00:19:38.114636 2521 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43e91e1a-3d13-42d9-b038-1c8cbbe61a3c" path="/var/lib/kubelet/pods/43e91e1a-3d13-42d9-b038-1c8cbbe61a3c/volumes" Jan 17 00:19:38.115779 kubelet[2521]: I0117 00:19:38.115630 2521 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad726539-bb7c-427f-84e4-554e45e556be" path="/var/lib/kubelet/pods/ad726539-bb7c-427f-84e4-554e45e556be/volumes" Jan 17 00:19:38.264209 kubelet[2521]: E0117 00:19:38.264120 2521 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:19:38.384461 sshd[4278]: Accepted publickey for core from 4.153.228.146 port 53646 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8 Jan 17 00:19:38.386387 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:19:38.392693 systemd-logind[1444]: New session 23 of user core. Jan 17 00:19:38.401858 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:19:39.790089 sshd[4278]: pam_unix(sshd:session): session closed for user core Jan 17 00:19:39.797766 systemd[1]: sshd@22-146.190.166.4:22-4.153.228.146:53646.service: Deactivated successfully. 
Jan 17 00:19:39.801165 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 00:19:39.801568 systemd[1]: session-23.scope: Consumed 1.008s CPU time.
Jan 17 00:19:39.804384 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit.
Jan 17 00:19:39.808998 systemd-logind[1444]: Removed session 23.
Jan 17 00:19:39.865661 kubelet[2521]: E0117 00:19:39.864182 2521 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081.3.6-n-2808572c0d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.6-n-2808572c0d' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret"
Jan 17 00:19:39.869022 systemd[1]: Started sshd@23-146.190.166.4:22-4.153.228.146:53656.service - OpenSSH per-connection server daemon (4.153.228.146:53656).
Jan 17 00:19:39.881270 kubelet[2521]: E0117 00:19:39.880638 2521 status_manager.go:1018] "Failed to get status for pod" err="pods \"cilium-8w7cq\" is forbidden: User \"system:node:ci-4081.3.6-n-2808572c0d\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.6-n-2808572c0d' and this object" podUID="2055a6a4-62ac-4ad6-afc4-bbe0589f1317" pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.893469 systemd[1]: Created slice kubepods-burstable-pod2055a6a4_62ac_4ad6_afc4_bbe0589f1317.slice - libcontainer container kubepods-burstable-pod2055a6a4_62ac_4ad6_afc4_bbe0589f1317.slice.
Jan 17 00:19:39.932057 kubelet[2521]: I0117 00:19:39.931729 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-cilium-run\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932057 kubelet[2521]: I0117 00:19:39.932003 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-hostproc\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932057 kubelet[2521]: I0117 00:19:39.932061 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-cilium-cgroup\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932376 kubelet[2521]: I0117 00:19:39.932088 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-xtables-lock\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932376 kubelet[2521]: I0117 00:19:39.932119 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-cilium-config-path\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932376 kubelet[2521]: I0117 00:19:39.932146 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-cni-path\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932376 kubelet[2521]: I0117 00:19:39.932170 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-host-proc-sys-net\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932376 kubelet[2521]: I0117 00:19:39.932195 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-host-proc-sys-kernel\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932648 kubelet[2521]: I0117 00:19:39.932217 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqljm\" (UniqueName: \"kubernetes.io/projected/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-kube-api-access-xqljm\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932648 kubelet[2521]: I0117 00:19:39.932245 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-hubble-tls\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932648 kubelet[2521]: I0117 00:19:39.932274 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-cilium-ipsec-secrets\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932648 kubelet[2521]: I0117 00:19:39.932300 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-bpf-maps\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932648 kubelet[2521]: I0117 00:19:39.932322 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-etc-cni-netd\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932648 kubelet[2521]: I0117 00:19:39.932350 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-clustermesh-secrets\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:39.932888 kubelet[2521]: I0117 00:19:39.932381 2521 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-lib-modules\") pod \"cilium-8w7cq\" (UID: \"2055a6a4-62ac-4ad6-afc4-bbe0589f1317\") " pod="kube-system/cilium-8w7cq"
Jan 17 00:19:40.118551 kubelet[2521]: I0117 00:19:40.118325 2521 setters.go:543] "Node became not ready" node="ci-4081.3.6-n-2808572c0d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:19:40Z","lastTransitionTime":"2026-01-17T00:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 00:19:40.295545 sshd[4290]: Accepted publickey for core from 4.153.228.146 port 53656 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:19:40.298815 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:40.306545 systemd-logind[1444]: New session 24 of user core.
Jan 17 00:19:40.311772 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 00:19:40.584936 sshd[4290]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:40.593395 systemd[1]: sshd@23-146.190.166.4:22-4.153.228.146:53656.service: Deactivated successfully.
Jan 17 00:19:40.597431 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:19:40.599021 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit.
Jan 17 00:19:40.600669 systemd-logind[1444]: Removed session 24.
Jan 17 00:19:40.673024 systemd[1]: Started sshd@24-146.190.166.4:22-4.153.228.146:53658.service - OpenSSH per-connection server daemon (4.153.228.146:53658).
Jan 17 00:19:41.039015 kubelet[2521]: E0117 00:19:41.038925 2521 projected.go:266] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jan 17 00:19:41.039015 kubelet[2521]: E0117 00:19:41.039001 2521 projected.go:196] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-8w7cq: failed to sync secret cache: timed out waiting for the condition
Jan 17 00:19:41.039536 kubelet[2521]: E0117 00:19:41.039130 2521 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-hubble-tls podName:2055a6a4-62ac-4ad6-afc4-bbe0589f1317 nodeName:}" failed. No retries permitted until 2026-01-17 00:19:41.539097027 +0000 UTC m=+103.632784240 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/2055a6a4-62ac-4ad6-afc4-bbe0589f1317-hubble-tls") pod "cilium-8w7cq" (UID: "2055a6a4-62ac-4ad6-afc4-bbe0589f1317") : failed to sync secret cache: timed out waiting for the condition
Jan 17 00:19:41.102700 sshd[4301]: Accepted publickey for core from 4.153.228.146 port 53658 ssh2: RSA SHA256:d1xssXCxZ7/RICQNTzGJeDFE6NneBADHoj85LlPFNm8
Jan 17 00:19:41.104986 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:19:41.112417 systemd-logind[1444]: New session 25 of user core.
Jan 17 00:19:41.118827 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 00:19:41.703391 kubelet[2521]: E0117 00:19:41.703333 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:41.705507 containerd[1466]: time="2026-01-17T00:19:41.705440217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8w7cq,Uid:2055a6a4-62ac-4ad6-afc4-bbe0589f1317,Namespace:kube-system,Attempt:0,}"
Jan 17 00:19:41.746003 containerd[1466]: time="2026-01-17T00:19:41.744460937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:19:41.746003 containerd[1466]: time="2026-01-17T00:19:41.744607409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:19:41.746003 containerd[1466]: time="2026-01-17T00:19:41.744663666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:19:41.746003 containerd[1466]: time="2026-01-17T00:19:41.744890701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:19:41.785226 systemd[1]: run-containerd-runc-k8s.io-1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786-runc.edrvkr.mount: Deactivated successfully.
Jan 17 00:19:41.797991 systemd[1]: Started cri-containerd-1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786.scope - libcontainer container 1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786.
Jan 17 00:19:41.841380 containerd[1466]: time="2026-01-17T00:19:41.841094927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8w7cq,Uid:2055a6a4-62ac-4ad6-afc4-bbe0589f1317,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\""
Jan 17 00:19:41.844804 kubelet[2521]: E0117 00:19:41.843003 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:41.855444 containerd[1466]: time="2026-01-17T00:19:41.854913169Z" level=info msg="CreateContainer within sandbox \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 00:19:41.872914 containerd[1466]: time="2026-01-17T00:19:41.872780718Z" level=info msg="CreateContainer within sandbox \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"916893289f439049806fd35cf5fbb9f0ad150b02a6ca07186da3adf26bb6a2d1\""
Jan 17 00:19:41.875608 containerd[1466]: time="2026-01-17T00:19:41.875552485Z" level=info msg="StartContainer for \"916893289f439049806fd35cf5fbb9f0ad150b02a6ca07186da3adf26bb6a2d1\""
Jan 17 00:19:41.920831 systemd[1]: Started cri-containerd-916893289f439049806fd35cf5fbb9f0ad150b02a6ca07186da3adf26bb6a2d1.scope - libcontainer container 916893289f439049806fd35cf5fbb9f0ad150b02a6ca07186da3adf26bb6a2d1.
Jan 17 00:19:41.970540 containerd[1466]: time="2026-01-17T00:19:41.968017194Z" level=info msg="StartContainer for \"916893289f439049806fd35cf5fbb9f0ad150b02a6ca07186da3adf26bb6a2d1\" returns successfully"
Jan 17 00:19:41.993444 systemd[1]: cri-containerd-916893289f439049806fd35cf5fbb9f0ad150b02a6ca07186da3adf26bb6a2d1.scope: Deactivated successfully.
Jan 17 00:19:42.044204 containerd[1466]: time="2026-01-17T00:19:42.043869978Z" level=info msg="shim disconnected" id=916893289f439049806fd35cf5fbb9f0ad150b02a6ca07186da3adf26bb6a2d1 namespace=k8s.io
Jan 17 00:19:42.044204 containerd[1466]: time="2026-01-17T00:19:42.043960950Z" level=warning msg="cleaning up after shim disconnected" id=916893289f439049806fd35cf5fbb9f0ad150b02a6ca07186da3adf26bb6a2d1 namespace=k8s.io
Jan 17 00:19:42.044204 containerd[1466]: time="2026-01-17T00:19:42.043975182Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:19:42.591530 kubelet[2521]: E0117 00:19:42.591070 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:42.597588 containerd[1466]: time="2026-01-17T00:19:42.597446857Z" level=info msg="CreateContainer within sandbox \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 00:19:42.609715 containerd[1466]: time="2026-01-17T00:19:42.609508356Z" level=info msg="CreateContainer within sandbox \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"baed3a31db045912a76bdd976e157b60df78e5f1487dffda173da624180f08ae\""
Jan 17 00:19:42.610649 containerd[1466]: time="2026-01-17T00:19:42.610602866Z" level=info msg="StartContainer for \"baed3a31db045912a76bdd976e157b60df78e5f1487dffda173da624180f08ae\""
Jan 17 00:19:42.667757 systemd[1]: Started cri-containerd-baed3a31db045912a76bdd976e157b60df78e5f1487dffda173da624180f08ae.scope - libcontainer container baed3a31db045912a76bdd976e157b60df78e5f1487dffda173da624180f08ae.
Jan 17 00:19:42.701585 containerd[1466]: time="2026-01-17T00:19:42.701466813Z" level=info msg="StartContainer for \"baed3a31db045912a76bdd976e157b60df78e5f1487dffda173da624180f08ae\" returns successfully"
Jan 17 00:19:42.714266 systemd[1]: cri-containerd-baed3a31db045912a76bdd976e157b60df78e5f1487dffda173da624180f08ae.scope: Deactivated successfully.
Jan 17 00:19:42.750356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baed3a31db045912a76bdd976e157b60df78e5f1487dffda173da624180f08ae-rootfs.mount: Deactivated successfully.
Jan 17 00:19:42.751316 containerd[1466]: time="2026-01-17T00:19:42.751255461Z" level=info msg="shim disconnected" id=baed3a31db045912a76bdd976e157b60df78e5f1487dffda173da624180f08ae namespace=k8s.io
Jan 17 00:19:42.751316 containerd[1466]: time="2026-01-17T00:19:42.751316492Z" level=warning msg="cleaning up after shim disconnected" id=baed3a31db045912a76bdd976e157b60df78e5f1487dffda173da624180f08ae namespace=k8s.io
Jan 17 00:19:42.752529 containerd[1466]: time="2026-01-17T00:19:42.751327212Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:19:42.776501 containerd[1466]: time="2026-01-17T00:19:42.776360855Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:19:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:19:43.266435 kubelet[2521]: E0117 00:19:43.266370 2521 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 00:19:43.597342 kubelet[2521]: E0117 00:19:43.597184 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:43.605134 containerd[1466]: time="2026-01-17T00:19:43.605082374Z" level=info msg="CreateContainer within sandbox \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 00:19:43.628235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173129382.mount: Deactivated successfully.
Jan 17 00:19:43.634369 containerd[1466]: time="2026-01-17T00:19:43.634321138Z" level=info msg="CreateContainer within sandbox \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0b799fffa741a029597e092b1f65a32b3848e35fa92d55cdc57e94d178193e45\""
Jan 17 00:19:43.636572 containerd[1466]: time="2026-01-17T00:19:43.636257611Z" level=info msg="StartContainer for \"0b799fffa741a029597e092b1f65a32b3848e35fa92d55cdc57e94d178193e45\""
Jan 17 00:19:43.691779 systemd[1]: Started cri-containerd-0b799fffa741a029597e092b1f65a32b3848e35fa92d55cdc57e94d178193e45.scope - libcontainer container 0b799fffa741a029597e092b1f65a32b3848e35fa92d55cdc57e94d178193e45.
Jan 17 00:19:43.730954 containerd[1466]: time="2026-01-17T00:19:43.730887873Z" level=info msg="StartContainer for \"0b799fffa741a029597e092b1f65a32b3848e35fa92d55cdc57e94d178193e45\" returns successfully"
Jan 17 00:19:43.743574 systemd[1]: cri-containerd-0b799fffa741a029597e092b1f65a32b3848e35fa92d55cdc57e94d178193e45.scope: Deactivated successfully.
Jan 17 00:19:43.782687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b799fffa741a029597e092b1f65a32b3848e35fa92d55cdc57e94d178193e45-rootfs.mount: Deactivated successfully.
Jan 17 00:19:43.784109 containerd[1466]: time="2026-01-17T00:19:43.783989743Z" level=info msg="shim disconnected" id=0b799fffa741a029597e092b1f65a32b3848e35fa92d55cdc57e94d178193e45 namespace=k8s.io
Jan 17 00:19:43.784109 containerd[1466]: time="2026-01-17T00:19:43.784073585Z" level=warning msg="cleaning up after shim disconnected" id=0b799fffa741a029597e092b1f65a32b3848e35fa92d55cdc57e94d178193e45 namespace=k8s.io
Jan 17 00:19:43.784914 containerd[1466]: time="2026-01-17T00:19:43.784089569Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:19:44.602521 kubelet[2521]: E0117 00:19:44.601430 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:44.610268 containerd[1466]: time="2026-01-17T00:19:44.610211778Z" level=info msg="CreateContainer within sandbox \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:19:44.628682 containerd[1466]: time="2026-01-17T00:19:44.628620099Z" level=info msg="CreateContainer within sandbox \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"959b4455f31cfc966fd18176d9faa0ee2563f93105130111cfd0c62179f268cc\""
Jan 17 00:19:44.632553 containerd[1466]: time="2026-01-17T00:19:44.630753784Z" level=info msg="StartContainer for \"959b4455f31cfc966fd18176d9faa0ee2563f93105130111cfd0c62179f268cc\""
Jan 17 00:19:44.689826 systemd[1]: Started cri-containerd-959b4455f31cfc966fd18176d9faa0ee2563f93105130111cfd0c62179f268cc.scope - libcontainer container 959b4455f31cfc966fd18176d9faa0ee2563f93105130111cfd0c62179f268cc.
Jan 17 00:19:44.721344 systemd[1]: cri-containerd-959b4455f31cfc966fd18176d9faa0ee2563f93105130111cfd0c62179f268cc.scope: Deactivated successfully.
Jan 17 00:19:44.722153 containerd[1466]: time="2026-01-17T00:19:44.722105545Z" level=info msg="StartContainer for \"959b4455f31cfc966fd18176d9faa0ee2563f93105130111cfd0c62179f268cc\" returns successfully"
Jan 17 00:19:44.764058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-959b4455f31cfc966fd18176d9faa0ee2563f93105130111cfd0c62179f268cc-rootfs.mount: Deactivated successfully.
Jan 17 00:19:44.766463 containerd[1466]: time="2026-01-17T00:19:44.766316785Z" level=info msg="shim disconnected" id=959b4455f31cfc966fd18176d9faa0ee2563f93105130111cfd0c62179f268cc namespace=k8s.io
Jan 17 00:19:44.766463 containerd[1466]: time="2026-01-17T00:19:44.766449379Z" level=warning msg="cleaning up after shim disconnected" id=959b4455f31cfc966fd18176d9faa0ee2563f93105130111cfd0c62179f268cc namespace=k8s.io
Jan 17 00:19:44.766463 containerd[1466]: time="2026-01-17T00:19:44.766459521Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:19:45.610354 kubelet[2521]: E0117 00:19:45.610308 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:45.629745 containerd[1466]: time="2026-01-17T00:19:45.629609785Z" level=info msg="CreateContainer within sandbox \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:19:45.651217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233821848.mount: Deactivated successfully.
Jan 17 00:19:45.656849 containerd[1466]: time="2026-01-17T00:19:45.656436642Z" level=info msg="CreateContainer within sandbox \"1ff2263e476d302b11dafb01ff0ce219f6f4a0af838cc9653369b7dad6fed786\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1e81b8b96155e1cd24f13bf9d3e05ceba13f4cd1e02fbebb619dd8ea5089cfd5\""
Jan 17 00:19:45.657717 containerd[1466]: time="2026-01-17T00:19:45.657246015Z" level=info msg="StartContainer for \"1e81b8b96155e1cd24f13bf9d3e05ceba13f4cd1e02fbebb619dd8ea5089cfd5\""
Jan 17 00:19:45.711856 systemd[1]: Started cri-containerd-1e81b8b96155e1cd24f13bf9d3e05ceba13f4cd1e02fbebb619dd8ea5089cfd5.scope - libcontainer container 1e81b8b96155e1cd24f13bf9d3e05ceba13f4cd1e02fbebb619dd8ea5089cfd5.
Jan 17 00:19:45.755514 containerd[1466]: time="2026-01-17T00:19:45.755079515Z" level=info msg="StartContainer for \"1e81b8b96155e1cd24f13bf9d3e05ceba13f4cd1e02fbebb619dd8ea5089cfd5\" returns successfully"
Jan 17 00:19:46.367530 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 00:19:46.615525 kubelet[2521]: E0117 00:19:46.614715 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:46.640230 kubelet[2521]: I0117 00:19:46.640052 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8w7cq" podStartSLOduration=7.640030795 podStartE2EDuration="7.640030795s" podCreationTimestamp="2026-01-17 00:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:19:46.63993962 +0000 UTC m=+108.733626817" watchObservedRunningTime="2026-01-17 00:19:46.640030795 +0000 UTC m=+108.733717995"
Jan 17 00:19:46.645030 systemd[1]: run-containerd-runc-k8s.io-1e81b8b96155e1cd24f13bf9d3e05ceba13f4cd1e02fbebb619dd8ea5089cfd5-runc.5PEj1G.mount: Deactivated successfully.
Jan 17 00:19:47.704574 kubelet[2521]: E0117 00:19:47.704514 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:49.914788 systemd-networkd[1368]: lxc_health: Link UP
Jan 17 00:19:49.924904 systemd-networkd[1368]: lxc_health: Gained carrier
Jan 17 00:19:50.117813 kubelet[2521]: E0117 00:19:50.117766 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:51.271277 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Jan 17 00:19:51.704168 kubelet[2521]: E0117 00:19:51.704008 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:52.636393 kubelet[2521]: E0117 00:19:52.636328 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:53.640558 kubelet[2521]: E0117 00:19:53.640472 2521 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 00:19:54.671349 systemd[1]: run-containerd-runc-k8s.io-1e81b8b96155e1cd24f13bf9d3e05ceba13f4cd1e02fbebb619dd8ea5089cfd5-runc.yvCxHM.mount: Deactivated successfully.
Jan 17 00:19:56.885376 systemd[1]: run-containerd-runc-k8s.io-1e81b8b96155e1cd24f13bf9d3e05ceba13f4cd1e02fbebb619dd8ea5089cfd5-runc.fyQC4q.mount: Deactivated successfully.
Jan 17 00:19:57.052165 sshd[4301]: pam_unix(sshd:session): session closed for user core
Jan 17 00:19:57.058312 systemd[1]: sshd@24-146.190.166.4:22-4.153.228.146:53658.service: Deactivated successfully.
Jan 17 00:19:57.064082 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:19:57.066061 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:19:57.067564 systemd-logind[1444]: Removed session 25.
Jan 17 00:19:58.135395 containerd[1466]: time="2026-01-17T00:19:58.135342469Z" level=info msg="StopPodSandbox for \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\""
Jan 17 00:19:58.135860 containerd[1466]: time="2026-01-17T00:19:58.135468473Z" level=info msg="TearDown network for sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" successfully"
Jan 17 00:19:58.135860 containerd[1466]: time="2026-01-17T00:19:58.135522012Z" level=info msg="StopPodSandbox for \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" returns successfully"
Jan 17 00:19:58.136421 containerd[1466]: time="2026-01-17T00:19:58.136380128Z" level=info msg="RemovePodSandbox for \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\""
Jan 17 00:19:58.139284 containerd[1466]: time="2026-01-17T00:19:58.139211086Z" level=info msg="Forcibly stopping sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\""
Jan 17 00:19:58.139451 containerd[1466]: time="2026-01-17T00:19:58.139343143Z" level=info msg="TearDown network for sandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" successfully"
Jan 17 00:19:58.143244 containerd[1466]: time="2026-01-17T00:19:58.143168279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:19:58.143244 containerd[1466]: time="2026-01-17T00:19:58.143242268Z" level=info msg="RemovePodSandbox \"fdb42ef261ee2817b8222b80bdd0396efe7a212cc1b4090ace1c1f73a68e4ab5\" returns successfully"
Jan 17 00:19:58.144079 containerd[1466]: time="2026-01-17T00:19:58.144022052Z" level=info msg="StopPodSandbox for \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\""
Jan 17 00:19:58.144282 containerd[1466]: time="2026-01-17T00:19:58.144259114Z" level=info msg="TearDown network for sandbox \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\" successfully"
Jan 17 00:19:58.144282 containerd[1466]: time="2026-01-17T00:19:58.144280421Z" level=info msg="StopPodSandbox for \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\" returns successfully"
Jan 17 00:19:58.145354 containerd[1466]: time="2026-01-17T00:19:58.144649354Z" level=info msg="RemovePodSandbox for \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\""
Jan 17 00:19:58.145354 containerd[1466]: time="2026-01-17T00:19:58.144695792Z" level=info msg="Forcibly stopping sandbox \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\""
Jan 17 00:19:58.145354 containerd[1466]: time="2026-01-17T00:19:58.144755252Z" level=info msg="TearDown network for sandbox \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\" successfully"
Jan 17 00:19:58.147813 containerd[1466]: time="2026-01-17T00:19:58.147766306Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:19:58.148026 containerd[1466]: time="2026-01-17T00:19:58.148007605Z" level=info msg="RemovePodSandbox \"700d9a8ef6b0629955ae48c2349e04fb646f667aab05390faabb2122953459c2\" returns successfully"