Jan 28 00:55:56.610241 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026 Jan 28 00:55:56.610279 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 00:55:56.610295 kernel: BIOS-provided physical RAM map: Jan 28 00:55:56.610304 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 28 00:55:56.610313 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 28 00:55:56.610321 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 28 00:55:56.610330 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 28 00:55:56.610338 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 28 00:55:56.610350 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 28 00:55:56.610652 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 28 00:55:56.610669 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 28 00:55:56.610678 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 28 00:55:56.610790 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 28 00:55:56.610802 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 28 00:55:56.610911 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 28 00:55:56.610924 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 28 00:55:56.610939 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 28 00:55:56.610949 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 28 00:55:56.610959 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 28 00:55:56.610969 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 28 00:55:56.610979 kernel: NX (Execute Disable) protection: active Jan 28 00:55:56.610989 kernel: APIC: Static calls initialized Jan 28 00:55:56.610999 kernel: efi: EFI v2.7 by EDK II Jan 28 00:55:56.611009 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 28 00:55:56.611018 kernel: SMBIOS 2.8 present. 
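The BIOS-e820 entries above are the firmware's physical-memory map; summing just the "usable" ranges reproduces, to within a few pages, the 2567000K total the kernel reports later in its "Memory:" line. A minimal parsing sketch (input copied from the log above):

    import re

    # The 'usable' ranges from the BIOS-e820 map logged above.
    E820 = """
    BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
    BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
    BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
    BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
    BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
    """

    RANGE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$", re.MULTILINE)

    def usable_bytes(text):
        """Total size of all ranges typed 'usable' (ranges are inclusive)."""
        total = 0
        for start, end, kind in RANGE.findall(text):
            if kind.strip() == "usable":
                total += int(end, 16) - int(start, 16) + 1
        return total

    print(f"{usable_bytes(E820) / 1024:.0f} KiB usable")   # ~2567004 KiB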
Jan 28 00:55:56.611028 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 28 00:55:56.611037 kernel: Hypervisor detected: KVM Jan 28 00:55:56.611050 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 28 00:55:56.611059 kernel: kvm-clock: using sched offset of 21730623529 cycles Jan 28 00:55:56.611069 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 00:55:56.611079 kernel: tsc: Detected 2445.426 MHz processor Jan 28 00:55:56.611089 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 28 00:55:56.611100 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 28 00:55:56.611110 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 28 00:55:56.611120 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 28 00:55:56.611130 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 28 00:55:56.611145 kernel: Using GB pages for direct mapping Jan 28 00:55:56.611155 kernel: Secure boot disabled Jan 28 00:55:56.611165 kernel: ACPI: Early table checksum verification disabled Jan 28 00:55:56.611175 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 28 00:55:56.611191 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 28 00:55:56.611202 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:55:56.611212 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:55:56.611227 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 28 00:55:56.611238 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:55:56.611344 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:55:56.611598 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:55:56.611612 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:55:56.611624 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 28 00:55:56.611634 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 28 00:55:56.611650 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 28 00:55:56.611659 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 28 00:55:56.611669 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 28 00:55:56.611679 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 28 00:55:56.611688 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 28 00:55:56.611698 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 28 00:55:56.611709 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 28 00:55:56.611719 kernel: No NUMA configuration found Jan 28 00:55:56.611812 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 28 00:55:56.611830 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 28 00:55:56.611841 kernel: Zone ranges: Jan 28 00:55:56.611851 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 28 00:55:56.611862 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 28 00:55:56.611872 kernel: Normal empty Jan 28 00:55:56.611882 kernel: Movable zone start for each node Jan 28 00:55:56.611893 kernel: Early memory node ranges Jan 28 00:55:56.611904 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 28 00:55:56.611915 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 28 00:55:56.611930 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 28 00:55:56.611941 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 28 00:55:56.611951 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 28 00:55:56.611961 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 28 00:55:56.612058 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 28 00:55:56.612070 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 00:55:56.612081 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 28 00:55:56.612093 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 28 00:55:56.612103 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 00:55:56.612118 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 28 00:55:56.612129 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 28 00:55:56.612139 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 28 00:55:56.612150 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 28 00:55:56.612161 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 28 00:55:56.612172 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 28 00:55:56.612183 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 28 00:55:56.612194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 28 00:55:56.612205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 28 00:55:56.612223 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 28 00:55:56.612233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 28 00:55:56.612242 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 28 00:55:56.612251 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 28 00:55:56.612261 kernel: TSC deadline timer available Jan 28 00:55:56.612270 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 28 00:55:56.612279 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 28 00:55:56.612289 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 28 00:55:56.612298 kernel: kvm-guest: setup PV sched yield Jan 28 00:55:56.612315 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 28 00:55:56.612326 kernel: Booting paravirtualized kernel on KVM Jan 28 00:55:56.612337 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 28 00:55:56.612350 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 28 00:55:56.612615 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 28 00:55:56.612627 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 28 00:55:56.612639 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 28 00:55:56.612648 kernel: kvm-guest: PV spinlocks enabled Jan 28 00:55:56.612660 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 28 00:55:56.612678 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 
00:55:56.612774 kernel: random: crng init done Jan 28 00:55:56.612787 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 28 00:55:56.612799 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 00:55:56.612810 kernel: Fallback order for Node 0: 0 Jan 28 00:55:56.612822 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 28 00:55:56.612832 kernel: Policy zone: DMA32 Jan 28 00:55:56.612843 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 00:55:56.612860 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved) Jan 28 00:55:56.612871 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 28 00:55:56.612882 kernel: ftrace: allocating 37989 entries in 149 pages Jan 28 00:55:56.612893 kernel: ftrace: allocated 149 pages with 4 groups Jan 28 00:55:56.612904 kernel: Dynamic Preempt: voluntary Jan 28 00:55:56.612915 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 00:55:56.612946 kernel: rcu: RCU event tracing is enabled. Jan 28 00:55:56.612963 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 28 00:55:56.612975 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 00:55:56.612987 kernel: Rude variant of Tasks RCU enabled. Jan 28 00:55:56.612998 kernel: Tracing variant of Tasks RCU enabled. Jan 28 00:55:56.613010 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 00:55:56.613026 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 28 00:55:56.613039 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 28 00:55:56.613051 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 00:55:56.613062 kernel: Console: colour dummy device 80x25 Jan 28 00:55:56.613075 kernel: printk: console [ttyS0] enabled Jan 28 00:55:56.613179 kernel: ACPI: Core revision 20230628 Jan 28 00:55:56.613196 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 28 00:55:56.613209 kernel: APIC: Switch to symmetric I/O mode setup Jan 28 00:55:56.613220 kernel: x2apic enabled Jan 28 00:55:56.613231 kernel: APIC: Switched APIC routing to: physical x2apic Jan 28 00:55:56.613240 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 28 00:55:56.613251 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 28 00:55:56.613261 kernel: kvm-guest: setup PV IPIs Jan 28 00:55:56.613270 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 28 00:55:56.613286 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 28 00:55:56.613296 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 28 00:55:56.613308 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 28 00:55:56.613319 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 28 00:55:56.613331 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 28 00:55:56.613342 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 28 00:55:56.613647 kernel: Spectre V2 : Mitigation: Retpolines Jan 28 00:55:56.613663 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 28 00:55:56.613676 kernel: Speculative Store Bypass: Vulnerable Jan 28 00:55:56.613693 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 28 00:55:56.613705 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 28 00:55:56.613715 kernel: active return thunk: srso_alias_return_thunk Jan 28 00:55:56.613726 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 28 00:55:56.613737 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 28 00:55:56.613840 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 28 00:55:56.613854 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 28 00:55:56.613867 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 28 00:55:56.613884 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 28 00:55:56.613893 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 28 00:55:56.613904 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 28 00:55:56.613913 kernel: Freeing SMP alternatives memory: 32K Jan 28 00:55:56.613923 kernel: pid_max: default: 32768 minimum: 301 Jan 28 00:55:56.613933 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 28 00:55:56.613942 kernel: landlock: Up and running. Jan 28 00:55:56.613954 kernel: SELinux: Initializing. Jan 28 00:55:56.613965 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:55:56.613980 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:55:56.613991 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 28 00:55:56.614003 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 00:55:56.614014 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 00:55:56.614026 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 00:55:56.614037 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 28 00:55:56.614049 kernel: signal: max sigframe size: 1776 Jan 28 00:55:56.614060 kernel: rcu: Hierarchical SRCU implementation. Jan 28 00:55:56.614073 kernel: rcu: Max phase no-delay instances is 400. Jan 28 00:55:56.614089 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 28 00:55:56.614100 kernel: smp: Bringing up secondary CPUs ... Jan 28 00:55:56.614111 kernel: smpboot: x86: Booting SMP configuration: Jan 28 00:55:56.614123 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 28 00:55:56.614134 kernel: smp: Brought up 1 node, 4 CPUs Jan 28 00:55:56.614145 kernel: smpboot: Max logical packages: 1 Jan 28 00:55:56.614156 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 28 00:55:56.614169 kernel: devtmpfs: initialized Jan 28 00:55:56.614181 kernel: x86/mm: Memory block size: 128MB Jan 28 00:55:56.614196 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 28 00:55:56.614206 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 28 00:55:56.614217 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 28 00:55:56.614227 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 28 00:55:56.614236 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 28 00:55:56.614246 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 00:55:56.614257 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 28 00:55:56.614268 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 00:55:56.614280 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 00:55:56.614298 kernel: audit: initializing netlink subsys (disabled) Jan 28 00:55:56.614308 kernel: audit: type=2000 audit(1769561734.623:1): state=initialized audit_enabled=0 res=1 Jan 28 00:55:56.614318 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 00:55:56.614328 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 28 00:55:56.614338 kernel: cpuidle: using governor menu Jan 28 00:55:56.614348 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 00:55:56.614679 kernel: dca service started, version 1.12.1 Jan 28 00:55:56.614691 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 28 00:55:56.614702 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 28 00:55:56.614718 kernel: PCI: Using configuration type 1 for base access Jan 28 00:55:56.614728 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
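The BogoMIPS values on this guest are derived rather than measured: calibration was skipped and the preset loops-per-jiffy (lpj=2445426) tracks the 2445.426 MHz TSC. A quick arithmetic check of the per-CPU figure and the 19563.40 total above, assuming HZ=1000 (the kernel's HZ setting is not shown in the log):

    tsc_khz = 2445426     # "tsc: Detected 2445.426 MHz processor"
    lpj = 2445426         # "(lpj=2445426)"; equals tsc_khz when HZ=1000 (assumed config)
    hz = 1000             # assumption -- CONFIG_HZ is not shown in the log
    cpus = 4

    per_cpu = lpj / (500000 / hz)     # the kernel's BogoMIPS reporting formula
    print(f"per-CPU BogoMIPS: {per_cpu:.2f}")              # 4890.85
    print(f"total for {cpus} CPUs: {cpus * per_cpu:.2f}")  # ~19563.4 (rounding may differ slightly)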
Jan 28 00:55:56.614738 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 00:55:56.614749 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 00:55:56.614759 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 00:55:56.614770 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 00:55:56.614781 kernel: ACPI: Added _OSI(Module Device) Jan 28 00:55:56.614793 kernel: ACPI: Added _OSI(Processor Device) Jan 28 00:55:56.614805 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 00:55:56.614820 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 00:55:56.614831 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 28 00:55:56.614843 kernel: ACPI: Interpreter enabled Jan 28 00:55:56.614854 kernel: ACPI: PM: (supports S0 S3 S5) Jan 28 00:55:56.614865 kernel: ACPI: Using IOAPIC for interrupt routing Jan 28 00:55:56.614876 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 28 00:55:56.614888 kernel: PCI: Using E820 reservations for host bridge windows Jan 28 00:55:56.614899 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 28 00:55:56.614911 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 28 00:55:56.616608 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 28 00:55:56.616848 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 28 00:55:56.617055 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 28 00:55:56.617074 kernel: PCI host bridge to bus 0000:00 Jan 28 00:55:56.618320 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 28 00:55:56.618904 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 28 00:55:56.619111 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 28 00:55:56.619284 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 28 00:55:56.619808 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 28 00:55:56.619979 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 28 00:55:56.620161 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 28 00:55:56.621235 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 28 00:55:56.622114 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 28 00:55:56.622325 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 28 00:55:56.622805 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 28 00:55:56.623013 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 28 00:55:56.623210 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 28 00:55:56.623718 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 28 00:55:56.624185 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 28 00:55:56.624650 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 28 00:55:56.624843 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 28 00:55:56.625041 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 28 00:55:56.625345 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 28 00:55:56.625809 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 28 00:55:56.626005 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
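The PCI scan above pairs each device with its vendor:device ID (8086:29c0 host bridge, 1234:1111 VGA, 1af4:100x virtio, 8086:2922 AHCI). On a running Linux system the same IDs can be read back from sysfs; a small sketch:

    import os

    PCI_ROOT = "/sys/bus/pci/devices"

    def read_value(path):
        # Attribute files such as 'vendor' and 'device' hold one hex token, e.g. '0x1af4'.
        with open(path) as f:
            return f.read().strip()

    for addr in sorted(os.listdir(PCI_ROOT)):
        dev = os.path.join(PCI_ROOT, addr)
        vendor = read_value(os.path.join(dev, "vendor"))
        device = read_value(os.path.join(dev, "device"))
        pci_class = read_value(os.path.join(dev, "class"))
        print(f"{addr} {vendor[2:]}:{device[2:]} class {pci_class}")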
Jan 28 00:55:56.626191 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 28 00:55:56.626808 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 28 00:55:56.627009 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 28 00:55:56.627197 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 28 00:55:56.627886 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 28 00:55:56.628093 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 28 00:55:56.628756 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 28 00:55:56.628977 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 28 00:55:56.629978 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 28 00:55:56.630185 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 28 00:55:56.630724 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 28 00:55:56.631207 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 28 00:55:56.631727 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 28 00:55:56.631745 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 28 00:55:56.631757 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 28 00:55:56.631776 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 28 00:55:56.631789 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 28 00:55:56.631801 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 28 00:55:56.631812 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 28 00:55:56.631821 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 28 00:55:56.631831 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 28 00:55:56.631841 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 28 00:55:56.631851 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 28 00:55:56.631861 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 28 00:55:56.631878 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 28 00:55:56.631890 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 28 00:55:56.631901 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 28 00:55:56.631913 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 28 00:55:56.631925 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 28 00:55:56.631936 kernel: iommu: Default domain type: Translated Jan 28 00:55:56.631948 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 28 00:55:56.631960 kernel: efivars: Registered efivars operations Jan 28 00:55:56.631971 kernel: PCI: Using ACPI for IRQ routing Jan 28 00:55:56.631987 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 28 00:55:56.631999 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 28 00:55:56.632011 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 28 00:55:56.632022 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 28 00:55:56.632034 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 28 00:55:56.632239 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 28 00:55:56.632720 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 28 00:55:56.632920 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 28 00:55:56.632942 kernel: vgaarb: loaded Jan 28 00:55:56.632964 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Jan 28 00:55:56.632976 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 28 00:55:56.632989 kernel: clocksource: Switched to clocksource kvm-clock Jan 28 00:55:56.633000 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 00:55:56.633013 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 00:55:56.633025 kernel: pnp: PnP ACPI init Jan 28 00:55:56.634055 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 28 00:55:56.634078 kernel: pnp: PnP ACPI: found 6 devices Jan 28 00:55:56.634097 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 28 00:55:56.634110 kernel: NET: Registered PF_INET protocol family Jan 28 00:55:56.634122 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 00:55:56.634134 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 28 00:55:56.634146 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 00:55:56.634158 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 00:55:56.634169 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 28 00:55:56.634181 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 28 00:55:56.634193 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:55:56.634209 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:55:56.634221 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 00:55:56.634233 kernel: NET: Registered PF_XDP protocol family Jan 28 00:55:56.634856 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 28 00:55:56.635156 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 28 00:55:56.635336 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 28 00:55:56.635826 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 28 00:55:56.636003 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 28 00:55:56.636183 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 28 00:55:56.636625 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 28 00:55:56.636814 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 28 00:55:56.636830 kernel: PCI: CLS 0 bytes, default 64 Jan 28 00:55:56.636841 kernel: Initialise system trusted keyrings Jan 28 00:55:56.636852 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 28 00:55:56.636862 kernel: Key type asymmetric registered Jan 28 00:55:56.636872 kernel: Asymmetric key parser 'x509' registered Jan 28 00:55:56.636882 kernel: hrtimer: interrupt took 17768412 ns Jan 28 00:55:56.636900 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 28 00:55:56.636910 kernel: io scheduler mq-deadline registered Jan 28 00:55:56.636921 kernel: io scheduler kyber registered Jan 28 00:55:56.636931 kernel: io scheduler bfq registered Jan 28 00:55:56.636942 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 28 00:55:56.636954 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 28 00:55:56.636964 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 28 00:55:56.636975 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 28 00:55:56.636987 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 
00:55:56.637006 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 00:55:56.637016 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 28 00:55:56.637026 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 28 00:55:56.637036 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 28 00:55:56.637792 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 28 00:55:56.637811 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 28 00:55:56.638094 kernel: rtc_cmos 00:04: registered as rtc0 Jan 28 00:55:56.638284 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T00:55:52 UTC (1769561752) Jan 28 00:55:56.638715 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 28 00:55:56.638733 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 28 00:55:56.638745 kernel: efifb: probing for efifb Jan 28 00:55:56.638756 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 28 00:55:56.638766 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 28 00:55:56.638777 kernel: efifb: scrolling: redraw Jan 28 00:55:56.638788 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 28 00:55:56.638798 kernel: Console: switching to colour frame buffer device 100x37 Jan 28 00:55:56.638808 kernel: fb0: EFI VGA frame buffer device Jan 28 00:55:56.638825 kernel: pstore: Using crash dump compression: deflate Jan 28 00:55:56.638835 kernel: pstore: Registered efi_pstore as persistent store backend Jan 28 00:55:56.638845 kernel: NET: Registered PF_INET6 protocol family Jan 28 00:55:56.638856 kernel: Segment Routing with IPv6 Jan 28 00:55:56.638867 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 00:55:56.638878 kernel: NET: Registered PF_PACKET protocol family Jan 28 00:55:56.638889 kernel: Key type dns_resolver registered Jan 28 00:55:56.638927 kernel: IPI shorthand broadcast: enabled Jan 28 00:55:56.638945 kernel: sched_clock: Marking stable (15318083885, 1247120682)->(19449401637, -2884197070) Jan 28 00:55:56.638960 kernel: registered taskstats version 1 Jan 28 00:55:56.638970 kernel: Loading compiled-in X.509 certificates Jan 28 00:55:56.638982 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d' Jan 28 00:55:56.638993 kernel: Key type .fscrypt registered Jan 28 00:55:56.639004 kernel: Key type fscrypt-provisioning registered Jan 28 00:55:56.639014 kernel: ima: No TPM chip found, activating TPM-bypass! 
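The rtc_cmos line above logs both the human-readable time and the raw epoch value it set; the two are interchangeable, e.g.:

    from datetime import datetime, timezone

    epoch = 1769561752   # from "setting system clock to 2026-01-28T00:55:52 UTC (1769561752)"
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2026-01-28T00:55:52+00:00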
Jan 28 00:55:56.639025 kernel: ima: Allocated hash algorithm: sha1 Jan 28 00:55:56.639036 kernel: ima: No architecture policies found Jan 28 00:55:56.639050 kernel: clk: Disabling unused clocks Jan 28 00:55:56.639061 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 28 00:55:56.639072 kernel: Write protecting the kernel read-only data: 36864k Jan 28 00:55:56.639083 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 28 00:55:56.639094 kernel: Run /init as init process Jan 28 00:55:56.639104 kernel: with arguments: Jan 28 00:55:56.639115 kernel: /init Jan 28 00:55:56.639126 kernel: with environment: Jan 28 00:55:56.639136 kernel: HOME=/ Jan 28 00:55:56.639147 kernel: TERM=linux Jan 28 00:55:56.639164 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 00:55:56.639177 systemd[1]: Detected virtualization kvm. Jan 28 00:55:56.639188 systemd[1]: Detected architecture x86-64. Jan 28 00:55:56.639199 systemd[1]: Running in initrd. Jan 28 00:55:56.639210 systemd[1]: No hostname configured, using default hostname. Jan 28 00:55:56.639221 systemd[1]: Hostname set to . Jan 28 00:55:56.639237 systemd[1]: Initializing machine ID from VM UUID. Jan 28 00:55:56.639250 systemd[1]: Queued start job for default target initrd.target. Jan 28 00:55:56.639262 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:55:56.639273 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:55:56.639287 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 00:55:56.639298 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:55:56.639310 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 00:55:56.639326 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 00:55:56.639338 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 28 00:55:56.639350 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 28 00:55:56.639622 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:55:56.639638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:55:56.639656 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:55:56.639667 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:55:56.639679 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:55:56.639689 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:55:56.639708 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:55:56.639721 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:55:56.639733 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 00:55:56.639745 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
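The "Expecting device" units above block until udev publishes the corresponding symlinks under /dev/disk/by-label and /dev/disk/by-partuuid. A minimal sketch for inspecting those links on a booted system (the directories are standard udev conventions; only the unit names come from this log):

    import os

    def show_links(directory):
        """Print each udev-created symlink and the block device it resolves to."""
        if not os.path.isdir(directory):
            print(f"{directory}: not present")
            return
        for name in sorted(os.listdir(directory)):
            link = os.path.join(directory, name)
            print(f"{link} -> {os.path.realpath(link)}")

    for d in ("/dev/disk/by-label", "/dev/disk/by-partuuid", "/dev/disk/by-partlabel"):
        show_links(d)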
Jan 28 00:55:56.639756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:55:56.639772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:55:56.639784 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:55:56.639795 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:55:56.639808 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 00:55:56.639819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:55:56.639830 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 00:55:56.639842 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 00:55:56.639853 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:55:56.639869 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:55:56.639915 systemd-journald[193]: Collecting audit messages is disabled. Jan 28 00:55:56.639942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:55:56.639954 systemd-journald[193]: Journal started Jan 28 00:55:56.639981 systemd-journald[193]: Runtime Journal (/run/log/journal/b0f8e5eafdcf481db62df999fd4a487c) is 6.0M, max 48.3M, 42.2M free. Jan 28 00:55:56.727596 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:55:56.753880 systemd-modules-load[195]: Inserted module 'overlay' Jan 28 00:55:56.756962 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 00:55:56.767936 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:55:56.770942 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 00:55:56.855700 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:55:56.910989 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:55:57.017140 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:55:57.060071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:55:57.209902 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:55:57.262829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:55:57.354742 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 00:55:57.357272 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:55:57.396328 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:55:57.465348 kernel: Bridge firewalling registered Jan 28 00:55:57.489062 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 28 00:55:57.521826 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:55:57.593871 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:55:57.718848 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 00:55:57.758765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
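systemd-modules-load inserted 'overlay' here and 'br_netfilter' shortly after (which produces the "Bridge firewalling registered" line). Whether a given module is currently loaded can be checked by scanning /proc/modules; a small sketch:

    def loaded_modules():
        """Names of currently loaded kernel modules, from /proc/modules (first field per line)."""
        with open("/proc/modules") as f:
            return {line.split()[0] for line in f if line.strip()}

    mods = loaded_modules()
    for wanted in ("overlay", "br_netfilter"):
        print(f"{wanted}: {'loaded' if wanted in mods else 'not loaded'}")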
Jan 28 00:55:57.824725 dracut-cmdline[228]: dracut-dracut-053 Jan 28 00:55:57.849972 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 00:55:58.126248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:55:58.172254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:55:58.405058 systemd-resolved[260]: Positive Trust Anchors: Jan 28 00:55:58.405175 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:55:58.405309 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:55:58.548947 systemd-resolved[260]: Defaulting to hostname 'linux'. Jan 28 00:55:58.600331 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:55:58.621075 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:55:59.005864 kernel: SCSI subsystem initialized Jan 28 00:55:59.060773 kernel: Loading iSCSI transport class v2.0-870. Jan 28 00:55:59.206144 kernel: iscsi: registered transport (tcp) Jan 28 00:55:59.322860 kernel: iscsi: registered transport (qla4xxx) Jan 28 00:55:59.322948 kernel: QLogic iSCSI HBA Driver Jan 28 00:55:59.555783 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 00:55:59.599152 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 00:55:59.788341 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 00:55:59.789026 kernel: device-mapper: uevent: version 1.0.3 Jan 28 00:55:59.804975 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 28 00:56:00.035621 kernel: raid6: avx2x4 gen() 13858 MB/s Jan 28 00:56:00.064344 kernel: raid6: avx2x2 gen() 17532 MB/s Jan 28 00:56:00.097794 kernel: raid6: avx2x1 gen() 2538 MB/s Jan 28 00:56:00.097877 kernel: raid6: using algorithm avx2x2 gen() 17532 MB/s Jan 28 00:56:00.128863 kernel: raid6: .... xor() 12957 MB/s, rmw enabled Jan 28 00:56:00.128946 kernel: raid6: using avx2x2 recovery algorithm Jan 28 00:56:00.201807 kernel: xor: automatically using best checksumming function avx Jan 28 00:56:01.316638 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 00:56:01.369891 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:56:01.402876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:56:01.488665 systemd-udevd[417]: Using default interface naming scheme 'v255'. Jan 28 00:56:01.513765 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
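dracut echoes the kernel command line it will act on; at runtime the same string is readable from /proc/cmdline. A sketch that splits it into bare flags and key=value parameters, falling back to a subset of the parameters logged above; note that repeated keys such as rootflags simply overwrite each other here:

    EXAMPLE = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
               "mount.usr=/dev/mapper/usr root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected")

    def parse_cmdline(line):
        """Return a dict mapping each parameter to its value (None for bare flags)."""
        params = {}
        for token in line.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None   # repeated keys: last one wins
        return params

    try:
        with open("/proc/cmdline") as f:
            cmdline = f.read().strip()
    except OSError:
        cmdline = EXAMPLE   # fall back to the parameters logged above
    print(parse_cmdline(cmdline))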
Jan 28 00:56:01.551284 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 00:56:01.619128 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Jan 28 00:56:01.827617 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:56:01.864845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:56:02.083166 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:56:02.133825 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 00:56:02.215916 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 00:56:02.247709 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:56:02.289666 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:56:02.314972 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:56:02.363657 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 00:56:02.411100 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:56:02.411828 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:56:02.417159 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:56:02.417205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:56:02.417811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:56:02.417902 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:56:02.464112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:56:02.552011 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:56:02.741766 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 00:56:02.741836 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 28 00:56:02.763954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:56:02.886929 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 28 00:56:02.888910 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 00:56:02.888936 kernel: GPT:9289727 != 19775487 Jan 28 00:56:02.888953 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 00:56:02.888970 kernel: GPT:9289727 != 19775487 Jan 28 00:56:02.888986 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 00:56:02.889002 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:56:02.764148 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:56:02.958285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:56:03.041977 kernel: libata version 3.00 loaded. Jan 28 00:56:03.091004 kernel: AVX2 version of gcm_enc/dec engaged. Jan 28 00:56:03.096066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:56:03.120046 kernel: AES CTR mode by8 optimization enabled Jan 28 00:56:03.164114 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 00:56:03.164862 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 00:56:03.203821 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467) Jan 28 00:56:03.218213 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
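The virtio-blk and GPT messages below are numerically consistent: 19775488 512-byte sectors is the reported 10.1 GB / 9.43 GiB, and the backup GPT header of such a disk belongs at the last LBA (19775487), not at the 9289727 recorded by the primary header of the smaller original image, hence the warning. A quick check:

    sectors = 19775488            # "virtio_blk ... [vda] 19775488 512-byte logical blocks"
    size_bytes = sectors * 512

    print(f"{size_bytes / 10**9:.1f} GB")     # 10.1 GB, as reported
    print(f"{size_bytes / 2**30:.2f} GiB")    # 9.43 GiB, as reported
    print("expected backup GPT header LBA:", sectors - 1)   # 19775487, not the recorded 9289727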
Jan 28 00:56:03.319606 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (465) Jan 28 00:56:03.319646 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 28 00:56:03.319868 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 00:56:03.320043 kernel: scsi host0: ahci Jan 28 00:56:03.320613 kernel: scsi host1: ahci Jan 28 00:56:03.320809 kernel: scsi host2: ahci Jan 28 00:56:03.320984 kernel: scsi host3: ahci Jan 28 00:56:03.321165 kernel: scsi host4: ahci Jan 28 00:56:03.330814 kernel: scsi host5: ahci Jan 28 00:56:03.331968 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 28 00:56:03.333882 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 28 00:56:03.509329 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 28 00:56:03.510849 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 28 00:56:03.510876 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 28 00:56:03.510895 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 28 00:56:03.510910 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 28 00:56:03.430069 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 00:56:03.448883 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 28 00:56:03.673193 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:56:03.673232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:56:03.477330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 00:56:03.739987 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 00:56:03.740028 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 00:56:03.740048 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 00:56:03.529766 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 00:56:03.847212 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 28 00:56:03.847260 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 28 00:56:03.847276 kernel: ata3.00: applying bridge limits Jan 28 00:56:03.847311 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 00:56:03.847346 disk-uuid[560]: Primary Header is updated. Jan 28 00:56:03.847346 disk-uuid[560]: Secondary Entries is updated. Jan 28 00:56:03.847346 disk-uuid[560]: Secondary Header is updated. Jan 28 00:56:03.940944 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 00:56:03.940988 kernel: ata3.00: configured for UDMA/100 Jan 28 00:56:03.941007 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 28 00:56:03.574253 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:56:04.046162 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
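The AHCI controller above advertises a ports-implemented mask of 0x3f, which is why six SCSI hosts (ata1 through ata6) appear, each with its register bank at ABAR + 0x100 + 0x80 * port per the AHCI layout, matching the six port addresses logged. A small check of those numbers:

    abar = 0xc1040000        # AHCI BAR, from "reg 0x24: [mem 0xc1040000-0xc1040fff]"
    ports_impl = 0x3f        # ports-implemented mask from "6 ports 1.5 Gbps 0x3f impl SATA mode"

    n_ports = bin(ports_impl).count("1")
    print(n_ports, "ports implemented")          # 6, hence ata1..ata6
    for n in range(n_ports):
        # Each port's register bank sits at offset 0x100 + 0x80 * port from the ABAR.
        print(f"ata{n + 1}: port registers at {abar + 0x100 + 0x80 * n:#x}")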
Jan 28 00:56:04.214041 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 28 00:56:04.219900 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 00:56:04.258639 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 28 00:56:04.752266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:56:04.764032 disk-uuid[562]: The operation has completed successfully. Jan 28 00:56:05.053181 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 00:56:05.054912 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 00:56:05.178706 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 00:56:05.276342 sh[601]: Success Jan 28 00:56:05.433937 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 28 00:56:05.833981 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 00:56:05.920007 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 00:56:06.018083 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 28 00:56:06.159814 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 Jan 28 00:56:06.159908 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:56:06.159935 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 28 00:56:06.190831 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 00:56:06.190920 kernel: BTRFS info (device dm-0): using free space tree Jan 28 00:56:06.349870 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 00:56:06.363279 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 00:56:06.456892 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 00:56:06.481841 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 00:56:06.600806 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:56:06.600884 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:56:06.600902 kernel: BTRFS info (device vda6): using free space tree Jan 28 00:56:06.662299 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 00:56:06.731742 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 28 00:56:06.763017 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:56:06.840329 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 00:56:06.887114 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 00:56:07.517659 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:56:07.570994 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 28 00:56:07.671018 ignition[710]: Ignition 2.19.0 Jan 28 00:56:07.671034 ignition[710]: Stage: fetch-offline Jan 28 00:56:07.671796 ignition[710]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:56:07.689313 systemd-networkd[788]: lo: Link UP Jan 28 00:56:07.671816 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:56:07.689321 systemd-networkd[788]: lo: Gained carrier Jan 28 00:56:07.671953 ignition[710]: parsed url from cmdline: "" Jan 28 00:56:07.703068 systemd-networkd[788]: Enumeration completed Jan 28 00:56:07.671960 ignition[710]: no config URL provided Jan 28 00:56:07.708767 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:56:07.671968 ignition[710]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:56:07.711275 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:56:07.671983 ignition[710]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:56:07.711281 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:56:07.672022 ignition[710]: op(1): [started] loading QEMU firmware config module Jan 28 00:56:07.720101 systemd[1]: Reached target network.target - Network. Jan 28 00:56:07.672035 ignition[710]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 28 00:56:07.754203 systemd-networkd[788]: eth0: Link UP Jan 28 00:56:07.955999 ignition[710]: op(1): [finished] loading QEMU firmware config module Jan 28 00:56:07.754211 systemd-networkd[788]: eth0: Gained carrier Jan 28 00:56:07.754229 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:56:07.978855 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 00:56:08.147353 systemd-resolved[260]: Detected conflict on linux IN A 10.0.0.13 Jan 28 00:56:08.148107 systemd-resolved[260]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Jan 28 00:56:08.264263 systemd-resolved[260]: Detected conflict on linux8 IN A 10.0.0.13 Jan 28 00:56:08.265089 systemd-resolved[260]: Hostname conflict, changing published hostname from 'linux8' to 'linux9'. Jan 28 00:56:08.958908 systemd-networkd[788]: eth0: Gained IPv6LL Jan 28 00:56:09.411129 ignition[710]: parsing config with SHA512: d60088453c36f07690c8c429d85f89e5a7101f99bf42da5307f3908d51c2462cf84f5e51ce7cab0ee5eed43a1ca2611366e8fdcc68fa557386d968776ad115a3 Jan 28 00:56:09.442242 unknown[710]: fetched base config from "system" Jan 28 00:56:09.443052 unknown[710]: fetched user config from "qemu" Jan 28 00:56:09.452756 ignition[710]: fetch-offline: fetch-offline passed Jan 28 00:56:09.475830 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:56:09.452920 ignition[710]: Ignition finished successfully Jan 28 00:56:09.496344 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 28 00:56:09.574974 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 00:56:09.727351 ignition[794]: Ignition 2.19.0 Jan 28 00:56:09.727924 ignition[794]: Stage: kargs Jan 28 00:56:09.741876 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
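Ignition logs the SHA512 of the config it parsed (the d60088... digest above). The same digest can be reproduced for a local config blob with hashlib; a minimal sketch (on this boot the config actually arrived via the qemu_fw_cfg module, so the path below is only the conventional /run/ignition.json location referenced by the ignition-fetch condition above):

    import hashlib

    def sha512_of(path):
        """Stream a file through SHA-512 and return the hex digest, as Ignition logs it."""
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha512_of("/run/ignition.json"))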
Jan 28 00:56:09.728802 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:56:09.728827 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:56:09.825122 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 00:56:09.731764 ignition[794]: kargs: kargs passed Jan 28 00:56:09.731835 ignition[794]: Ignition finished successfully Jan 28 00:56:10.489191 ignition[802]: Ignition 2.19.0 Jan 28 00:56:10.489211 ignition[802]: Stage: disks Jan 28 00:56:10.513052 ignition[802]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:56:10.513080 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:56:10.517999 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 00:56:10.514725 ignition[802]: disks: disks passed Jan 28 00:56:10.542241 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 00:56:10.514802 ignition[802]: Ignition finished successfully Jan 28 00:56:10.587968 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 00:56:10.597810 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:56:10.631999 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:56:10.661536 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:56:10.762262 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 00:56:11.014211 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 28 00:56:11.028143 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 00:56:11.136960 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 00:56:12.258028 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none. Jan 28 00:56:12.260748 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 00:56:12.300773 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 00:56:12.344172 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:56:12.415725 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (821) Jan 28 00:56:12.418926 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 00:56:12.544806 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:56:12.544863 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:56:12.545683 kernel: BTRFS info (device vda6): using free space tree Jan 28 00:56:12.545825 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 00:56:12.547178 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 00:56:12.547740 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 00:56:12.547874 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:56:12.572283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 00:56:12.612147 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 00:56:12.703081 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 28 00:56:13.062531 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 00:56:13.108146 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory Jan 28 00:56:13.147745 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 00:56:13.178894 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 00:56:14.123063 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 00:56:14.175916 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 00:56:14.201939 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 00:56:14.266245 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:56:14.237336 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 00:56:14.344027 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 00:56:14.630113 ignition[934]: INFO : Ignition 2.19.0 Jan 28 00:56:14.630113 ignition[934]: INFO : Stage: mount Jan 28 00:56:14.646899 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:56:14.646899 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:56:14.702701 ignition[934]: INFO : mount: mount passed Jan 28 00:56:14.702701 ignition[934]: INFO : Ignition finished successfully Jan 28 00:56:14.710050 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 00:56:14.788056 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 00:56:14.822317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:56:14.922116 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (948) Jan 28 00:56:14.949312 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:56:14.949686 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:56:14.949710 kernel: BTRFS info (device vda6): using free space tree Jan 28 00:56:15.019290 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 00:56:15.034128 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 00:56:15.219151 ignition[965]: INFO : Ignition 2.19.0 Jan 28 00:56:15.219151 ignition[965]: INFO : Stage: files Jan 28 00:56:15.248053 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:56:15.248053 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:56:15.248053 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Jan 28 00:56:15.248053 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 00:56:15.248053 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 00:56:15.248053 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 00:56:15.248053 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 00:56:15.375069 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 00:56:15.375069 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 00:56:15.375069 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 00:56:15.248766 unknown[965]: wrote ssh authorized keys file for user: core Jan 28 00:56:15.506937 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 00:56:17.942064 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 00:56:17.942064 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 00:56:17.942064 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 28 00:56:18.247204 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 28 00:56:21.419028 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 00:56:21.442279 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 28 00:56:21.475071 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 00:56:21.505219 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:56:21.531333 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:56:21.531333 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:56:21.591551 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:56:21.591551 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:56:21.644165 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:56:21.682754 ignition[965]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:56:21.712183 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:56:21.733755 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:56:21.798346 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:56:21.798346 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:56:21.871522 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 00:56:22.252809 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 28 00:56:30.824194 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:56:30.824194 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 28 00:56:30.886828 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:56:30.886828 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:56:30.886828 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 28 00:56:30.886828 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 28 00:56:30.886828 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 00:56:30.886828 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 00:56:30.886828 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 28 00:56:30.886828 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 28 00:56:31.211192 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 00:56:31.296004 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 00:56:31.323199 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 28 00:56:31.323199 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 28 00:56:31.323199 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 00:56:31.382138 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:56:31.401216 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Jan 28 00:56:31.401216 ignition[965]: INFO : files: files passed Jan 28 00:56:31.401216 ignition[965]: INFO : Ignition finished successfully Jan 28 00:56:31.454854 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 00:56:31.520903 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 00:56:31.530580 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 00:56:31.580580 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 00:56:31.580972 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 00:56:31.654150 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory Jan 28 00:56:31.688141 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:56:31.688141 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:56:31.752858 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:56:31.803221 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:56:31.854204 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 00:56:31.923321 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 00:56:32.105185 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 00:56:32.105773 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 00:56:32.153069 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 00:56:32.169987 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 00:56:32.189122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 00:56:32.258250 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 00:56:32.345184 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:56:32.402990 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 00:56:32.522821 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:56:32.533968 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:56:32.562155 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 00:56:32.587604 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 00:56:32.587907 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:56:32.612322 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 00:56:32.633897 systemd[1]: Stopped target basic.target - Basic System. Jan 28 00:56:32.659108 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 00:56:32.684119 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:56:32.706939 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 00:56:32.732821 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 00:56:32.752081 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 28 00:56:32.781194 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 00:56:32.800942 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 00:56:32.822628 systemd[1]: Stopped target swap.target - Swaps. Jan 28 00:56:32.846039 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 00:56:32.846241 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:56:32.878896 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:56:32.897101 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:56:32.919261 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 00:56:32.925021 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:56:32.944018 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 00:56:32.944586 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 00:56:33.000547 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 00:56:33.001750 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:56:33.014935 systemd[1]: Stopped target paths.target - Path Units. Jan 28 00:56:33.043313 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 00:56:33.049127 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:56:33.049890 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 00:56:33.090088 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 00:56:33.120199 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 00:56:33.120927 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:56:33.145248 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 00:56:33.146776 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:56:33.172289 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 00:56:33.172801 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:56:33.178089 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 00:56:33.178286 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 00:56:33.269576 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 00:56:33.340812 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 00:56:33.487870 ignition[1019]: INFO : Ignition 2.19.0 Jan 28 00:56:33.487870 ignition[1019]: INFO : Stage: umount Jan 28 00:56:33.487870 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:56:33.487870 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:56:33.487870 ignition[1019]: INFO : umount: umount passed Jan 28 00:56:33.487870 ignition[1019]: INFO : Ignition finished successfully Jan 28 00:56:33.368089 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 00:56:33.368556 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:56:33.407348 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 00:56:33.410985 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:56:33.477146 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 28 00:56:33.477819 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 00:56:33.523843 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 00:56:33.524023 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 00:56:33.540184 systemd[1]: Stopped target network.target - Network. Jan 28 00:56:33.552034 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 00:56:33.552115 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 00:56:33.621081 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 00:56:33.622132 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 00:56:33.628068 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 00:56:33.628238 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 00:56:33.693841 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 00:56:33.693957 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 00:56:33.722067 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 00:56:33.740168 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 00:56:33.822812 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 00:56:33.823108 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 00:56:33.848910 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 00:56:33.849026 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:56:33.940633 systemd-networkd[788]: eth0: DHCPv6 lease lost Jan 28 00:56:33.964322 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 00:56:33.965302 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 00:56:33.986747 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 00:56:33.986863 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:56:34.051971 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 00:56:34.078273 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 00:56:34.078634 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:56:34.126273 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 00:56:34.126611 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:56:34.146952 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 00:56:34.147047 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 00:56:34.173218 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:56:34.221784 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 00:56:34.225210 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 00:56:34.225833 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 00:56:34.324107 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 00:56:34.324345 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 00:56:34.471896 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 00:56:34.473824 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 28 00:56:34.525318 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 00:56:34.525810 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 00:56:34.541038 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 00:56:34.541118 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:56:34.575608 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 00:56:34.576128 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:56:34.615572 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 00:56:34.615816 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 00:56:34.680327 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:56:34.682306 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:56:34.938759 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 00:56:34.952921 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 00:56:34.985913 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:56:35.028850 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 00:56:35.029145 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:56:35.029352 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 00:56:35.029817 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:56:35.029913 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:56:35.029977 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:56:35.031214 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 00:56:35.031790 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 00:56:35.063945 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 00:56:35.828181 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jan 28 00:56:35.064196 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 00:56:35.171295 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 00:56:35.202849 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 00:56:35.388100 systemd[1]: Switching root. Jan 28 00:56:35.921620 systemd-journald[193]: Journal stopped Jan 28 00:56:43.410152 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 00:56:43.410273 kernel: SELinux: policy capability open_perms=1 Jan 28 00:56:43.411032 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 00:56:43.411070 kernel: SELinux: policy capability always_check_network=0 Jan 28 00:56:43.411094 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 00:56:43.411112 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 00:56:43.411131 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 00:56:43.411151 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 00:56:43.411287 kernel: audit: type=1403 audit(1769561796.634:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 00:56:43.411321 systemd[1]: Successfully loaded SELinux policy in 235.748ms. Jan 28 00:56:43.411575 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 47.086ms. 
Jan 28 00:56:43.411606 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 00:56:43.412164 systemd[1]: Detected virtualization kvm. Jan 28 00:56:43.412193 systemd[1]: Detected architecture x86-64. Jan 28 00:56:43.412215 systemd[1]: Detected first boot. Jan 28 00:56:43.412349 systemd[1]: Initializing machine ID from VM UUID. Jan 28 00:56:43.412584 zram_generator::config[1062]: No configuration found. Jan 28 00:56:43.412607 systemd[1]: Populated /etc with preset unit settings. Jan 28 00:56:43.412633 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 00:56:43.412652 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 00:56:43.412671 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 00:56:43.412799 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 00:56:43.412819 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 00:56:43.413639 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 00:56:43.413669 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 00:56:43.413811 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 00:56:43.413840 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 00:56:43.413857 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 00:56:43.413874 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 00:56:43.413890 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:56:43.413907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:56:43.413923 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 00:56:43.414046 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 00:56:43.414067 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 00:56:43.414084 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:56:43.414101 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 00:56:43.414616 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:56:43.414634 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 00:56:43.414651 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 00:56:43.414667 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 00:56:43.414785 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 00:56:43.414904 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:56:43.414923 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:56:43.414940 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:56:43.415924 systemd[1]: Reached target swap.target - Swaps. 
Jan 28 00:56:43.415954 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 00:56:43.415976 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 00:56:43.415997 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:56:43.416017 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:56:43.416153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:56:43.416177 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 00:56:43.416198 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 00:56:43.416217 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 00:56:43.416238 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 00:56:43.416258 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:43.416279 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 00:56:43.416300 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 00:56:43.416320 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 00:56:43.417161 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 00:56:43.417188 systemd[1]: Reached target machines.target - Containers. Jan 28 00:56:43.417208 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 00:56:43.417236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:56:43.417256 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:56:43.417276 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 00:56:43.417297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:56:43.417629 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:56:43.417865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:56:43.417892 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 00:56:43.417912 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:56:43.417933 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 00:56:43.418290 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 00:56:43.418312 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 00:56:43.418331 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 00:56:43.418575 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 00:56:43.418820 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:56:43.418844 kernel: ACPI: bus type drm_connector registered Jan 28 00:56:43.418865 kernel: fuse: init (API version 7.39) Jan 28 00:56:43.418885 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 28 00:56:43.418906 kernel: loop: module loaded Jan 28 00:56:43.418927 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:56:43.418947 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 00:56:43.418965 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:56:43.418981 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 00:56:43.418997 systemd[1]: Stopped verity-setup.service. Jan 28 00:56:43.419645 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:43.419668 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 00:56:43.419814 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 00:56:43.419834 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 00:56:43.419851 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 00:56:43.419907 systemd-journald[1146]: Collecting audit messages is disabled. Jan 28 00:56:43.420058 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 00:56:43.420085 systemd-journald[1146]: Journal started Jan 28 00:56:43.420117 systemd-journald[1146]: Runtime Journal (/run/log/journal/b0f8e5eafdcf481db62df999fd4a487c) is 6.0M, max 48.3M, 42.2M free. Jan 28 00:56:39.479775 systemd[1]: Queued start job for default target multi-user.target. Jan 28 00:56:39.558887 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 00:56:39.561945 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 00:56:39.562827 systemd[1]: systemd-journald.service: Consumed 6.847s CPU time. Jan 28 00:56:43.455299 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:56:43.471276 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 00:56:43.489203 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 00:56:43.549578 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:56:43.585870 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 00:56:43.586630 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 00:56:43.608327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:56:43.609103 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:56:43.626988 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:56:43.627663 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:56:43.645240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:56:43.646183 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:56:43.672064 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 00:56:43.672815 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 00:56:43.688935 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:56:43.689313 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:56:43.709283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:56:43.738931 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 28 00:56:43.755674 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 00:56:43.783159 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:56:43.879914 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:56:43.913832 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 00:56:43.941613 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 00:56:43.957617 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 00:56:43.957900 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:56:43.975797 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 00:56:44.020005 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 00:56:44.041078 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 00:56:44.053674 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:56:44.060821 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 00:56:44.086337 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 00:56:44.109224 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:56:44.134195 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 00:56:44.150082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:56:44.161980 systemd-journald[1146]: Time spent on flushing to /var/log/journal/b0f8e5eafdcf481db62df999fd4a487c is 67.928ms for 993 entries. Jan 28 00:56:44.161980 systemd-journald[1146]: System Journal (/var/log/journal/b0f8e5eafdcf481db62df999fd4a487c) is 8.0M, max 195.6M, 187.6M free. Jan 28 00:56:44.700142 systemd-journald[1146]: Received client request to flush runtime journal. Jan 28 00:56:44.186250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:56:44.243886 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 00:56:44.290085 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:56:44.314906 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 00:56:44.341316 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 00:56:44.361150 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 00:56:44.400813 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 00:56:44.428246 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 00:56:44.685801 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 00:56:44.724185 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 00:56:44.741026 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 28 00:56:45.408153 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 28 00:56:45.446660 kernel: loop0: detected capacity change from 0 to 224512 Jan 28 00:56:45.486107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:56:45.614119 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 00:56:45.618352 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 00:56:45.657094 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 00:56:45.695984 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 28 00:56:45.696009 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 28 00:56:45.777256 kernel: loop1: detected capacity change from 0 to 142488 Jan 28 00:56:45.727804 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:56:45.803813 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 00:56:46.046946 kernel: loop2: detected capacity change from 0 to 140768 Jan 28 00:56:46.104158 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 00:56:46.156135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:56:46.302628 kernel: loop3: detected capacity change from 0 to 224512 Jan 28 00:56:46.549118 kernel: loop4: detected capacity change from 0 to 142488 Jan 28 00:56:46.648997 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jan 28 00:56:46.650201 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jan 28 00:56:46.669129 kernel: loop5: detected capacity change from 0 to 140768 Jan 28 00:56:46.673339 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:56:46.769949 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 28 00:56:46.771134 (sd-merge)[1203]: Merged extensions into '/usr'. Jan 28 00:56:46.784961 systemd[1]: Reloading requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 00:56:46.785099 systemd[1]: Reloading... Jan 28 00:56:47.304642 zram_generator::config[1235]: No configuration found. Jan 28 00:56:47.847353 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:56:47.962221 systemd[1]: Reloading finished in 1175 ms. Jan 28 00:56:48.060040 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 00:56:48.244082 systemd[1]: Starting ensure-sysext.service... Jan 28 00:56:48.267940 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:56:48.336797 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 00:56:48.346565 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 00:56:48.373321 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Jan 28 00:56:48.373847 systemd[1]: Reloading... Jan 28 00:56:48.506587 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 28 00:56:48.507303 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 00:56:48.512589 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 00:56:48.513146 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jan 28 00:56:48.513263 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jan 28 00:56:48.531055 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:56:48.531178 systemd-tmpfiles[1269]: Skipping /boot Jan 28 00:56:48.633185 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:56:48.633205 systemd-tmpfiles[1269]: Skipping /boot Jan 28 00:56:48.685840 zram_generator::config[1297]: No configuration found. Jan 28 00:56:49.174260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:56:49.401654 systemd[1]: Reloading finished in 1026 ms. Jan 28 00:56:49.597939 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 00:56:49.636810 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:56:49.755335 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 00:56:49.778258 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 00:56:49.801289 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 00:56:49.829019 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:56:49.854965 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:56:49.879850 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 00:56:49.918038 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 00:56:49.937337 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 00:56:49.948977 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Jan 28 00:56:49.977205 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:49.978309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:56:49.989663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:56:50.026619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:56:50.058166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:56:50.076167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:56:50.138649 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 00:56:50.156231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:50.171812 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 28 00:56:50.190098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:56:50.190839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:56:50.216253 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 00:56:50.228058 augenrules[1363]: No rules Jan 28 00:56:50.238233 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:56:50.257856 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 00:56:50.276264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:56:50.278554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:56:50.299138 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:56:50.300135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:56:50.340278 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 00:56:50.377264 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 00:56:50.460918 systemd[1]: Finished ensure-sysext.service. Jan 28 00:56:50.557025 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 00:56:50.587019 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:50.587844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:56:50.631606 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1381) Jan 28 00:56:50.603172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:56:50.644070 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:56:50.665884 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:56:50.699900 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:56:50.716610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:56:50.733922 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:56:50.762263 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 00:56:50.788073 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 00:56:50.788344 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:50.790654 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:56:50.791160 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:56:50.807333 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:56:50.808016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:56:50.984953 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 28 00:56:51.011599 kernel: ACPI: button: Power Button [PWRF] Jan 28 00:56:51.044331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 28 00:56:51.045668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:56:51.053994 systemd-resolved[1346]: Positive Trust Anchors: Jan 28 00:56:51.054123 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:56:51.054163 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:56:51.063339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:56:51.080634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:56:51.081245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:56:51.092969 systemd-resolved[1346]: Defaulting to hostname 'linux'. Jan 28 00:56:51.097890 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:56:51.121170 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:56:51.135325 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:56:51.391290 systemd-networkd[1407]: lo: Link UP Jan 28 00:56:51.391855 systemd-networkd[1407]: lo: Gained carrier Jan 28 00:56:51.397254 systemd-networkd[1407]: Enumeration completed Jan 28 00:56:51.397635 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:56:51.411642 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:56:51.411866 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:56:51.414326 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 28 00:56:51.414300 systemd[1]: Reached target network.target - Network. Jan 28 00:56:51.433218 systemd-networkd[1407]: eth0: Link UP Jan 28 00:56:51.433231 systemd-networkd[1407]: eth0: Gained carrier Jan 28 00:56:51.433256 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:56:51.440878 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 00:56:51.473265 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 00:56:51.491148 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 00:56:51.501133 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 00:56:51.505221 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection. Jan 28 00:56:51.513138 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 00:56:51.513216 systemd-timesyncd[1409]: Initial clock synchronization to Wed 2026-01-28 00:56:51.772764 UTC. 
Jan 28 00:56:51.619888 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 28 00:56:51.620565 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 00:56:51.620961 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 00:56:51.648672 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 28 00:56:51.649199 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 00:56:51.663192 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 00:56:51.730636 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 00:56:51.791884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:56:51.994609 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 00:56:52.893285 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:56:53.189987 systemd-networkd[1407]: eth0: Gained IPv6LL Jan 28 00:56:53.229973 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 00:56:53.288616 kernel: kvm_amd: TSC scaling supported Jan 28 00:56:53.293015 kernel: kvm_amd: Nested Virtualization enabled Jan 28 00:56:53.293058 kernel: kvm_amd: Nested Paging enabled Jan 28 00:56:53.295780 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 00:56:53.302282 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 00:56:53.304033 kernel: kvm_amd: PMU virtualization is disabled Jan 28 00:56:53.906989 kernel: EDAC MC: Ver: 3.0.0 Jan 28 00:56:54.021014 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 28 00:56:54.094229 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 00:56:54.150743 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 00:56:54.289041 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 00:56:54.306030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:56:54.319924 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:56:54.361071 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 00:56:54.389937 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 00:56:54.424287 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 00:56:54.458712 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 00:56:54.488348 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 00:56:54.515077 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 00:56:54.515791 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:56:54.529038 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:56:54.548237 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 00:56:54.566154 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 00:56:54.589710 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 28 00:56:54.607235 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 00:56:54.623606 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 00:56:54.637192 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:56:54.652301 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 00:56:54.656202 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:56:54.670014 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:56:54.670255 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:56:54.682894 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 00:56:54.703790 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 00:56:54.724793 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 00:56:54.749935 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 00:56:54.771841 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 00:56:54.788074 jq[1444]: false Jan 28 00:56:54.790585 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 00:56:54.811837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:56:54.883200 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 00:56:54.886750 dbus-daemon[1443]: [system] SELinux support is enabled Jan 28 00:56:54.923778 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 00:56:54.937180 extend-filesystems[1445]: Found loop3 Jan 28 00:56:54.937180 extend-filesystems[1445]: Found loop4 Jan 28 00:56:54.937180 extend-filesystems[1445]: Found loop5 Jan 28 00:56:55.001938 extend-filesystems[1445]: Found sr0 Jan 28 00:56:55.001938 extend-filesystems[1445]: Found vda Jan 28 00:56:55.001938 extend-filesystems[1445]: Found vda1 Jan 28 00:56:55.001938 extend-filesystems[1445]: Found vda2 Jan 28 00:56:55.001938 extend-filesystems[1445]: Found vda3 Jan 28 00:56:55.001938 extend-filesystems[1445]: Found usr Jan 28 00:56:55.001938 extend-filesystems[1445]: Found vda4 Jan 28 00:56:55.001938 extend-filesystems[1445]: Found vda6 Jan 28 00:56:55.001938 extend-filesystems[1445]: Found vda7 Jan 28 00:56:55.001938 extend-filesystems[1445]: Found vda9 Jan 28 00:56:55.001938 extend-filesystems[1445]: Checking size of /dev/vda9 Jan 28 00:56:55.244889 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 28 00:56:54.947759 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 00:56:55.246175 extend-filesystems[1445]: Resized partition /dev/vda9 Jan 28 00:56:54.957812 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 00:56:55.283973 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024) Jan 28 00:56:55.006120 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 00:56:55.038857 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 00:56:55.080632 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 28 00:56:55.081577 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 00:56:55.319713 jq[1472]: true Jan 28 00:56:55.085814 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 00:56:55.161329 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 00:56:55.214979 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 00:56:55.250857 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 28 00:56:55.289848 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 00:56:55.290083 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 00:56:55.298905 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 00:56:55.299193 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 00:56:55.366180 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1391) Jan 28 00:56:55.366312 update_engine[1467]: I20260128 00:56:55.336206 1467 main.cc:92] Flatcar Update Engine starting Jan 28 00:56:55.366312 update_engine[1467]: I20260128 00:56:55.338997 1467 update_check_scheduler.cc:74] Next update check in 2m2s Jan 28 00:56:55.376853 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 00:56:55.333306 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 00:56:55.380169 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 00:56:55.380951 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 00:56:55.444325 jq[1482]: true Jan 28 00:56:55.524824 systemd[1]: Started update-engine.service - Update Engine. Jan 28 00:56:55.539567 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 00:56:55.539730 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 00:56:55.569756 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 00:56:55.569794 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 00:56:55.650058 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 00:56:55.651090 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 00:56:55.780067 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 00:56:55.813022 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 28 00:56:55.863744 tar[1478]: linux-amd64/LICENSE Jan 28 00:56:55.863744 tar[1478]: linux-amd64/helm Jan 28 00:56:55.831954 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 00:56:55.844289 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 00:56:55.844801 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
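
The extend-filesystems/resize2fs step recorded above grows the root filesystem on /dev/vda9 in place, from 553472 to 1864699 blocks of 4 KiB each (the block size is confirmed by the resize2fs output a few entries further on). A quick conversion of those block counts, using only figures taken from this log, is sketched below in Python:

    # Convert the ext4 block counts logged for /dev/vda9 into sizes.
    # Block counts and the 4 KiB block size are taken from the log itself.
    BLOCK_SIZE = 4096          # bytes; "(4k) blocks long" per the resize2fs output
    OLD_BLOCKS = 553_472       # /dev/vda9 before the online resize
    NEW_BLOCKS = 1_864_699     # after "resized filesystem to 1864699"

    def gib(blocks: int) -> float:
        """Size of `blocks` filesystem blocks in GiB."""
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~7.11 GiB

In other words, the root filesystem grows from roughly 2.1 GiB to roughly 7.1 GiB without / ever being unmounted.
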
Jan 28 00:56:55.874182 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (Power Button) Jan 28 00:56:55.874213 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 00:56:55.879047 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 00:56:55.892701 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 00:56:55.892701 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 00:56:55.892701 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 28 00:56:56.007124 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Jan 28 00:56:55.893017 systemd-logind[1463]: New seat seat0. Jan 28 00:56:55.905976 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 00:56:55.923126 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 00:56:55.923719 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 00:56:56.356176 bash[1522]: Updated "/home/core/.ssh/authorized_keys" Jan 28 00:56:56.362019 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 00:56:56.397154 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 00:56:56.420021 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 00:56:56.420709 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 00:56:56.452009 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 00:56:56.661609 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 00:56:56.891315 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 00:56:56.934072 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 00:56:56.961951 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 00:56:56.988699 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 00:56:59.854761 containerd[1483]: time="2026-01-28T00:56:59.851349773Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 00:57:00.310995 tar[1478]: linux-amd64/README.md Jan 28 00:57:00.369943 containerd[1483]: time="2026-01-28T00:57:00.367860282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:57:00.381030 containerd[1483]: time="2026-01-28T00:57:00.380295906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:57:00.381030 containerd[1483]: time="2026-01-28T00:57:00.380784755Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 00:57:00.381030 containerd[1483]: time="2026-01-28T00:57:00.380821691Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 00:57:00.382790 containerd[1483]: time="2026-01-28T00:57:00.382360366Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 28 00:57:00.382790 containerd[1483]: time="2026-01-28T00:57:00.382645500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 00:57:00.385573 containerd[1483]: time="2026-01-28T00:57:00.383035566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:57:00.385573 containerd[1483]: time="2026-01-28T00:57:00.383070344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:57:00.385573 containerd[1483]: time="2026-01-28T00:57:00.383912010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:57:00.385573 containerd[1483]: time="2026-01-28T00:57:00.383943536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 00:57:00.385573 containerd[1483]: time="2026-01-28T00:57:00.383968133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:57:00.385573 containerd[1483]: time="2026-01-28T00:57:00.383986571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 00:57:00.385573 containerd[1483]: time="2026-01-28T00:57:00.384648569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:57:00.386691 containerd[1483]: time="2026-01-28T00:57:00.386134110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:57:00.387511 containerd[1483]: time="2026-01-28T00:57:00.386955059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:57:00.387511 containerd[1483]: time="2026-01-28T00:57:00.387107513Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 00:57:00.390036 containerd[1483]: time="2026-01-28T00:57:00.389344648Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 00:57:00.390036 containerd[1483]: time="2026-01-28T00:57:00.389986488Z" level=info msg="metadata content store policy set" policy=shared Jan 28 00:57:00.455597 containerd[1483]: time="2026-01-28T00:57:00.452944797Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 00:57:00.455597 containerd[1483]: time="2026-01-28T00:57:00.454084325Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 00:57:00.455597 containerd[1483]: time="2026-01-28T00:57:00.454899609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 00:57:00.455597 containerd[1483]: time="2026-01-28T00:57:00.454937640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 28 00:57:00.455597 containerd[1483]: time="2026-01-28T00:57:00.454959714Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 00:57:00.455597 containerd[1483]: time="2026-01-28T00:57:00.455313523Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 00:57:00.465814 containerd[1483]: time="2026-01-28T00:57:00.458910125Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 00:57:00.465814 containerd[1483]: time="2026-01-28T00:57:00.459920878Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 00:57:00.459196 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 00:57:00.466252 containerd[1483]: time="2026-01-28T00:57:00.466065755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 00:57:00.466252 containerd[1483]: time="2026-01-28T00:57:00.466106915Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 00:57:00.466252 containerd[1483]: time="2026-01-28T00:57:00.466136010Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 00:57:00.466252 containerd[1483]: time="2026-01-28T00:57:00.466161741Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 00:57:00.466252 containerd[1483]: time="2026-01-28T00:57:00.466182742Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 00:57:00.466252 containerd[1483]: time="2026-01-28T00:57:00.466206397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 00:57:00.466252 containerd[1483]: time="2026-01-28T00:57:00.466234184Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 00:57:00.466775 containerd[1483]: time="2026-01-28T00:57:00.466579566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 00:57:00.466775 containerd[1483]: time="2026-01-28T00:57:00.466722203Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 00:57:00.466775 containerd[1483]: time="2026-01-28T00:57:00.466750812Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 00:57:00.466839 containerd[1483]: time="2026-01-28T00:57:00.466788396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.466839 containerd[1483]: time="2026-01-28T00:57:00.466813844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.466839 containerd[1483]: time="2026-01-28T00:57:00.466835574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.466960 containerd[1483]: time="2026-01-28T00:57:00.466859542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 28 00:57:00.466960 containerd[1483]: time="2026-01-28T00:57:00.466880817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.466960 containerd[1483]: time="2026-01-28T00:57:00.466903469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.466960 containerd[1483]: time="2026-01-28T00:57:00.466925280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.466960 containerd[1483]: time="2026-01-28T00:57:00.466947355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.467089 containerd[1483]: time="2026-01-28T00:57:00.466968284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.467089 containerd[1483]: time="2026-01-28T00:57:00.466995079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.467089 containerd[1483]: time="2026-01-28T00:57:00.467015300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.467089 containerd[1483]: time="2026-01-28T00:57:00.467038711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.467089 containerd[1483]: time="2026-01-28T00:57:00.467063045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.467180 containerd[1483]: time="2026-01-28T00:57:00.467091288Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 00:57:00.467180 containerd[1483]: time="2026-01-28T00:57:00.467126066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.467180 containerd[1483]: time="2026-01-28T00:57:00.467147200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.467180 containerd[1483]: time="2026-01-28T00:57:00.467166812Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 00:57:00.467890 containerd[1483]: time="2026-01-28T00:57:00.467352606Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 00:57:00.467890 containerd[1483]: time="2026-01-28T00:57:00.467607581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 00:57:00.467890 containerd[1483]: time="2026-01-28T00:57:00.467750644Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 00:57:00.467890 containerd[1483]: time="2026-01-28T00:57:00.467774674Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 00:57:00.467890 containerd[1483]: time="2026-01-28T00:57:00.467797265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.467890 containerd[1483]: time="2026-01-28T00:57:00.467819867Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 28 00:57:00.467890 containerd[1483]: time="2026-01-28T00:57:00.467848282Z" level=info msg="NRI interface is disabled by configuration." Jan 28 00:57:00.467890 containerd[1483]: time="2026-01-28T00:57:00.467864035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 28 00:57:00.471003 containerd[1483]: time="2026-01-28T00:57:00.470188516Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 00:57:00.471003 containerd[1483]: time="2026-01-28T00:57:00.470284290Z" level=info msg="Connect containerd service" Jan 28 00:57:00.471003 containerd[1483]: time="2026-01-28T00:57:00.470346471Z" level=info msg="using legacy CRI server" Jan 28 00:57:00.471003 containerd[1483]: time="2026-01-28T00:57:00.470565524Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 00:57:00.471003 containerd[1483]: time="2026-01-28T00:57:00.470833040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 00:57:00.478253 
containerd[1483]: time="2026-01-28T00:57:00.477952720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:57:00.479652 containerd[1483]: time="2026-01-28T00:57:00.478726254Z" level=info msg="Start subscribing containerd event" Jan 28 00:57:00.479652 containerd[1483]: time="2026-01-28T00:57:00.479571576Z" level=info msg="Start recovering state" Jan 28 00:57:00.480085 containerd[1483]: time="2026-01-28T00:57:00.479872740Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 00:57:00.480178 containerd[1483]: time="2026-01-28T00:57:00.480156483Z" level=info msg="Start event monitor" Jan 28 00:57:00.480247 containerd[1483]: time="2026-01-28T00:57:00.480231034Z" level=info msg="Start snapshots syncer" Jan 28 00:57:00.480316 containerd[1483]: time="2026-01-28T00:57:00.480299445Z" level=info msg="Start cni network conf syncer for default" Jan 28 00:57:00.481000 containerd[1483]: time="2026-01-28T00:57:00.480362912Z" level=info msg="Start streaming server" Jan 28 00:57:00.482100 containerd[1483]: time="2026-01-28T00:57:00.481914715Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 00:57:00.483750 containerd[1483]: time="2026-01-28T00:57:00.483601894Z" level=info msg="containerd successfully booted in 0.635477s" Jan 28 00:57:00.483747 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 00:57:02.413786 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 00:57:02.439798 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:49584.service - OpenSSH per-connection server daemon (10.0.0.1:49584). Jan 28 00:57:03.041712 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 49584 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 00:57:03.069320 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:03.208029 systemd-logind[1463]: New session 1 of user core. Jan 28 00:57:03.215649 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 00:57:03.271161 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 00:57:03.755185 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 00:57:03.785001 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 00:57:04.026153 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 00:57:04.903803 systemd[1557]: Queued start job for default target default.target. Jan 28 00:57:04.913702 systemd[1557]: Created slice app.slice - User Application Slice. Jan 28 00:57:04.913848 systemd[1557]: Reached target paths.target - Paths. Jan 28 00:57:04.913871 systemd[1557]: Reached target timers.target - Timers. Jan 28 00:57:04.919228 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 00:57:05.045800 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 00:57:05.046276 systemd[1557]: Reached target sockets.target - Sockets. Jan 28 00:57:05.046747 systemd[1557]: Reached target basic.target - Basic System. Jan 28 00:57:05.046830 systemd[1557]: Reached target default.target - Main User Target. Jan 28 00:57:05.047022 systemd[1557]: Startup finished in 909ms. 
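
The containerd error above, "failed to load cni during init ... no network config found in /etc/cni/net.d", typically just means that no CNI plugin has dropped a network configuration into the directory named by NetworkPluginConfDir in the config dump a few entries earlier; the "Start cni network conf syncer for default" entry shows containerd will pick one up once it appears. A minimal sketch of the same check, assuming the conventional *.conf/*.conflist/*.json naming for CNI config files:

    # Rough equivalent of the check behind "no network config found in /etc/cni/net.d":
    # look for CNI network configuration files in the directory containerd watches.
    # The file-name patterns below follow the usual CNI convention (an assumption).
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/cni/net.d")
    patterns = ("*.conf", "*.conflist", "*.json")

    configs = sorted(p for pat in patterns for p in CNI_CONF_DIR.glob(pat))
    if configs:
        print("CNI config(s) present:", ", ".join(p.name for p in configs))
    else:
        print(f"no network config found in {CNI_CONF_DIR} (matches the containerd error above)")
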
Jan 28 00:57:05.049018 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 00:57:05.068838 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 00:57:05.317756 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:49702.service - OpenSSH per-connection server daemon (10.0.0.1:49702). Jan 28 00:57:05.706243 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 49702 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 00:57:05.722106 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:05.752094 systemd-logind[1463]: New session 2 of user core. Jan 28 00:57:05.766063 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 00:57:06.096948 sshd[1568]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:06.117997 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:49702.service: Deactivated successfully. Jan 28 00:57:06.123005 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 00:57:06.136928 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Jan 28 00:57:06.179009 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:49710.service - OpenSSH per-connection server daemon (10.0.0.1:49710). Jan 28 00:57:06.211302 systemd-logind[1463]: Removed session 2. Jan 28 00:57:06.331155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:57:06.336743 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:57:06.338183 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 00:57:06.339941 systemd[1]: Startup finished in 16.003s (kernel) + 42.700s (initrd) + 29.932s (userspace) = 1min 28.636s. Jan 28 00:57:06.379307 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 49710 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 00:57:06.391095 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:06.417134 systemd-logind[1463]: New session 3 of user core. Jan 28 00:57:06.432083 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 00:57:06.657192 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:06.670936 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:49710.service: Deactivated successfully. Jan 28 00:57:06.690095 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 00:57:06.694212 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Jan 28 00:57:06.699118 systemd-logind[1463]: Removed session 3. Jan 28 00:57:13.286974 kubelet[1581]: E0128 00:57:13.283952 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:57:13.300937 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:57:13.301934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:57:13.325095 systemd[1]: kubelet.service: Consumed 14.403s CPU time. Jan 28 00:57:16.755589 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:36038.service - OpenSSH per-connection server daemon (10.0.0.1:36038). 
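
The kubelet exit above ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory"), and the identical failures that follow, all report the same condition: the node has not been bootstrapped yet, so the file that kubeadm normally writes during 'kubeadm init' or 'kubeadm join' does not exist, and systemd keeps the unit in a restart loop. A purely illustrative sketch of that check is below; only /var/lib/kubelet/config.yaml comes from the error itself, while the other two paths are conventional kubeadm locations and are assumptions here:

    # Illustrative only: report whether the files a kubeadm-bootstrapped kubelet
    # expects are present. /var/lib/kubelet/config.yaml is the path named in the
    # error above; the other paths are conventional kubeadm locations (assumed).
    from pathlib import Path

    expected = {
        "kubelet config (named in the error)": Path("/var/lib/kubelet/config.yaml"),
        "kubeconfig written by kubeadm (assumed)": Path("/etc/kubernetes/kubelet.conf"),
        "cluster CA certificate (assumed)": Path("/etc/kubernetes/pki/ca.crt"),
    }

    for label, path in expected.items():
        status = "present" if path.exists() else "missing"
        print(f"{status:7}  {path}  ({label})")
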
Jan 28 00:57:16.927148 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 36038 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 00:57:16.938876 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:17.020698 systemd-logind[1463]: New session 4 of user core. Jan 28 00:57:17.046941 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 00:57:17.175100 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:17.207205 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:36038.service: Deactivated successfully. Jan 28 00:57:17.212923 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 00:57:17.216712 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Jan 28 00:57:17.235814 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:36044.service - OpenSSH per-connection server daemon (10.0.0.1:36044). Jan 28 00:57:17.241172 systemd-logind[1463]: Removed session 4. Jan 28 00:57:17.329918 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 36044 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 00:57:17.336007 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:17.358706 systemd-logind[1463]: New session 5 of user core. Jan 28 00:57:17.366155 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 00:57:17.464024 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:17.489232 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:36044.service: Deactivated successfully. Jan 28 00:57:17.500034 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 00:57:17.565569 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Jan 28 00:57:17.599021 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:36050.service - OpenSSH per-connection server daemon (10.0.0.1:36050). Jan 28 00:57:17.611014 systemd-logind[1463]: Removed session 5. Jan 28 00:57:17.743110 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 36050 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 00:57:17.750326 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:17.785162 systemd-logind[1463]: New session 6 of user core. Jan 28 00:57:17.813971 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 00:57:18.068775 sshd[1610]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:18.096857 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:36050.service: Deactivated successfully. Jan 28 00:57:18.106868 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 00:57:18.117651 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Jan 28 00:57:18.128028 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:36058.service - OpenSSH per-connection server daemon (10.0.0.1:36058). Jan 28 00:57:18.131871 systemd-logind[1463]: Removed session 6. Jan 28 00:57:18.268824 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 36058 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 00:57:18.279832 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:18.298145 systemd-logind[1463]: New session 7 of user core. Jan 28 00:57:18.320148 systemd[1]: Started session-7.scope - Session 7 of User core. 
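
Each "Accepted publickey for core ... SHA256:MPQM3+..." entry identifies the client key only by its fingerprint: the unpadded base64 encoding of a SHA-256 hash over the raw public-key blob. Given the matching authorized_keys or .pub line (which is not part of this log), that value is easy to recompute; a small sketch:

    # Recompute an OpenSSH-style SHA256 fingerprint (as shown in the sshd entries)
    # from a single authorized_keys / *.pub line, e.g. "ssh-rsa AAAA... comment".
    import base64
    import hashlib
    import sys

    def ssh_fingerprint(pubkey_line: str) -> str:
        # Field 2 of the line is the base64-encoded key blob that sshd hashes.
        blob_b64 = pubkey_line.split()[1]
        digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
        # OpenSSH prints unpadded base64, prefixed with the hash name.
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    if __name__ == "__main__":
        print(ssh_fingerprint(sys.stdin.readline()))

Piping the corresponding .pub line into it should print the same SHA256:MPQM3+... string that sshd logs here.
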
Jan 28 00:57:18.468890 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 00:57:18.471134 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:57:18.523008 sudo[1620]: pam_unix(sudo:session): session closed for user root Jan 28 00:57:18.535165 sshd[1617]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:18.574972 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:36058.service: Deactivated successfully. Jan 28 00:57:18.588070 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 00:57:18.726308 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Jan 28 00:57:18.752170 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:36060.service - OpenSSH per-connection server daemon (10.0.0.1:36060). Jan 28 00:57:18.759717 systemd-logind[1463]: Removed session 7. Jan 28 00:57:18.865308 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 36060 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 00:57:18.871779 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:18.903143 systemd-logind[1463]: New session 8 of user core. Jan 28 00:57:18.913072 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 00:57:19.029938 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 00:57:19.030898 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:57:19.058131 sudo[1629]: pam_unix(sudo:session): session closed for user root Jan 28 00:57:19.085302 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 00:57:19.086799 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:57:19.174973 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 00:57:19.218923 auditctl[1632]: No rules Jan 28 00:57:19.231994 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:57:19.295292 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 00:57:19.358035 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 00:57:19.711051 augenrules[1650]: No rules Jan 28 00:57:19.716648 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 00:57:19.736575 sudo[1628]: pam_unix(sudo:session): session closed for user root Jan 28 00:57:19.760839 sshd[1625]: pam_unix(sshd:session): session closed for user core Jan 28 00:57:19.793822 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:36060.service: Deactivated successfully. Jan 28 00:57:19.804328 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 00:57:19.823273 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Jan 28 00:57:19.848159 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:36064.service - OpenSSH per-connection server daemon (10.0.0.1:36064). Jan 28 00:57:19.858263 systemd-logind[1463]: Removed session 8. Jan 28 00:57:20.121162 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 36064 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 00:57:20.130733 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:57:20.278809 systemd-logind[1463]: New session 9 of user core. Jan 28 00:57:20.292085 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 28 00:57:20.430171 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 00:57:20.434084 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:57:23.532714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 00:57:23.625960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:57:25.473923 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:57:25.474776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:57:25.987579 kubelet[1685]: E0128 00:57:25.986020 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:57:26.002877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:57:26.003212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:57:26.004172 systemd[1]: kubelet.service: Consumed 1.705s CPU time. Jan 28 00:57:26.079929 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 00:57:26.096102 (dockerd)[1696]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 00:57:28.931169 dockerd[1696]: time="2026-01-28T00:57:28.921115142Z" level=info msg="Starting up" Jan 28 00:57:30.199092 systemd[1]: var-lib-docker-metacopy\x2dcheck3014780310-merged.mount: Deactivated successfully. Jan 28 00:57:30.281566 dockerd[1696]: time="2026-01-28T00:57:30.279335999Z" level=info msg="Loading containers: start." Jan 28 00:57:31.167670 kernel: Initializing XFRM netlink socket Jan 28 00:57:31.500432 systemd-networkd[1407]: docker0: Link UP Jan 28 00:57:31.614916 dockerd[1696]: time="2026-01-28T00:57:31.613721271Z" level=info msg="Loading containers: done." Jan 28 00:57:31.730297 dockerd[1696]: time="2026-01-28T00:57:31.730067498Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 00:57:31.732955 dockerd[1696]: time="2026-01-28T00:57:31.730929703Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 00:57:31.738293 dockerd[1696]: time="2026-01-28T00:57:31.733633010Z" level=info msg="Daemon has completed initialization" Jan 28 00:57:32.206842 dockerd[1696]: time="2026-01-28T00:57:32.204507411Z" level=info msg="API listen on /run/docker.sock" Jan 28 00:57:32.210098 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 00:57:35.985255 containerd[1483]: time="2026-01-28T00:57:35.983642132Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 00:57:36.099155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 00:57:36.114911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:57:36.945937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
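
dockerd's warning above, "Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled", refers to a kernel build option rather than anything in Docker's own configuration. Whether that option is set can be read from the kernel's exported config; the sketch below assumes the kernel exposes it at /proc/config.gz (many distributions ship /boot/config-<release> instead):

    # Check whether the running kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR,
    # the option dockerd's overlay2 warning refers to. Reads /proc/config.gz, which
    # is only present when the kernel exports its config (an assumption here).
    import gzip

    OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"

    try:
        with gzip.open("/proc/config.gz", "rt") as fh:
            for line in fh:
                if line.startswith(OPTION + "="):
                    print(line.strip())
                    break
            else:
                print(f"{OPTION} not set")
    except FileNotFoundError:
        print("kernel config not exported at /proc/config.gz; try /boot/config-$(uname -r)")
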
Jan 28 00:57:37.031606 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:57:37.533497 kubelet[1851]: E0128 00:57:37.527646 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:57:37.547259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:57:37.548759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:57:37.551743 systemd[1]: kubelet.service: Consumed 1.046s CPU time. Jan 28 00:57:37.698051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423231996.mount: Deactivated successfully. Jan 28 00:57:40.708789 update_engine[1467]: I20260128 00:57:40.706888 1467 update_attempter.cc:509] Updating boot flags... Jan 28 00:57:40.863430 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1927) Jan 28 00:57:41.164320 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1929) Jan 28 00:57:46.588824 containerd[1483]: time="2026-01-28T00:57:46.587306232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:46.594153 containerd[1483]: time="2026-01-28T00:57:46.590610673Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 28 00:57:46.594153 containerd[1483]: time="2026-01-28T00:57:46.591563306Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:46.599953 containerd[1483]: time="2026-01-28T00:57:46.599590076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:46.602491 containerd[1483]: time="2026-01-28T00:57:46.602311733Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 10.617758338s" Jan 28 00:57:46.602491 containerd[1483]: time="2026-01-28T00:57:46.602487558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 00:57:46.615472 containerd[1483]: time="2026-01-28T00:57:46.613324658Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 00:57:47.780739 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 00:57:47.802121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:57:50.206954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 00:57:50.647104 (kubelet)[1945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:57:50.890119 kubelet[1945]: E0128 00:57:50.889952 1945 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:57:50.895124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:57:50.899559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:57:50.903760 systemd[1]: kubelet.service: Consumed 2.284s CPU time. Jan 28 00:57:54.540132 containerd[1483]: time="2026-01-28T00:57:54.539255323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:54.542215 containerd[1483]: time="2026-01-28T00:57:54.541107118Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 28 00:57:54.544060 containerd[1483]: time="2026-01-28T00:57:54.543919991Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:54.551532 containerd[1483]: time="2026-01-28T00:57:54.551295820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:54.552858 containerd[1483]: time="2026-01-28T00:57:54.552706524Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 7.938149849s" Jan 28 00:57:54.552858 containerd[1483]: time="2026-01-28T00:57:54.552840629Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 00:57:54.558302 containerd[1483]: time="2026-01-28T00:57:54.558213425Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 00:57:59.127280 containerd[1483]: time="2026-01-28T00:57:59.126707073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:59.128855 containerd[1483]: time="2026-01-28T00:57:59.128018103Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 28 00:57:59.129821 containerd[1483]: time="2026-01-28T00:57:59.129621942Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:59.136079 containerd[1483]: time="2026-01-28T00:57:59.135794451Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:59.137481 containerd[1483]: time="2026-01-28T00:57:59.137218086Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 4.578951917s" Jan 28 00:57:59.137481 containerd[1483]: time="2026-01-28T00:57:59.137340817Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 00:57:59.142826 containerd[1483]: time="2026-01-28T00:57:59.142781815Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 00:58:02.133590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 00:58:02.594054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:58:05.563897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:58:05.567664 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:58:06.624827 kubelet[1970]: E0128 00:58:06.623905 1970 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:58:06.704985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:58:06.705841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:58:06.709294 systemd[1]: kubelet.service: Consumed 3.490s CPU time. Jan 28 00:58:13.273539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314389322.mount: Deactivated successfully. 
Jan 28 00:58:15.169685 containerd[1483]: time="2026-01-28T00:58:15.168294690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:15.175702 containerd[1483]: time="2026-01-28T00:58:15.170883380Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 28 00:58:15.175702 containerd[1483]: time="2026-01-28T00:58:15.174691362Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:15.187862 containerd[1483]: time="2026-01-28T00:58:15.187550020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:15.191153 containerd[1483]: time="2026-01-28T00:58:15.190922082Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 16.047928169s" Jan 28 00:58:15.191153 containerd[1483]: time="2026-01-28T00:58:15.191096840Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 00:58:15.204277 containerd[1483]: time="2026-01-28T00:58:15.203880925Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 00:58:17.073540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 00:58:17.307312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:58:17.930075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1244756930.mount: Deactivated successfully. Jan 28 00:58:18.446928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:58:18.484606 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:58:19.393133 kubelet[2002]: E0128 00:58:19.392242 2002 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:58:19.406482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:58:19.407204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:58:19.409028 systemd[1]: kubelet.service: Consumed 1.659s CPU time. 
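
By this point kubelet.service has been through several "Scheduled restart job, restart counter is at N" cycles, each ending in the same missing-config error and a "Consumed ... CPU time" accounting line, and the pattern continues below. When skimming a long journal like this one, the loop can be summarized mechanically; a small sketch, assuming the unit's journal has been exported to a plain-text file (for example with journalctl -u kubelet -o short > kubelet.log):

    # Summarize a kubelet crash loop from an exported journal text file: count
    # "restart counter is at N" entries and collect the CPU time consumed by each
    # failed run. The input file name is an assumption, not something in this log.
    import re
    import sys

    RESTART = re.compile(r"restart counter is at (\d+)")
    CPU = re.compile(r"kubelet\.service: Consumed ([\d.]+)s CPU time")

    def summarize(path: str) -> None:
        restarts, cpu_seconds = [], []
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                if (m := RESTART.search(line)):
                    restarts.append(int(m.group(1)))
                if (m := CPU.search(line)):
                    cpu_seconds.append(float(m.group(1)))
        print(f"restarts seen: {len(restarts)} (highest counter: {max(restarts, default=0)})")
        print(f"CPU burned by failed runs: {sum(cpu_seconds):.1f}s over {len(cpu_seconds)} attempts")

    if __name__ == "__main__":
        summarize(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")
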
Jan 28 00:58:26.115105 containerd[1483]: time="2026-01-28T00:58:26.114339755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:26.117934 containerd[1483]: time="2026-01-28T00:58:26.117526858Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 28 00:58:26.120051 containerd[1483]: time="2026-01-28T00:58:26.119858574Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:26.133199 containerd[1483]: time="2026-01-28T00:58:26.132874315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:26.138241 containerd[1483]: time="2026-01-28T00:58:26.136914116Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 10.932976891s" Jan 28 00:58:26.138241 containerd[1483]: time="2026-01-28T00:58:26.137912890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 00:58:26.145957 containerd[1483]: time="2026-01-28T00:58:26.145475735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 00:58:26.787182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660083005.mount: Deactivated successfully. 
Jan 28 00:58:26.802145 containerd[1483]: time="2026-01-28T00:58:26.802011838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:26.804928 containerd[1483]: time="2026-01-28T00:58:26.804710729Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 28 00:58:26.807197 containerd[1483]: time="2026-01-28T00:58:26.806982335Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:26.816131 containerd[1483]: time="2026-01-28T00:58:26.814977237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:26.820509 containerd[1483]: time="2026-01-28T00:58:26.820194105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 674.665839ms" Jan 28 00:58:26.820509 containerd[1483]: time="2026-01-28T00:58:26.820293157Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 00:58:26.826336 containerd[1483]: time="2026-01-28T00:58:26.826107095Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 00:58:28.911276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218773175.mount: Deactivated successfully. Jan 28 00:58:29.517778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 28 00:58:29.550117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:58:31.792585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:58:31.870793 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:58:33.476185 kubelet[2079]: E0128 00:58:33.474952 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:58:33.486280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:58:33.486860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:58:33.488289 systemd[1]: kubelet.service: Consumed 3.322s CPU time. 
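
The "Pulled image ... size ... in ..." entries above carry both the image size in bytes and the wall-clock pull time (for example 29,067,246 bytes in about 10.6 s for kube-apiserver, roughly 2.7 MB/s), so effective pull throughput can be read straight out of the log. A small sketch that extracts those figures, assuming the containerd journal has been exported to a text file (for example with journalctl -u containerd -o short > containerd.log):

    # Pull-throughput summary from containerd lines like:
    #   Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" ... size \"31160918\" in 16.047928169s
    # The input file name is an assumption, not something recorded in this log.
    import re

    PULLED = re.compile(
        r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*?'
        r'size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
    )

    def throughput_mb_s(size_bytes: int, seconds: float) -> float:
        return size_bytes / seconds / 1_000_000

    with open("containerd.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = PULLED.search(line)
            if not m:
                continue
            secs = float(m["dur"]) / (1000 if m["unit"] == "ms" else 1)
            size = int(m["size"])
            print(f'{m["image"]}: {size:,} bytes in {secs:.2f}s '
                  f'({throughput_mb_s(size, secs):.2f} MB/s)')
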
Jan 28 00:58:40.283822 containerd[1483]: time="2026-01-28T00:58:40.283010694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:40.286643 containerd[1483]: time="2026-01-28T00:58:40.286164984Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 28 00:58:40.299740 containerd[1483]: time="2026-01-28T00:58:40.298102232Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:40.312573 containerd[1483]: time="2026-01-28T00:58:40.312170771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:40.314236 containerd[1483]: time="2026-01-28T00:58:40.314117356Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 13.487966407s" Jan 28 00:58:40.314236 containerd[1483]: time="2026-01-28T00:58:40.314227066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 00:58:43.523169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 28 00:58:43.549523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:58:44.073142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:58:44.087125 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:58:44.321908 kubelet[2161]: E0128 00:58:44.320931 2161 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:58:44.337918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:58:44.338622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:58:49.320072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:58:49.358840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:58:49.517034 systemd[1]: Reloading requested from client PID 2176 ('systemctl') (unit session-9.scope)... Jan 28 00:58:49.518654 systemd[1]: Reloading... Jan 28 00:58:49.757882 zram_generator::config[2212]: No configuration found. Jan 28 00:58:50.175058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:58:50.297669 systemd[1]: Reloading finished in 777 ms. Jan 28 00:58:50.404570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 00:58:50.414249 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:58:50.449244 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:58:50.450529 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:58:50.466296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:58:50.863249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:58:50.880550 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:58:51.180812 kubelet[2265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:58:51.180812 kubelet[2265]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:58:51.180812 kubelet[2265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:58:51.180812 kubelet[2265]: I0128 00:58:51.176518 2265 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:58:51.740738 kubelet[2265]: I0128 00:58:51.740540 2265 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 00:58:51.740738 kubelet[2265]: I0128 00:58:51.740638 2265 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:58:51.742579 kubelet[2265]: I0128 00:58:51.741672 2265 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 00:58:51.875230 kubelet[2265]: I0128 00:58:51.874632 2265 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:58:51.875230 kubelet[2265]: E0128 00:58:51.875727 2265 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:51.903914 kubelet[2265]: E0128 00:58:51.903647 2265 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 00:58:51.903914 kubelet[2265]: I0128 00:58:51.903855 2265 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 00:58:51.922338 kubelet[2265]: I0128 00:58:51.922263 2265 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 00:58:51.923683 kubelet[2265]: I0128 00:58:51.923569 2265 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:58:51.924258 kubelet[2265]: I0128 00:58:51.923663 2265 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:58:51.925331 kubelet[2265]: I0128 00:58:51.924289 2265 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:58:51.925331 kubelet[2265]: I0128 00:58:51.924306 2265 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 00:58:51.925331 kubelet[2265]: I0128 00:58:51.924982 2265 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:58:51.944231 kubelet[2265]: I0128 00:58:51.943035 2265 kubelet.go:446] "Attempting to sync node with API server" Jan 28 00:58:51.944231 kubelet[2265]: I0128 00:58:51.943776 2265 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:58:51.944231 kubelet[2265]: I0128 00:58:51.943940 2265 kubelet.go:352] "Adding apiserver pod source" Jan 28 00:58:51.944231 kubelet[2265]: I0128 00:58:51.943956 2265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:58:51.948541 kubelet[2265]: W0128 00:58:51.948214 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:51.948541 kubelet[2265]: E0128 00:58:51.948324 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:51.949982 kubelet[2265]: W0128 00:58:51.949859 2265 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:51.950318 kubelet[2265]: E0128 00:58:51.949999 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:51.962567 kubelet[2265]: I0128 00:58:51.962193 2265 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 00:58:51.963505 kubelet[2265]: I0128 00:58:51.963320 2265 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 00:58:51.964183 kubelet[2265]: W0128 00:58:51.963854 2265 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 00:58:51.975060 kubelet[2265]: I0128 00:58:51.974944 2265 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 00:58:51.975596 kubelet[2265]: I0128 00:58:51.975283 2265 server.go:1287] "Started kubelet" Jan 28 00:58:51.976608 kubelet[2265]: I0128 00:58:51.976014 2265 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:58:51.978064 kubelet[2265]: I0128 00:58:51.977001 2265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:58:51.978064 kubelet[2265]: I0128 00:58:51.977983 2265 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:58:51.982023 kubelet[2265]: I0128 00:58:51.981890 2265 server.go:479] "Adding debug handlers to kubelet server" Jan 28 00:58:51.985643 kubelet[2265]: I0128 00:58:51.985273 2265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:58:51.998573 kubelet[2265]: I0128 00:58:51.995937 2265 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:58:51.998573 kubelet[2265]: E0128 00:58:51.996306 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:58:51.998573 kubelet[2265]: I0128 00:58:51.996721 2265 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 00:58:52.003609 kubelet[2265]: I0128 00:58:51.997347 2265 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 00:58:52.003609 kubelet[2265]: I0128 00:58:52.001932 2265 reconciler.go:26] "Reconciler: start to sync state" Jan 28 00:58:52.003609 kubelet[2265]: W0128 00:58:52.002556 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:52.003609 kubelet[2265]: E0128 00:58:52.003338 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection 
refused" logger="UnhandledError" Jan 28 00:58:52.004485 kubelet[2265]: E0128 00:58:52.002648 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Jan 28 00:58:52.005643 kubelet[2265]: I0128 00:58:52.005613 2265 factory.go:221] Registration of the systemd container factory successfully Jan 28 00:58:52.007638 kubelet[2265]: E0128 00:58:52.006892 2265 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:58:52.009172 kubelet[2265]: I0128 00:58:52.008531 2265 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:58:52.011531 kubelet[2265]: I0128 00:58:52.011003 2265 factory.go:221] Registration of the containerd container factory successfully Jan 28 00:58:52.012782 kubelet[2265]: E0128 00:58:52.000709 2265 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ebf339b6ce865 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 00:58:51.975026789 +0000 UTC m=+1.048478642,LastTimestamp:2026-01-28 00:58:51.975026789 +0000 UTC m=+1.048478642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 00:58:52.103820 kubelet[2265]: E0128 00:58:52.100188 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:58:52.170049 kubelet[2265]: I0128 00:58:52.169046 2265 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:58:52.170049 kubelet[2265]: I0128 00:58:52.169081 2265 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:58:52.170049 kubelet[2265]: I0128 00:58:52.169349 2265 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:58:52.208312 kubelet[2265]: E0128 00:58:52.207040 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:58:52.213545 kubelet[2265]: E0128 00:58:52.210691 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Jan 28 00:58:52.213545 kubelet[2265]: I0128 00:58:52.212907 2265 policy_none.go:49] "None policy: Start" Jan 28 00:58:52.213545 kubelet[2265]: I0128 00:58:52.213166 2265 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:58:52.213904 kubelet[2265]: I0128 00:58:52.213584 2265 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:58:52.238028 kubelet[2265]: I0128 00:58:52.237611 2265 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 28 00:58:52.247942 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 00:58:52.301165 kubelet[2265]: I0128 00:58:52.249628 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 00:58:52.301165 kubelet[2265]: I0128 00:58:52.250322 2265 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:58:52.301165 kubelet[2265]: I0128 00:58:52.250560 2265 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 00:58:52.301165 kubelet[2265]: I0128 00:58:52.250642 2265 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:58:52.301165 kubelet[2265]: E0128 00:58:52.250799 2265 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:58:52.301165 kubelet[2265]: W0128 00:58:52.253663 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:52.301165 kubelet[2265]: E0128 00:58:52.253702 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:52.309869 kubelet[2265]: E0128 00:58:52.309265 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:58:52.354311 kubelet[2265]: E0128 00:58:52.353812 2265 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 00:58:52.358814 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 00:58:52.417066 kubelet[2265]: E0128 00:58:52.413320 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:58:52.474789 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 00:58:52.521920 kubelet[2265]: E0128 00:58:52.519897 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:58:52.527322 kubelet[2265]: I0128 00:58:52.526861 2265 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:58:52.528676 kubelet[2265]: I0128 00:58:52.528346 2265 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:58:52.529671 kubelet[2265]: I0128 00:58:52.528754 2265 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:58:52.530838 kubelet[2265]: I0128 00:58:52.530805 2265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:58:52.544353 kubelet[2265]: E0128 00:58:52.543973 2265 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 00:58:52.544353 kubelet[2265]: E0128 00:58:52.544161 2265 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 00:58:52.578993 systemd[1]: Created slice kubepods-burstable-podcb5a5097cb878fec302ec9db4124e0fe.slice - libcontainer container kubepods-burstable-podcb5a5097cb878fec302ec9db4124e0fe.slice. Jan 28 00:58:52.598541 kubelet[2265]: E0128 00:58:52.597931 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:52.604605 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 28 00:58:52.612067 kubelet[2265]: E0128 00:58:52.611972 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Jan 28 00:58:52.616545 kubelet[2265]: E0128 00:58:52.616517 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:52.621035 kubelet[2265]: I0128 00:58:52.620543 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:58:52.621035 kubelet[2265]: I0128 00:58:52.620603 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:58:52.621035 kubelet[2265]: I0128 00:58:52.620635 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:58:52.621035 kubelet[2265]: I0128 00:58:52.620658 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:58:52.621035 kubelet[2265]: I0128 00:58:52.620680 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:58:52.621342 kubelet[2265]: I0128 00:58:52.620705 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:58:52.621342 kubelet[2265]: I0128 00:58:52.620728 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:58:52.621342 kubelet[2265]: I0128 00:58:52.620750 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 00:58:52.621342 kubelet[2265]: I0128 00:58:52.620772 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:58:52.622686 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 28 00:58:52.630711 kubelet[2265]: E0128 00:58:52.630060 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:52.640630 kubelet[2265]: I0128 00:58:52.640260 2265 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:58:52.641874 kubelet[2265]: E0128 00:58:52.641773 2265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 28 00:58:52.846200 kubelet[2265]: I0128 00:58:52.845822 2265 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:58:52.846961 kubelet[2265]: E0128 00:58:52.846563 2265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 28 00:58:52.858148 kubelet[2265]: W0128 00:58:52.857942 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:52.858148 kubelet[2265]: E0128 00:58:52.858140 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:52.896204 kubelet[2265]: W0128 00:58:52.895925 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:52.896204 kubelet[2265]: E0128 00:58:52.896072 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:52.900285 kubelet[2265]: E0128 00:58:52.900060 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:52.903524 containerd[1483]: time="2026-01-28T00:58:52.902996969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cb5a5097cb878fec302ec9db4124e0fe,Namespace:kube-system,Attempt:0,}" Jan 28 00:58:52.918635 kubelet[2265]: E0128 00:58:52.918335 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:52.920544 containerd[1483]: time="2026-01-28T00:58:52.920298833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 28 00:58:52.934237 kubelet[2265]: E0128 00:58:52.933586 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:52.935531 containerd[1483]: time="2026-01-28T00:58:52.935325813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 28 00:58:53.254558 kubelet[2265]: I0128 00:58:53.253666 2265 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:58:53.254558 kubelet[2265]: E0128 00:58:53.254547 2265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 28 00:58:53.255667 kubelet[2265]: W0128 00:58:53.254571 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:53.255667 kubelet[2265]: E0128 00:58:53.254923 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:53.380262 kubelet[2265]: W0128 00:58:53.379708 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:53.380262 kubelet[2265]: E0128 00:58:53.379839 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:53.416269 kubelet[2265]: E0128 00:58:53.415701 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Jan 28 00:58:53.466999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098086164.mount: Deactivated successfully. Jan 28 00:58:53.484342 containerd[1483]: time="2026-01-28T00:58:53.484068522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:58:53.491056 containerd[1483]: time="2026-01-28T00:58:53.490893996Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 28 00:58:53.493152 containerd[1483]: time="2026-01-28T00:58:53.493029692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:58:53.494905 containerd[1483]: time="2026-01-28T00:58:53.494775349Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:58:53.498766 containerd[1483]: time="2026-01-28T00:58:53.498342955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 00:58:53.500681 containerd[1483]: time="2026-01-28T00:58:53.500619742Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:58:53.504225 containerd[1483]: time="2026-01-28T00:58:53.502691780Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 00:58:53.506722 containerd[1483]: time="2026-01-28T00:58:53.506530030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:58:53.513238 containerd[1483]: time="2026-01-28T00:58:53.512956953Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.432271ms" Jan 28 00:58:53.522034 containerd[1483]: time="2026-01-28T00:58:53.521857595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.098001ms" Jan 28 00:58:53.525187 containerd[1483]: 
time="2026-01-28T00:58:53.525007173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.431355ms" Jan 28 00:58:54.073705 kubelet[2265]: E0128 00:58:54.070739 2265 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:54.203637 kubelet[2265]: I0128 00:58:54.202736 2265 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:58:54.207161 kubelet[2265]: E0128 00:58:54.206504 2265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 28 00:58:54.896205 kubelet[2265]: W0128 00:58:54.891852 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:54.896205 kubelet[2265]: E0128 00:58:54.892348 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:55.019261 kubelet[2265]: E0128 00:58:55.019017 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="3.2s" Jan 28 00:58:55.338303 containerd[1483]: time="2026-01-28T00:58:55.337609968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:55.338303 containerd[1483]: time="2026-01-28T00:58:55.338169267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:55.338303 containerd[1483]: time="2026-01-28T00:58:55.338191299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:55.340817 containerd[1483]: time="2026-01-28T00:58:55.339145992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:55.346523 containerd[1483]: time="2026-01-28T00:58:55.343770556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:55.346523 containerd[1483]: time="2026-01-28T00:58:55.343838301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:55.346523 containerd[1483]: time="2026-01-28T00:58:55.343849603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:55.346523 containerd[1483]: time="2026-01-28T00:58:55.343944589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:55.357721 containerd[1483]: time="2026-01-28T00:58:55.330706730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:55.357909 containerd[1483]: time="2026-01-28T00:58:55.357736977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:55.357909 containerd[1483]: time="2026-01-28T00:58:55.357787831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:55.362527 containerd[1483]: time="2026-01-28T00:58:55.358694005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:55.529710 kubelet[2265]: W0128 00:58:55.528867 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:55.529710 kubelet[2265]: E0128 00:58:55.529197 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:55.621864 systemd[1]: Started cri-containerd-908394d56346f300cdbf1a377a1a565035002184ef85ab8eb2b8e9919d466f27.scope - libcontainer container 908394d56346f300cdbf1a377a1a565035002184ef85ab8eb2b8e9919d466f27. Jan 28 00:58:55.643002 systemd[1]: Started cri-containerd-a000adafe93cab24e7e73960500eb8be0b0dbd4c31fbfd57c99abeeac9274b20.scope - libcontainer container a000adafe93cab24e7e73960500eb8be0b0dbd4c31fbfd57c99abeeac9274b20. Jan 28 00:58:55.649888 systemd[1]: Started cri-containerd-fd2568ac3c4582b5671526eb61c6ab417f1567bb9858ac627eab36fe568c21b4.scope - libcontainer container fd2568ac3c4582b5671526eb61c6ab417f1567bb9858ac627eab36fe568c21b4. 
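The "Failed to ensure lease exists, will retry" errors above show the kubelet's node-lease controller backing off while the API server at 10.0.0.13:6443 still refuses connections: the logged interval doubles from 200ms to 400ms, 800ms, 1.6s and then 3.2s. A small sketch reproducing that doubling sequence (the starting value is taken from the log; treating it as a plain doubling is an observation about these five entries, not a claim about the exact client-go backoff parameters):

    # Reproduce the retry intervals the lease controller logged above:
    # 200ms, 400ms, 800ms, 1.6s, 3.2s -- each attempt doubles the previous wait.
    interval = 0.2           # seconds; first interval seen in the log
    for attempt in range(1, 6):
        print(f"attempt {attempt}: retry in {interval:g}s")
        interval *= 2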
Jan 28 00:58:55.833795 kubelet[2265]: I0128 00:58:55.832843 2265 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:58:55.836981 kubelet[2265]: E0128 00:58:55.836934 2265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 28 00:58:55.939851 containerd[1483]: time="2026-01-28T00:58:55.936163099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"908394d56346f300cdbf1a377a1a565035002184ef85ab8eb2b8e9919d466f27\"" Jan 28 00:58:55.945532 kubelet[2265]: E0128 00:58:55.945272 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:55.947729 containerd[1483]: time="2026-01-28T00:58:55.947182321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cb5a5097cb878fec302ec9db4124e0fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"a000adafe93cab24e7e73960500eb8be0b0dbd4c31fbfd57c99abeeac9274b20\"" Jan 28 00:58:55.948968 kubelet[2265]: E0128 00:58:55.948905 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:55.953306 containerd[1483]: time="2026-01-28T00:58:55.953197250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd2568ac3c4582b5671526eb61c6ab417f1567bb9858ac627eab36fe568c21b4\"" Jan 28 00:58:55.955945 containerd[1483]: time="2026-01-28T00:58:55.955579674Z" level=info msg="CreateContainer within sandbox \"a000adafe93cab24e7e73960500eb8be0b0dbd4c31fbfd57c99abeeac9274b20\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 00:58:55.956574 kubelet[2265]: E0128 00:58:55.956268 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:55.960030 containerd[1483]: time="2026-01-28T00:58:55.959803552Z" level=info msg="CreateContainer within sandbox \"908394d56346f300cdbf1a377a1a565035002184ef85ab8eb2b8e9919d466f27\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 00:58:55.967736 containerd[1483]: time="2026-01-28T00:58:55.967173938Z" level=info msg="CreateContainer within sandbox \"fd2568ac3c4582b5671526eb61c6ab417f1567bb9858ac627eab36fe568c21b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 00:58:55.991840 kubelet[2265]: W0128 00:58:55.991702 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:55.992023 kubelet[2265]: E0128 00:58:55.991848 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" 
logger="UnhandledError" Jan 28 00:58:56.007718 containerd[1483]: time="2026-01-28T00:58:56.007588101Z" level=info msg="CreateContainer within sandbox \"a000adafe93cab24e7e73960500eb8be0b0dbd4c31fbfd57c99abeeac9274b20\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b33933ed098e9ab6d06dabd04590b10735d9bc052bb625139f5c2e3838bcf4fd\"" Jan 28 00:58:56.011044 containerd[1483]: time="2026-01-28T00:58:56.010624097Z" level=info msg="StartContainer for \"b33933ed098e9ab6d06dabd04590b10735d9bc052bb625139f5c2e3838bcf4fd\"" Jan 28 00:58:56.023549 containerd[1483]: time="2026-01-28T00:58:56.023208674Z" level=info msg="CreateContainer within sandbox \"908394d56346f300cdbf1a377a1a565035002184ef85ab8eb2b8e9919d466f27\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"797939f1e65cbccfd3353b3fac2f0b6b9b01bb76414d30f29ea9e7433d657e5c\"" Jan 28 00:58:56.026702 containerd[1483]: time="2026-01-28T00:58:56.026275147Z" level=info msg="StartContainer for \"797939f1e65cbccfd3353b3fac2f0b6b9b01bb76414d30f29ea9e7433d657e5c\"" Jan 28 00:58:56.028509 containerd[1483]: time="2026-01-28T00:58:56.028043817Z" level=info msg="CreateContainer within sandbox \"fd2568ac3c4582b5671526eb61c6ab417f1567bb9858ac627eab36fe568c21b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f07e3d104799af3f629c0cb1b1b477d5a13f370f3ae8245828a44a5b88aef71b\"" Jan 28 00:58:56.030840 containerd[1483]: time="2026-01-28T00:58:56.030622949Z" level=info msg="StartContainer for \"f07e3d104799af3f629c0cb1b1b477d5a13f370f3ae8245828a44a5b88aef71b\"" Jan 28 00:58:56.078148 kubelet[2265]: W0128 00:58:56.077784 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Jan 28 00:58:56.078148 kubelet[2265]: E0128 00:58:56.077936 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:58:56.114986 systemd[1]: Started cri-containerd-797939f1e65cbccfd3353b3fac2f0b6b9b01bb76414d30f29ea9e7433d657e5c.scope - libcontainer container 797939f1e65cbccfd3353b3fac2f0b6b9b01bb76414d30f29ea9e7433d657e5c. Jan 28 00:58:56.186790 systemd[1]: Started cri-containerd-f07e3d104799af3f629c0cb1b1b477d5a13f370f3ae8245828a44a5b88aef71b.scope - libcontainer container f07e3d104799af3f629c0cb1b1b477d5a13f370f3ae8245828a44a5b88aef71b. Jan 28 00:58:56.209919 systemd[1]: Started cri-containerd-b33933ed098e9ab6d06dabd04590b10735d9bc052bb625139f5c2e3838bcf4fd.scope - libcontainer container b33933ed098e9ab6d06dabd04590b10735d9bc052bb625139f5c2e3838bcf4fd. 
Jan 28 00:58:56.574698 containerd[1483]: time="2026-01-28T00:58:56.573844316Z" level=info msg="StartContainer for \"797939f1e65cbccfd3353b3fac2f0b6b9b01bb76414d30f29ea9e7433d657e5c\" returns successfully" Jan 28 00:58:56.597672 kubelet[2265]: E0128 00:58:56.597608 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:56.597962 kubelet[2265]: E0128 00:58:56.597823 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:56.675905 containerd[1483]: time="2026-01-28T00:58:56.675779208Z" level=info msg="StartContainer for \"f07e3d104799af3f629c0cb1b1b477d5a13f370f3ae8245828a44a5b88aef71b\" returns successfully" Jan 28 00:58:56.710998 containerd[1483]: time="2026-01-28T00:58:56.710897054Z" level=info msg="StartContainer for \"b33933ed098e9ab6d06dabd04590b10735d9bc052bb625139f5c2e3838bcf4fd\" returns successfully" Jan 28 00:58:57.713662 update_engine[1467]: I20260128 00:58:57.711683 1467 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 00:58:57.723609 update_engine[1467]: I20260128 00:58:57.717540 1467 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 00:58:57.723609 update_engine[1467]: I20260128 00:58:57.719138 1467 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 00:58:57.725171 update_engine[1467]: I20260128 00:58:57.725139 1467 omaha_request_params.cc:62] Current group set to lts Jan 28 00:58:57.732502 update_engine[1467]: I20260128 00:58:57.727957 1467 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 00:58:57.732502 update_engine[1467]: I20260128 00:58:57.727989 1467 update_attempter.cc:643] Scheduling an action processor start. Jan 28 00:58:57.732502 update_engine[1467]: I20260128 00:58:57.728020 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 00:58:57.732502 update_engine[1467]: I20260128 00:58:57.731233 1467 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 00:58:57.732502 update_engine[1467]: I20260128 00:58:57.731679 1467 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 00:58:57.732502 update_engine[1467]: I20260128 00:58:57.731710 1467 omaha_request_action.cc:272] Request: Jan 28 00:58:57.732502 update_engine[1467]: Jan 28 00:58:57.732502 update_engine[1467]: Jan 28 00:58:57.732502 update_engine[1467]: Jan 28 00:58:57.732502 update_engine[1467]: Jan 28 00:58:57.732502 update_engine[1467]: Jan 28 00:58:57.732502 update_engine[1467]: Jan 28 00:58:57.732502 update_engine[1467]: Jan 28 00:58:57.732502 update_engine[1467]: Jan 28 00:58:57.732502 update_engine[1467]: I20260128 00:58:57.731809 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 00:58:57.738595 update_engine[1467]: I20260128 00:58:57.738327 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 00:58:57.739931 update_engine[1467]: I20260128 00:58:57.739653 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
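The update_engine errors above are expected rather than a fault: the Omaha request is being posted to the literal host name "disabled" (a conventional way of switching off update checks on Flatcar), so curl's "Could not resolve host: disabled" is the designed outcome and the client simply retries on its 1-second timeout source. A sketch that reproduces the same resolution failure:

    # Reproduce the DNS failure update_engine logs above: the host name is the
    # literal string "disabled", which cannot resolve, so every check fails fast.
    import socket

    try:
        socket.getaddrinfo("disabled", 443)
    except socket.gaierror as err:
        print(f"could not resolve host 'disabled': {err}")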
Jan 28 00:58:57.748787 kubelet[2265]: E0128 00:58:57.748312 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:57.748787 kubelet[2265]: E0128 00:58:57.748754 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:57.753053 locksmithd[1504]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 00:58:57.757685 update_engine[1467]: E20260128 00:58:57.757630 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 00:58:57.757897 update_engine[1467]: I20260128 00:58:57.757862 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 00:58:57.770275 kubelet[2265]: E0128 00:58:57.770156 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:57.780696 kubelet[2265]: E0128 00:58:57.780581 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:57.781647 kubelet[2265]: E0128 00:58:57.780936 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:57.782209 kubelet[2265]: E0128 00:58:57.781922 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:58.792046 kubelet[2265]: E0128 00:58:58.791692 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:58.792046 kubelet[2265]: E0128 00:58:58.791897 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:58.799710 kubelet[2265]: E0128 00:58:58.794316 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:58.799710 kubelet[2265]: E0128 00:58:58.796536 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:58.799710 kubelet[2265]: E0128 00:58:58.796769 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:58.799710 kubelet[2265]: E0128 00:58:58.797459 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:59.058890 kubelet[2265]: I0128 00:58:59.056642 2265 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:58:59.827267 kubelet[2265]: E0128 00:58:59.826699 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:59.827267 kubelet[2265]: 
E0128 00:58:59.827245 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:59.854986 kubelet[2265]: E0128 00:58:59.828759 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:59.854986 kubelet[2265]: E0128 00:58:59.828950 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:59.854986 kubelet[2265]: E0128 00:58:59.835338 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:58:59.854986 kubelet[2265]: E0128 00:58:59.835799 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:00.913331 kubelet[2265]: E0128 00:59:00.912906 2265 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:59:00.913331 kubelet[2265]: E0128 00:59:00.913506 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:02.597065 kubelet[2265]: E0128 00:59:02.551875 2265 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 00:59:04.235905 kubelet[2265]: E0128 00:59:04.235269 2265 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 28 00:59:04.324655 kubelet[2265]: I0128 00:59:04.323062 2265 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 00:59:04.365043 kubelet[2265]: E0128 00:59:04.364604 2265 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ebf339b6ce865 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 00:58:51.975026789 +0000 UTC m=+1.048478642,LastTimestamp:2026-01-28 00:58:51.975026789 +0000 UTC m=+1.048478642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 00:59:04.406302 kubelet[2265]: I0128 00:59:04.405697 2265 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:59:04.555831 kubelet[2265]: E0128 00:59:04.553336 2265 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:59:04.555831 kubelet[2265]: I0128 00:59:04.553636 2265 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 00:59:04.561231 kubelet[2265]: E0128 
00:59:04.560338 2265 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 28 00:59:04.561231 kubelet[2265]: I0128 00:59:04.560531 2265 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 00:59:04.564756 kubelet[2265]: E0128 00:59:04.564199 2265 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 00:59:05.169562 kubelet[2265]: I0128 00:59:05.167906 2265 apiserver.go:52] "Watching apiserver" Jan 28 00:59:05.203784 kubelet[2265]: I0128 00:59:05.203742 2265 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 00:59:07.716578 update_engine[1467]: I20260128 00:59:07.714807 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 00:59:07.716578 update_engine[1467]: I20260128 00:59:07.716316 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 00:59:07.719661 update_engine[1467]: I20260128 00:59:07.718838 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 00:59:07.738850 update_engine[1467]: E20260128 00:59:07.738351 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 00:59:07.738850 update_engine[1467]: I20260128 00:59:07.738795 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 00:59:07.802711 kubelet[2265]: I0128 00:59:07.802333 2265 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:59:07.823503 kubelet[2265]: E0128 00:59:07.821886 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:08.090277 kubelet[2265]: E0128 00:59:08.089838 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:08.161646 systemd[1]: Reloading requested from client PID 2554 ('systemctl') (unit session-9.scope)... Jan 28 00:59:08.161742 systemd[1]: Reloading... Jan 28 00:59:08.749570 kubelet[2265]: I0128 00:59:08.748574 2265 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 00:59:08.804597 kubelet[2265]: E0128 00:59:08.801262 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:09.017508 kubelet[2265]: I0128 00:59:09.016633 2265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.016511495 podStartE2EDuration="2.016511495s" podCreationTimestamp="2026-01-28 00:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:59:09.013245266 +0000 UTC m=+18.086697118" watchObservedRunningTime="2026-01-28 00:59:09.016511495 +0000 UTC m=+18.089963346" Jan 28 00:59:09.034930 zram_generator::config[2590]: No configuration found. 
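The pod_startup_latency_tracker entry above reports podStartSLOduration directly, and the same figure falls out of the logged timestamps: the controller-manager pod was created at 00:59:07 and its running state was observed via watch at 00:59:09.016511495. A short sketch redoing that subtraction with the values from the log:

    # Re-derive podStartSLOduration for kube-controller-manager-localhost from
    # the two timestamps logged above (creation time has whole-second precision).
    from datetime import datetime, timezone

    created  = datetime(2026, 1, 28, 0, 59, 7, tzinfo=timezone.utc)
    observed = datetime(2026, 1, 28, 0, 59, 9, 16511, tzinfo=timezone.utc)  # .016511495s truncated to microseconds

    print((observed - created).total_seconds())   # ~2.016511, matching the logged value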
Jan 28 00:59:09.111294 kubelet[2265]: E0128 00:59:09.110970 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:09.113335 kubelet[2265]: I0128 00:59:09.112675 2265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.112660313 podStartE2EDuration="1.112660313s" podCreationTimestamp="2026-01-28 00:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:59:09.112315987 +0000 UTC m=+18.185767859" watchObservedRunningTime="2026-01-28 00:59:09.112660313 +0000 UTC m=+18.186112165" Jan 28 00:59:09.492702 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:59:09.658530 systemd[1]: Reloading finished in 1495 ms. Jan 28 00:59:09.839868 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:09.853724 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:59:09.854799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:09.854964 systemd[1]: kubelet.service: Consumed 7.955s CPU time, 133.1M memory peak, 0B memory swap peak. Jan 28 00:59:09.875905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:10.256967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:10.257689 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:59:10.404529 kubelet[2638]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:59:10.404529 kubelet[2638]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:59:10.404529 kubelet[2638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:59:10.404529 kubelet[2638]: I0128 00:59:10.403819 2638 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:59:10.432300 kubelet[2638]: I0128 00:59:10.431002 2638 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 00:59:10.432300 kubelet[2638]: I0128 00:59:10.431042 2638 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:59:10.432300 kubelet[2638]: I0128 00:59:10.431666 2638 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 00:59:10.436315 kubelet[2638]: I0128 00:59:10.436026 2638 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
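The recurring dns.go:153 "Nameserver limits exceeded" entries come from the host's resolv.conf listing more than three nameservers; the kubelet keeps only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) each time it composes a pod's resolv.conf and logs the rest as omitted. A rough Go sketch of that truncation, assuming the standard three-entry cap (an illustration of the behaviour, not kubelet's actual implementation):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic glibc resolver limit that kubelet
// enforces when building a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		// Extra entries are dropped, which is what the kubelet keeps warning about.
		fmt.Printf("nameserver limits exceeded, applying only: %s\n",
			strings.Join(nameservers[:maxNameservers], " "))
	}
}
```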
Jan 28 00:59:10.445739 kubelet[2638]: I0128 00:59:10.445698 2638 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:59:10.461837 kubelet[2638]: E0128 00:59:10.461764 2638 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 00:59:10.461837 kubelet[2638]: I0128 00:59:10.461810 2638 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 00:59:10.503515 kubelet[2638]: I0128 00:59:10.502319 2638 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 00:59:10.503515 kubelet[2638]: I0128 00:59:10.502748 2638 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:59:10.503515 kubelet[2638]: I0128 00:59:10.502779 2638 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:59:10.503515 kubelet[2638]: I0128 00:59:10.502986 2638 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:59:10.504022 kubelet[2638]: I0128 00:59:10.502999 2638 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 00:59:10.504022 kubelet[2638]: I0128 00:59:10.503059 2638 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:59:10.504022 kubelet[2638]: I0128 00:59:10.503491 2638 kubelet.go:446] "Attempting to sync node with API server" Jan 28 00:59:10.504022 kubelet[2638]: I0128 00:59:10.503523 2638 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:59:10.504022 kubelet[2638]: I0128 00:59:10.503547 2638 kubelet.go:352] "Adding apiserver pod source" Jan 28 00:59:10.504022 kubelet[2638]: I0128 00:59:10.503560 2638 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jan 28 00:59:10.511533 kubelet[2638]: I0128 00:59:10.506921 2638 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 00:59:10.511533 kubelet[2638]: I0128 00:59:10.507941 2638 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 00:59:10.511533 kubelet[2638]: I0128 00:59:10.508702 2638 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 00:59:10.511533 kubelet[2638]: I0128 00:59:10.508736 2638 server.go:1287] "Started kubelet" Jan 28 00:59:10.511716 kubelet[2638]: I0128 00:59:10.511671 2638 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:59:10.577324 kubelet[2638]: I0128 00:59:10.572253 2638 server.go:479] "Adding debug handlers to kubelet server" Jan 28 00:59:10.597584 kubelet[2638]: I0128 00:59:10.597234 2638 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:59:10.600293 kubelet[2638]: I0128 00:59:10.600264 2638 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:59:10.601553 kubelet[2638]: I0128 00:59:10.601243 2638 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:59:10.617632 kubelet[2638]: I0128 00:59:10.616600 2638 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 00:59:10.617632 kubelet[2638]: I0128 00:59:10.616756 2638 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 00:59:10.617632 kubelet[2638]: I0128 00:59:10.617117 2638 reconciler.go:26] "Reconciler: start to sync state" Jan 28 00:59:10.620320 kubelet[2638]: E0128 00:59:10.619137 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:59:10.620320 kubelet[2638]: I0128 00:59:10.619978 2638 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:59:10.643495 sudo[2655]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 28 00:59:10.644336 sudo[2655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 28 00:59:10.654022 kubelet[2638]: I0128 00:59:10.653736 2638 factory.go:221] Registration of the containerd container factory successfully Jan 28 00:59:10.654022 kubelet[2638]: I0128 00:59:10.653768 2638 factory.go:221] Registration of the systemd container factory successfully Jan 28 00:59:10.654022 kubelet[2638]: I0128 00:59:10.653894 2638 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:59:10.657329 kubelet[2638]: E0128 00:59:10.656329 2638 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:59:10.661349 kubelet[2638]: I0128 00:59:10.658868 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 00:59:10.677837 kubelet[2638]: I0128 00:59:10.676543 2638 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 00:59:10.677837 kubelet[2638]: I0128 00:59:10.676604 2638 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:59:10.677837 kubelet[2638]: I0128 00:59:10.677664 2638 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 00:59:10.677837 kubelet[2638]: I0128 00:59:10.677686 2638 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:59:10.677837 kubelet[2638]: E0128 00:59:10.677768 2638 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:59:10.778749 kubelet[2638]: E0128 00:59:10.778554 2638 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 00:59:10.810623 kubelet[2638]: I0128 00:59:10.810592 2638 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:59:10.810861 kubelet[2638]: I0128 00:59:10.810840 2638 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:59:10.810959 kubelet[2638]: I0128 00:59:10.810944 2638 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:59:10.811744 kubelet[2638]: I0128 00:59:10.811722 2638 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 00:59:10.811826 kubelet[2638]: I0128 00:59:10.811799 2638 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 00:59:10.811884 kubelet[2638]: I0128 00:59:10.811875 2638 policy_none.go:49] "None policy: Start" Jan 28 00:59:10.811930 kubelet[2638]: I0128 00:59:10.811921 2638 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:59:10.811974 kubelet[2638]: I0128 00:59:10.811965 2638 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:59:10.813598 kubelet[2638]: I0128 00:59:10.812320 2638 state_mem.go:75] "Updated machine memory state" Jan 28 00:59:10.823267 kubelet[2638]: I0128 00:59:10.822299 2638 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:59:10.823267 kubelet[2638]: I0128 00:59:10.822653 2638 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:59:10.823267 kubelet[2638]: I0128 00:59:10.822667 2638 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:59:10.823746 kubelet[2638]: I0128 00:59:10.823497 2638 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:59:10.831642 kubelet[2638]: E0128 00:59:10.831236 2638 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 00:59:10.952789 kubelet[2638]: I0128 00:59:10.951117 2638 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:59:10.976642 kubelet[2638]: I0128 00:59:10.975932 2638 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 00:59:10.976642 kubelet[2638]: I0128 00:59:10.976034 2638 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 00:59:10.981109 kubelet[2638]: I0128 00:59:10.980885 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 00:59:10.983005 kubelet[2638]: I0128 00:59:10.982890 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 00:59:10.984484 kubelet[2638]: I0128 00:59:10.984056 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:59:10.991826 kubelet[2638]: E0128 00:59:10.991794 2638 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 00:59:11.003827 kubelet[2638]: E0128 00:59:11.003654 2638 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:59:11.121295 kubelet[2638]: I0128 00:59:11.120590 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:59:11.121295 kubelet[2638]: I0128 00:59:11.120661 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:59:11.121295 kubelet[2638]: I0128 00:59:11.120700 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:59:11.121295 kubelet[2638]: I0128 00:59:11.120730 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:59:11.121295 kubelet[2638]: I0128 00:59:11.120756 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:59:11.121746 kubelet[2638]: I0128 00:59:11.120780 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb5a5097cb878fec302ec9db4124e0fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cb5a5097cb878fec302ec9db4124e0fe\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:59:11.121746 kubelet[2638]: I0128 00:59:11.120804 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 00:59:11.121746 kubelet[2638]: I0128 00:59:11.120828 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:59:11.121746 kubelet[2638]: I0128 00:59:11.120855 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:59:11.295051 kubelet[2638]: E0128 00:59:11.294736 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:11.296583 kubelet[2638]: E0128 00:59:11.296532 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:11.305756 kubelet[2638]: E0128 00:59:11.304932 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:11.506344 kubelet[2638]: I0128 00:59:11.505962 2638 apiserver.go:52] "Watching apiserver" Jan 28 00:59:11.617635 kubelet[2638]: I0128 00:59:11.617482 2638 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 00:59:11.656588 kubelet[2638]: I0128 00:59:11.656127 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.656104766 podStartE2EDuration="1.656104766s" podCreationTimestamp="2026-01-28 00:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:59:11.655833176 +0000 UTC m=+1.387066699" watchObservedRunningTime="2026-01-28 00:59:11.656104766 +0000 UTC m=+1.387338279" Jan 28 00:59:11.797923 kubelet[2638]: E0128 00:59:11.797709 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:11.803313 kubelet[2638]: E0128 00:59:11.799152 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:11.803313 kubelet[2638]: E0128 00:59:11.799864 2638 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:12.161053 sudo[2655]: pam_unix(sudo:session): session closed for user root Jan 28 00:59:12.865027 kubelet[2638]: E0128 00:59:12.862644 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:12.865027 kubelet[2638]: E0128 00:59:12.862775 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:13.774041 kubelet[2638]: I0128 00:59:13.771673 2638 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 00:59:13.788487 containerd[1483]: time="2026-01-28T00:59:13.788022819Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 00:59:13.793349 kubelet[2638]: I0128 00:59:13.792102 2638 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 00:59:13.886775 kubelet[2638]: E0128 00:59:13.883094 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:13.944348 systemd[1]: Created slice kubepods-besteffort-pod570e173c_fc3d_4a1a_b1dd_c404acaed5e4.slice - libcontainer container kubepods-besteffort-pod570e173c_fc3d_4a1a_b1dd_c404acaed5e4.slice. Jan 28 00:59:14.017544 kubelet[2638]: I0128 00:59:14.014774 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/570e173c-fc3d-4a1a-b1dd-c404acaed5e4-lib-modules\") pod \"kube-proxy-dnkpr\" (UID: \"570e173c-fc3d-4a1a-b1dd-c404acaed5e4\") " pod="kube-system/kube-proxy-dnkpr" Jan 28 00:59:14.017544 kubelet[2638]: I0128 00:59:14.015054 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zkhb\" (UniqueName: \"kubernetes.io/projected/570e173c-fc3d-4a1a-b1dd-c404acaed5e4-kube-api-access-8zkhb\") pod \"kube-proxy-dnkpr\" (UID: \"570e173c-fc3d-4a1a-b1dd-c404acaed5e4\") " pod="kube-system/kube-proxy-dnkpr" Jan 28 00:59:14.017544 kubelet[2638]: I0128 00:59:14.015316 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/570e173c-fc3d-4a1a-b1dd-c404acaed5e4-kube-proxy\") pod \"kube-proxy-dnkpr\" (UID: \"570e173c-fc3d-4a1a-b1dd-c404acaed5e4\") " pod="kube-system/kube-proxy-dnkpr" Jan 28 00:59:14.017544 kubelet[2638]: I0128 00:59:14.015609 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/570e173c-fc3d-4a1a-b1dd-c404acaed5e4-xtables-lock\") pod \"kube-proxy-dnkpr\" (UID: \"570e173c-fc3d-4a1a-b1dd-c404acaed5e4\") " pod="kube-system/kube-proxy-dnkpr" Jan 28 00:59:14.210666 kubelet[2638]: E0128 00:59:14.209723 2638 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 28 00:59:14.210666 kubelet[2638]: E0128 00:59:14.209951 2638 projected.go:194] Error preparing data for projected volume kube-api-access-8zkhb for pod kube-system/kube-proxy-dnkpr: configmap 
"kube-root-ca.crt" not found Jan 28 00:59:14.210666 kubelet[2638]: E0128 00:59:14.210039 2638 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/570e173c-fc3d-4a1a-b1dd-c404acaed5e4-kube-api-access-8zkhb podName:570e173c-fc3d-4a1a-b1dd-c404acaed5e4 nodeName:}" failed. No retries permitted until 2026-01-28 00:59:14.710010582 +0000 UTC m=+4.441244095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8zkhb" (UniqueName: "kubernetes.io/projected/570e173c-fc3d-4a1a-b1dd-c404acaed5e4-kube-api-access-8zkhb") pod "kube-proxy-dnkpr" (UID: "570e173c-fc3d-4a1a-b1dd-c404acaed5e4") : configmap "kube-root-ca.crt" not found Jan 28 00:59:14.368514 kubelet[2638]: W0128 00:59:14.368304 2638 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 28 00:59:14.368769 kubelet[2638]: E0128 00:59:14.368520 2638 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 00:59:14.368769 kubelet[2638]: W0128 00:59:14.368612 2638 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 28 00:59:14.368769 kubelet[2638]: E0128 00:59:14.368687 2638 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 00:59:14.381872 systemd[1]: Created slice kubepods-burstable-pod0a8faae1_0c6d_49da_9e35_1289786290f3.slice - libcontainer container kubepods-burstable-pod0a8faae1_0c6d_49da_9e35_1289786290f3.slice. 
Jan 28 00:59:14.385118 kubelet[2638]: I0128 00:59:14.383037 2638 status_manager.go:890] "Failed to get status for pod" podUID="0a8faae1-0c6d-49da-9e35-1289786290f3" pod="kube-system/cilium-jmzcd" err="pods \"cilium-jmzcd\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jan 28 00:59:14.437850 kubelet[2638]: I0128 00:59:14.437707 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-cgroup\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.437850 kubelet[2638]: I0128 00:59:14.437792 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-lib-modules\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.437850 kubelet[2638]: I0128 00:59:14.437833 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a8faae1-0c6d-49da-9e35-1289786290f3-clustermesh-secrets\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.437850 kubelet[2638]: I0128 00:59:14.437859 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a8faae1-0c6d-49da-9e35-1289786290f3-hubble-tls\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.438202 kubelet[2638]: I0128 00:59:14.437925 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-xtables-lock\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.438202 kubelet[2638]: I0128 00:59:14.437957 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-bpf-maps\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.438202 kubelet[2638]: I0128 00:59:14.437980 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-config-path\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.438202 kubelet[2638]: I0128 00:59:14.438005 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-host-proc-sys-kernel\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.438202 kubelet[2638]: I0128 00:59:14.438026 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-hostproc\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.438202 kubelet[2638]: I0128 00:59:14.438044 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-run\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.438874 kubelet[2638]: I0128 00:59:14.438069 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-etc-cni-netd\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.438874 kubelet[2638]: I0128 00:59:14.438091 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-host-proc-sys-net\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.438874 kubelet[2638]: I0128 00:59:14.438111 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r75h\" (UniqueName: \"kubernetes.io/projected/0a8faae1-0c6d-49da-9e35-1289786290f3-kube-api-access-4r75h\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.438874 kubelet[2638]: I0128 00:59:14.438136 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cni-path\") pod \"cilium-jmzcd\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " pod="kube-system/cilium-jmzcd" Jan 28 00:59:14.577725 systemd[1]: Created slice kubepods-besteffort-pod36cf2c6c_70d9_4912_aeb5_3a9679d20de3.slice - libcontainer container kubepods-besteffort-pod36cf2c6c_70d9_4912_aeb5_3a9679d20de3.slice. 
Jan 28 00:59:14.640858 kubelet[2638]: I0128 00:59:14.640641 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpcz6\" (UniqueName: \"kubernetes.io/projected/36cf2c6c-70d9-4912-aeb5-3a9679d20de3-kube-api-access-rpcz6\") pod \"cilium-operator-6c4d7847fc-v9rpc\" (UID: \"36cf2c6c-70d9-4912-aeb5-3a9679d20de3\") " pod="kube-system/cilium-operator-6c4d7847fc-v9rpc" Jan 28 00:59:14.644680 kubelet[2638]: I0128 00:59:14.644551 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36cf2c6c-70d9-4912-aeb5-3a9679d20de3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-v9rpc\" (UID: \"36cf2c6c-70d9-4912-aeb5-3a9679d20de3\") " pod="kube-system/cilium-operator-6c4d7847fc-v9rpc" Jan 28 00:59:14.857198 kubelet[2638]: E0128 00:59:14.856763 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:14.859872 containerd[1483]: time="2026-01-28T00:59:14.859672411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnkpr,Uid:570e173c-fc3d-4a1a-b1dd-c404acaed5e4,Namespace:kube-system,Attempt:0,}" Jan 28 00:59:14.939770 kubelet[2638]: E0128 00:59:14.937927 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:15.096751 containerd[1483]: time="2026-01-28T00:59:15.095595798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v9rpc,Uid:36cf2c6c-70d9-4912-aeb5-3a9679d20de3,Namespace:kube-system,Attempt:0,}" Jan 28 00:59:15.174937 kubelet[2638]: E0128 00:59:15.173051 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:15.522587 containerd[1483]: time="2026-01-28T00:59:15.520523134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:59:15.522587 containerd[1483]: time="2026-01-28T00:59:15.520585380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:59:15.522587 containerd[1483]: time="2026-01-28T00:59:15.521161213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:15.522587 containerd[1483]: time="2026-01-28T00:59:15.521776309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:15.522587 containerd[1483]: time="2026-01-28T00:59:15.519605404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:59:15.522587 containerd[1483]: time="2026-01-28T00:59:15.519667872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:59:15.522587 containerd[1483]: time="2026-01-28T00:59:15.519679744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:15.522587 containerd[1483]: time="2026-01-28T00:59:15.520607960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:15.546856 kubelet[2638]: E0128 00:59:15.546656 2638 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 28 00:59:15.547025 kubelet[2638]: E0128 00:59:15.547001 2638 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a8faae1-0c6d-49da-9e35-1289786290f3-clustermesh-secrets podName:0a8faae1-0c6d-49da-9e35-1289786290f3 nodeName:}" failed. No retries permitted until 2026-01-28 00:59:16.046880059 +0000 UTC m=+5.778113572 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/0a8faae1-0c6d-49da-9e35-1289786290f3-clustermesh-secrets") pod "cilium-jmzcd" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3") : failed to sync secret cache: timed out waiting for the condition Jan 28 00:59:15.602750 systemd[1]: Started cri-containerd-937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274.scope - libcontainer container 937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274. Jan 28 00:59:15.612598 systemd[1]: Started cri-containerd-81284fbafb854d0c06f079d4a5494d44f91374c48427926f2c3930c3509793be.scope - libcontainer container 81284fbafb854d0c06f079d4a5494d44f91374c48427926f2c3930c3509793be. Jan 28 00:59:15.761800 containerd[1483]: time="2026-01-28T00:59:15.760777828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v9rpc,Uid:36cf2c6c-70d9-4912-aeb5-3a9679d20de3,Namespace:kube-system,Attempt:0,} returns sandbox id \"937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274\"" Jan 28 00:59:15.763995 kubelet[2638]: E0128 00:59:15.763803 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:15.768866 containerd[1483]: time="2026-01-28T00:59:15.768822074Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 28 00:59:15.777202 containerd[1483]: time="2026-01-28T00:59:15.776996985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnkpr,Uid:570e173c-fc3d-4a1a-b1dd-c404acaed5e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"81284fbafb854d0c06f079d4a5494d44f91374c48427926f2c3930c3509793be\"" Jan 28 00:59:15.783948 kubelet[2638]: E0128 00:59:15.783913 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:15.795711 containerd[1483]: time="2026-01-28T00:59:15.795119796Z" level=info msg="CreateContainer within sandbox \"81284fbafb854d0c06f079d4a5494d44f91374c48427926f2c3930c3509793be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 00:59:15.841136 containerd[1483]: time="2026-01-28T00:59:15.840973398Z" level=info msg="CreateContainer within sandbox \"81284fbafb854d0c06f079d4a5494d44f91374c48427926f2c3930c3509793be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"53271a42b8d5b7d5ac4e8886c38257c182c81f8d1a3bebd18c8202350eec763e\"" Jan 28 00:59:15.841826 containerd[1483]: 
time="2026-01-28T00:59:15.841740912Z" level=info msg="StartContainer for \"53271a42b8d5b7d5ac4e8886c38257c182c81f8d1a3bebd18c8202350eec763e\"" Jan 28 00:59:15.976850 systemd[1]: Started cri-containerd-53271a42b8d5b7d5ac4e8886c38257c182c81f8d1a3bebd18c8202350eec763e.scope - libcontainer container 53271a42b8d5b7d5ac4e8886c38257c182c81f8d1a3bebd18c8202350eec763e. Jan 28 00:59:16.101658 containerd[1483]: time="2026-01-28T00:59:16.100657593Z" level=info msg="StartContainer for \"53271a42b8d5b7d5ac4e8886c38257c182c81f8d1a3bebd18c8202350eec763e\" returns successfully" Jan 28 00:59:16.133751 kubelet[2638]: E0128 00:59:16.132893 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:16.133751 kubelet[2638]: E0128 00:59:16.133683 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:16.207705 kubelet[2638]: E0128 00:59:16.205760 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:16.209207 containerd[1483]: time="2026-01-28T00:59:16.208973162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jmzcd,Uid:0a8faae1-0c6d-49da-9e35-1289786290f3,Namespace:kube-system,Attempt:0,}" Jan 28 00:59:16.349617 containerd[1483]: time="2026-01-28T00:59:16.341855882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:59:16.349617 containerd[1483]: time="2026-01-28T00:59:16.342349530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:59:16.349617 containerd[1483]: time="2026-01-28T00:59:16.342675193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:16.349617 containerd[1483]: time="2026-01-28T00:59:16.342800509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:16.431071 systemd[1]: Started cri-containerd-46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c.scope - libcontainer container 46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c. Jan 28 00:59:16.518699 containerd[1483]: time="2026-01-28T00:59:16.518568806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jmzcd,Uid:0a8faae1-0c6d-49da-9e35-1289786290f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\"" Jan 28 00:59:16.522582 kubelet[2638]: E0128 00:59:16.521553 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:16.795577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2129243385.mount: Deactivated successfully. 
Jan 28 00:59:17.706112 update_engine[1467]: I20260128 00:59:17.705159 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 00:59:17.706112 update_engine[1467]: I20260128 00:59:17.706208 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 00:59:17.708511 update_engine[1467]: I20260128 00:59:17.707058 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 00:59:17.724941 update_engine[1467]: E20260128 00:59:17.724895 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 00:59:17.725126 update_engine[1467]: I20260128 00:59:17.725101 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 28 00:59:18.611680 kubelet[2638]: E0128 00:59:18.610839 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:18.716542 kubelet[2638]: I0128 00:59:18.715739 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dnkpr" podStartSLOduration=5.715716584 podStartE2EDuration="5.715716584s" podCreationTimestamp="2026-01-28 00:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:59:16.156497999 +0000 UTC m=+5.887731522" watchObservedRunningTime="2026-01-28 00:59:18.715716584 +0000 UTC m=+8.446950097" Jan 28 00:59:18.800117 containerd[1483]: time="2026-01-28T00:59:18.799798399Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:18.802922 containerd[1483]: time="2026-01-28T00:59:18.802576138Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 28 00:59:18.807172 containerd[1483]: time="2026-01-28T00:59:18.806117986Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:18.812775 containerd[1483]: time="2026-01-28T00:59:18.812246364Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.042466048s" Jan 28 00:59:18.812900 containerd[1483]: time="2026-01-28T00:59:18.812290366Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 28 00:59:18.815896 containerd[1483]: time="2026-01-28T00:59:18.815732669Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 28 00:59:18.817907 containerd[1483]: time="2026-01-28T00:59:18.817875631Z" level=info msg="CreateContainer within sandbox \"937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 28 00:59:18.896542 containerd[1483]: time="2026-01-28T00:59:18.895277194Z" level=info msg="CreateContainer within sandbox \"937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\"" Jan 28 00:59:18.899927 containerd[1483]: time="2026-01-28T00:59:18.899890199Z" level=info msg="StartContainer for \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\"" Jan 28 00:59:19.022005 systemd[1]: run-containerd-runc-k8s.io-c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637-runc.MewauN.mount: Deactivated successfully. Jan 28 00:59:19.034975 systemd[1]: Started cri-containerd-c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637.scope - libcontainer container c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637. Jan 28 00:59:19.163100 containerd[1483]: time="2026-01-28T00:59:19.162776636Z" level=info msg="StartContainer for \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\" returns successfully" Jan 28 00:59:19.189548 kubelet[2638]: E0128 00:59:19.189141 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:19.190734 kubelet[2638]: E0128 00:59:19.190260 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:20.199884 kubelet[2638]: E0128 00:59:20.196867 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:21.020893 kubelet[2638]: E0128 00:59:21.020547 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:21.267084 kubelet[2638]: I0128 00:59:21.266895 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-v9rpc" podStartSLOduration=4.219050414 podStartE2EDuration="7.266871247s" podCreationTimestamp="2026-01-28 00:59:14 +0000 UTC" firstStartedPulling="2026-01-28 00:59:15.767122338 +0000 UTC m=+5.498355861" lastFinishedPulling="2026-01-28 00:59:18.814943181 +0000 UTC m=+8.546176694" observedRunningTime="2026-01-28 00:59:19.377296689 +0000 UTC m=+9.108530212" watchObservedRunningTime="2026-01-28 00:59:21.266871247 +0000 UTC m=+10.998104770" Jan 28 00:59:21.310578 kubelet[2638]: E0128 00:59:21.309305 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:27.722799 update_engine[1467]: I20260128 00:59:27.720767 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 00:59:27.768123 update_engine[1467]: I20260128 00:59:27.726096 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 00:59:27.768123 update_engine[1467]: I20260128 00:59:27.727046 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 00:59:27.847103 update_engine[1467]: E20260128 00:59:27.846710 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 00:59:27.847611 update_engine[1467]: I20260128 00:59:27.847318 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 00:59:27.847611 update_engine[1467]: I20260128 00:59:27.847476 1467 omaha_request_action.cc:617] Omaha request response: Jan 28 00:59:27.849010 update_engine[1467]: E20260128 00:59:27.848867 1467 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 28 00:59:27.849654 update_engine[1467]: I20260128 00:59:27.849488 1467 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 28 00:59:27.849724 update_engine[1467]: I20260128 00:59:27.849645 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 00:59:27.849724 update_engine[1467]: I20260128 00:59:27.849664 1467 update_attempter.cc:306] Processing Done. Jan 28 00:59:27.850086 update_engine[1467]: E20260128 00:59:27.849804 1467 update_attempter.cc:619] Update failed. Jan 28 00:59:27.850086 update_engine[1467]: I20260128 00:59:27.850059 1467 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 28 00:59:27.850086 update_engine[1467]: I20260128 00:59:27.850082 1467 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 28 00:59:27.850215 update_engine[1467]: I20260128 00:59:27.850095 1467 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 28 00:59:27.851229 update_engine[1467]: I20260128 00:59:27.851106 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 00:59:27.853846 update_engine[1467]: I20260128 00:59:27.851212 1467 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 00:59:27.854589 update_engine[1467]: I20260128 00:59:27.854099 1467 omaha_request_action.cc:272] Request: Jan 28 00:59:27.854589 update_engine[1467]: Jan 28 00:59:27.854589 update_engine[1467]: Jan 28 00:59:27.854589 update_engine[1467]: Jan 28 00:59:27.854589 update_engine[1467]: Jan 28 00:59:27.854589 update_engine[1467]: Jan 28 00:59:27.854589 update_engine[1467]: Jan 28 00:59:27.854934 update_engine[1467]: I20260128 00:59:27.854893 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 00:59:27.857682 update_engine[1467]: I20260128 00:59:27.857316 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 00:59:27.860637 update_engine[1467]: I20260128 00:59:27.860163 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 00:59:27.869749 locksmithd[1504]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 28 00:59:27.883907 update_engine[1467]: E20260128 00:59:27.882937 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 00:59:27.883907 update_engine[1467]: I20260128 00:59:27.883052 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 00:59:27.883907 update_engine[1467]: I20260128 00:59:27.883075 1467 omaha_request_action.cc:617] Omaha request response: Jan 28 00:59:27.883907 update_engine[1467]: I20260128 00:59:27.883090 1467 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 00:59:27.883907 update_engine[1467]: I20260128 00:59:27.883266 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 00:59:27.883907 update_engine[1467]: I20260128 00:59:27.883282 1467 update_attempter.cc:306] Processing Done. Jan 28 00:59:27.883907 update_engine[1467]: I20260128 00:59:27.883836 1467 update_attempter.cc:310] Error event sent. Jan 28 00:59:27.885301 update_engine[1467]: I20260128 00:59:27.883874 1467 update_check_scheduler.cc:74] Next update check in 43m48s Jan 28 00:59:27.893733 locksmithd[1504]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 28 00:59:38.574535 kubelet[2638]: E0128 00:59:38.574234 2638 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.563s" Jan 28 00:59:50.739707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083338308.mount: Deactivated successfully. 
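The update_engine/locksmithd block above is not a real network fault: the Omaha update server for this machine is configured as the literal string "disabled" (typically SERVER=disabled in /etc/flatcar/update.conf), so every check fails with "Could not resolve host: disabled" and the client simply reschedules ("Next update check in 43m48s"). The same failure can be reproduced with a plain DNS lookup; a tiny Go illustration (the update.conf path and SERVER setting are the usual Flatcar convention, not something read from this log):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// "disabled" is not a resolvable hostname, so this fails the same way
	// update_engine's curl fetch does in the entries above.
	if _, err := net.LookupHost("disabled"); err != nil {
		fmt.Println("lookup failed as expected:", err)
	}
}
```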
Jan 28 00:59:57.600126 containerd[1483]: time="2026-01-28T00:59:57.599334501Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:57.603339 containerd[1483]: time="2026-01-28T00:59:57.603061597Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 28 00:59:57.605844 containerd[1483]: time="2026-01-28T00:59:57.605769859Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:57.610904 containerd[1483]: time="2026-01-28T00:59:57.610859220Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 38.795089261s" Jan 28 00:59:57.611244 containerd[1483]: time="2026-01-28T00:59:57.611032259Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 28 00:59:57.617798 containerd[1483]: time="2026-01-28T00:59:57.617172086Z" level=info msg="CreateContainer within sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 28 00:59:57.661953 containerd[1483]: time="2026-01-28T00:59:57.661163890Z" level=info msg="CreateContainer within sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27\"" Jan 28 00:59:57.662744 containerd[1483]: time="2026-01-28T00:59:57.662714475Z" level=info msg="StartContainer for \"57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27\"" Jan 28 00:59:57.904291 systemd[1]: Started cri-containerd-57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27.scope - libcontainer container 57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27. Jan 28 00:59:58.078060 containerd[1483]: time="2026-01-28T00:59:58.077322265Z" level=info msg="StartContainer for \"57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27\" returns successfully" Jan 28 00:59:58.108046 systemd[1]: cri-containerd-57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27.scope: Deactivated successfully. 
Jan 28 00:59:58.450862 containerd[1483]: time="2026-01-28T00:59:58.449862881Z" level=info msg="shim disconnected" id=57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27 namespace=k8s.io Jan 28 00:59:58.450862 containerd[1483]: time="2026-01-28T00:59:58.450052070Z" level=warning msg="cleaning up after shim disconnected" id=57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27 namespace=k8s.io Jan 28 00:59:58.450862 containerd[1483]: time="2026-01-28T00:59:58.450068210Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 00:59:58.646960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27-rootfs.mount: Deactivated successfully. Jan 28 00:59:58.948263 kubelet[2638]: E0128 00:59:58.947870 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:58.957504 containerd[1483]: time="2026-01-28T00:59:58.957082769Z" level=info msg="CreateContainer within sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 28 00:59:58.988816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990925485.mount: Deactivated successfully. Jan 28 00:59:58.996329 containerd[1483]: time="2026-01-28T00:59:58.994031888Z" level=info msg="CreateContainer within sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7\"" Jan 28 00:59:58.998815 containerd[1483]: time="2026-01-28T00:59:58.998665967Z" level=info msg="StartContainer for \"b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7\"" Jan 28 00:59:59.081869 systemd[1]: Started cri-containerd-b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7.scope - libcontainer container b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7. Jan 28 00:59:59.163331 containerd[1483]: time="2026-01-28T00:59:59.163289444Z" level=info msg="StartContainer for \"b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7\" returns successfully" Jan 28 00:59:59.198785 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 00:59:59.199208 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:59:59.199571 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:59:59.209750 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:59:59.210716 systemd[1]: cri-containerd-b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7.scope: Deactivated successfully. Jan 28 00:59:59.274248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 28 00:59:59.289776 containerd[1483]: time="2026-01-28T00:59:59.289682243Z" level=info msg="shim disconnected" id=b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7 namespace=k8s.io Jan 28 00:59:59.290237 containerd[1483]: time="2026-01-28T00:59:59.290118611Z" level=warning msg="cleaning up after shim disconnected" id=b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7 namespace=k8s.io Jan 28 00:59:59.290237 containerd[1483]: time="2026-01-28T00:59:59.290219493Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 00:59:59.648894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7-rootfs.mount: Deactivated successfully. Jan 28 01:00:00.103156 kubelet[2638]: E0128 01:00:00.103067 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:00.110866 containerd[1483]: time="2026-01-28T01:00:00.110196540Z" level=info msg="CreateContainer within sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 28 01:00:00.182577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380203657.mount: Deactivated successfully. Jan 28 01:00:00.189751 containerd[1483]: time="2026-01-28T01:00:00.189553200Z" level=info msg="CreateContainer within sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70\"" Jan 28 01:00:00.191212 containerd[1483]: time="2026-01-28T01:00:00.191155833Z" level=info msg="StartContainer for \"917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70\"" Jan 28 01:00:00.371795 systemd[1]: Started cri-containerd-917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70.scope - libcontainer container 917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70. Jan 28 01:00:00.475077 containerd[1483]: time="2026-01-28T01:00:00.474892054Z" level=info msg="StartContainer for \"917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70\" returns successfully" Jan 28 01:00:00.480801 systemd[1]: cri-containerd-917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70.scope: Deactivated successfully. Jan 28 01:00:00.573867 containerd[1483]: time="2026-01-28T01:00:00.573326156Z" level=info msg="shim disconnected" id=917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70 namespace=k8s.io Jan 28 01:00:00.573867 containerd[1483]: time="2026-01-28T01:00:00.573573185Z" level=warning msg="cleaning up after shim disconnected" id=917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70 namespace=k8s.io Jan 28 01:00:00.573867 containerd[1483]: time="2026-01-28T01:00:00.573591369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:00:00.652247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70-rootfs.mount: Deactivated successfully. 
Jan 28 01:00:01.127596 kubelet[2638]: E0128 01:00:01.127018 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:01.135343 containerd[1483]: time="2026-01-28T01:00:01.134286838Z" level=info msg="CreateContainer within sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 28 01:00:01.264908 containerd[1483]: time="2026-01-28T01:00:01.264781247Z" level=info msg="CreateContainer within sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629\"" Jan 28 01:00:01.269585 containerd[1483]: time="2026-01-28T01:00:01.267271654Z" level=info msg="StartContainer for \"fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629\"" Jan 28 01:00:01.443059 systemd[1]: Started cri-containerd-fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629.scope - libcontainer container fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629. Jan 28 01:00:01.583628 systemd[1]: cri-containerd-fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629.scope: Deactivated successfully. Jan 28 01:00:01.596278 containerd[1483]: time="2026-01-28T01:00:01.596229946Z" level=info msg="StartContainer for \"fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629\" returns successfully" Jan 28 01:00:01.683848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629-rootfs.mount: Deactivated successfully. 
Jan 28 01:00:01.702590 containerd[1483]: time="2026-01-28T01:00:01.702209510Z" level=info msg="shim disconnected" id=fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629 namespace=k8s.io Jan 28 01:00:01.702590 containerd[1483]: time="2026-01-28T01:00:01.702290444Z" level=warning msg="cleaning up after shim disconnected" id=fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629 namespace=k8s.io Jan 28 01:00:01.702590 containerd[1483]: time="2026-01-28T01:00:01.702307076Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:00:02.158224 kubelet[2638]: E0128 01:00:02.157936 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:02.169338 containerd[1483]: time="2026-01-28T01:00:02.168208086Z" level=info msg="CreateContainer within sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 28 01:00:02.302028 containerd[1483]: time="2026-01-28T01:00:02.301068841Z" level=info msg="CreateContainer within sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28\"" Jan 28 01:00:02.305937 containerd[1483]: time="2026-01-28T01:00:02.303562978Z" level=info msg="StartContainer for \"b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28\"" Jan 28 01:00:02.489648 systemd[1]: Started cri-containerd-b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28.scope - libcontainer container b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28. 
Jan 28 01:00:02.678118 containerd[1483]: time="2026-01-28T01:00:02.677335218Z" level=info msg="StartContainer for \"b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28\" returns successfully" Jan 28 01:00:03.193191 kubelet[2638]: E0128 01:00:03.192141 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:03.208707 kubelet[2638]: I0128 01:00:03.208582 2638 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:00:03.310991 kubelet[2638]: I0128 01:00:03.310261 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jmzcd" podStartSLOduration=8.224833239 podStartE2EDuration="49.310234998s" podCreationTimestamp="2026-01-28 00:59:14 +0000 UTC" firstStartedPulling="2026-01-28 00:59:16.527626243 +0000 UTC m=+6.258859757" lastFinishedPulling="2026-01-28 00:59:57.613027993 +0000 UTC m=+47.344261516" observedRunningTime="2026-01-28 01:00:03.295641062 +0000 UTC m=+53.026874604" watchObservedRunningTime="2026-01-28 01:00:03.310234998 +0000 UTC m=+53.041468511" Jan 28 01:00:03.343695 kubelet[2638]: I0128 01:00:03.343511 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9ac489b-d012-47ea-bfa1-f29a57b941a9-config-volume\") pod \"coredns-668d6bf9bc-vxr92\" (UID: \"a9ac489b-d012-47ea-bfa1-f29a57b941a9\") " pod="kube-system/coredns-668d6bf9bc-vxr92" Jan 28 01:00:03.343695 kubelet[2638]: I0128 01:00:03.343575 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf61ad82-d7e7-4105-ab0e-5d43f44e2034-config-volume\") pod \"coredns-668d6bf9bc-xgv7k\" (UID: \"cf61ad82-d7e7-4105-ab0e-5d43f44e2034\") " pod="kube-system/coredns-668d6bf9bc-xgv7k" Jan 28 01:00:03.343695 kubelet[2638]: I0128 01:00:03.343631 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8zk\" (UniqueName: \"kubernetes.io/projected/a9ac489b-d012-47ea-bfa1-f29a57b941a9-kube-api-access-gl8zk\") pod \"coredns-668d6bf9bc-vxr92\" (UID: \"a9ac489b-d012-47ea-bfa1-f29a57b941a9\") " pod="kube-system/coredns-668d6bf9bc-vxr92" Jan 28 01:00:03.344019 kubelet[2638]: I0128 01:00:03.343708 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8sj\" (UniqueName: \"kubernetes.io/projected/cf61ad82-d7e7-4105-ab0e-5d43f44e2034-kube-api-access-xs8sj\") pod \"coredns-668d6bf9bc-xgv7k\" (UID: \"cf61ad82-d7e7-4105-ab0e-5d43f44e2034\") " pod="kube-system/coredns-668d6bf9bc-xgv7k" Jan 28 01:00:03.348167 systemd[1]: Created slice kubepods-burstable-poda9ac489b_d012_47ea_bfa1_f29a57b941a9.slice - libcontainer container kubepods-burstable-poda9ac489b_d012_47ea_bfa1_f29a57b941a9.slice. Jan 28 01:00:03.372534 systemd[1]: Created slice kubepods-burstable-podcf61ad82_d7e7_4105_ab0e_5d43f44e2034.slice - libcontainer container kubepods-burstable-podcf61ad82_d7e7_4105_ab0e_5d43f44e2034.slice. 
Jan 28 01:00:03.658729 kubelet[2638]: E0128 01:00:03.658211 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:03.661064 containerd[1483]: time="2026-01-28T01:00:03.660639327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vxr92,Uid:a9ac489b-d012-47ea-bfa1-f29a57b941a9,Namespace:kube-system,Attempt:0,}" Jan 28 01:00:03.686175 kubelet[2638]: E0128 01:00:03.685948 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:03.689020 containerd[1483]: time="2026-01-28T01:00:03.688725044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xgv7k,Uid:cf61ad82-d7e7-4105-ab0e-5d43f44e2034,Namespace:kube-system,Attempt:0,}" Jan 28 01:00:04.196612 kubelet[2638]: E0128 01:00:04.196532 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:05.201184 kubelet[2638]: E0128 01:00:05.200750 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:05.258287 systemd[1]: run-containerd-runc-k8s.io-b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28-runc.9fJXk6.mount: Deactivated successfully. Jan 28 01:00:06.575836 systemd-networkd[1407]: cilium_host: Link UP Jan 28 01:00:06.577983 systemd-networkd[1407]: cilium_net: Link UP Jan 28 01:00:06.578972 systemd-networkd[1407]: cilium_net: Gained carrier Jan 28 01:00:06.579282 systemd-networkd[1407]: cilium_host: Gained carrier Jan 28 01:00:06.614774 systemd-networkd[1407]: cilium_host: Gained IPv6LL Jan 28 01:00:06.811557 systemd-networkd[1407]: cilium_net: Gained IPv6LL Jan 28 01:00:06.949102 systemd-networkd[1407]: cilium_vxlan: Link UP Jan 28 01:00:06.949112 systemd-networkd[1407]: cilium_vxlan: Gained carrier Jan 28 01:00:07.512104 kernel: NET: Registered PF_ALG protocol family Jan 28 01:00:08.890882 systemd-networkd[1407]: cilium_vxlan: Gained IPv6LL Jan 28 01:00:09.763655 systemd-networkd[1407]: lxc_health: Link UP Jan 28 01:00:09.775863 systemd-networkd[1407]: lxc_health: Gained carrier Jan 28 01:00:11.026501 systemd-networkd[1407]: lxc_health: Gained IPv6LL Jan 28 01:00:11.919977 systemd-networkd[1407]: lxca97d1a2d6e4e: Link UP Jan 28 01:00:12.025966 kernel: eth0: renamed from tmpfb625 Jan 28 01:00:12.093028 systemd-networkd[1407]: lxca97d1a2d6e4e: Gained carrier Jan 28 01:00:14.160871 systemd-networkd[1407]: lxca97d1a2d6e4e: Gained IPv6LL Jan 28 01:00:14.182795 systemd-networkd[1407]: lxc26c135e9d58f: Link UP Jan 28 01:00:14.212849 kernel: eth0: renamed from tmp5db55 Jan 28 01:00:14.244154 systemd-networkd[1407]: lxc26c135e9d58f: Gained carrier Jan 28 01:00:14.337305 kubelet[2638]: E0128 01:00:14.336733 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:15.313876 kubelet[2638]: E0128 01:00:15.313528 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:16.330764 systemd-networkd[1407]: 
lxc26c135e9d58f: Gained IPv6LL Jan 28 01:00:16.669898 systemd[1]: run-containerd-runc-k8s.io-b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28-runc.aBPO3V.mount: Deactivated successfully. Jan 28 01:00:19.139205 sudo[1661]: pam_unix(sudo:session): session closed for user root Jan 28 01:00:19.150080 sshd[1658]: pam_unix(sshd:session): session closed for user core Jan 28 01:00:19.159544 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:36064.service: Deactivated successfully. Jan 28 01:00:19.166922 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:00:19.167672 systemd[1]: session-9.scope: Consumed 27.175s CPU time, 159.8M memory peak, 0B memory swap peak. Jan 28 01:00:19.170984 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:00:19.176247 systemd-logind[1463]: Removed session 9. Jan 28 01:00:22.629237 containerd[1483]: time="2026-01-28T01:00:22.625081244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:22.641779 containerd[1483]: time="2026-01-28T01:00:22.640774375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:22.641779 containerd[1483]: time="2026-01-28T01:00:22.640931363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:22.641779 containerd[1483]: time="2026-01-28T01:00:22.641277140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:22.652902 containerd[1483]: time="2026-01-28T01:00:22.652767720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:22.654617 containerd[1483]: time="2026-01-28T01:00:22.654478814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:22.654760 containerd[1483]: time="2026-01-28T01:00:22.654725173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:22.656089 containerd[1483]: time="2026-01-28T01:00:22.656042298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:22.706263 systemd[1]: run-containerd-runc-k8s.io-fb6255a559e57031bf430608d0d2d9051a40c592005769d1a3e16b40de036cd0-runc.ZbLlUM.mount: Deactivated successfully. Jan 28 01:00:22.735246 systemd[1]: Started cri-containerd-5db55abcac0a9035b486266a895d1977ea1ef9a24d349c3500e5b9a41d1c24b5.scope - libcontainer container 5db55abcac0a9035b486266a895d1977ea1ef9a24d349c3500e5b9a41d1c24b5. Jan 28 01:00:22.746335 systemd[1]: Started cri-containerd-fb6255a559e57031bf430608d0d2d9051a40c592005769d1a3e16b40de036cd0.scope - libcontainer container fb6255a559e57031bf430608d0d2d9051a40c592005769d1a3e16b40de036cd0. 
Jan 28 01:00:22.775825 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:00:22.805236 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:00:22.902891 containerd[1483]: time="2026-01-28T01:00:22.900979700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vxr92,Uid:a9ac489b-d012-47ea-bfa1-f29a57b941a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5db55abcac0a9035b486266a895d1977ea1ef9a24d349c3500e5b9a41d1c24b5\"" Jan 28 01:00:22.907170 kubelet[2638]: E0128 01:00:22.905259 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:22.915640 containerd[1483]: time="2026-01-28T01:00:22.913106551Z" level=info msg="CreateContainer within sandbox \"5db55abcac0a9035b486266a895d1977ea1ef9a24d349c3500e5b9a41d1c24b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:00:22.950009 containerd[1483]: time="2026-01-28T01:00:22.949948192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xgv7k,Uid:cf61ad82-d7e7-4105-ab0e-5d43f44e2034,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb6255a559e57031bf430608d0d2d9051a40c592005769d1a3e16b40de036cd0\"" Jan 28 01:00:22.952322 kubelet[2638]: E0128 01:00:22.952217 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:22.971937 containerd[1483]: time="2026-01-28T01:00:22.971665644Z" level=info msg="CreateContainer within sandbox \"fb6255a559e57031bf430608d0d2d9051a40c592005769d1a3e16b40de036cd0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:00:23.061752 containerd[1483]: time="2026-01-28T01:00:23.061207545Z" level=info msg="CreateContainer within sandbox \"fb6255a559e57031bf430608d0d2d9051a40c592005769d1a3e16b40de036cd0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c5f29d84ede3a2108b0efce8da6b27d2e0f242b4fcd99799ee7dafa27f7c4fc\"" Jan 28 01:00:23.066089 containerd[1483]: time="2026-01-28T01:00:23.064149283Z" level=info msg="StartContainer for \"9c5f29d84ede3a2108b0efce8da6b27d2e0f242b4fcd99799ee7dafa27f7c4fc\"" Jan 28 01:00:23.113259 containerd[1483]: time="2026-01-28T01:00:23.112279575Z" level=info msg="CreateContainer within sandbox \"5db55abcac0a9035b486266a895d1977ea1ef9a24d349c3500e5b9a41d1c24b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a533ccad86fbe8ddf136e5f1fdf79ec4301c1d8e4872801bbdab3106a9e737b\"" Jan 28 01:00:23.121066 containerd[1483]: time="2026-01-28T01:00:23.120934033Z" level=info msg="StartContainer for \"4a533ccad86fbe8ddf136e5f1fdf79ec4301c1d8e4872801bbdab3106a9e737b\"" Jan 28 01:00:23.173232 systemd[1]: Started cri-containerd-9c5f29d84ede3a2108b0efce8da6b27d2e0f242b4fcd99799ee7dafa27f7c4fc.scope - libcontainer container 9c5f29d84ede3a2108b0efce8da6b27d2e0f242b4fcd99799ee7dafa27f7c4fc. Jan 28 01:00:23.275174 systemd[1]: Started cri-containerd-4a533ccad86fbe8ddf136e5f1fdf79ec4301c1d8e4872801bbdab3106a9e737b.scope - libcontainer container 4a533ccad86fbe8ddf136e5f1fdf79ec4301c1d8e4872801bbdab3106a9e737b. 
Jan 28 01:00:23.326064 containerd[1483]: time="2026-01-28T01:00:23.325769857Z" level=info msg="StartContainer for \"9c5f29d84ede3a2108b0efce8da6b27d2e0f242b4fcd99799ee7dafa27f7c4fc\" returns successfully" Jan 28 01:00:23.383080 containerd[1483]: time="2026-01-28T01:00:23.382714264Z" level=info msg="StartContainer for \"4a533ccad86fbe8ddf136e5f1fdf79ec4301c1d8e4872801bbdab3106a9e737b\" returns successfully" Jan 28 01:00:23.462307 kubelet[2638]: E0128 01:00:23.458701 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:23.473667 kubelet[2638]: E0128 01:00:23.473121 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:23.520799 kubelet[2638]: I0128 01:00:23.520111 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xgv7k" podStartSLOduration=69.520002759 podStartE2EDuration="1m9.520002759s" podCreationTimestamp="2026-01-28 00:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:00:23.491347169 +0000 UTC m=+73.222580742" watchObservedRunningTime="2026-01-28 01:00:23.520002759 +0000 UTC m=+73.251236273" Jan 28 01:00:23.661201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1520931555.mount: Deactivated successfully. Jan 28 01:00:24.480989 kubelet[2638]: E0128 01:00:24.479958 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:24.488485 kubelet[2638]: E0128 01:00:24.484784 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:24.558457 kubelet[2638]: I0128 01:00:24.552332 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vxr92" podStartSLOduration=70.552306931 podStartE2EDuration="1m10.552306931s" podCreationTimestamp="2026-01-28 00:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:00:23.524940464 +0000 UTC m=+73.256173987" watchObservedRunningTime="2026-01-28 01:00:24.552306931 +0000 UTC m=+74.283540444" Jan 28 01:00:25.484235 kubelet[2638]: E0128 01:00:25.483733 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:25.485134 kubelet[2638]: E0128 01:00:25.484770 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:25.680564 kubelet[2638]: E0128 01:00:25.680063 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:34.682785 kubelet[2638]: E0128 01:00:34.680280 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 28 01:00:38.685826 kubelet[2638]: E0128 01:00:38.682973 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:46.693806 kubelet[2638]: E0128 01:00:46.692894 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:49.685206 kubelet[2638]: E0128 01:00:49.684296 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:34.680004 kubelet[2638]: E0128 01:01:34.679612 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:35.680227 kubelet[2638]: E0128 01:01:35.679287 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:36.681988 kubelet[2638]: E0128 01:01:36.681205 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:52.683213 kubelet[2638]: E0128 01:01:52.682814 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:52.703614 kubelet[2638]: E0128 01:01:52.686659 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:52.703614 kubelet[2638]: E0128 01:01:52.687702 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:55.686713 kubelet[2638]: E0128 01:01:55.681210 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:02:00.679918 kubelet[2638]: E0128 01:02:00.679762 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:02:17.996940 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:51036.service - OpenSSH per-connection server daemon (10.0.0.1:51036). Jan 28 01:02:18.200374 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 51036 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:02:18.204615 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:18.240304 systemd-logind[1463]: New session 10 of user core. Jan 28 01:02:18.258537 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 01:02:18.887941 sshd[4160]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:18.905600 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:51036.service: Deactivated successfully. Jan 28 01:02:18.921643 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:02:18.945339 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. 
Jan 28 01:02:18.954190 systemd-logind[1463]: Removed session 10. Jan 28 01:02:24.064741 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:52106.service - OpenSSH per-connection server daemon (10.0.0.1:52106). Jan 28 01:02:24.355586 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 52106 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:02:24.359983 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:24.396573 systemd-logind[1463]: New session 11 of user core. Jan 28 01:02:24.417953 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 01:02:25.030311 sshd[4176]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:25.046165 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:52106.service: Deactivated successfully. Jan 28 01:02:25.052971 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:02:25.060065 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:02:25.071347 systemd-logind[1463]: Removed session 11. Jan 28 01:02:30.111628 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:52112.service - OpenSSH per-connection server daemon (10.0.0.1:52112). Jan 28 01:02:30.344109 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 52112 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:02:30.360525 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:30.417966 systemd-logind[1463]: New session 12 of user core. Jan 28 01:02:30.450156 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:02:31.093213 sshd[4192]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:31.117642 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:52112.service: Deactivated successfully. Jan 28 01:02:31.150854 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:02:31.168317 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:02:31.181076 systemd-logind[1463]: Removed session 12. Jan 28 01:02:36.176609 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:40006.service - OpenSSH per-connection server daemon (10.0.0.1:40006). Jan 28 01:02:36.406897 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 40006 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:02:36.417267 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:36.451884 systemd-logind[1463]: New session 13 of user core. Jan 28 01:02:36.462352 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 01:02:36.836747 sshd[4208]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:36.855833 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:40006.service: Deactivated successfully. Jan 28 01:02:36.863060 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:02:36.883263 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:02:36.894216 systemd-logind[1463]: Removed session 13. Jan 28 01:02:39.681222 kubelet[2638]: E0128 01:02:39.679538 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:02:41.879598 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:40072.service - OpenSSH per-connection server daemon (10.0.0.1:40072). 
Jan 28 01:02:42.043314 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 40072 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:02:42.055688 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:42.083196 systemd-logind[1463]: New session 14 of user core. Jan 28 01:02:42.092772 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 01:02:42.505551 sshd[4224]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:42.516169 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:40072.service: Deactivated successfully. Jan 28 01:02:42.523746 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:02:42.541677 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:02:42.551607 systemd-logind[1463]: Removed session 14. Jan 28 01:02:42.700240 kubelet[2638]: E0128 01:02:42.693935 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:02:47.614738 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:35802.service - OpenSSH per-connection server daemon (10.0.0.1:35802). Jan 28 01:02:47.959163 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 35802 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:02:47.960872 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:48.005701 systemd-logind[1463]: New session 15 of user core. Jan 28 01:02:48.020505 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:02:48.499641 sshd[4243]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:48.526679 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:35802.service: Deactivated successfully. Jan 28 01:02:48.533911 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:02:48.548958 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:02:48.563594 systemd-logind[1463]: Removed session 15. Jan 28 01:02:53.853780 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:35672.service - OpenSSH per-connection server daemon (10.0.0.1:35672). Jan 28 01:02:54.425272 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 35672 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:02:54.446082 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:54.513338 systemd-logind[1463]: New session 16 of user core. Jan 28 01:02:54.529049 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:02:55.609955 sshd[4258]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:55.629780 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:35672.service: Deactivated successfully. Jan 28 01:02:55.646959 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:02:55.662286 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:02:55.682060 systemd-logind[1463]: Removed session 16. Jan 28 01:02:56.803684 kubelet[2638]: E0128 01:02:56.802676 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:03:00.658229 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:35690.service - OpenSSH per-connection server daemon (10.0.0.1:35690). 
Jan 28 01:03:00.881964 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 35690 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:00.893047 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:00.918893 systemd-logind[1463]: New session 17 of user core. Jan 28 01:03:00.939076 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 01:03:01.351837 sshd[4273]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:01.376952 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:35690.service: Deactivated successfully. Jan 28 01:03:01.388200 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:03:01.394758 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:03:01.399187 systemd-logind[1463]: Removed session 17. Jan 28 01:03:04.711158 kubelet[2638]: E0128 01:03:04.686019 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:03:04.735225 kubelet[2638]: E0128 01:03:04.715009 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:03:06.406990 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:42596.service - OpenSSH per-connection server daemon (10.0.0.1:42596). Jan 28 01:03:06.524746 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 42596 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:06.533133 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:06.564761 systemd-logind[1463]: New session 18 of user core. Jan 28 01:03:06.579333 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 01:03:07.173343 sshd[4289]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:07.195840 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:42596.service: Deactivated successfully. Jan 28 01:03:07.203266 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:03:07.208750 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:03:07.218798 systemd-logind[1463]: Removed session 18. Jan 28 01:03:11.692070 kubelet[2638]: E0128 01:03:11.692011 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:03:12.304178 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:42626.service - OpenSSH per-connection server daemon (10.0.0.1:42626). Jan 28 01:03:12.492541 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 42626 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:12.508737 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:12.581277 systemd-logind[1463]: New session 19 of user core. Jan 28 01:03:12.636133 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:03:13.322074 sshd[4306]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:13.339130 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:42626.service: Deactivated successfully. Jan 28 01:03:13.356916 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:03:13.374760 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit. 
Jan 28 01:03:13.380671 systemd-logind[1463]: Removed session 19. Jan 28 01:03:14.710715 kubelet[2638]: E0128 01:03:14.707041 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:03:18.411083 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:40030.service - OpenSSH per-connection server daemon (10.0.0.1:40030). Jan 28 01:03:18.729591 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 40030 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:18.767324 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:18.826662 systemd-logind[1463]: New session 20 of user core. Jan 28 01:03:18.883882 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:03:19.689813 sshd[4324]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:19.714678 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:40030.service: Deactivated successfully. Jan 28 01:03:19.725617 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:03:19.758631 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:03:19.786240 systemd-logind[1463]: Removed session 20. Jan 28 01:03:24.761302 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:52972.service - OpenSSH per-connection server daemon (10.0.0.1:52972). Jan 28 01:03:25.001611 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 52972 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:25.016035 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:25.081342 systemd-logind[1463]: New session 21 of user core. Jan 28 01:03:25.114708 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:03:25.703653 sshd[4339]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:25.723999 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:52972.service: Deactivated successfully. Jan 28 01:03:25.732348 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:03:25.764328 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:03:25.768198 systemd-logind[1463]: Removed session 21. Jan 28 01:03:29.684528 kubelet[2638]: E0128 01:03:29.680823 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:03:30.748482 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:52980.service - OpenSSH per-connection server daemon (10.0.0.1:52980). Jan 28 01:03:30.813729 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 52980 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:30.818675 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:30.832501 systemd-logind[1463]: New session 22 of user core. Jan 28 01:03:30.856862 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 01:03:32.794967 sshd[4355]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:32.807902 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:52980.service: Deactivated successfully. Jan 28 01:03:32.810671 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:03:32.811140 systemd[1]: session-22.scope: Consumed 1.507s CPU time. Jan 28 01:03:32.812916 systemd-logind[1463]: Session 22 logged out. 
Waiting for processes to exit. Jan 28 01:03:32.825894 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:54576.service - OpenSSH per-connection server daemon (10.0.0.1:54576). Jan 28 01:03:32.829598 systemd-logind[1463]: Removed session 22. Jan 28 01:03:32.908204 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 54576 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:32.911214 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:32.920972 systemd-logind[1463]: New session 23 of user core. Jan 28 01:03:32.935833 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 01:03:33.307651 sshd[4372]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:33.317246 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:54576.service: Deactivated successfully. Jan 28 01:03:33.321071 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:03:33.324486 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:03:33.337659 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:54582.service - OpenSSH per-connection server daemon (10.0.0.1:54582). Jan 28 01:03:33.340719 systemd-logind[1463]: Removed session 23. Jan 28 01:03:33.416510 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 54582 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:33.419672 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:33.435584 systemd-logind[1463]: New session 24 of user core. Jan 28 01:03:33.445757 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 01:03:33.657183 sshd[4386]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:33.666726 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:54582.service: Deactivated successfully. Jan 28 01:03:33.670812 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:03:33.672831 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:03:33.675335 systemd-logind[1463]: Removed session 24. Jan 28 01:03:38.751629 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:54596.service - OpenSSH per-connection server daemon (10.0.0.1:54596). Jan 28 01:03:38.839110 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 54596 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:38.853813 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:38.879010 systemd-logind[1463]: New session 25 of user core. Jan 28 01:03:38.896582 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 01:03:39.248006 sshd[4403]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:39.257573 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:54596.service: Deactivated successfully. Jan 28 01:03:39.266961 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:03:39.270341 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:03:39.280110 systemd-logind[1463]: Removed session 25. Jan 28 01:03:44.333716 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:51832.service - OpenSSH per-connection server daemon (10.0.0.1:51832). 
Jan 28 01:03:44.587996 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 51832 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:44.603115 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:44.627929 systemd-logind[1463]: New session 26 of user core. Jan 28 01:03:44.659223 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 01:03:45.374175 sshd[4417]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:45.393082 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:51832.service: Deactivated successfully. Jan 28 01:03:45.404316 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:03:45.412548 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:03:45.419737 systemd-logind[1463]: Removed session 26. Jan 28 01:03:47.682682 kubelet[2638]: E0128 01:03:47.680224 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:03:50.397573 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:51838.service - OpenSSH per-connection server daemon (10.0.0.1:51838). Jan 28 01:03:50.496613 sshd[4433]: Accepted publickey for core from 10.0.0.1 port 51838 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:50.500648 sshd[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:50.523808 systemd-logind[1463]: New session 27 of user core. Jan 28 01:03:50.542616 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 28 01:03:50.919594 sshd[4433]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:50.928885 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:51838.service: Deactivated successfully. Jan 28 01:03:50.947806 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 01:03:50.960549 systemd-logind[1463]: Session 27 logged out. Waiting for processes to exit. Jan 28 01:03:50.977274 systemd-logind[1463]: Removed session 27. Jan 28 01:03:55.959216 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:36682.service - OpenSSH per-connection server daemon (10.0.0.1:36682). Jan 28 01:03:56.073081 sshd[4448]: Accepted publickey for core from 10.0.0.1 port 36682 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:03:56.079809 sshd[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:03:56.093920 systemd-logind[1463]: New session 28 of user core. Jan 28 01:03:56.099146 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 28 01:03:56.471583 sshd[4448]: pam_unix(sshd:session): session closed for user core Jan 28 01:03:56.482706 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:36682.service: Deactivated successfully. Jan 28 01:03:56.487987 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 01:03:56.491650 systemd-logind[1463]: Session 28 logged out. Waiting for processes to exit. Jan 28 01:03:56.495764 systemd-logind[1463]: Removed session 28. Jan 28 01:04:01.538705 systemd[1]: Started sshd@28-10.0.0.13:22-10.0.0.1:36684.service - OpenSSH per-connection server daemon (10.0.0.1:36684). 
Jan 28 01:04:01.680299 kubelet[2638]: E0128 01:04:01.680243 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:04:01.681177 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 36684 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:01.684937 sshd[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:01.715865 systemd-logind[1463]: New session 29 of user core. Jan 28 01:04:01.729643 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 28 01:04:02.146193 sshd[4462]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:02.155625 systemd[1]: sshd@28-10.0.0.13:22-10.0.0.1:36684.service: Deactivated successfully. Jan 28 01:04:02.163674 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 01:04:02.204825 systemd-logind[1463]: Session 29 logged out. Waiting for processes to exit. Jan 28 01:04:02.209725 systemd-logind[1463]: Removed session 29. Jan 28 01:04:07.206674 systemd[1]: Started sshd@29-10.0.0.13:22-10.0.0.1:52344.service - OpenSSH per-connection server daemon (10.0.0.1:52344). Jan 28 01:04:07.397116 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 52344 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:07.402133 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:07.450802 systemd-logind[1463]: New session 30 of user core. Jan 28 01:04:07.482707 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 28 01:04:07.984298 sshd[4477]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:08.008644 systemd[1]: sshd@29-10.0.0.13:22-10.0.0.1:52344.service: Deactivated successfully. Jan 28 01:04:08.013705 systemd[1]: session-30.scope: Deactivated successfully. Jan 28 01:04:08.017545 systemd-logind[1463]: Session 30 logged out. Waiting for processes to exit. Jan 28 01:04:08.024622 systemd-logind[1463]: Removed session 30. Jan 28 01:04:13.025227 systemd[1]: Started sshd@30-10.0.0.13:22-10.0.0.1:44696.service - OpenSSH per-connection server daemon (10.0.0.1:44696). Jan 28 01:04:13.169874 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 44696 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:13.183274 sshd[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:13.227705 systemd-logind[1463]: New session 31 of user core. Jan 28 01:04:13.245308 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 28 01:04:13.638246 sshd[4493]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:13.663636 systemd[1]: sshd@30-10.0.0.13:22-10.0.0.1:44696.service: Deactivated successfully. Jan 28 01:04:13.681758 systemd[1]: session-31.scope: Deactivated successfully. Jan 28 01:04:13.692988 systemd-logind[1463]: Session 31 logged out. Waiting for processes to exit. Jan 28 01:04:13.697960 systemd-logind[1463]: Removed session 31. Jan 28 01:04:18.676808 systemd[1]: Started sshd@31-10.0.0.13:22-10.0.0.1:44704.service - OpenSSH per-connection server daemon (10.0.0.1:44704). 
Jan 28 01:04:18.681543 kubelet[2638]: E0128 01:04:18.679989 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:04:18.796686 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 44704 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:18.802052 sshd[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:18.832662 systemd-logind[1463]: New session 32 of user core. Jan 28 01:04:18.847133 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 28 01:04:19.130802 sshd[4509]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:19.141882 systemd[1]: sshd@31-10.0.0.13:22-10.0.0.1:44704.service: Deactivated successfully. Jan 28 01:04:19.145089 systemd[1]: session-32.scope: Deactivated successfully. Jan 28 01:04:19.149827 systemd-logind[1463]: Session 32 logged out. Waiting for processes to exit. Jan 28 01:04:19.156079 systemd-logind[1463]: Removed session 32. Jan 28 01:04:23.685136 kubelet[2638]: E0128 01:04:23.684815 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:04:24.253626 systemd[1]: Started sshd@32-10.0.0.13:22-10.0.0.1:52974.service - OpenSSH per-connection server daemon (10.0.0.1:52974). Jan 28 01:04:24.423870 sshd[4524]: Accepted publickey for core from 10.0.0.1 port 52974 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:24.430187 sshd[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:24.499167 systemd-logind[1463]: New session 33 of user core. Jan 28 01:04:24.557197 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 28 01:04:24.699003 kubelet[2638]: E0128 01:04:24.698963 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:04:25.168564 sshd[4524]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:25.199021 systemd[1]: sshd@32-10.0.0.13:22-10.0.0.1:52974.service: Deactivated successfully. Jan 28 01:04:25.223648 systemd[1]: session-33.scope: Deactivated successfully. Jan 28 01:04:25.225257 systemd-logind[1463]: Session 33 logged out. Waiting for processes to exit. Jan 28 01:04:25.228594 systemd-logind[1463]: Removed session 33. Jan 28 01:04:25.690905 kubelet[2638]: E0128 01:04:25.688933 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:04:30.267029 systemd[1]: Started sshd@33-10.0.0.13:22-10.0.0.1:52982.service - OpenSSH per-connection server daemon (10.0.0.1:52982). Jan 28 01:04:30.427178 sshd[4540]: Accepted publickey for core from 10.0.0.1 port 52982 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:30.431326 sshd[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:30.463547 systemd-logind[1463]: New session 34 of user core. Jan 28 01:04:30.490559 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 28 01:04:30.698975 kubelet[2638]: E0128 01:04:30.695033 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:04:31.116797 sshd[4540]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:31.129734 systemd[1]: sshd@33-10.0.0.13:22-10.0.0.1:52982.service: Deactivated successfully. Jan 28 01:04:31.155608 systemd[1]: session-34.scope: Deactivated successfully. Jan 28 01:04:31.167675 systemd-logind[1463]: Session 34 logged out. Waiting for processes to exit. Jan 28 01:04:31.177828 systemd-logind[1463]: Removed session 34. Jan 28 01:04:36.248675 systemd[1]: Started sshd@34-10.0.0.13:22-10.0.0.1:39166.service - OpenSSH per-connection server daemon (10.0.0.1:39166). Jan 28 01:04:36.484346 sshd[4555]: Accepted publickey for core from 10.0.0.1 port 39166 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:36.500870 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:36.577721 systemd-logind[1463]: New session 35 of user core. Jan 28 01:04:36.601290 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 28 01:04:37.156307 sshd[4555]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:37.169310 systemd[1]: sshd@34-10.0.0.13:22-10.0.0.1:39166.service: Deactivated successfully. Jan 28 01:04:37.174695 systemd[1]: session-35.scope: Deactivated successfully. Jan 28 01:04:37.184303 systemd-logind[1463]: Session 35 logged out. Waiting for processes to exit. Jan 28 01:04:37.188641 systemd-logind[1463]: Removed session 35. Jan 28 01:04:42.302499 systemd[1]: Started sshd@35-10.0.0.13:22-10.0.0.1:39176.service - OpenSSH per-connection server daemon (10.0.0.1:39176). Jan 28 01:04:42.479938 sshd[4570]: Accepted publickey for core from 10.0.0.1 port 39176 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:42.495045 sshd[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:42.600869 systemd-logind[1463]: New session 36 of user core. Jan 28 01:04:42.655339 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 28 01:04:43.715134 sshd[4570]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:43.749669 systemd[1]: sshd@35-10.0.0.13:22-10.0.0.1:39176.service: Deactivated successfully. Jan 28 01:04:43.767121 systemd[1]: session-36.scope: Deactivated successfully. Jan 28 01:04:43.775502 systemd-logind[1463]: Session 36 logged out. Waiting for processes to exit. Jan 28 01:04:43.788623 systemd-logind[1463]: Removed session 36. Jan 28 01:04:48.795923 systemd[1]: Started sshd@36-10.0.0.13:22-10.0.0.1:40128.service - OpenSSH per-connection server daemon (10.0.0.1:40128). Jan 28 01:04:48.991042 sshd[4590]: Accepted publickey for core from 10.0.0.1 port 40128 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:49.002823 sshd[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:49.034343 systemd-logind[1463]: New session 37 of user core. Jan 28 01:04:49.056915 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 28 01:04:49.744847 sshd[4590]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:49.792221 systemd[1]: sshd@36-10.0.0.13:22-10.0.0.1:40128.service: Deactivated successfully. Jan 28 01:04:49.818117 systemd[1]: session-37.scope: Deactivated successfully. 
Jan 28 01:04:49.825974 systemd-logind[1463]: Session 37 logged out. Waiting for processes to exit. Jan 28 01:04:49.862994 systemd-logind[1463]: Removed session 37. Jan 28 01:04:54.848764 systemd[1]: Started sshd@37-10.0.0.13:22-10.0.0.1:44506.service - OpenSSH per-connection server daemon (10.0.0.1:44506). Jan 28 01:04:55.117994 sshd[4604]: Accepted publickey for core from 10.0.0.1 port 44506 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:55.118915 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:55.167875 systemd-logind[1463]: New session 38 of user core. Jan 28 01:04:55.205056 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 28 01:04:56.171241 sshd[4604]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:56.208186 systemd[1]: sshd@37-10.0.0.13:22-10.0.0.1:44506.service: Deactivated successfully. Jan 28 01:04:56.216680 systemd[1]: session-38.scope: Deactivated successfully. Jan 28 01:04:56.224229 systemd-logind[1463]: Session 38 logged out. Waiting for processes to exit. Jan 28 01:04:56.295657 systemd[1]: Started sshd@38-10.0.0.13:22-10.0.0.1:44508.service - OpenSSH per-connection server daemon (10.0.0.1:44508). Jan 28 01:04:56.312231 systemd-logind[1463]: Removed session 38. Jan 28 01:04:56.489969 sshd[4618]: Accepted publickey for core from 10.0.0.1 port 44508 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:56.498723 sshd[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:56.544182 systemd-logind[1463]: New session 39 of user core. Jan 28 01:04:56.565078 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 28 01:04:58.790880 sshd[4618]: pam_unix(sshd:session): session closed for user core Jan 28 01:04:58.814902 systemd[1]: sshd@38-10.0.0.13:22-10.0.0.1:44508.service: Deactivated successfully. Jan 28 01:04:58.822713 systemd[1]: session-39.scope: Deactivated successfully. Jan 28 01:04:58.830544 systemd-logind[1463]: Session 39 logged out. Waiting for processes to exit. Jan 28 01:04:58.858575 systemd[1]: Started sshd@39-10.0.0.13:22-10.0.0.1:44520.service - OpenSSH per-connection server daemon (10.0.0.1:44520). Jan 28 01:04:58.869162 systemd-logind[1463]: Removed session 39. Jan 28 01:04:58.995568 sshd[4631]: Accepted publickey for core from 10.0.0.1 port 44520 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:04:58.999212 sshd[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:04:59.014694 systemd-logind[1463]: New session 40 of user core. Jan 28 01:04:59.024595 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 28 01:04:59.680944 kubelet[2638]: E0128 01:04:59.680726 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:05:02.481867 sshd[4631]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:02.530333 systemd[1]: sshd@39-10.0.0.13:22-10.0.0.1:44520.service: Deactivated successfully. Jan 28 01:05:02.652183 systemd[1]: session-40.scope: Deactivated successfully. Jan 28 01:05:02.652967 systemd[1]: session-40.scope: Consumed 1.829s CPU time. Jan 28 01:05:02.657912 systemd-logind[1463]: Session 40 logged out. Waiting for processes to exit. 
Jan 28 01:05:02.687175 systemd[1]: Started sshd@40-10.0.0.13:22-10.0.0.1:47120.service - OpenSSH per-connection server daemon (10.0.0.1:47120). Jan 28 01:05:02.690886 systemd-logind[1463]: Removed session 40. Jan 28 01:05:02.809270 sshd[4658]: Accepted publickey for core from 10.0.0.1 port 47120 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:05:02.819844 sshd[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:05:02.855793 systemd-logind[1463]: New session 41 of user core. Jan 28 01:05:02.874017 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 28 01:05:03.683129 kubelet[2638]: E0128 01:05:03.681909 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:05:04.969631 sshd[4658]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:05.044134 systemd[1]: sshd@40-10.0.0.13:22-10.0.0.1:47120.service: Deactivated successfully. Jan 28 01:05:05.060633 systemd[1]: session-41.scope: Deactivated successfully. Jan 28 01:05:05.063696 systemd[1]: session-41.scope: Consumed 1.058s CPU time. Jan 28 01:05:05.076545 systemd-logind[1463]: Session 41 logged out. Waiting for processes to exit. Jan 28 01:05:05.090715 systemd[1]: Started sshd@41-10.0.0.13:22-10.0.0.1:47134.service - OpenSSH per-connection server daemon (10.0.0.1:47134). Jan 28 01:05:05.107233 systemd-logind[1463]: Removed session 41. Jan 28 01:05:05.343740 sshd[4671]: Accepted publickey for core from 10.0.0.1 port 47134 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:05:05.350292 sshd[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:05:05.382827 systemd-logind[1463]: New session 42 of user core. Jan 28 01:05:05.408754 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 28 01:05:06.269895 sshd[4671]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:06.532060 systemd[1]: sshd@41-10.0.0.13:22-10.0.0.1:47134.service: Deactivated successfully. Jan 28 01:05:06.594773 systemd[1]: session-42.scope: Deactivated successfully. Jan 28 01:05:06.623621 systemd-logind[1463]: Session 42 logged out. Waiting for processes to exit. Jan 28 01:05:06.655050 systemd-logind[1463]: Removed session 42. Jan 28 01:05:08.689544 kubelet[2638]: E0128 01:05:08.689097 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:05:11.428241 systemd[1]: Started sshd@42-10.0.0.13:22-10.0.0.1:47138.service - OpenSSH per-connection server daemon (10.0.0.1:47138). Jan 28 01:05:11.774018 sshd[4691]: Accepted publickey for core from 10.0.0.1 port 47138 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:05:11.803032 sshd[4691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:05:11.916113 systemd-logind[1463]: New session 43 of user core. Jan 28 01:05:11.961689 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 28 01:05:13.011326 sshd[4691]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:13.041283 systemd[1]: sshd@42-10.0.0.13:22-10.0.0.1:47138.service: Deactivated successfully. Jan 28 01:05:13.063286 systemd[1]: session-43.scope: Deactivated successfully. Jan 28 01:05:13.074938 systemd-logind[1463]: Session 43 logged out. 
Waiting for processes to exit. Jan 28 01:05:13.085259 systemd-logind[1463]: Removed session 43. Jan 28 01:05:18.130265 systemd[1]: Started sshd@43-10.0.0.13:22-10.0.0.1:32982.service - OpenSSH per-connection server daemon (10.0.0.1:32982). Jan 28 01:05:18.438309 sshd[4708]: Accepted publickey for core from 10.0.0.1 port 32982 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:05:18.467689 sshd[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:05:18.614644 systemd-logind[1463]: New session 44 of user core. Jan 28 01:05:18.689290 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 28 01:05:19.319670 sshd[4708]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:19.347142 systemd[1]: sshd@43-10.0.0.13:22-10.0.0.1:32982.service: Deactivated successfully. Jan 28 01:05:19.373786 systemd[1]: session-44.scope: Deactivated successfully. Jan 28 01:05:19.383890 systemd-logind[1463]: Session 44 logged out. Waiting for processes to exit. Jan 28 01:05:19.396818 systemd-logind[1463]: Removed session 44. Jan 28 01:05:24.594214 systemd[1]: Started sshd@44-10.0.0.13:22-10.0.0.1:53650.service - OpenSSH per-connection server daemon (10.0.0.1:53650). Jan 28 01:05:24.792965 sshd[4723]: Accepted publickey for core from 10.0.0.1 port 53650 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:05:24.794262 sshd[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:05:24.841698 systemd-logind[1463]: New session 45 of user core. Jan 28 01:05:24.876648 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 28 01:05:25.260827 sshd[4723]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:25.284791 systemd-logind[1463]: Session 45 logged out. Waiting for processes to exit. Jan 28 01:05:25.291634 systemd[1]: sshd@44-10.0.0.13:22-10.0.0.1:53650.service: Deactivated successfully. Jan 28 01:05:25.302820 systemd[1]: session-45.scope: Deactivated successfully. Jan 28 01:05:25.319940 systemd-logind[1463]: Removed session 45. Jan 28 01:05:25.683079 kubelet[2638]: E0128 01:05:25.682245 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:05:29.775000 kubelet[2638]: E0128 01:05:29.774344 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:05:31.209602 systemd[1]: Started sshd@45-10.0.0.13:22-10.0.0.1:53662.service - OpenSSH per-connection server daemon (10.0.0.1:53662). Jan 28 01:05:31.395897 sshd[4738]: Accepted publickey for core from 10.0.0.1 port 53662 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:05:31.406005 sshd[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:05:31.479577 systemd-logind[1463]: New session 46 of user core. Jan 28 01:05:31.494668 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 28 01:05:32.024008 sshd[4738]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:32.064903 systemd[1]: sshd@45-10.0.0.13:22-10.0.0.1:53662.service: Deactivated successfully. Jan 28 01:05:32.086888 systemd[1]: session-46.scope: Deactivated successfully. Jan 28 01:05:32.099603 systemd-logind[1463]: Session 46 logged out. Waiting for processes to exit. 
Jan 28 01:05:32.102982 systemd-logind[1463]: Removed session 46. Jan 28 01:05:37.166312 systemd[1]: Started sshd@46-10.0.0.13:22-10.0.0.1:43020.service - OpenSSH per-connection server daemon (10.0.0.1:43020). Jan 28 01:05:37.462155 sshd[4752]: Accepted publickey for core from 10.0.0.1 port 43020 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:05:37.470866 sshd[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:05:37.505783 systemd-logind[1463]: New session 47 of user core. Jan 28 01:05:37.523530 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 28 01:05:38.211303 sshd[4752]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:38.225160 systemd[1]: sshd@46-10.0.0.13:22-10.0.0.1:43020.service: Deactivated successfully. Jan 28 01:05:38.238700 systemd[1]: session-47.scope: Deactivated successfully. Jan 28 01:05:38.252523 systemd-logind[1463]: Session 47 logged out. Waiting for processes to exit. Jan 28 01:05:38.256118 systemd-logind[1463]: Removed session 47. Jan 28 01:05:42.687528 kubelet[2638]: E0128 01:05:42.682791 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:05:42.687528 kubelet[2638]: E0128 01:05:42.684141 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:05:43.517183 systemd[1]: Started sshd@47-10.0.0.13:22-10.0.0.1:58046.service - OpenSSH per-connection server daemon (10.0.0.1:58046). Jan 28 01:05:43.687685 sshd[4767]: Accepted publickey for core from 10.0.0.1 port 58046 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:05:43.704530 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:05:43.745228 systemd-logind[1463]: New session 48 of user core. Jan 28 01:05:43.760680 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 28 01:05:44.531302 sshd[4767]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:44.553120 systemd-logind[1463]: Session 48 logged out. Waiting for processes to exit. Jan 28 01:05:44.553983 systemd[1]: sshd@47-10.0.0.13:22-10.0.0.1:58046.service: Deactivated successfully. Jan 28 01:05:44.577720 systemd[1]: session-48.scope: Deactivated successfully. Jan 28 01:05:44.580894 systemd-logind[1463]: Removed session 48. Jan 28 01:05:44.691236 kubelet[2638]: E0128 01:05:44.688189 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:05:49.618518 systemd[1]: Started sshd@48-10.0.0.13:22-10.0.0.1:58060.service - OpenSSH per-connection server daemon (10.0.0.1:58060). Jan 28 01:05:49.848830 sshd[4785]: Accepted publickey for core from 10.0.0.1 port 58060 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:05:49.855920 sshd[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:05:49.907889 systemd-logind[1463]: New session 49 of user core. Jan 28 01:05:49.923181 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 28 01:05:50.538068 sshd[4785]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:50.586190 systemd[1]: sshd@48-10.0.0.13:22-10.0.0.1:58060.service: Deactivated successfully. 
Jan 28 01:05:50.640071 systemd[1]: session-49.scope: Deactivated successfully. Jan 28 01:05:50.665595 systemd-logind[1463]: Session 49 logged out. Waiting for processes to exit. Jan 28 01:05:50.739502 systemd-logind[1463]: Removed session 49. Jan 28 01:05:56.103518 systemd[1]: Started sshd@49-10.0.0.13:22-10.0.0.1:46054.service - OpenSSH per-connection server daemon (10.0.0.1:46054). Jan 28 01:05:56.456186 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 46054 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:05:56.481094 sshd[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:05:56.525979 systemd-logind[1463]: New session 50 of user core. Jan 28 01:05:56.543236 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 28 01:05:57.596193 sshd[4800]: pam_unix(sshd:session): session closed for user core Jan 28 01:05:57.664909 systemd[1]: sshd@49-10.0.0.13:22-10.0.0.1:46054.service: Deactivated successfully. Jan 28 01:05:57.717769 systemd[1]: session-50.scope: Deactivated successfully. Jan 28 01:05:57.739126 systemd-logind[1463]: Session 50 logged out. Waiting for processes to exit. Jan 28 01:05:57.754807 systemd-logind[1463]: Removed session 50. Jan 28 01:06:02.690270 systemd[1]: Started sshd@50-10.0.0.13:22-10.0.0.1:51882.service - OpenSSH per-connection server daemon (10.0.0.1:51882). Jan 28 01:06:02.958135 sshd[4815]: Accepted publickey for core from 10.0.0.1 port 51882 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:06:02.980670 sshd[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:06:03.089002 systemd-logind[1463]: New session 51 of user core. Jan 28 01:06:03.133949 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 28 01:06:03.776008 sshd[4815]: pam_unix(sshd:session): session closed for user core Jan 28 01:06:03.792015 systemd-logind[1463]: Session 51 logged out. Waiting for processes to exit. Jan 28 01:06:03.800866 systemd[1]: sshd@50-10.0.0.13:22-10.0.0.1:51882.service: Deactivated successfully. Jan 28 01:06:03.821250 systemd[1]: session-51.scope: Deactivated successfully. Jan 28 01:06:03.829264 systemd-logind[1463]: Removed session 51. Jan 28 01:06:08.837981 systemd[1]: Started sshd@51-10.0.0.13:22-10.0.0.1:51924.service - OpenSSH per-connection server daemon (10.0.0.1:51924). Jan 28 01:06:09.009935 sshd[4830]: Accepted publickey for core from 10.0.0.1 port 51924 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:06:09.017232 sshd[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:06:09.068951 systemd-logind[1463]: New session 52 of user core. Jan 28 01:06:09.079052 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 28 01:06:09.776132 sshd[4830]: pam_unix(sshd:session): session closed for user core Jan 28 01:06:09.854966 systemd[1]: sshd@51-10.0.0.13:22-10.0.0.1:51924.service: Deactivated successfully. Jan 28 01:06:09.868973 systemd[1]: session-52.scope: Deactivated successfully. Jan 28 01:06:09.885754 systemd-logind[1463]: Session 52 logged out. Waiting for processes to exit. Jan 28 01:06:09.983048 systemd[1]: Started sshd@52-10.0.0.13:22-10.0.0.1:51930.service - OpenSSH per-connection server daemon (10.0.0.1:51930). Jan 28 01:06:09.986717 systemd-logind[1463]: Removed session 52. 
Jan 28 01:06:10.176709 sshd[4844]: Accepted publickey for core from 10.0.0.1 port 51930 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:06:10.200185 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:06:10.248605 systemd-logind[1463]: New session 53 of user core. Jan 28 01:06:10.286105 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 28 01:06:13.863932 containerd[1483]: time="2026-01-28T01:06:13.863749640Z" level=info msg="StopContainer for \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\" with timeout 30 (s)" Jan 28 01:06:13.895475 containerd[1483]: time="2026-01-28T01:06:13.884877945Z" level=info msg="Stop container \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\" with signal terminated" Jan 28 01:06:14.313660 systemd[1]: cri-containerd-c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637.scope: Deactivated successfully. Jan 28 01:06:14.382537 systemd[1]: cri-containerd-c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637.scope: Consumed 8.051s CPU time. Jan 28 01:06:14.573066 containerd[1483]: time="2026-01-28T01:06:14.572848056Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:06:14.587570 containerd[1483]: time="2026-01-28T01:06:14.584179819Z" level=info msg="StopContainer for \"b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28\" with timeout 2 (s)" Jan 28 01:06:14.598602 containerd[1483]: time="2026-01-28T01:06:14.595941485Z" level=info msg="Stop container \"b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28\" with signal terminated" Jan 28 01:06:14.706599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637-rootfs.mount: Deactivated successfully. Jan 28 01:06:14.758538 systemd-networkd[1407]: lxc_health: Link DOWN Jan 28 01:06:14.758550 systemd-networkd[1407]: lxc_health: Lost carrier Jan 28 01:06:14.776615 kubelet[2638]: E0128 01:06:14.775653 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:06:14.926977 containerd[1483]: time="2026-01-28T01:06:14.912628620Z" level=info msg="shim disconnected" id=c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637 namespace=k8s.io Jan 28 01:06:14.926977 containerd[1483]: time="2026-01-28T01:06:14.912860288Z" level=warning msg="cleaning up after shim disconnected" id=c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637 namespace=k8s.io Jan 28 01:06:14.926977 containerd[1483]: time="2026-01-28T01:06:14.912875186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:06:15.005853 systemd[1]: cri-containerd-b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28.scope: Deactivated successfully. Jan 28 01:06:15.025487 systemd[1]: cri-containerd-b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28.scope: Consumed 36.879s CPU time. Jan 28 01:06:15.079752 sshd[4844]: pam_unix(sshd:session): session closed for user core Jan 28 01:06:15.181086 systemd[1]: sshd@52-10.0.0.13:22-10.0.0.1:51930.service: Deactivated successfully. 
Jan 28 01:06:15.203130 systemd[1]: session-53.scope: Deactivated successfully. Jan 28 01:06:15.204075 systemd[1]: session-53.scope: Consumed 1.606s CPU time. Jan 28 01:06:15.220350 systemd-logind[1463]: Session 53 logged out. Waiting for processes to exit. Jan 28 01:06:15.294682 systemd[1]: Started sshd@53-10.0.0.13:22-10.0.0.1:50376.service - OpenSSH per-connection server daemon (10.0.0.1:50376). Jan 28 01:06:15.307539 systemd-logind[1463]: Removed session 53. Jan 28 01:06:15.496665 containerd[1483]: time="2026-01-28T01:06:15.479632107Z" level=info msg="StopContainer for \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\" returns successfully" Jan 28 01:06:15.582843 containerd[1483]: time="2026-01-28T01:06:15.578882149Z" level=info msg="StopPodSandbox for \"937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274\"" Jan 28 01:06:15.644100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28-rootfs.mount: Deactivated successfully. Jan 28 01:06:15.689174 containerd[1483]: time="2026-01-28T01:06:15.663608141Z" level=info msg="Container to stop \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:06:15.691116 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274-shm.mount: Deactivated successfully. Jan 28 01:06:15.765792 containerd[1483]: time="2026-01-28T01:06:15.761622797Z" level=info msg="shim disconnected" id=b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28 namespace=k8s.io Jan 28 01:06:15.766008 containerd[1483]: time="2026-01-28T01:06:15.765970628Z" level=warning msg="cleaning up after shim disconnected" id=b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28 namespace=k8s.io Jan 28 01:06:15.766105 containerd[1483]: time="2026-01-28T01:06:15.766085299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:06:15.867943 systemd[1]: cri-containerd-937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274.scope: Deactivated successfully. 
Jan 28 01:06:15.894888 containerd[1483]: time="2026-01-28T01:06:15.894489931Z" level=info msg="StopContainer for \"b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28\" returns successfully" Jan 28 01:06:15.898042 containerd[1483]: time="2026-01-28T01:06:15.898004476Z" level=info msg="StopPodSandbox for \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\"" Jan 28 01:06:15.921429 containerd[1483]: time="2026-01-28T01:06:15.920883552Z" level=info msg="Container to stop \"57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:06:15.921429 containerd[1483]: time="2026-01-28T01:06:15.920937973Z" level=info msg="Container to stop \"fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:06:15.921429 containerd[1483]: time="2026-01-28T01:06:15.920957089Z" level=info msg="Container to stop \"b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:06:15.921429 containerd[1483]: time="2026-01-28T01:06:15.920970894Z" level=info msg="Container to stop \"b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:06:15.921429 containerd[1483]: time="2026-01-28T01:06:15.920984780Z" level=info msg="Container to stop \"917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 28 01:06:15.936983 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c-shm.mount: Deactivated successfully. Jan 28 01:06:15.986704 sshd[4923]: Accepted publickey for core from 10.0.0.1 port 50376 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:06:16.003907 sshd[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:06:16.008611 systemd[1]: cri-containerd-46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c.scope: Deactivated successfully. Jan 28 01:06:16.034894 systemd-logind[1463]: New session 54 of user core. Jan 28 01:06:16.061775 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 28 01:06:16.081592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274-rootfs.mount: Deactivated successfully. Jan 28 01:06:16.163684 containerd[1483]: time="2026-01-28T01:06:16.162796563Z" level=info msg="shim disconnected" id=937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274 namespace=k8s.io Jan 28 01:06:16.163684 containerd[1483]: time="2026-01-28T01:06:16.162881129Z" level=warning msg="cleaning up after shim disconnected" id=937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274 namespace=k8s.io Jan 28 01:06:16.163684 containerd[1483]: time="2026-01-28T01:06:16.162896157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:06:16.179953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c-rootfs.mount: Deactivated successfully. 
Jan 28 01:06:16.225996 containerd[1483]: time="2026-01-28T01:06:16.225924678Z" level=info msg="shim disconnected" id=46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c namespace=k8s.io Jan 28 01:06:16.226866 containerd[1483]: time="2026-01-28T01:06:16.226834121Z" level=warning msg="cleaning up after shim disconnected" id=46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c namespace=k8s.io Jan 28 01:06:16.226977 containerd[1483]: time="2026-01-28T01:06:16.226956646Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:06:16.331053 containerd[1483]: time="2026-01-28T01:06:16.329985095Z" level=info msg="TearDown network for sandbox \"937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274\" successfully" Jan 28 01:06:16.331053 containerd[1483]: time="2026-01-28T01:06:16.330028706Z" level=info msg="StopPodSandbox for \"937d7df1e50df72610ea7846e42c2f9305e1751336fb83624db2185de2519274\" returns successfully" Jan 28 01:06:16.456081 containerd[1483]: time="2026-01-28T01:06:16.456021909Z" level=info msg="TearDown network for sandbox \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" successfully" Jan 28 01:06:16.461189 kubelet[2638]: I0128 01:06:16.460787 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpcz6\" (UniqueName: \"kubernetes.io/projected/36cf2c6c-70d9-4912-aeb5-3a9679d20de3-kube-api-access-rpcz6\") pod \"36cf2c6c-70d9-4912-aeb5-3a9679d20de3\" (UID: \"36cf2c6c-70d9-4912-aeb5-3a9679d20de3\") " Jan 28 01:06:16.461988 containerd[1483]: time="2026-01-28T01:06:16.457831746Z" level=info msg="StopPodSandbox for \"46b5ed9b7fc10acc71a3debd1ff13c9def72179a9f0f02c755ba4dbff708bf6c\" returns successfully" Jan 28 01:06:16.482842 kubelet[2638]: I0128 01:06:16.482807 2638 scope.go:117] "RemoveContainer" containerID="c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637" Jan 28 01:06:16.489775 kubelet[2638]: I0128 01:06:16.489683 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36cf2c6c-70d9-4912-aeb5-3a9679d20de3-cilium-config-path\") pod \"36cf2c6c-70d9-4912-aeb5-3a9679d20de3\" (UID: \"36cf2c6c-70d9-4912-aeb5-3a9679d20de3\") " Jan 28 01:06:16.504328 kubelet[2638]: I0128 01:06:16.504128 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36cf2c6c-70d9-4912-aeb5-3a9679d20de3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "36cf2c6c-70d9-4912-aeb5-3a9679d20de3" (UID: "36cf2c6c-70d9-4912-aeb5-3a9679d20de3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:06:16.514717 kubelet[2638]: I0128 01:06:16.511654 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36cf2c6c-70d9-4912-aeb5-3a9679d20de3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.545101 containerd[1483]: time="2026-01-28T01:06:16.535900552Z" level=info msg="RemoveContainer for \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\"" Jan 28 01:06:16.595799 kubelet[2638]: I0128 01:06:16.595010 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cf2c6c-70d9-4912-aeb5-3a9679d20de3-kube-api-access-rpcz6" (OuterVolumeSpecName: "kube-api-access-rpcz6") pod "36cf2c6c-70d9-4912-aeb5-3a9679d20de3" (UID: "36cf2c6c-70d9-4912-aeb5-3a9679d20de3"). 
InnerVolumeSpecName "kube-api-access-rpcz6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:06:16.619316 kubelet[2638]: I0128 01:06:16.618343 2638 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rpcz6\" (UniqueName: \"kubernetes.io/projected/36cf2c6c-70d9-4912-aeb5-3a9679d20de3-kube-api-access-rpcz6\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.621872 containerd[1483]: time="2026-01-28T01:06:16.621834742Z" level=info msg="RemoveContainer for \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\" returns successfully" Jan 28 01:06:16.622895 kubelet[2638]: I0128 01:06:16.622175 2638 scope.go:117] "RemoveContainer" containerID="c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637" Jan 28 01:06:16.624146 systemd[1]: var-lib-kubelet-pods-36cf2c6c\x2d70d9\x2d4912\x2daeb5\x2d3a9679d20de3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drpcz6.mount: Deactivated successfully. Jan 28 01:06:16.647111 containerd[1483]: time="2026-01-28T01:06:16.626569540Z" level=error msg="ContainerStatus for \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\": not found" Jan 28 01:06:16.653626 kubelet[2638]: E0128 01:06:16.652959 2638 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\": not found" containerID="c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637" Jan 28 01:06:16.653626 kubelet[2638]: I0128 01:06:16.653086 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637"} err="failed to get container status \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0f313c5471a341c67c1ab93fa44b91fd98dde5297cdaec792b7e1d7a46cd637\": not found" Jan 28 01:06:16.722968 kubelet[2638]: I0128 01:06:16.722925 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-etc-cni-netd\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.723193 kubelet[2638]: I0128 01:06:16.723171 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cni-path\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.723581 kubelet[2638]: I0128 01:06:16.723560 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-run\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.744175 kubelet[2638]: I0128 01:06:16.744121 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: 
"0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:06:16.759174 kubelet[2638]: I0128 01:06:16.745110 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:06:16.759174 kubelet[2638]: I0128 01:06:16.746770 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:06:16.759174 kubelet[2638]: I0128 01:06:16.752077 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cni-path" (OuterVolumeSpecName: "cni-path") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:06:16.759174 kubelet[2638]: I0128 01:06:16.747162 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-host-proc-sys-kernel\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.759174 kubelet[2638]: I0128 01:06:16.758601 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-hostproc\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.759919 kubelet[2638]: I0128 01:06:16.758631 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-bpf-maps\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.759919 kubelet[2638]: I0128 01:06:16.758653 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-cgroup\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.759919 kubelet[2638]: I0128 01:06:16.758680 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-config-path\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.759919 kubelet[2638]: I0128 01:06:16.758706 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-host-proc-sys-net\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 
01:06:16.759919 kubelet[2638]: I0128 01:06:16.758735 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a8faae1-0c6d-49da-9e35-1289786290f3-clustermesh-secrets\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.764508 kubelet[2638]: I0128 01:06:16.761528 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:06:16.764626 kubelet[2638]: I0128 01:06:16.762316 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:06:16.764709 kubelet[2638]: I0128 01:06:16.762286 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-xtables-lock\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.764814 kubelet[2638]: I0128 01:06:16.764795 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r75h\" (UniqueName: \"kubernetes.io/projected/0a8faae1-0c6d-49da-9e35-1289786290f3-kube-api-access-4r75h\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.764904 kubelet[2638]: I0128 01:06:16.764887 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-lib-modules\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.765019 kubelet[2638]: I0128 01:06:16.764998 2638 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a8faae1-0c6d-49da-9e35-1289786290f3-hubble-tls\") pod \"0a8faae1-0c6d-49da-9e35-1289786290f3\" (UID: \"0a8faae1-0c6d-49da-9e35-1289786290f3\") " Jan 28 01:06:16.765167 kubelet[2638]: I0128 01:06:16.765147 2638 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.770577 kubelet[2638]: I0128 01:06:16.765569 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:06:16.770577 kubelet[2638]: I0128 01:06:16.765612 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-hostproc" (OuterVolumeSpecName: "hostproc") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:06:16.770577 kubelet[2638]: I0128 01:06:16.765644 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:06:16.770577 kubelet[2638]: I0128 01:06:16.766101 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.770577 kubelet[2638]: I0128 01:06:16.766130 2638 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.770577 kubelet[2638]: I0128 01:06:16.766144 2638 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.770823 kubelet[2638]: I0128 01:06:16.766158 2638 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.770823 kubelet[2638]: I0128 01:06:16.766173 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.774730 kubelet[2638]: I0128 01:06:16.774697 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:06:16.775541 kubelet[2638]: I0128 01:06:16.775343 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 28 01:06:16.782200 systemd[1]: Removed slice kubepods-besteffort-pod36cf2c6c_70d9_4912_aeb5_3a9679d20de3.slice - libcontainer container kubepods-besteffort-pod36cf2c6c_70d9_4912_aeb5_3a9679d20de3.slice. Jan 28 01:06:16.782564 systemd[1]: kubepods-besteffort-pod36cf2c6c_70d9_4912_aeb5_3a9679d20de3.slice: Consumed 8.136s CPU time. 
Jan 28 01:06:16.812965 systemd[1]: var-lib-kubelet-pods-0a8faae1\x2d0c6d\x2d49da\x2d9e35\x2d1289786290f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4r75h.mount: Deactivated successfully. Jan 28 01:06:16.832809 kubelet[2638]: I0128 01:06:16.832761 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8faae1-0c6d-49da-9e35-1289786290f3-kube-api-access-4r75h" (OuterVolumeSpecName: "kube-api-access-4r75h") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "kube-api-access-4r75h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:06:16.852317 systemd[1]: var-lib-kubelet-pods-0a8faae1\x2d0c6d\x2d49da\x2d9e35\x2d1289786290f3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 28 01:06:16.852647 systemd[1]: var-lib-kubelet-pods-0a8faae1\x2d0c6d\x2d49da\x2d9e35\x2d1289786290f3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 28 01:06:16.867704 kubelet[2638]: I0128 01:06:16.867295 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8faae1-0c6d-49da-9e35-1289786290f3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:06:16.867704 kubelet[2638]: I0128 01:06:16.867591 2638 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.867704 kubelet[2638]: I0128 01:06:16.867612 2638 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.867704 kubelet[2638]: I0128 01:06:16.867625 2638 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a8faae1-0c6d-49da-9e35-1289786290f3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.867704 kubelet[2638]: I0128 01:06:16.867643 2638 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a8faae1-0c6d-49da-9e35-1289786290f3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.867704 kubelet[2638]: I0128 01:06:16.867654 2638 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.867704 kubelet[2638]: I0128 01:06:16.867665 2638 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4r75h\" (UniqueName: \"kubernetes.io/projected/0a8faae1-0c6d-49da-9e35-1289786290f3-kube-api-access-4r75h\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.868193 kubelet[2638]: I0128 01:06:16.867677 2638 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a8faae1-0c6d-49da-9e35-1289786290f3-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:16.882514 kubelet[2638]: I0128 01:06:16.881826 2638 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0a8faae1-0c6d-49da-9e35-1289786290f3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0a8faae1-0c6d-49da-9e35-1289786290f3" (UID: "0a8faae1-0c6d-49da-9e35-1289786290f3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:06:16.979771 kubelet[2638]: I0128 01:06:16.977322 2638 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a8faae1-0c6d-49da-9e35-1289786290f3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 28 01:06:17.679929 kubelet[2638]: I0128 01:06:17.679725 2638 scope.go:117] "RemoveContainer" containerID="b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28" Jan 28 01:06:17.779866 containerd[1483]: time="2026-01-28T01:06:17.771747811Z" level=info msg="RemoveContainer for \"b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28\"" Jan 28 01:06:17.782633 systemd[1]: Removed slice kubepods-burstable-pod0a8faae1_0c6d_49da_9e35_1289786290f3.slice - libcontainer container kubepods-burstable-pod0a8faae1_0c6d_49da_9e35_1289786290f3.slice. Jan 28 01:06:17.782783 systemd[1]: kubepods-burstable-pod0a8faae1_0c6d_49da_9e35_1289786290f3.slice: Consumed 37.235s CPU time. Jan 28 01:06:17.821343 containerd[1483]: time="2026-01-28T01:06:17.821109904Z" level=info msg="RemoveContainer for \"b007402b323c8aa586346c322291d4876d950bdb881918a576ffe5dd32748c28\" returns successfully" Jan 28 01:06:17.828820 kubelet[2638]: I0128 01:06:17.822167 2638 scope.go:117] "RemoveContainer" containerID="fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629" Jan 28 01:06:17.852784 containerd[1483]: time="2026-01-28T01:06:17.852740555Z" level=info msg="RemoveContainer for \"fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629\"" Jan 28 01:06:17.909111 containerd[1483]: time="2026-01-28T01:06:17.909060860Z" level=info msg="RemoveContainer for \"fb0e9d95ffa600a17b9fed87330659c605034c958e0e1a0b669e36600a0fd629\" returns successfully" Jan 28 01:06:17.946518 kubelet[2638]: I0128 01:06:17.943526 2638 scope.go:117] "RemoveContainer" containerID="917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70" Jan 28 01:06:17.955474 containerd[1483]: time="2026-01-28T01:06:17.951731512Z" level=info msg="RemoveContainer for \"917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70\"" Jan 28 01:06:18.015860 containerd[1483]: time="2026-01-28T01:06:18.006138480Z" level=info msg="RemoveContainer for \"917fab72000084944559956d3764ac70d9408ce08e07d14e934faf044d8ddf70\" returns successfully" Jan 28 01:06:18.018056 kubelet[2638]: I0128 01:06:18.013662 2638 scope.go:117] "RemoveContainer" containerID="b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7" Jan 28 01:06:18.034136 containerd[1483]: time="2026-01-28T01:06:18.034085956Z" level=info msg="RemoveContainer for \"b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7\"" Jan 28 01:06:18.079696 containerd[1483]: time="2026-01-28T01:06:18.078779202Z" level=info msg="RemoveContainer for \"b8ff8681a1cf8a8c61cfe97c37661f24a4610ebb6e911a0cb266f6ded0539da7\" returns successfully" Jan 28 01:06:18.079865 kubelet[2638]: I0128 01:06:18.079317 2638 scope.go:117] "RemoveContainer" containerID="57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27" Jan 28 01:06:18.086078 containerd[1483]: time="2026-01-28T01:06:18.085618939Z" level=info msg="RemoveContainer for \"57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27\"" Jan 28 01:06:18.102579 
containerd[1483]: time="2026-01-28T01:06:18.102084199Z" level=info msg="RemoveContainer for \"57788596c5788af922025d77c46102a62bbdbfb49b0e8da6b308263a0d9d9f27\" returns successfully" Jan 28 01:06:18.652561 kubelet[2638]: E0128 01:06:18.648984 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 28 01:06:18.711814 kubelet[2638]: I0128 01:06:18.708791 2638 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8faae1-0c6d-49da-9e35-1289786290f3" path="/var/lib/kubelet/pods/0a8faae1-0c6d-49da-9e35-1289786290f3/volumes" Jan 28 01:06:18.717558 kubelet[2638]: I0128 01:06:18.714848 2638 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36cf2c6c-70d9-4912-aeb5-3a9679d20de3" path="/var/lib/kubelet/pods/36cf2c6c-70d9-4912-aeb5-3a9679d20de3/volumes" Jan 28 01:06:19.442998 sshd[4923]: pam_unix(sshd:session): session closed for user core Jan 28 01:06:19.579929 systemd[1]: sshd@53-10.0.0.13:22-10.0.0.1:50376.service: Deactivated successfully. Jan 28 01:06:19.650870 systemd[1]: session-54.scope: Deactivated successfully. Jan 28 01:06:19.652605 systemd[1]: session-54.scope: Consumed 1.486s CPU time. Jan 28 01:06:19.679124 systemd-logind[1463]: Session 54 logged out. Waiting for processes to exit. Jan 28 01:06:19.745658 systemd[1]: Started sshd@54-10.0.0.13:22-10.0.0.1:50378.service - OpenSSH per-connection server daemon (10.0.0.1:50378). Jan 28 01:06:19.850966 kubelet[2638]: I0128 01:06:19.763750 2638 memory_manager.go:355] "RemoveStaleState removing state" podUID="36cf2c6c-70d9-4912-aeb5-3a9679d20de3" containerName="cilium-operator" Jan 28 01:06:19.850966 kubelet[2638]: I0128 01:06:19.763777 2638 memory_manager.go:355] "RemoveStaleState removing state" podUID="0a8faae1-0c6d-49da-9e35-1289786290f3" containerName="cilium-agent" Jan 28 01:06:19.830059 systemd-logind[1463]: Removed session 54. Jan 28 01:06:19.912816 systemd[1]: Created slice kubepods-burstable-pod2f12dcdf_c9d4_4c49_bc1e_3cfc327fb570.slice - libcontainer container kubepods-burstable-pod2f12dcdf_c9d4_4c49_bc1e_3cfc327fb570.slice. 
Jan 28 01:06:19.946741 kubelet[2638]: I0128 01:06:19.941761 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-cilium-config-path\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.946741 kubelet[2638]: I0128 01:06:19.941806 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-host-proc-sys-net\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.946741 kubelet[2638]: I0128 01:06:19.941831 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-hubble-tls\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.946741 kubelet[2638]: I0128 01:06:19.941854 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-bpf-maps\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.946741 kubelet[2638]: I0128 01:06:19.941874 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-xtables-lock\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.946741 kubelet[2638]: I0128 01:06:19.941897 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-lib-modules\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.947093 kubelet[2638]: I0128 01:06:19.941915 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-cni-path\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.947093 kubelet[2638]: I0128 01:06:19.941935 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9wf6\" (UniqueName: \"kubernetes.io/projected/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-kube-api-access-w9wf6\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.947093 kubelet[2638]: I0128 01:06:19.941958 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-host-proc-sys-kernel\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.947093 kubelet[2638]: I0128 01:06:19.941993 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-hostproc\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.947093 kubelet[2638]: I0128 01:06:19.942013 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-cilium-run\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.947093 kubelet[2638]: I0128 01:06:19.942032 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-cilium-cgroup\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.947545 kubelet[2638]: I0128 01:06:19.942151 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-cilium-ipsec-secrets\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.947545 kubelet[2638]: I0128 01:06:19.946505 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-etc-cni-netd\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:19.947545 kubelet[2638]: I0128 01:06:19.946611 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570-clustermesh-secrets\") pod \"cilium-f2jvd\" (UID: \"2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570\") " pod="kube-system/cilium-f2jvd" Jan 28 01:06:20.114295 sshd[5026]: Accepted publickey for core from 10.0.0.1 port 50378 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:06:20.141991 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:06:20.229724 systemd-logind[1463]: New session 55 of user core. Jan 28 01:06:20.306662 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 28 01:06:20.424844 sshd[5026]: pam_unix(sshd:session): session closed for user core Jan 28 01:06:20.464845 systemd[1]: sshd@54-10.0.0.13:22-10.0.0.1:50378.service: Deactivated successfully. Jan 28 01:06:20.507071 systemd[1]: session-55.scope: Deactivated successfully. Jan 28 01:06:20.546697 kubelet[2638]: E0128 01:06:20.546652 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:06:20.559143 systemd-logind[1463]: Session 55 logged out. Waiting for processes to exit. Jan 28 01:06:20.566119 containerd[1483]: time="2026-01-28T01:06:20.565796153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f2jvd,Uid:2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570,Namespace:kube-system,Attempt:0,}" Jan 28 01:06:20.579533 systemd[1]: Started sshd@55-10.0.0.13:22-10.0.0.1:50394.service - OpenSSH per-connection server daemon (10.0.0.1:50394). Jan 28 01:06:20.599949 systemd-logind[1463]: Removed session 55. 
Jan 28 01:06:20.820711 kubelet[2638]: I0128 01:06:20.819642 2638 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T01:06:20Z","lastTransitionTime":"2026-01-28T01:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 28 01:06:20.832098 sshd[5038]: Accepted publickey for core from 10.0.0.1 port 50394 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:06:20.845661 sshd[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:06:20.879582 containerd[1483]: time="2026-01-28T01:06:20.878617764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:06:20.879582 containerd[1483]: time="2026-01-28T01:06:20.878691161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:06:20.921853 containerd[1483]: time="2026-01-28T01:06:20.879666486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:06:20.921853 containerd[1483]: time="2026-01-28T01:06:20.880703104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:06:20.924822 systemd-logind[1463]: New session 56 of user core.
Jan 28 01:06:20.933867 systemd[1]: Started session-56.scope - Session 56 of User core.
Jan 28 01:06:21.082001 systemd[1]: Started cri-containerd-0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa.scope - libcontainer container 0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa.
Jan 28 01:06:21.378646 containerd[1483]: time="2026-01-28T01:06:21.369564485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f2jvd,Uid:2f12dcdf-c9d4-4c49-bc1e-3cfc327fb570,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\""
Jan 28 01:06:21.378774 kubelet[2638]: E0128 01:06:21.375915 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:21.384536 containerd[1483]: time="2026-01-28T01:06:21.382347084Z" level=info msg="CreateContainer within sandbox \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 28 01:06:21.725647 containerd[1483]: time="2026-01-28T01:06:21.722864721Z" level=info msg="CreateContainer within sandbox \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2469442d6823065941cbb1ec556aa5bf505c2244445ce60a0133f74914a8c6aa\""
Jan 28 01:06:21.730347 containerd[1483]: time="2026-01-28T01:06:21.727772756Z" level=info msg="StartContainer for \"2469442d6823065941cbb1ec556aa5bf505c2244445ce60a0133f74914a8c6aa\""
Jan 28 01:06:22.015583 systemd[1]: Started cri-containerd-2469442d6823065941cbb1ec556aa5bf505c2244445ce60a0133f74914a8c6aa.scope - libcontainer container 2469442d6823065941cbb1ec556aa5bf505c2244445ce60a0133f74914a8c6aa.
Jan 28 01:06:22.243045 systemd[1]: run-containerd-runc-k8s.io-2469442d6823065941cbb1ec556aa5bf505c2244445ce60a0133f74914a8c6aa-runc.om4EkC.mount: Deactivated successfully.
Jan 28 01:06:22.327980 containerd[1483]: time="2026-01-28T01:06:22.324857412Z" level=info msg="StartContainer for \"2469442d6823065941cbb1ec556aa5bf505c2244445ce60a0133f74914a8c6aa\" returns successfully"
Jan 28 01:06:22.424691 systemd[1]: cri-containerd-2469442d6823065941cbb1ec556aa5bf505c2244445ce60a0133f74914a8c6aa.scope: Deactivated successfully.
Jan 28 01:06:22.649755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2469442d6823065941cbb1ec556aa5bf505c2244445ce60a0133f74914a8c6aa-rootfs.mount: Deactivated successfully.
Jan 28 01:06:22.724348 containerd[1483]: time="2026-01-28T01:06:22.722060110Z" level=info msg="shim disconnected" id=2469442d6823065941cbb1ec556aa5bf505c2244445ce60a0133f74914a8c6aa namespace=k8s.io
Jan 28 01:06:22.724348 containerd[1483]: time="2026-01-28T01:06:22.722237639Z" level=warning msg="cleaning up after shim disconnected" id=2469442d6823065941cbb1ec556aa5bf505c2244445ce60a0133f74914a8c6aa namespace=k8s.io
Jan 28 01:06:22.724348 containerd[1483]: time="2026-01-28T01:06:22.722256394Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:06:23.045335 kubelet[2638]: E0128 01:06:23.043586 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:23.055774 containerd[1483]: time="2026-01-28T01:06:23.055557701Z" level=info msg="CreateContainer within sandbox \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 28 01:06:23.280839 containerd[1483]: time="2026-01-28T01:06:23.279899383Z" level=info msg="CreateContainer within sandbox \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8110e7d30ff355fbde5aaeaac40dcc3f045400bd0654ce47ccc662586ad43ef3\""
Jan 28 01:06:23.292599 containerd[1483]: time="2026-01-28T01:06:23.291744256Z" level=info msg="StartContainer for \"8110e7d30ff355fbde5aaeaac40dcc3f045400bd0654ce47ccc662586ad43ef3\""
Jan 28 01:06:23.557326 systemd[1]: Started cri-containerd-8110e7d30ff355fbde5aaeaac40dcc3f045400bd0654ce47ccc662586ad43ef3.scope - libcontainer container 8110e7d30ff355fbde5aaeaac40dcc3f045400bd0654ce47ccc662586ad43ef3.
Jan 28 01:06:23.659670 kubelet[2638]: E0128 01:06:23.659514 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 28 01:06:23.848322 containerd[1483]: time="2026-01-28T01:06:23.848020578Z" level=info msg="StartContainer for \"8110e7d30ff355fbde5aaeaac40dcc3f045400bd0654ce47ccc662586ad43ef3\" returns successfully"
Jan 28 01:06:23.955685 systemd[1]: cri-containerd-8110e7d30ff355fbde5aaeaac40dcc3f045400bd0654ce47ccc662586ad43ef3.scope: Deactivated successfully.
Jan 28 01:06:24.090299 kubelet[2638]: E0128 01:06:24.083238 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:24.177576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8110e7d30ff355fbde5aaeaac40dcc3f045400bd0654ce47ccc662586ad43ef3-rootfs.mount: Deactivated successfully.
Jan 28 01:06:24.228925 containerd[1483]: time="2026-01-28T01:06:24.226037952Z" level=info msg="shim disconnected" id=8110e7d30ff355fbde5aaeaac40dcc3f045400bd0654ce47ccc662586ad43ef3 namespace=k8s.io
Jan 28 01:06:24.228925 containerd[1483]: time="2026-01-28T01:06:24.227701089Z" level=warning msg="cleaning up after shim disconnected" id=8110e7d30ff355fbde5aaeaac40dcc3f045400bd0654ce47ccc662586ad43ef3 namespace=k8s.io
Jan 28 01:06:24.228925 containerd[1483]: time="2026-01-28T01:06:24.227730564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:06:25.108686 kubelet[2638]: E0128 01:06:25.105983 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:25.123027 containerd[1483]: time="2026-01-28T01:06:25.121657974Z" level=info msg="CreateContainer within sandbox \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 28 01:06:25.269350 containerd[1483]: time="2026-01-28T01:06:25.267702826Z" level=info msg="CreateContainer within sandbox \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"036409caa25dd88a07dc914f0acb327fa4c0b13030e291640cc103d9bf184cee\""
Jan 28 01:06:25.274615 containerd[1483]: time="2026-01-28T01:06:25.272050555Z" level=info msg="StartContainer for \"036409caa25dd88a07dc914f0acb327fa4c0b13030e291640cc103d9bf184cee\""
Jan 28 01:06:25.586874 systemd[1]: Started cri-containerd-036409caa25dd88a07dc914f0acb327fa4c0b13030e291640cc103d9bf184cee.scope - libcontainer container 036409caa25dd88a07dc914f0acb327fa4c0b13030e291640cc103d9bf184cee.
Jan 28 01:06:25.893833 containerd[1483]: time="2026-01-28T01:06:25.892329072Z" level=info msg="StartContainer for \"036409caa25dd88a07dc914f0acb327fa4c0b13030e291640cc103d9bf184cee\" returns successfully"
Jan 28 01:06:25.905621 systemd[1]: cri-containerd-036409caa25dd88a07dc914f0acb327fa4c0b13030e291640cc103d9bf184cee.scope: Deactivated successfully.
Jan 28 01:06:26.008211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-036409caa25dd88a07dc914f0acb327fa4c0b13030e291640cc103d9bf184cee-rootfs.mount: Deactivated successfully.
Jan 28 01:06:26.071024 containerd[1483]: time="2026-01-28T01:06:26.069598670Z" level=info msg="shim disconnected" id=036409caa25dd88a07dc914f0acb327fa4c0b13030e291640cc103d9bf184cee namespace=k8s.io
Jan 28 01:06:26.071024 containerd[1483]: time="2026-01-28T01:06:26.070150593Z" level=warning msg="cleaning up after shim disconnected" id=036409caa25dd88a07dc914f0acb327fa4c0b13030e291640cc103d9bf184cee namespace=k8s.io
Jan 28 01:06:26.071024 containerd[1483]: time="2026-01-28T01:06:26.070169659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:06:26.264737 kubelet[2638]: E0128 01:06:26.229714 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:27.361249 kubelet[2638]: E0128 01:06:27.360807 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:27.414940 containerd[1483]: time="2026-01-28T01:06:27.412906148Z" level=info msg="CreateContainer within sandbox \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 28 01:06:27.652790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593996757.mount: Deactivated successfully.
Jan 28 01:06:27.672904 containerd[1483]: time="2026-01-28T01:06:27.670306217Z" level=info msg="CreateContainer within sandbox \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"db33e2aee4a1db007febe35114ab9ce579cd0afd26d7ee794aa22546208ab8bc\""
Jan 28 01:06:27.674499 containerd[1483]: time="2026-01-28T01:06:27.674272136Z" level=info msg="StartContainer for \"db33e2aee4a1db007febe35114ab9ce579cd0afd26d7ee794aa22546208ab8bc\""
Jan 28 01:06:27.887267 systemd[1]: Started cri-containerd-db33e2aee4a1db007febe35114ab9ce579cd0afd26d7ee794aa22546208ab8bc.scope - libcontainer container db33e2aee4a1db007febe35114ab9ce579cd0afd26d7ee794aa22546208ab8bc.
Jan 28 01:06:28.217226 containerd[1483]: time="2026-01-28T01:06:28.211640669Z" level=info msg="StartContainer for \"db33e2aee4a1db007febe35114ab9ce579cd0afd26d7ee794aa22546208ab8bc\" returns successfully"
Jan 28 01:06:28.212846 systemd[1]: cri-containerd-db33e2aee4a1db007febe35114ab9ce579cd0afd26d7ee794aa22546208ab8bc.scope: Deactivated successfully.
Jan 28 01:06:28.415258 kubelet[2638]: E0128 01:06:28.412797 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:28.457204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db33e2aee4a1db007febe35114ab9ce579cd0afd26d7ee794aa22546208ab8bc-rootfs.mount: Deactivated successfully.
Jan 28 01:06:28.522803 containerd[1483]: time="2026-01-28T01:06:28.514161794Z" level=info msg="shim disconnected" id=db33e2aee4a1db007febe35114ab9ce579cd0afd26d7ee794aa22546208ab8bc namespace=k8s.io
Jan 28 01:06:28.522803 containerd[1483]: time="2026-01-28T01:06:28.516256065Z" level=warning msg="cleaning up after shim disconnected" id=db33e2aee4a1db007febe35114ab9ce579cd0afd26d7ee794aa22546208ab8bc namespace=k8s.io
Jan 28 01:06:28.522803 containerd[1483]: time="2026-01-28T01:06:28.516281381Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:06:28.620907 containerd[1483]: time="2026-01-28T01:06:28.620781672Z" level=warning msg="cleanup warnings time=\"2026-01-28T01:06:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 28 01:06:28.665816 kubelet[2638]: E0128 01:06:28.665636 2638 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 28 01:06:29.472774 kubelet[2638]: E0128 01:06:29.472736 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:29.498707 containerd[1483]: time="2026-01-28T01:06:29.496859160Z" level=info msg="CreateContainer within sandbox \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 28 01:06:29.711231 containerd[1483]: time="2026-01-28T01:06:29.707272450Z" level=info msg="CreateContainer within sandbox \"0e49b37064c4a1d2d86e616f84ba8fb9a4809af3b2739c360a8aab1aa04db0fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bcfb5a6a0cfbddb5388e1f0a3cf69b3681201107852a1fd86383631382f4f8dc\""
Jan 28 01:06:29.725237 containerd[1483]: time="2026-01-28T01:06:29.718912931Z" level=info msg="StartContainer for \"bcfb5a6a0cfbddb5388e1f0a3cf69b3681201107852a1fd86383631382f4f8dc\""
Jan 28 01:06:29.932757 systemd[1]: Started cri-containerd-bcfb5a6a0cfbddb5388e1f0a3cf69b3681201107852a1fd86383631382f4f8dc.scope - libcontainer container bcfb5a6a0cfbddb5388e1f0a3cf69b3681201107852a1fd86383631382f4f8dc.
Jan 28 01:06:30.204276 containerd[1483]: time="2026-01-28T01:06:30.203952892Z" level=info msg="StartContainer for \"bcfb5a6a0cfbddb5388e1f0a3cf69b3681201107852a1fd86383631382f4f8dc\" returns successfully"
Jan 28 01:06:30.685777 kubelet[2638]: E0128 01:06:30.685556 2638 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-xgv7k" podUID="cf61ad82-d7e7-4105-ab0e-5d43f44e2034"
Jan 28 01:06:31.596234 kubelet[2638]: E0128 01:06:31.595992 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:31.717718 kubelet[2638]: I0128 01:06:31.717591 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f2jvd" podStartSLOduration=12.717567101 podStartE2EDuration="12.717567101s" podCreationTimestamp="2026-01-28 01:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:06:31.708954458 +0000 UTC m=+441.440187992" watchObservedRunningTime="2026-01-28 01:06:31.717567101 +0000 UTC m=+441.448800655"
Jan 28 01:06:32.570605 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 28 01:06:32.605635 kubelet[2638]: E0128 01:06:32.605348 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:32.694842 kubelet[2638]: E0128 01:06:32.685550 2638 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-xgv7k" podUID="cf61ad82-d7e7-4105-ab0e-5d43f44e2034"
Jan 28 01:06:33.707078 kubelet[2638]: E0128 01:06:33.694282 2638 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-xgv7k" podUID="cf61ad82-d7e7-4105-ab0e-5d43f44e2034"
Jan 28 01:06:35.973746 kubelet[2638]: E0128 01:06:35.972095 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:43.739872 kubelet[2638]: E0128 01:06:43.725608 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:46.157223 systemd[1]: run-containerd-runc-k8s.io-bcfb5a6a0cfbddb5388e1f0a3cf69b3681201107852a1fd86383631382f4f8dc-runc.fg3AJJ.mount: Deactivated successfully.
Jan 28 01:06:50.570593 kubelet[2638]: E0128 01:06:50.566350 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:54.151054 systemd-networkd[1407]: lxc_health: Link UP
Jan 28 01:06:54.205014 systemd-networkd[1407]: lxc_health: Gained carrier
Jan 28 01:06:54.567340 kubelet[2638]: E0128 01:06:54.566247 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:54.706346 kubelet[2638]: E0128 01:06:54.702769 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:54.875105 kubelet[2638]: E0128 01:06:54.873621 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:06:55.997854 systemd-networkd[1407]: lxc_health: Gained IPv6LL
Jan 28 01:06:57.681558 kubelet[2638]: E0128 01:06:57.681158 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:07:00.364843 sshd[5038]: pam_unix(sshd:session): session closed for user core
Jan 28 01:07:00.394742 systemd-logind[1463]: Session 56 logged out. Waiting for processes to exit.
Jan 28 01:07:00.402641 systemd[1]: sshd@55-10.0.0.13:22-10.0.0.1:50394.service: Deactivated successfully.
Jan 28 01:07:00.424668 systemd[1]: session-56.scope: Deactivated successfully.
Jan 28 01:07:00.429526 systemd[1]: session-56.scope: Consumed 2.996s CPU time.
Jan 28 01:07:00.459986 systemd-logind[1463]: Removed session 56.