Jan 20 00:32:21.125853 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026
Jan 20 00:32:21.125873 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:32:21.125885 kernel: BIOS-provided physical RAM map:
Jan 20 00:32:21.125890 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 00:32:21.125895 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 00:32:21.125901 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 00:32:21.125907 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 00:32:21.125912 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 00:32:21.125918 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 20 00:32:21.125923 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 20 00:32:21.125931 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 20 00:32:21.125936 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 20 00:32:21.125941 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 20 00:32:21.125947 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 20 00:32:21.125953 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 20 00:32:21.125959 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 00:32:21.125968 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 20 00:32:21.125973 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 20 00:32:21.125979 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 00:32:21.125985 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 00:32:21.125990 kernel: NX (Execute Disable) protection: active
Jan 20 00:32:21.125996 kernel: APIC: Static calls initialized
Jan 20 00:32:21.126001 kernel: efi: EFI v2.7 by EDK II
Jan 20 00:32:21.126007 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jan 20 00:32:21.126013 kernel: SMBIOS 2.8 present.
Jan 20 00:32:21.126019 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 20 00:32:21.126024 kernel: Hypervisor detected: KVM
Jan 20 00:32:21.126033 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 00:32:21.126039 kernel: kvm-clock: using sched offset of 5997664614 cycles
Jan 20 00:32:21.126044 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 00:32:21.126051 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 00:32:21.126057 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 00:32:21.126063 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 00:32:21.126069 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 20 00:32:21.126075 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 00:32:21.126084 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 00:32:21.126093 kernel: Using GB pages for direct mapping
Jan 20 00:32:21.126099 kernel: Secure boot disabled
Jan 20 00:32:21.126104 kernel: ACPI: Early table checksum verification disabled
Jan 20 00:32:21.126110 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 20 00:32:21.126120 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 20 00:32:21.126126 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:21.126132 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:21.126141 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 20 00:32:21.126147 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:21.126153 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:21.126159 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:21.126165 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:32:21.126172 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 00:32:21.126178 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 20 00:32:21.126187 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 20 00:32:21.126193 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 20 00:32:21.126199 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 20 00:32:21.126205 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 20 00:32:21.126211 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 20 00:32:21.126217 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 20 00:32:21.126223 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 20 00:32:21.126229 kernel: No NUMA configuration found
Jan 20 00:32:21.126235 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 20 00:32:21.126244 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 20 00:32:21.126250 kernel: Zone ranges:
Jan 20 00:32:21.126256 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 00:32:21.126262 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 20 00:32:21.126268 kernel: Normal empty
Jan 20 00:32:21.126274 kernel: Movable zone start for each node
Jan 20 00:32:21.126280 kernel: Early memory node ranges
Jan 20 00:32:21.126286 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 00:32:21.126293 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 00:32:21.126305 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 00:32:21.126321 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 20 00:32:21.126332 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 20 00:32:21.126344 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 20 00:32:21.126354 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 20 00:32:21.126366 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 00:32:21.126377 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 00:32:21.126383 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 00:32:21.126389 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 00:32:21.126395 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 20 00:32:21.126402 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 20 00:32:21.126411 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 20 00:32:21.126417 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 00:32:21.126423 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 00:32:21.126429 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 00:32:21.126435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 00:32:21.126441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 00:32:21.126447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 00:32:21.126454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 00:32:21.126460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 00:32:21.126498 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 00:32:21.126504 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 00:32:21.126510 kernel: TSC deadline timer available
Jan 20 00:32:21.126516 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 20 00:32:21.126522 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 00:32:21.126528 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 00:32:21.126534 kernel: kvm-guest: setup PV sched yield
Jan 20 00:32:21.126540 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 20 00:32:21.126547 kernel: Booting paravirtualized kernel on KVM
Jan 20 00:32:21.126555 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 00:32:21.126562 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 00:32:21.126568 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 20 00:32:21.126574 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 20 00:32:21.126580 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 00:32:21.126586 kernel: kvm-guest: PV spinlocks enabled
Jan 20 00:32:21.126592 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 00:32:21.126599 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:32:21.126608 kernel: random: crng init done
Jan 20 00:32:21.126614 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 00:32:21.126620 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 00:32:21.126626 kernel: Fallback order for Node 0: 0
Jan 20 00:32:21.126632 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 20 00:32:21.126638 kernel: Policy zone: DMA32
Jan 20 00:32:21.126644 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 00:32:21.126651 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 166124K reserved, 0K cma-reserved)
Jan 20 00:32:21.126657 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 00:32:21.126665 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 20 00:32:21.126671 kernel: ftrace: allocated 149 pages with 4 groups
Jan 20 00:32:21.126677 kernel: Dynamic Preempt: voluntary
Jan 20 00:32:21.126684 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 00:32:21.126698 kernel: rcu: RCU event tracing is enabled.
Jan 20 00:32:21.126707 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 00:32:21.126713 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 00:32:21.126720 kernel: Rude variant of Tasks RCU enabled.
Jan 20 00:32:21.126727 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 00:32:21.126733 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 00:32:21.126739 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 00:32:21.126746 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 00:32:21.126755 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 00:32:21.126761 kernel: Console: colour dummy device 80x25
Jan 20 00:32:21.126799 kernel: printk: console [ttyS0] enabled
Jan 20 00:32:21.126806 kernel: ACPI: Core revision 20230628
Jan 20 00:32:21.126812 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 00:32:21.126822 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 00:32:21.126828 kernel: x2apic enabled
Jan 20 00:32:21.126835 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 00:32:21.126841 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 00:32:21.126848 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 00:32:21.126854 kernel: kvm-guest: setup PV IPIs
Jan 20 00:32:21.126861 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 00:32:21.126867 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 20 00:32:21.126873 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 00:32:21.126882 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 00:32:21.126888 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 00:32:21.126895 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 00:32:21.126901 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 00:32:21.126907 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 00:32:21.126914 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 00:32:21.126920 kernel: Speculative Store Bypass: Vulnerable
Jan 20 00:32:21.126926 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 00:32:21.126933 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 00:32:21.126942 kernel: active return thunk: srso_alias_return_thunk
Jan 20 00:32:21.126949 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 00:32:21.126955 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 00:32:21.126961 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 00:32:21.126968 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 00:32:21.126974 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 00:32:21.126981 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 00:32:21.126987 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 00:32:21.126996 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 00:32:21.127002 kernel: Freeing SMP alternatives memory: 32K
Jan 20 00:32:21.127009 kernel: pid_max: default: 32768 minimum: 301
Jan 20 00:32:21.127015 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 20 00:32:21.127021 kernel: landlock: Up and running.
Jan 20 00:32:21.127028 kernel: SELinux: Initializing.
Jan 20 00:32:21.127034 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 00:32:21.127041 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 00:32:21.127048 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 00:32:21.127056 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:32:21.127063 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:32:21.127069 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:32:21.127076 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 00:32:21.127082 kernel: signal: max sigframe size: 1776
Jan 20 00:32:21.127088 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 00:32:21.127095 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 00:32:21.127101 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 00:32:21.127108 kernel: smp: Bringing up secondary CPUs ...
Jan 20 00:32:21.127117 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 00:32:21.127123 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 00:32:21.127129 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 00:32:21.127136 kernel: smpboot: Max logical packages: 1
Jan 20 00:32:21.127142 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 00:32:21.127148 kernel: devtmpfs: initialized
Jan 20 00:32:21.127155 kernel: x86/mm: Memory block size: 128MB
Jan 20 00:32:21.127161 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 00:32:21.127167 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 00:32:21.127176 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 20 00:32:21.127183 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 20 00:32:21.127189 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 20 00:32:21.127196 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 00:32:21.127202 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 00:32:21.127208 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 00:32:21.127215 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 00:32:21.127221 kernel: audit: initializing netlink subsys (disabled)
Jan 20 00:32:21.127227 kernel: audit: type=2000 audit(1768869139.029:1): state=initialized audit_enabled=0 res=1
Jan 20 00:32:21.127236 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 00:32:21.127243 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 00:32:21.127249 kernel: cpuidle: using governor menu
Jan 20 00:32:21.127255 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 00:32:21.127261 kernel: dca service started, version 1.12.1
Jan 20 00:32:21.127268 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 20 00:32:21.127276 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 00:32:21.127288 kernel: PCI: Using configuration type 1 for base access
Jan 20 00:32:21.127300 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 00:32:21.127317 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 00:32:21.127328 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 00:32:21.127341 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 00:32:21.127354 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 00:32:21.127365 kernel: ACPI: Added _OSI(Module Device)
Jan 20 00:32:21.127377 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 00:32:21.127388 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 00:32:21.127399 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 00:32:21.127410 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 20 00:32:21.127424 kernel: ACPI: Interpreter enabled
Jan 20 00:32:21.127435 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 00:32:21.127446 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 00:32:21.127457 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 00:32:21.127504 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 00:32:21.127515 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 00:32:21.127526 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 00:32:21.127843 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 00:32:21.132519 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 00:32:21.132754 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 00:32:21.132834 kernel: PCI host bridge to bus 0000:00
Jan 20 00:32:21.133012 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 00:32:21.133172 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 00:32:21.133334 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 00:32:21.133541 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 00:32:21.133722 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 00:32:21.133950 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 20 00:32:21.134108 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 00:32:21.134249 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 20 00:32:21.134420 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 20 00:32:21.134586 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 20 00:32:21.134728 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 20 00:32:21.134980 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 20 00:32:21.135166 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 20 00:32:21.135362 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 00:32:21.135622 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 20 00:32:21.135868 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 20 00:32:21.136037 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 20 00:32:21.136218 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 20 00:32:21.141558 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 20 00:32:21.142352 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 20 00:32:21.142956 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 20 00:32:21.143441 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 20 00:32:21.144078 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 20 00:32:21.144581 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 20 00:32:21.144881 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 20 00:32:21.145061 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 20 00:32:21.145232 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 20 00:32:21.145428 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 20 00:32:21.145658 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 00:32:21.145896 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 20 00:32:21.146085 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 20 00:32:21.146268 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 20 00:32:21.146513 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 20 00:32:21.146700 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 20 00:32:21.146717 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 00:32:21.146729 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 00:32:21.146741 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 00:32:21.146752 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 00:32:21.146820 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 00:32:21.146835 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 00:32:21.146847 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 00:32:21.146858 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 00:32:21.146869 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 00:32:21.146881 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 00:32:21.146893 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 00:32:21.146904 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 00:32:21.146916 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 00:32:21.146933 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 00:32:21.146944 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 00:32:21.146956 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 00:32:21.146968 kernel: iommu: Default domain type: Translated
Jan 20 00:32:21.146980 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 00:32:21.146994 kernel: efivars: Registered efivars operations
Jan 20 00:32:21.147005 kernel: PCI: Using ACPI for IRQ routing
Jan 20 00:32:21.147016 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 00:32:21.147028 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 20 00:32:21.147044 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 20 00:32:21.147056 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 20 00:32:21.147067 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 20 00:32:21.147248 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 00:32:21.147433 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 00:32:21.147656 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 00:32:21.147673 kernel: vgaarb: loaded
Jan 20 00:32:21.147685 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 00:32:21.147696 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 00:32:21.147712 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 00:32:21.147724 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 00:32:21.147736 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 00:32:21.147747 kernel: pnp: PnP ACPI init
Jan 20 00:32:21.148027 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 00:32:21.148045 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 00:32:21.148057 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 00:32:21.148069 kernel: NET: Registered PF_INET protocol family
Jan 20 00:32:21.148086 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 00:32:21.148098 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 00:32:21.148110 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 00:32:21.148121 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 00:32:21.148133 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 00:32:21.148144 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 00:32:21.148155 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 00:32:21.148167 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 00:32:21.148178 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 00:32:21.148193 kernel: NET: Registered PF_XDP protocol family
Jan 20 00:32:21.148374 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 20 00:32:21.148595 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 20 00:32:21.148759 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 00:32:21.148976 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 00:32:21.149139 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 00:32:21.149287 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 00:32:21.149448 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 00:32:21.149657 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 20 00:32:21.149673 kernel: PCI: CLS 0 bytes, default 64
Jan 20 00:32:21.149685 kernel: Initialise system trusted keyrings
Jan 20 00:32:21.149697 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 00:32:21.149708 kernel: Key type asymmetric registered
Jan 20 00:32:21.149720 kernel: Asymmetric key parser 'x509' registered
Jan 20 00:32:21.149731 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 20 00:32:21.149743 kernel: io scheduler mq-deadline registered
Jan 20 00:32:21.149759 kernel: io scheduler kyber registered
Jan 20 00:32:21.149811 kernel: io scheduler bfq registered
Jan 20 00:32:21.149824 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 00:32:21.149836 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 00:32:21.149848 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 00:32:21.149859 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 00:32:21.149871 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 00:32:21.149882 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 00:32:21.149894 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 00:32:21.149905 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 00:32:21.149921 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 00:32:21.150100 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 00:32:21.150117 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 00:32:21.150269 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 00:32:21.150515 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:32:20 UTC (1768869140)
Jan 20 00:32:21.150677 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 00:32:21.150691 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 00:32:21.150708 kernel: efifb: probing for efifb
Jan 20 00:32:21.150719 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 20 00:32:21.150731 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 20 00:32:21.150742 kernel: efifb: scrolling: redraw
Jan 20 00:32:21.150754 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 20 00:32:21.150806 kernel: Console: switching to colour frame buffer device 100x37
Jan 20 00:32:21.150819 kernel: fb0: EFI VGA frame buffer device
Jan 20 00:32:21.150831 kernel: pstore: Using crash dump compression: deflate
Jan 20 00:32:21.150842 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 20 00:32:21.150858 kernel: NET: Registered PF_INET6 protocol family
Jan 20 00:32:21.150869 kernel: Segment Routing with IPv6
Jan 20 00:32:21.150881 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 00:32:21.150892 kernel: NET: Registered PF_PACKET protocol family
Jan 20 00:32:21.150903 kernel: Key type dns_resolver registered
Jan 20 00:32:21.150915 kernel: IPI shorthand broadcast: enabled
Jan 20 00:32:21.150951 kernel: sched_clock: Marking stable (1982016140, 387794729)->(2578408840, -208597971)
Jan 20 00:32:21.150966 kernel: registered taskstats version 1
Jan 20 00:32:21.150978 kernel: Loading compiled-in X.509 certificates
Jan 20 00:32:21.150991 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1'
Jan 20 00:32:21.151006 kernel: Key type .fscrypt registered
Jan 20 00:32:21.151017 kernel: Key type fscrypt-provisioning registered
Jan 20 00:32:21.151029 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 00:32:21.151041 kernel: ima: Allocated hash algorithm: sha1
Jan 20 00:32:21.151053 kernel: ima: No architecture policies found
Jan 20 00:32:21.151064 kernel: clk: Disabling unused clocks
Jan 20 00:32:21.151076 kernel: Freeing unused kernel image (initmem) memory: 42880K
Jan 20 00:32:21.151088 kernel: Write protecting the kernel read-only data: 36864k
Jan 20 00:32:21.151103 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 20 00:32:21.151115 kernel: Run /init as init process
Jan 20 00:32:21.151127 kernel: with arguments:
Jan 20 00:32:21.151139 kernel: /init
Jan 20 00:32:21.151150 kernel: with environment:
Jan 20 00:32:21.151162 kernel: HOME=/
Jan 20 00:32:21.151173 kernel: TERM=linux
Jan 20 00:32:21.151187 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 00:32:21.151205 systemd[1]: Detected virtualization kvm.
Jan 20 00:32:21.151218 systemd[1]: Detected architecture x86-64.
Jan 20 00:32:21.151230 systemd[1]: Running in initrd.
Jan 20 00:32:21.151242 systemd[1]: No hostname configured, using default hostname.
Jan 20 00:32:21.151254 systemd[1]: Hostname set to .
Jan 20 00:32:21.151267 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 00:32:21.151279 systemd[1]: Queued start job for default target initrd.target.
Jan 20 00:32:21.151292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:32:21.151310 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:32:21.151323 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 00:32:21.151337 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:32:21.151350 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 00:32:21.151366 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 00:32:21.151385 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 00:32:21.151397 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 00:32:21.151410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:32:21.151422 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:32:21.151435 systemd[1]: Reached target paths.target - Path Units.
Jan 20 00:32:21.151447 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:32:21.151460 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:32:21.151510 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 00:32:21.151523 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:32:21.151536 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:32:21.151548 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 00:32:21.151561 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 00:32:21.151573 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:32:21.151586 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:32:21.151602 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:32:21.151618 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 00:32:21.151630 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 00:32:21.151643 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:32:21.151655 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 00:32:21.151668 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 00:32:21.151681 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:32:21.151693 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:32:21.151706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:32:21.151718 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 00:32:21.151733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:32:21.151745 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 00:32:21.151758 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:32:21.151880 systemd-journald[194]: Collecting audit messages is disabled.
Jan 20 00:32:21.151913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:32:21.151927 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:32:21.151940 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 00:32:21.151952 systemd-journald[194]: Journal started
Jan 20 00:32:21.151979 systemd-journald[194]: Runtime Journal (/run/log/journal/e420f05b3d134c928f0a9f9f1c293145) is 6.0M, max 48.3M, 42.2M free.
Jan 20 00:32:21.152032 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:32:21.108006 systemd-modules-load[195]: Inserted module 'overlay'
Jan 20 00:32:21.159433 kernel: Bridge firewalling registered
Jan 20 00:32:21.159436 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 20 00:32:21.173161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:32:21.178667 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:32:21.179215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:32:21.183164 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:32:21.191008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:32:21.214102 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 00:32:21.219517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:32:21.224650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:32:21.235852 dracut-cmdline[221]: dracut-dracut-053
Jan 20 00:32:21.238529 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:32:21.248185 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:32:21.258248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:32:21.275968 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 00:32:21.316210 systemd-resolved[261]: Positive Trust Anchors:
Jan 20 00:32:21.317633 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 00:32:21.317678 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 00:32:21.320832 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jan 20 00:32:21.322404 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 00:32:21.326522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:32:21.386860 kernel: SCSI subsystem initialized
Jan 20 00:32:21.402854 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 00:32:21.413834 kernel: iscsi: registered transport (tcp)
Jan 20 00:32:21.436256 kernel: iscsi: registered transport (qla4xxx)
Jan 20 00:32:21.436340 kernel: QLogic iSCSI HBA Driver
Jan 20 00:32:21.491093 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:32:21.501089 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 00:32:21.532167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 00:32:21.532224 kernel: device-mapper: uevent: version 1.0.3
Jan 20 00:32:21.534856 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 20 00:32:21.580862 kernel: raid6: avx2x4 gen() 33407 MB/s
Jan 20 00:32:21.598840 kernel: raid6: avx2x2 gen() 30771 MB/s
Jan 20 00:32:21.617739 kernel: raid6: avx2x1 gen() 26620 MB/s
Jan 20 00:32:21.617828 kernel: raid6: using algorithm avx2x4 gen() 33407 MB/s
Jan 20 00:32:21.636681 kernel: raid6: .... xor() 4873 MB/s, rmw enabled
Jan 20 00:32:21.636811 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 00:32:21.664868 kernel: xor: automatically using best checksumming function avx
Jan 20 00:32:21.814872 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 00:32:21.829244 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:32:21.849057 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:32:21.868988 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Jan 20 00:32:21.876396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:32:21.891083 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 00:32:21.907652 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Jan 20 00:32:21.948093 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:32:21.967107 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:32:22.063741 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:32:22.076060 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 00:32:22.099122 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:32:22.102550 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:32:22.108754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:32:22.119660 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:32:22.137113 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 00:32:22.154448 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:32:22.168878 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 00:32:22.173875 kernel: libata version 3.00 loaded.
Jan 20 00:32:22.177955 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 00:32:22.183847 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 00:32:22.187363 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:32:22.204248 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 00:32:22.204270 kernel: GPT:9289727 != 19775487
Jan 20 00:32:22.204281 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 00:32:22.204290 kernel: GPT:9289727 != 19775487
Jan 20 00:32:22.213621 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 00:32:22.213694 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:32:22.213804 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 00:32:22.218529 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 00:32:22.218555 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 00:32:22.218603 kernel: AES CTR mode by8 optimization enabled
Jan 20 00:32:22.187578 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:32:22.246183 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 20 00:32:22.246518 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 00:32:22.246745 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (465)
Jan 20 00:32:22.246828 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Jan 20 00:32:22.213729 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:32:22.254718 kernel: scsi host0: ahci
Jan 20 00:32:22.224140 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:32:22.224576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:32:22.268949 kernel: scsi host1: ahci
Jan 20 00:32:22.253371 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:32:22.275447 kernel: scsi host2: ahci
Jan 20 00:32:22.275872 kernel: scsi host3: ahci
Jan 20 00:32:22.276078 kernel: scsi host4: ahci
Jan 20 00:32:22.278253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:32:22.302687 kernel: scsi host5: ahci
Jan 20 00:32:22.303084 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 20 00:32:22.303119 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 20 00:32:22.303315 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 20 00:32:22.303398 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 20 00:32:22.303420 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 20 00:32:22.303438 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 20 00:32:22.325085 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 00:32:22.339288 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 00:32:22.352191 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 00:32:22.353561 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 00:32:22.365926 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:32:22.383159 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 00:32:22.384437 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:32:22.384570 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:32:22.391324 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:32:22.402061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:32:22.418639 disk-uuid[560]: Primary Header is updated.
Jan 20 00:32:22.418639 disk-uuid[560]: Secondary Entries is updated.
Jan 20 00:32:22.418639 disk-uuid[560]: Secondary Header is updated.
Jan 20 00:32:22.438952 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:32:22.424230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:32:22.445839 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:32:22.447092 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:32:22.458293 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:32:22.470044 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:32:22.595833 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 00:32:22.595890 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 00:32:22.597815 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 00:32:22.600791 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 00:32:22.602838 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 00:32:22.604858 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 00:32:22.607855 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 00:32:22.610090 kernel: ata3.00: applying bridge limits
Jan 20 00:32:22.611733 kernel: ata3.00: configured for UDMA/100
Jan 20 00:32:22.615874 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 00:32:22.661513 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 00:32:22.662015 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 00:32:22.676850 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 00:32:23.455959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:32:23.456863 disk-uuid[562]: The operation has completed successfully.
Jan 20 00:32:23.500657 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 00:32:23.500866 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 00:32:23.529174 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 00:32:23.537746 sh[601]: Success
Jan 20 00:32:23.558807 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 20 00:32:23.611028 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 00:32:23.633237 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 00:32:23.641115 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 00:32:23.703446 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c
Jan 20 00:32:23.703569 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:32:23.703591 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 20 00:32:23.710464 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 00:32:23.710559 kernel: BTRFS info (device dm-0): using free space tree
Jan 20 00:32:23.746343 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 00:32:23.762010 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 00:32:23.796027 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 00:32:23.810679 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 00:32:23.860940 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:32:23.861007 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:32:23.861025 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:32:23.885900 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:32:23.914361 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 20 00:32:23.923890 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:32:23.958595 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 00:32:23.982624 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 00:32:24.114081 ignition[713]: Ignition 2.19.0
Jan 20 00:32:24.116004 ignition[713]: Stage: fetch-offline
Jan 20 00:32:24.116073 ignition[713]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:24.116088 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:24.116227 ignition[713]: parsed url from cmdline: ""
Jan 20 00:32:24.116233 ignition[713]: no config URL provided
Jan 20 00:32:24.116241 ignition[713]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 00:32:24.116253 ignition[713]: no config at "/usr/lib/ignition/user.ign"
Jan 20 00:32:24.116289 ignition[713]: op(1): [started] loading QEMU firmware config module
Jan 20 00:32:24.116297 ignition[713]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 00:32:24.143606 ignition[713]: op(1): [finished] loading QEMU firmware config module
Jan 20 00:32:24.218209 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:32:24.250198 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:32:24.336291 systemd-networkd[788]: lo: Link UP
Jan 20 00:32:24.336330 systemd-networkd[788]: lo: Gained carrier
Jan 20 00:32:24.341002 systemd-networkd[788]: Enumeration completed
Jan 20 00:32:24.341153 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:32:24.346642 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:32:24.346649 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:32:24.349148 systemd-networkd[788]: eth0: Link UP
Jan 20 00:32:24.349154 systemd-networkd[788]: eth0: Gained carrier
Jan 20 00:32:24.349164 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:32:24.365730 systemd[1]: Reached target network.target - Network.
Jan 20 00:32:24.403911 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:32:24.524709 ignition[713]: parsing config with SHA512: d431e2751d3b1c5810f30f90491345d0fd3b7b75fe078412734e8a8e10d3043cd1a0e8c1a2db5097778073cc0b121024bb4ceeee20c714a8204c8f2f428f105a
Jan 20 00:32:24.531192 unknown[713]: fetched base config from "system"
Jan 20 00:32:24.531238 unknown[713]: fetched user config from "qemu"
Jan 20 00:32:24.532070 ignition[713]: fetch-offline: fetch-offline passed
Jan 20 00:32:24.535351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:32:24.532175 ignition[713]: Ignition finished successfully
Jan 20 00:32:24.536802 systemd-resolved[261]: Detected conflict on linux IN A 10.0.0.11
Jan 20 00:32:24.536816 systemd-resolved[261]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Jan 20 00:32:24.578723 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 00:32:24.617017 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 00:32:24.666898 ignition[794]: Ignition 2.19.0
Jan 20 00:32:24.666931 ignition[794]: Stage: kargs
Jan 20 00:32:24.667180 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:24.667197 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:24.674731 ignition[794]: kargs: kargs passed
Jan 20 00:32:24.674883 ignition[794]: Ignition finished successfully
Jan 20 00:32:24.698457 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 00:32:24.730139 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 00:32:24.772007 ignition[802]: Ignition 2.19.0
Jan 20 00:32:24.772038 ignition[802]: Stage: disks
Jan 20 00:32:24.772922 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:24.772944 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:24.774249 ignition[802]: disks: disks passed
Jan 20 00:32:24.774326 ignition[802]: Ignition finished successfully
Jan 20 00:32:24.805154 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 00:32:24.808281 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 00:32:24.830094 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 00:32:24.830993 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:32:24.833843 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 00:32:24.865932 systemd[1]: Reached target basic.target - Basic System.
Jan 20 00:32:24.891111 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 00:32:24.945756 systemd-fsck[813]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 20 00:32:24.961328 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 00:32:24.994233 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 00:32:25.330311 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none.
Jan 20 00:32:25.332475 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 00:32:25.338954 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:32:25.368280 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:32:25.375707 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 00:32:25.395239 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 00:32:25.417846 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (821)
Jan 20 00:32:25.417882 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:32:25.395346 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 00:32:25.446030 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:32:25.446079 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:32:25.446101 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:32:25.395388 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:32:25.458011 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 00:32:25.473034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:32:25.496098 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 00:32:25.608884 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 00:32:25.626033 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
Jan 20 00:32:25.644927 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 00:32:25.657182 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 00:32:25.811708 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 00:32:25.835082 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 00:32:25.837915 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 00:32:25.858209 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 00:32:25.865232 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:32:25.868125 systemd-networkd[788]: eth0: Gained IPv6LL
Jan 20 00:32:25.879885 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 00:32:25.894215 ignition[934]: INFO : Ignition 2.19.0
Jan 20 00:32:25.894215 ignition[934]: INFO : Stage: mount
Jan 20 00:32:25.900289 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:25.900289 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:25.900289 ignition[934]: INFO : mount: mount passed
Jan 20 00:32:25.900289 ignition[934]: INFO : Ignition finished successfully
Jan 20 00:32:25.913231 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 00:32:25.924322 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 00:32:26.344091 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:32:26.355883 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (948)
Jan 20 00:32:26.362479 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:32:26.362562 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:32:26.362581 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:32:26.371924 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:32:26.373470 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:32:26.413815 ignition[965]: INFO : Ignition 2.19.0
Jan 20 00:32:26.413815 ignition[965]: INFO : Stage: files
Jan 20 00:32:26.413815 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:32:26.413815 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:32:26.428336 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 00:32:26.428336 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 00:32:26.428336 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 00:32:26.428336 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 00:32:26.428336 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 00:32:26.428336 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 00:32:26.423260 unknown[965]: wrote ssh authorized keys file for user: core
Jan 20 00:32:26.462851 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 00:32:26.462851 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 20 00:32:26.496135 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 00:32:26.711558 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 00:32:26.711558 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 20 00:32:26.730723 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 20 00:32:26.880328 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 20 00:32:27.169722 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 20 00:32:27.169722 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 00:32:27.186318 ignition[965]:
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:32:27.186318 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 20 00:32:27.412986 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 20 00:32:27.997355 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 00:32:27.997355 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 20 00:32:28.012383 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:32:28.012383 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:32:28.012383 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 20 00:32:28.012383 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 20 00:32:28.012383 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:32:28.012383 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:32:28.012383 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 20 00:32:28.012383 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 00:32:28.073038 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:32:28.086946 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:32:28.093253 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 00:32:28.093253 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 20 00:32:28.093253 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 00:32:28.093253 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:32:28.093253 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Jan 20 00:32:28.093253 ignition[965]: INFO : files: files passed Jan 20 00:32:28.093253 ignition[965]: INFO : Ignition finished successfully Jan 20 00:32:28.107706 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 00:32:28.133122 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 00:32:28.140978 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 00:32:28.158595 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 00:32:28.158860 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 00:32:28.174881 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 00:32:28.183753 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:32:28.183753 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:32:28.193199 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:32:28.200970 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:32:28.203022 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 00:32:28.220075 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 00:32:28.256403 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 00:32:28.256646 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 00:32:28.268912 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 00:32:28.271676 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 00:32:28.285919 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 00:32:28.301024 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 00:32:28.321820 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:32:28.344232 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 00:32:28.358177 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:32:28.362625 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:32:28.371276 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 00:32:28.379111 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 00:32:28.379359 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:32:28.388026 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 00:32:28.394025 systemd[1]: Stopped target basic.target - Basic System. Jan 20 00:32:28.400628 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 00:32:28.407353 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:32:28.414613 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 00:32:28.422402 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 00:32:28.429692 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 20 00:32:28.438761 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 00:32:28.444669 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 00:32:28.451028 systemd[1]: Stopped target swap.target - Swaps. Jan 20 00:32:28.456996 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 00:32:28.457229 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:32:28.465042 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:32:28.471153 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:32:28.479074 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 00:32:28.479362 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:32:28.487227 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 00:32:28.487492 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 00:32:28.495998 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 00:32:28.496226 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:32:28.503866 systemd[1]: Stopped target paths.target - Path Units. Jan 20 00:32:28.510403 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 00:32:28.513961 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:32:28.518444 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 00:32:28.525712 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 00:32:28.533179 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 00:32:28.533385 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:32:28.540730 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 00:32:28.540957 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:32:28.599012 ignition[1019]: INFO : Ignition 2.19.0 Jan 20 00:32:28.599012 ignition[1019]: INFO : Stage: umount Jan 20 00:32:28.599012 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:32:28.599012 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:32:28.599012 ignition[1019]: INFO : umount: umount passed Jan 20 00:32:28.599012 ignition[1019]: INFO : Ignition finished successfully Jan 20 00:32:28.548333 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 00:32:28.548476 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:32:28.555844 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 00:32:28.555973 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 00:32:28.579167 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 00:32:28.584000 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 00:32:28.584226 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:32:28.596031 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 00:32:28.598969 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 00:32:28.601951 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:32:28.603580 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 20 00:32:28.603843 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:32:28.613897 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 00:32:28.614032 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 00:32:28.622153 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 00:32:28.622278 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 00:32:28.632047 systemd[1]: Stopped target network.target - Network. Jan 20 00:32:28.637375 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 00:32:28.637449 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 00:32:28.638749 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 00:32:28.638846 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 00:32:28.639389 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 00:32:28.639437 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 00:32:28.640757 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 00:32:28.640846 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 00:32:28.641610 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 00:32:28.642907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 00:32:28.644434 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 00:32:28.645071 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 00:32:28.645195 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 00:32:28.645937 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 00:32:28.645987 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 00:32:28.666447 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 00:32:28.666807 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 00:32:28.674216 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 00:32:28.674280 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:32:28.680907 systemd-networkd[788]: eth0: DHCPv6 lease lost Jan 20 00:32:28.684653 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 00:32:28.684903 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 00:32:28.692975 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 00:32:28.693068 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:32:28.713068 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 00:32:28.719067 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 00:32:28.719179 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:32:28.887829 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 20 00:32:28.728559 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:32:28.728667 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:32:28.736568 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 00:32:28.736665 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 00:32:28.739037 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 20 00:32:28.753158 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 00:32:28.753342 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 00:32:28.760130 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 00:32:28.760367 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:32:28.769541 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 00:32:28.769628 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 00:32:28.776358 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 00:32:28.776418 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:32:28.777833 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 00:32:28.777910 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:32:28.779648 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 00:32:28.779726 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 00:32:28.782814 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:32:28.782891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:32:28.803096 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 00:32:28.807431 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 00:32:28.807576 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:32:28.813130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:32:28.813206 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:32:28.819072 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 00:32:28.819214 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 00:32:28.824630 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 00:32:28.839975 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 00:32:28.852424 systemd[1]: Switching root. Jan 20 00:32:28.991955 systemd-journald[194]: Journal stopped Jan 20 00:32:30.381921 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 00:32:30.382028 kernel: SELinux: policy capability open_perms=1 Jan 20 00:32:30.382055 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 00:32:30.382076 kernel: SELinux: policy capability always_check_network=0 Jan 20 00:32:30.382095 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 00:32:30.382112 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 00:32:30.382138 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 00:32:30.382166 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 00:32:30.382186 kernel: audit: type=1403 audit(1768869149.105:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 00:32:30.382206 systemd[1]: Successfully loaded SELinux policy in 55.173ms. Jan 20 00:32:30.382247 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.099ms. 
Jan 20 00:32:30.382274 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:32:30.382297 systemd[1]: Detected virtualization kvm. Jan 20 00:32:30.382318 systemd[1]: Detected architecture x86-64. Jan 20 00:32:30.382338 systemd[1]: Detected first boot. Jan 20 00:32:30.382357 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:32:30.382377 zram_generator::config[1064]: No configuration found. Jan 20 00:32:30.382397 systemd[1]: Populated /etc with preset unit settings. Jan 20 00:32:30.382417 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 00:32:30.382441 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 00:32:30.382461 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 00:32:30.382481 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 00:32:30.382501 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 00:32:30.382555 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 00:32:30.382575 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 00:32:30.382595 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 00:32:30.382615 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 00:32:30.382634 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 00:32:30.382660 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 00:32:30.382679 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:32:30.382699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:32:30.382718 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 00:32:30.382736 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 00:32:30.382757 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 00:32:30.382828 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:32:30.382850 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 00:32:30.382871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:32:30.382896 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 00:32:30.382916 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 00:32:30.382936 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 00:32:30.382964 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 00:32:30.382983 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:32:30.383002 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:32:30.383020 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:32:30.383040 systemd[1]: Reached target swap.target - Swaps. 
Jan 20 00:32:30.383065 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 00:32:30.383085 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 00:32:30.383103 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:32:30.383123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:32:30.383143 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:32:30.383161 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 00:32:30.383181 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 00:32:30.383199 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 00:32:30.383218 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 00:32:30.383245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:30.383265 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 00:32:30.383283 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 00:32:30.383303 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 00:32:30.383323 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 00:32:30.383342 systemd[1]: Reached target machines.target - Containers. Jan 20 00:32:30.383360 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 00:32:30.383379 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:32:30.383405 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:32:30.383424 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 00:32:30.383444 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:32:30.383463 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:32:30.383483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:32:30.383502 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 00:32:30.383554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:32:30.383573 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 00:32:30.383598 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 00:32:30.383617 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 00:32:30.383636 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 00:32:30.383655 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 00:32:30.383673 kernel: fuse: init (API version 7.39) Jan 20 00:32:30.383695 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:32:30.383716 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:32:30.383733 kernel: ACPI: bus type drm_connector registered Jan 20 00:32:30.383752 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 20 00:32:30.383828 kernel: loop: module loaded Jan 20 00:32:30.383851 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 00:32:30.383900 systemd-journald[1148]: Collecting audit messages is disabled. Jan 20 00:32:30.383937 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:32:30.383957 systemd-journald[1148]: Journal started Jan 20 00:32:30.383988 systemd-journald[1148]: Runtime Journal (/run/log/journal/e420f05b3d134c928f0a9f9f1c293145) is 6.0M, max 48.3M, 42.2M free. Jan 20 00:32:29.879934 systemd[1]: Queued start job for default target multi-user.target. Jan 20 00:32:29.899486 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 00:32:29.900217 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 00:32:29.900713 systemd[1]: systemd-journald.service: Consumed 1.781s CPU time. Jan 20 00:32:30.396327 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 00:32:30.396401 systemd[1]: Stopped verity-setup.service. Jan 20 00:32:30.406861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:30.415142 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:32:30.419744 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 00:32:30.424006 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 00:32:30.428429 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 00:32:30.432666 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 00:32:30.436914 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 00:32:30.440958 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 00:32:30.445001 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 00:32:30.449486 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:32:30.454420 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 00:32:30.454846 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 00:32:30.460065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:32:30.460287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:32:30.465196 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:32:30.465468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:32:30.469661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:32:30.469943 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:32:30.474815 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 00:32:30.475068 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 00:32:30.479299 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:32:30.479610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:32:30.483670 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:32:30.488009 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 00:32:30.492317 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 20 00:32:30.515285 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 00:32:30.530101 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 00:32:30.535729 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 00:32:30.538640 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 00:32:30.538694 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:32:30.542457 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 20 00:32:30.547647 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 00:32:30.552160 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 00:32:30.556361 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:32:30.558439 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 00:32:30.566562 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 00:32:30.570932 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:32:30.573284 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 00:32:30.578475 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:32:30.579672 systemd-journald[1148]: Time spent on flushing to /var/log/journal/e420f05b3d134c928f0a9f9f1c293145 is 92.930ms for 989 entries. Jan 20 00:32:30.579672 systemd-journald[1148]: System Journal (/var/log/journal/e420f05b3d134c928f0a9f9f1c293145) is 8.0M, max 195.6M, 187.6M free. Jan 20 00:32:30.705836 systemd-journald[1148]: Received client request to flush runtime journal. Jan 20 00:32:30.586045 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:32:30.593448 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 00:32:30.606147 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 00:32:30.612892 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:32:30.691154 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 00:32:30.694666 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 00:32:30.698121 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 00:32:30.706290 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 20 00:32:30.710302 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 00:32:30.746588 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 00:32:30.779877 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 00:32:30.792959 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 20 00:32:30.829070 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Jan 20 00:32:30.850093 kernel: loop0: detected capacity change from 0 to 142488 Jan 20 00:32:30.853133 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:32:30.864057 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 00:32:30.870281 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 00:32:30.871401 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 20 00:32:30.884905 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 00:32:30.891085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:32:30.917119 kernel: loop1: detected capacity change from 0 to 229808 Jan 20 00:32:30.945558 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 20 00:32:30.945588 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 20 00:32:30.956726 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:32:30.991841 kernel: loop2: detected capacity change from 0 to 140768 Jan 20 00:32:31.113880 kernel: loop3: detected capacity change from 0 to 142488 Jan 20 00:32:31.148892 kernel: loop4: detected capacity change from 0 to 229808 Jan 20 00:32:31.166835 kernel: loop5: detected capacity change from 0 to 140768 Jan 20 00:32:31.226729 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 00:32:31.227845 (sd-merge)[1202]: Merged extensions into '/usr'. Jan 20 00:32:31.237098 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 00:32:31.237136 systemd[1]: Reloading... Jan 20 00:32:31.436877 zram_generator::config[1228]: No configuration found. Jan 20 00:32:31.624324 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:32:31.629047 ldconfig[1173]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 00:32:31.670451 systemd[1]: Reloading finished in 432 ms. Jan 20 00:32:31.834646 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 00:32:31.838680 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 00:32:31.877195 systemd[1]: Starting ensure-sysext.service... Jan 20 00:32:31.914212 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:32:31.958676 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Jan 20 00:32:31.958727 systemd[1]: Reloading... Jan 20 00:32:32.021058 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 00:32:32.021611 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 00:32:32.023210 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 00:32:32.023648 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jan 20 00:32:32.023837 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jan 20 00:32:32.028508 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 20 00:32:32.028578 systemd-tmpfiles[1266]: Skipping /boot Jan 20 00:32:32.047819 zram_generator::config[1299]: No configuration found. Jan 20 00:32:32.057021 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:32:32.057063 systemd-tmpfiles[1266]: Skipping /boot Jan 20 00:32:32.302070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:32:32.348615 systemd[1]: Reloading finished in 389 ms. Jan 20 00:32:32.400626 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 00:32:32.427673 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:32:32.477984 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:32:32.487030 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 00:32:32.492056 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 00:32:32.507076 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:32:32.520477 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:32:32.526571 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 00:32:32.545731 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 00:32:32.550332 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:32.550509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:32:32.552390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:32:32.561073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:32:32.568386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:32:32.572083 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:32:32.572223 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:32.573467 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 00:32:32.580469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:32:32.580711 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:32:32.587244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:32:32.587444 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:32:32.592442 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:32:32.592651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:32:32.594937 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Jan 20 00:32:32.603401 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 00:32:32.614939 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 20 00:32:32.619594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:32.620277 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:32:32.628741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:32:32.637022 augenrules[1367]: No rules Jan 20 00:32:32.638728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:32:32.647581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:32:32.651452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:32:32.656656 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 00:32:32.661609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:32.663566 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:32:32.679367 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:32:32.684675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:32:32.684996 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:32:32.690955 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 00:32:32.701299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:32:32.701503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:32:32.708960 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:32:32.709376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:32:32.738836 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 00:32:32.749062 systemd[1]: Finished ensure-sysext.service. Jan 20 00:32:32.758351 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:32.758654 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:32:32.767106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:32:32.772467 systemd-resolved[1336]: Positive Trust Anchors: Jan 20 00:32:32.772484 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:32:32.772511 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:32:32.776862 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:32:32.777069 systemd-resolved[1336]: Defaulting to hostname 'linux'. 
Jan 20 00:32:32.785060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:32:32.793828 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1396) Jan 20 00:32:32.794030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:32:32.799742 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:32:32.810707 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:32:32.885920 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 00:32:32.890882 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:32:32.890939 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:32.891627 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:32:32.897117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:32:32.897381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:32:32.903278 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:32:32.903611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:32:32.910498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:32:32.911119 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:32:32.916951 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:32:32.917231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:32:32.947426 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 00:32:32.947659 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:32:32.956391 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:32:32.956499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:32:33.054945 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 20 00:32:33.158872 kernel: ACPI: button: Power Button [PWRF] Jan 20 00:32:33.182880 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:32:33.195864 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 20 00:32:33.196230 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 00:32:33.198838 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 20 00:32:33.198979 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 20 00:32:33.199874 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 00:32:33.207101 systemd-networkd[1406]: lo: Link UP Jan 20 00:32:33.207126 systemd-networkd[1406]: lo: Gained carrier Jan 20 00:32:33.209846 systemd-networkd[1406]: Enumeration completed Jan 20 00:32:33.210698 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:32:33.210707 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:32:33.212052 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:32:33.212742 systemd-networkd[1406]: eth0: Link UP Jan 20 00:32:33.212747 systemd-networkd[1406]: eth0: Gained carrier Jan 20 00:32:33.212761 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:32:33.217898 systemd[1]: Reached target network.target - Network. Jan 20 00:32:33.231268 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 00:32:33.234025 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:32:33.251124 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 00:32:34.174895 systemd-resolved[1336]: Clock change detected. Flushing caches. Jan 20 00:32:34.174926 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 00:32:34.174969 systemd-timesyncd[1408]: Initial clock synchronization to Tue 2026-01-20 00:32:34.174814 UTC. Jan 20 00:32:34.183969 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 00:32:34.291271 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 00:32:34.390355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:32:34.430145 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:32:34.430921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:32:34.434768 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 00:32:34.472501 kernel: kvm_amd: TSC scaling supported Jan 20 00:32:34.472837 kernel: kvm_amd: Nested Virtualization enabled Jan 20 00:32:34.472880 kernel: kvm_amd: Nested Paging enabled Jan 20 00:32:34.476284 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 00:32:34.476379 kernel: kvm_amd: PMU virtualization is disabled Jan 20 00:32:34.591727 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 00:32:34.599039 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:32:34.621025 kernel: EDAC MC: Ver: 3.0.0 Jan 20 00:32:34.664896 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 20 00:32:34.674905 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 20 00:32:34.686705 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:32:34.689848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:32:34.734529 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 00:32:34.738826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 20 00:32:34.742133 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:32:34.745222 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 00:32:34.748469 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 00:32:34.752141 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 00:32:34.762997 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 00:32:34.767134 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 00:32:34.771044 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 00:32:34.771098 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:32:34.774207 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:32:34.779287 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 00:32:34.790762 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 00:32:34.817827 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 00:32:34.835879 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 20 00:32:34.839482 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 00:32:34.842224 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:32:34.844697 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:32:34.847010 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:32:34.847056 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:32:34.848458 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 00:32:34.852906 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 00:32:34.860296 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:32:34.860818 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 00:32:34.874923 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 00:32:34.877730 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 00:32:34.881402 jq[1447]: false Jan 20 00:32:34.881914 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 00:32:34.886820 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 00:32:34.892840 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 20 00:32:34.894746 dbus-daemon[1446]: [system] SELinux support is enabled Jan 20 00:32:34.897073 extend-filesystems[1448]: Found loop3 Jan 20 00:32:34.897073 extend-filesystems[1448]: Found loop4 Jan 20 00:32:34.897073 extend-filesystems[1448]: Found loop5 Jan 20 00:32:34.897073 extend-filesystems[1448]: Found sr0 Jan 20 00:32:34.907328 extend-filesystems[1448]: Found vda Jan 20 00:32:34.907328 extend-filesystems[1448]: Found vda1 Jan 20 00:32:34.907328 extend-filesystems[1448]: Found vda2 Jan 20 00:32:34.907328 extend-filesystems[1448]: Found vda3 Jan 20 00:32:34.907328 extend-filesystems[1448]: Found usr Jan 20 00:32:34.907328 extend-filesystems[1448]: Found vda4 Jan 20 00:32:34.907328 extend-filesystems[1448]: Found vda6 Jan 20 00:32:34.907328 extend-filesystems[1448]: Found vda7 Jan 20 00:32:34.907328 extend-filesystems[1448]: Found vda9 Jan 20 00:32:34.907328 extend-filesystems[1448]: Checking size of /dev/vda9 Jan 20 00:32:34.943290 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1381) Jan 20 00:32:34.943339 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 00:32:34.944585 extend-filesystems[1448]: Resized partition /dev/vda9 Jan 20 00:32:34.911298 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 00:32:34.950028 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Jan 20 00:32:34.953000 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:32:34.967129 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 00:32:34.967942 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:32:34.978882 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 00:32:34.983556 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 00:32:34.989718 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:32:34.992073 jq[1470]: true Jan 20 00:32:34.995960 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 00:32:34.999713 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 00:32:35.006805 update_engine[1469]: I20260120 00:32:35.005539 1469 main.cc:92] Flatcar Update Engine starting Jan 20 00:32:35.011601 update_engine[1469]: I20260120 00:32:35.006938 1469 update_check_scheduler.cc:74] Next update check in 3m44s Jan 20 00:32:35.011109 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 00:32:35.011352 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:32:35.012021 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:32:35.012296 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 00:32:35.017149 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:32:35.017388 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:32:35.031328 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 00:32:35.031328 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 00:32:35.031328 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jan 20 00:32:35.029611 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 00:32:35.043160 extend-filesystems[1448]: Resized filesystem in /dev/vda9 Jan 20 00:32:35.047963 jq[1473]: true Jan 20 00:32:35.029947 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 00:32:35.039335 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 00:32:35.056638 tar[1472]: linux-amd64/LICENSE Jan 20 00:32:35.056638 tar[1472]: linux-amd64/helm Jan 20 00:32:35.060043 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:32:35.063244 systemd-logind[1468]: Watching system buttons on /dev/input/event1 (Power Button) Jan 20 00:32:35.063271 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 00:32:35.064841 systemd-logind[1468]: New seat seat0. Jan 20 00:32:35.069567 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 00:32:35.069614 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:32:35.076269 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:32:35.076320 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:32:35.090911 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 00:32:35.094210 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 00:32:35.106757 bash[1502]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:32:35.108757 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:32:35.114195 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:32:35.136975 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:32:35.247819 containerd[1475]: time="2026-01-20T00:32:35.247138526Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 00:32:35.271286 containerd[1475]: time="2026-01-20T00:32:35.271249036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:35.273555 containerd[1475]: time="2026-01-20T00:32:35.273479250Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:35.273555 containerd[1475]: time="2026-01-20T00:32:35.273527200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 00:32:35.273555 containerd[1475]: time="2026-01-20T00:32:35.273546125Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 00:32:35.273863 containerd[1475]: time="2026-01-20T00:32:35.273809867Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 20 00:32:35.273893 containerd[1475]: time="2026-01-20T00:32:35.273859911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:35.273985 containerd[1475]: time="2026-01-20T00:32:35.273936714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:35.273985 containerd[1475]: time="2026-01-20T00:32:35.273969205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:35.274201 containerd[1475]: time="2026-01-20T00:32:35.274151495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:35.274201 containerd[1475]: time="2026-01-20T00:32:35.274186781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:35.274201 containerd[1475]: time="2026-01-20T00:32:35.274199875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:35.274255 containerd[1475]: time="2026-01-20T00:32:35.274208892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:35.274326 containerd[1475]: time="2026-01-20T00:32:35.274299411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:35.274646 containerd[1475]: time="2026-01-20T00:32:35.274597538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:35.274857 containerd[1475]: time="2026-01-20T00:32:35.274798533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:35.274857 containerd[1475]: time="2026-01-20T00:32:35.274830252Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 00:32:35.274953 containerd[1475]: time="2026-01-20T00:32:35.274923767Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:32:35.275017 containerd[1475]: time="2026-01-20T00:32:35.274991704Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:32:35.280871 containerd[1475]: time="2026-01-20T00:32:35.280839401Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:32:35.280909 containerd[1475]: time="2026-01-20T00:32:35.280884225Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 00:32:35.280909 containerd[1475]: time="2026-01-20T00:32:35.280898271Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 00:32:35.280943 containerd[1475]: time="2026-01-20T00:32:35.280924160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 20 00:32:35.280943 containerd[1475]: time="2026-01-20T00:32:35.280936122Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 00:32:35.281098 containerd[1475]: time="2026-01-20T00:32:35.281068108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 00:32:35.281298 containerd[1475]: time="2026-01-20T00:32:35.281272029Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 00:32:35.281468 containerd[1475]: time="2026-01-20T00:32:35.281394658Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 00:32:35.281493 containerd[1475]: time="2026-01-20T00:32:35.281464569Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 00:32:35.281493 containerd[1475]: time="2026-01-20T00:32:35.281478324Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 00:32:35.281493 containerd[1475]: time="2026-01-20T00:32:35.281489765Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 00:32:35.281546 containerd[1475]: time="2026-01-20T00:32:35.281502459Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 00:32:35.281546 containerd[1475]: time="2026-01-20T00:32:35.281512538Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 00:32:35.281546 containerd[1475]: time="2026-01-20T00:32:35.281523137Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 00:32:35.281546 containerd[1475]: time="2026-01-20T00:32:35.281536673Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 00:32:35.281603 containerd[1475]: time="2026-01-20T00:32:35.281547493Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 00:32:35.281603 containerd[1475]: time="2026-01-20T00:32:35.281558183Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 00:32:35.281603 containerd[1475]: time="2026-01-20T00:32:35.281568182Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 00:32:35.281603 containerd[1475]: time="2026-01-20T00:32:35.281584642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281708 containerd[1475]: time="2026-01-20T00:32:35.281606834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281708 containerd[1475]: time="2026-01-20T00:32:35.281617674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281708 containerd[1475]: time="2026-01-20T00:32:35.281628484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281708 containerd[1475]: time="2026-01-20T00:32:35.281639435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 20 00:32:35.281708 containerd[1475]: time="2026-01-20T00:32:35.281650465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281802 containerd[1475]: time="2026-01-20T00:32:35.281737247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281802 containerd[1475]: time="2026-01-20T00:32:35.281760631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281802 containerd[1475]: time="2026-01-20T00:32:35.281772243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281802 containerd[1475]: time="2026-01-20T00:32:35.281785868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281802 containerd[1475]: time="2026-01-20T00:32:35.281796327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281877 containerd[1475]: time="2026-01-20T00:32:35.281806807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281877 containerd[1475]: time="2026-01-20T00:32:35.281817617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281877 containerd[1475]: time="2026-01-20T00:32:35.281834349Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 00:32:35.281877 containerd[1475]: time="2026-01-20T00:32:35.281851070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281877 containerd[1475]: time="2026-01-20T00:32:35.281861399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.281877 containerd[1475]: time="2026-01-20T00:32:35.281875025Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 00:32:35.282000 containerd[1475]: time="2026-01-20T00:32:35.281941739Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 00:32:35.282000 containerd[1475]: time="2026-01-20T00:32:35.281960664Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 00:32:35.282091 containerd[1475]: time="2026-01-20T00:32:35.282050022Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 00:32:35.282091 containerd[1475]: time="2026-01-20T00:32:35.282079256Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 00:32:35.282091 containerd[1475]: time="2026-01-20T00:32:35.282088994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.282144 containerd[1475]: time="2026-01-20T00:32:35.282100055Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 00:32:35.282144 containerd[1475]: time="2026-01-20T00:32:35.282114382Z" level=info msg="NRI interface is disabled by configuration." 
Jan 20 00:32:35.282144 containerd[1475]: time="2026-01-20T00:32:35.282124099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 20 00:32:35.282451 containerd[1475]: time="2026-01-20T00:32:35.282351574Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 00:32:35.282451 containerd[1475]: time="2026-01-20T00:32:35.282413129Z" level=info msg="Connect containerd service" Jan 20 00:32:35.282611 containerd[1475]: time="2026-01-20T00:32:35.282487087Z" level=info msg="using legacy CRI server" Jan 20 00:32:35.282611 containerd[1475]: time="2026-01-20T00:32:35.282501213Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:32:35.282650 containerd[1475]: time="2026-01-20T00:32:35.282617260Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 00:32:35.283353 containerd[1475]: time="2026-01-20T00:32:35.283283334Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:32:35.283564 containerd[1475]: time="2026-01-20T00:32:35.283492344Z" level=info msg="Start subscribing containerd event" Jan 20 00:32:35.283594 containerd[1475]: time="2026-01-20T00:32:35.283575479Z" level=info msg="Start recovering state" Jan 20 00:32:35.284904 containerd[1475]: time="2026-01-20T00:32:35.284844819Z" level=info msg="Start event monitor" Jan 20 00:32:35.284904 containerd[1475]: time="2026-01-20T00:32:35.284862973Z" level=info msg="Start snapshots syncer" Jan 20 00:32:35.284904 containerd[1475]: time="2026-01-20T00:32:35.284876718Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:32:35.284904 containerd[1475]: time="2026-01-20T00:32:35.284885204Z" level=info msg="Start streaming server" Jan 20 00:32:35.285588 containerd[1475]: time="2026-01-20T00:32:35.285556458Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 00:32:35.285715 containerd[1475]: time="2026-01-20T00:32:35.285627249Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:32:35.286600 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:32:35.289473 containerd[1475]: time="2026-01-20T00:32:35.289374926Z" level=info msg="containerd successfully booted in 0.043288s" Jan 20 00:32:35.302020 systemd-networkd[1406]: eth0: Gained IPv6LL Jan 20 00:32:35.306203 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 00:32:35.309991 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:32:35.319951 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:32:35.323956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:32:35.330617 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 00:32:35.354585 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:32:35.356774 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:32:35.360178 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:32:35.364021 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 00:32:35.510728 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:32:35.523488 tar[1472]: linux-amd64/README.md Jan 20 00:32:35.535479 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:32:35.539637 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 00:32:35.545205 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 00:32:35.557936 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 00:32:35.558213 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 00:32:35.573186 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 00:32:35.586718 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:32:35.609173 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:32:35.613287 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 00:32:35.617521 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 00:32:36.175267 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 20 00:32:36.180064 systemd[1]: Started sshd@0-10.0.0.11:22-10.0.0.1:35070.service - OpenSSH per-connection server daemon (10.0.0.1:35070). Jan 20 00:32:36.262568 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 35070 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:36.264515 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:36.273777 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:32:36.286932 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:32:36.292779 systemd-logind[1468]: New session 1 of user core. Jan 20 00:32:36.319002 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:32:36.336148 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 00:32:36.343089 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:32:36.374709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:32:36.379421 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 00:32:36.390040 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:32:36.584772 systemd[1559]: Queued start job for default target default.target. Jan 20 00:32:36.594726 systemd[1559]: Created slice app.slice - User Application Slice. Jan 20 00:32:36.594767 systemd[1559]: Reached target paths.target - Paths. Jan 20 00:32:36.594781 systemd[1559]: Reached target timers.target - Timers. Jan 20 00:32:36.596486 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:32:36.615104 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:32:36.615252 systemd[1559]: Reached target sockets.target - Sockets. Jan 20 00:32:36.615276 systemd[1559]: Reached target basic.target - Basic System. Jan 20 00:32:36.615330 systemd[1559]: Reached target default.target - Main User Target. Jan 20 00:32:36.615367 systemd[1559]: Startup finished in 258ms. Jan 20 00:32:36.616421 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:32:36.627836 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:32:36.631326 systemd[1]: Startup finished in 2.138s (kernel) + 8.334s (initrd) + 6.658s (userspace) = 17.130s. Jan 20 00:32:36.705922 systemd[1]: Started sshd@1-10.0.0.11:22-10.0.0.1:35080.service - OpenSSH per-connection server daemon (10.0.0.1:35080). Jan 20 00:32:36.743244 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 35080 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:36.746048 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:36.750806 systemd-logind[1468]: New session 2 of user core. Jan 20 00:32:36.761834 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:32:36.820879 sshd[1585]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:36.834236 systemd[1]: sshd@1-10.0.0.11:22-10.0.0.1:35080.service: Deactivated successfully. Jan 20 00:32:36.835837 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 00:32:36.837151 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. 
Jan 20 00:32:36.838405 systemd[1]: Started sshd@2-10.0.0.11:22-10.0.0.1:35092.service - OpenSSH per-connection server daemon (10.0.0.1:35092). Jan 20 00:32:36.839354 systemd-logind[1468]: Removed session 2. Jan 20 00:32:36.879639 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 35092 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:36.882102 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:36.887022 systemd-logind[1468]: New session 3 of user core. Jan 20 00:32:36.895845 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:32:36.976399 sshd[1592]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:36.987625 systemd[1]: sshd@2-10.0.0.11:22-10.0.0.1:35092.service: Deactivated successfully. Jan 20 00:32:36.989317 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 00:32:36.990897 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Jan 20 00:32:36.997969 systemd[1]: Started sshd@3-10.0.0.11:22-10.0.0.1:35098.service - OpenSSH per-connection server daemon (10.0.0.1:35098). Jan 20 00:32:36.998799 systemd-logind[1468]: Removed session 3. Jan 20 00:32:37.032808 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 35098 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:37.035379 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:37.040360 systemd-logind[1468]: New session 4 of user core. Jan 20 00:32:37.050901 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:32:37.112764 sshd[1599]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:37.124231 systemd[1]: sshd@3-10.0.0.11:22-10.0.0.1:35098.service: Deactivated successfully. Jan 20 00:32:37.125951 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:32:37.127333 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:32:37.135530 systemd[1]: Started sshd@4-10.0.0.11:22-10.0.0.1:35108.service - OpenSSH per-connection server daemon (10.0.0.1:35108). Jan 20 00:32:37.137145 systemd-logind[1468]: Removed session 4. Jan 20 00:32:37.169274 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 35108 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:37.172266 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:37.176868 systemd-logind[1468]: New session 5 of user core. Jan 20 00:32:37.183815 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:32:37.212979 kubelet[1566]: E0120 00:32:37.212895 1566 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:32:37.216414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:32:37.216711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:32:37.217053 systemd[1]: kubelet.service: Consumed 1.640s CPU time. 
Jan 20 00:32:37.247043 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 00:32:37.247392 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:32:37.269747 sudo[1612]: pam_unix(sudo:session): session closed for user root Jan 20 00:32:37.272000 sshd[1607]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:37.279245 systemd[1]: sshd@4-10.0.0.11:22-10.0.0.1:35108.service: Deactivated successfully. Jan 20 00:32:37.280917 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 00:32:37.282419 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:32:37.286981 systemd[1]: Started sshd@5-10.0.0.11:22-10.0.0.1:35112.service - OpenSSH per-connection server daemon (10.0.0.1:35112). Jan 20 00:32:37.287960 systemd-logind[1468]: Removed session 5. Jan 20 00:32:37.320456 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 35112 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:37.321996 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:37.326975 systemd-logind[1468]: New session 6 of user core. Jan 20 00:32:37.336806 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 00:32:37.428904 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 00:32:37.429312 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:32:37.433952 sudo[1621]: pam_unix(sudo:session): session closed for user root Jan 20 00:32:37.589786 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 00:32:37.590199 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:32:37.638965 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 20 00:32:37.641846 auditctl[1624]: No rules Jan 20 00:32:37.642324 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 00:32:37.642617 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 00:32:37.645506 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:32:37.679581 augenrules[1642]: No rules Jan 20 00:32:37.681032 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:32:37.682020 sudo[1620]: pam_unix(sudo:session): session closed for user root Jan 20 00:32:37.684016 sshd[1617]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:37.688314 systemd[1]: Started sshd@6-10.0.0.11:22-10.0.0.1:35126.service - OpenSSH per-connection server daemon (10.0.0.1:35126). Jan 20 00:32:37.700119 systemd[1]: sshd@5-10.0.0.11:22-10.0.0.1:35112.service: Deactivated successfully. Jan 20 00:32:37.701635 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:32:37.703144 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Jan 20 00:32:37.704408 systemd-logind[1468]: Removed session 6. Jan 20 00:32:37.723763 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 35126 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:37.725172 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:37.729516 systemd-logind[1468]: New session 7 of user core. Jan 20 00:32:37.736829 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 20 00:32:37.790339 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:32:37.790788 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:32:38.292147 kernel: hrtimer: interrupt took 3244186 ns Jan 20 00:32:38.684941 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 00:32:38.685101 (dockerd)[1672]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 00:32:39.501253 dockerd[1672]: time="2026-01-20T00:32:39.501079472Z" level=info msg="Starting up" Jan 20 00:32:39.860037 dockerd[1672]: time="2026-01-20T00:32:39.859953408Z" level=info msg="Loading containers: start." Jan 20 00:32:40.012733 kernel: Initializing XFRM netlink socket Jan 20 00:32:40.130307 systemd-networkd[1406]: docker0: Link UP Jan 20 00:32:40.154544 dockerd[1672]: time="2026-01-20T00:32:40.154437262Z" level=info msg="Loading containers: done." Jan 20 00:32:40.176154 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1647328649-merged.mount: Deactivated successfully. Jan 20 00:32:40.176950 dockerd[1672]: time="2026-01-20T00:32:40.176866469Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 00:32:40.177021 dockerd[1672]: time="2026-01-20T00:32:40.177006880Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 00:32:40.177215 dockerd[1672]: time="2026-01-20T00:32:40.177172600Z" level=info msg="Daemon has completed initialization" Jan 20 00:32:40.230787 dockerd[1672]: time="2026-01-20T00:32:40.230481406Z" level=info msg="API listen on /run/docker.sock" Jan 20 00:32:40.230785 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 00:32:41.703729 containerd[1475]: time="2026-01-20T00:32:41.703504282Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 00:32:42.462624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219299236.mount: Deactivated successfully. 
Jan 20 00:32:44.241336 containerd[1475]: time="2026-01-20T00:32:44.241244438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:44.242547 containerd[1475]: time="2026-01-20T00:32:44.242415054Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 20 00:32:44.243704 containerd[1475]: time="2026-01-20T00:32:44.243623900Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:44.246929 containerd[1475]: time="2026-01-20T00:32:44.246873837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:44.248493 containerd[1475]: time="2026-01-20T00:32:44.248432934Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.544880261s" Jan 20 00:32:44.248548 containerd[1475]: time="2026-01-20T00:32:44.248505198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 20 00:32:44.251988 containerd[1475]: time="2026-01-20T00:32:44.251818635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 00:32:46.073003 containerd[1475]: time="2026-01-20T00:32:46.072871894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:46.073901 containerd[1475]: time="2026-01-20T00:32:46.073795409Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 20 00:32:46.075027 containerd[1475]: time="2026-01-20T00:32:46.074974700Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:46.078306 containerd[1475]: time="2026-01-20T00:32:46.078267559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:46.079785 containerd[1475]: time="2026-01-20T00:32:46.079745910Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.827898802s" Jan 20 00:32:46.079785 containerd[1475]: time="2026-01-20T00:32:46.079783119Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 20 00:32:46.080793 
containerd[1475]: time="2026-01-20T00:32:46.080759998Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 00:32:47.320517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 00:32:47.333940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:32:47.511218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:32:47.517271 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:32:47.600165 kubelet[1894]: E0120 00:32:47.599947 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:32:47.606380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:32:47.606780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:32:47.743352 containerd[1475]: time="2026-01-20T00:32:47.743243312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:47.744577 containerd[1475]: time="2026-01-20T00:32:47.744459936Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 20 00:32:47.746563 containerd[1475]: time="2026-01-20T00:32:47.746405015Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:47.750628 containerd[1475]: time="2026-01-20T00:32:47.750569455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:47.753755 containerd[1475]: time="2026-01-20T00:32:47.753639872Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.672830672s" Jan 20 00:32:47.753824 containerd[1475]: time="2026-01-20T00:32:47.753755117Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 20 00:32:47.754438 containerd[1475]: time="2026-01-20T00:32:47.754401664Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 00:32:48.764107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713961936.mount: Deactivated successfully. 
Jan 20 00:32:49.195872 containerd[1475]: time="2026-01-20T00:32:49.195761014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:49.197281 containerd[1475]: time="2026-01-20T00:32:49.197193278Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 20 00:32:49.198757 containerd[1475]: time="2026-01-20T00:32:49.198655170Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:49.202277 containerd[1475]: time="2026-01-20T00:32:49.202215577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:49.203078 containerd[1475]: time="2026-01-20T00:32:49.203010654Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.448564046s" Jan 20 00:32:49.203078 containerd[1475]: time="2026-01-20T00:32:49.203063522Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 20 00:32:49.204320 containerd[1475]: time="2026-01-20T00:32:49.204073005Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 00:32:50.376878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844246974.mount: Deactivated successfully. 
Jan 20 00:32:52.446658 containerd[1475]: time="2026-01-20T00:32:52.446447577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:52.448014 containerd[1475]: time="2026-01-20T00:32:52.447950873Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 20 00:32:52.449464 containerd[1475]: time="2026-01-20T00:32:52.449382265Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:52.458456 containerd[1475]: time="2026-01-20T00:32:52.457103574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:52.461618 containerd[1475]: time="2026-01-20T00:32:52.461437605Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.257318554s" Jan 20 00:32:52.461618 containerd[1475]: time="2026-01-20T00:32:52.461582044Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 20 00:32:52.468707 containerd[1475]: time="2026-01-20T00:32:52.468643375Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 00:32:53.063947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609387637.mount: Deactivated successfully. 
Jan 20 00:32:53.076955 containerd[1475]: time="2026-01-20T00:32:53.076800121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:53.078189 containerd[1475]: time="2026-01-20T00:32:53.078136439Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 00:32:53.081087 containerd[1475]: time="2026-01-20T00:32:53.080935194Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:53.085588 containerd[1475]: time="2026-01-20T00:32:53.085403782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:53.088109 containerd[1475]: time="2026-01-20T00:32:53.088059582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 619.273751ms" Jan 20 00:32:53.088319 containerd[1475]: time="2026-01-20T00:32:53.088111850Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 00:32:53.091221 containerd[1475]: time="2026-01-20T00:32:53.091107515Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 00:32:53.647369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905967353.mount: Deactivated successfully. Jan 20 00:32:57.823025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 00:32:57.834032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:32:58.288076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 00:32:58.321434 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:32:58.423379 containerd[1475]: time="2026-01-20T00:32:58.417361307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:58.423379 containerd[1475]: time="2026-01-20T00:32:58.422722521Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 20 00:32:58.436421 containerd[1475]: time="2026-01-20T00:32:58.430971416Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:58.613088 containerd[1475]: time="2026-01-20T00:32:58.440089128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:58.641265 containerd[1475]: time="2026-01-20T00:32:58.640186541Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.548959563s" Jan 20 00:32:58.647915 containerd[1475]: time="2026-01-20T00:32:58.647077716Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 20 00:32:58.715376 kubelet[2031]: E0120 00:32:58.715279 2031 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:32:58.719300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:32:58.719738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:33:03.273929 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:03.295177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:03.371199 systemd[1]: Reloading requested from client PID 2069 ('systemctl') (unit session-7.scope)... Jan 20 00:33:03.371247 systemd[1]: Reloading... Jan 20 00:33:03.825401 zram_generator::config[2108]: No configuration found. Jan 20 00:33:04.063396 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:33:04.297612 systemd[1]: Reloading finished in 925 ms. Jan 20 00:33:04.471324 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 00:33:04.471525 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 00:33:04.472085 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:04.475592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:04.803496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 00:33:04.830290 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:33:04.962210 kubelet[2156]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:33:04.962210 kubelet[2156]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:33:04.962210 kubelet[2156]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:33:04.962210 kubelet[2156]: I0120 00:33:04.962193 2156 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:33:05.377625 kubelet[2156]: I0120 00:33:05.377506 2156 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 00:33:05.377625 kubelet[2156]: I0120 00:33:05.377598 2156 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:33:05.378206 kubelet[2156]: I0120 00:33:05.378117 2156 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:33:05.412848 kubelet[2156]: E0120 00:33:05.412624 2156 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 00:33:05.414164 kubelet[2156]: I0120 00:33:05.414104 2156 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:33:05.428399 kubelet[2156]: E0120 00:33:05.428225 2156 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:33:05.428610 kubelet[2156]: I0120 00:33:05.428580 2156 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:33:05.646653 kubelet[2156]: I0120 00:33:05.646137 2156 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:33:05.647048 kubelet[2156]: I0120 00:33:05.646983 2156 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:33:05.647344 kubelet[2156]: I0120 00:33:05.647033 2156 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:33:05.647516 kubelet[2156]: I0120 00:33:05.647468 2156 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:33:05.647516 kubelet[2156]: I0120 00:33:05.647499 2156 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 00:33:05.649012 kubelet[2156]: I0120 00:33:05.648937 2156 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:33:05.651612 kubelet[2156]: I0120 00:33:05.651485 2156 kubelet.go:480] "Attempting to sync node with API server" Jan 20 00:33:05.651612 kubelet[2156]: I0120 00:33:05.651527 2156 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:33:05.651791 kubelet[2156]: I0120 00:33:05.651722 2156 kubelet.go:386] "Adding apiserver pod source" Jan 20 00:33:05.665543 kubelet[2156]: I0120 00:33:05.665509 2156 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:33:05.675757 kubelet[2156]: I0120 00:33:05.675346 2156 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:33:05.677746 kubelet[2156]: E0120 00:33:05.677518 2156 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:33:05.679088 kubelet[2156]: I0120 00:33:05.678369 2156 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 
00:33:05.681072 kubelet[2156]: W0120 00:33:05.680416 2156 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 00:33:05.682798 kubelet[2156]: E0120 00:33:05.681623 2156 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:33:05.692336 kubelet[2156]: I0120 00:33:05.691386 2156 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:33:05.692336 kubelet[2156]: I0120 00:33:05.691603 2156 server.go:1289] "Started kubelet" Jan 20 00:33:05.692336 kubelet[2156]: I0120 00:33:05.691898 2156 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:33:05.692831 kubelet[2156]: I0120 00:33:05.692617 2156 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:33:05.693620 kubelet[2156]: I0120 00:33:05.693442 2156 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:33:05.696229 kubelet[2156]: I0120 00:33:05.694069 2156 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:33:05.696229 kubelet[2156]: I0120 00:33:05.695324 2156 server.go:317] "Adding debug handlers to kubelet server" Jan 20 00:33:05.697747 kubelet[2156]: I0120 00:33:05.696985 2156 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:33:05.698720 kubelet[2156]: E0120 00:33:05.696956 2156 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.11:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c49270b45d338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:33:05.691439928 +0000 UTC m=+0.852169072,LastTimestamp:2026-01-20 00:33:05.691439928 +0000 UTC m=+0.852169072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:33:05.699243 kubelet[2156]: E0120 00:33:05.699163 2156 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:33:05.699489 kubelet[2156]: I0120 00:33:05.699421 2156 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:33:05.700619 kubelet[2156]: I0120 00:33:05.700087 2156 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:33:05.700619 kubelet[2156]: I0120 00:33:05.700300 2156 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:33:05.700619 kubelet[2156]: E0120 00:33:05.700500 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="200ms" Jan 20 00:33:05.702493 kubelet[2156]: I0120 
00:33:05.702155 2156 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:33:05.702834 kubelet[2156]: E0120 00:33:05.702733 2156 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:33:05.703367 kubelet[2156]: E0120 00:33:05.703255 2156 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:33:05.704244 kubelet[2156]: I0120 00:33:05.704052 2156 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:33:05.704244 kubelet[2156]: I0120 00:33:05.704228 2156 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:33:05.785629 kubelet[2156]: I0120 00:33:05.785542 2156 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:33:05.785629 kubelet[2156]: I0120 00:33:05.785615 2156 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:33:05.785629 kubelet[2156]: I0120 00:33:05.785641 2156 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:33:05.790429 kubelet[2156]: I0120 00:33:05.790344 2156 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 00:33:05.799743 kubelet[2156]: E0120 00:33:05.799654 2156 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:33:05.801207 kubelet[2156]: I0120 00:33:05.801109 2156 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 00:33:05.801207 kubelet[2156]: I0120 00:33:05.801209 2156 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 00:33:05.801310 kubelet[2156]: I0120 00:33:05.801268 2156 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 00:33:05.801310 kubelet[2156]: I0120 00:33:05.801281 2156 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 00:33:05.801635 kubelet[2156]: E0120 00:33:05.801442 2156 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:33:05.802776 kubelet[2156]: E0120 00:33:05.802302 2156 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:33:05.816760 kubelet[2156]: I0120 00:33:05.816611 2156 policy_none.go:49] "None policy: Start" Jan 20 00:33:05.816760 kubelet[2156]: I0120 00:33:05.816757 2156 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:33:05.817009 kubelet[2156]: I0120 00:33:05.816804 2156 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:33:05.829361 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 00:33:05.850040 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
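A note on the pair of factory.go lines just above: the crio factory registration fails while containerd's succeeds because cAdvisor probes each runtime's control socket, and /var/run/crio/crio.sock simply does not exist on this host. A stdlib-only sketch of that probe (not the actual cAdvisor code; socket paths taken from the log):

```go
// Probe runtime control sockets the way the log's factory registration does:
// a unix dial against a missing socket fails with "no such file or directory".
package main

import (
	"fmt"
	"net"
	"time"
)

func probeRuntime(socket string) error {
	conn, err := net.DialTimeout("unix", socket, 2*time.Second)
	if err != nil {
		return err // e.g. connect: no such file or directory
	}
	defer conn.Close()
	return nil
}

func main() {
	for _, s := range []string{"/var/run/crio/crio.sock", "/run/containerd/containerd.sock"} {
		if err := probeRuntime(s); err != nil {
			fmt.Printf("registration via %s failed: %v\n", s, err)
			continue
		}
		fmt.Printf("registration via %s succeeded\n", s)
	}
}
```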
Jan 20 00:33:05.857085 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 00:33:05.880626 kubelet[2156]: E0120 00:33:05.880405 2156 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:33:05.881120 kubelet[2156]: I0120 00:33:05.881087 2156 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:33:05.881630 kubelet[2156]: I0120 00:33:05.881137 2156 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:33:05.881713 kubelet[2156]: I0120 00:33:05.881628 2156 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:33:05.884114 kubelet[2156]: E0120 00:33:05.884005 2156 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 00:33:05.884174 kubelet[2156]: E0120 00:33:05.884152 2156 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 00:33:05.901705 kubelet[2156]: E0120 00:33:05.901449 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="400ms" Jan 20 00:33:05.924227 systemd[1]: Created slice kubepods-burstable-pod0a90743ced4d9ab1e6ff838b04e0b2aa.slice - libcontainer container kubepods-burstable-pod0a90743ced4d9ab1e6ff838b04e0b2aa.slice. Jan 20 00:33:05.948808 kubelet[2156]: E0120 00:33:05.948480 2156 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:05.978825 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 20 00:33:05.982160 kubelet[2156]: E0120 00:33:05.982104 2156 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:05.984385 kubelet[2156]: I0120 00:33:05.984343 2156 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:05.984536 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
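The kubepods-burstable-pod...slice units being created above encode the pod UID in the slice name. Because systemd reserves "-" as a hierarchy separator inside unit names, the kubelet's systemd cgroup driver replaces dashes in the UID with underscores (static-pod UIDs like 0a90743c... are config hashes and have none). A simplified sketch of that naming, under the assumption it mirrors the kubelet's cgroup-manager behavior:

```go
// Build the transient pod slice name used by the systemd cgroup driver.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, podUID string) string {
	// systemd forbids "-" inside a unit-name component, so escape the UID.
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// Static pod UID from the log (a config hash, no dashes to escape):
	fmt.Println(podSliceName("burstable", "0a90743ced4d9ab1e6ff838b04e0b2aa"))
	// API pod UID from later in the log; dashes become underscores:
	fmt.Println(podSliceName("besteffort", "a3b30ac3-9722-4a26-9298-151ce6b5935f"))
}
```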
Jan 20 00:33:05.985020 kubelet[2156]: E0120 00:33:05.984975 2156 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Jan 20 00:33:05.993874 kubelet[2156]: E0120 00:33:05.993404 2156 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:06.102182 kubelet[2156]: I0120 00:33:06.102027 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:06.102182 kubelet[2156]: I0120 00:33:06.102102 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:06.102182 kubelet[2156]: I0120 00:33:06.102177 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:06.102182 kubelet[2156]: I0120 00:33:06.102220 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a90743ced4d9ab1e6ff838b04e0b2aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a90743ced4d9ab1e6ff838b04e0b2aa\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:06.102182 kubelet[2156]: I0120 00:33:06.102258 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:06.102729 kubelet[2156]: I0120 00:33:06.102282 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:06.102729 kubelet[2156]: I0120 00:33:06.102304 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:06.102729 kubelet[2156]: I0120 00:33:06.102336 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a90743ced4d9ab1e6ff838b04e0b2aa-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"0a90743ced4d9ab1e6ff838b04e0b2aa\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:06.102729 kubelet[2156]: I0120 00:33:06.102425 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a90743ced4d9ab1e6ff838b04e0b2aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a90743ced4d9ab1e6ff838b04e0b2aa\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:06.189364 kubelet[2156]: I0120 00:33:06.189241 2156 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:06.189854 kubelet[2156]: E0120 00:33:06.189776 2156 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Jan 20 00:33:06.250999 kubelet[2156]: E0120 00:33:06.250902 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:06.252224 containerd[1475]: time="2026-01-20T00:33:06.252169417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a90743ced4d9ab1e6ff838b04e0b2aa,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:06.286789 kubelet[2156]: E0120 00:33:06.286616 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:06.289125 containerd[1475]: time="2026-01-20T00:33:06.288646031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:06.297109 kubelet[2156]: E0120 00:33:06.296937 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:06.298469 containerd[1475]: time="2026-01-20T00:33:06.298428960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:06.307919 kubelet[2156]: E0120 00:33:06.307724 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="800ms" Jan 20 00:33:06.594961 kubelet[2156]: I0120 00:33:06.594855 2156 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:06.595653 kubelet[2156]: E0120 00:33:06.595511 2156 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Jan 20 00:33:06.727603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72538632.mount: Deactivated successfully. 
Jan 20 00:33:06.736418 containerd[1475]: time="2026-01-20T00:33:06.736287510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:06.740824 containerd[1475]: time="2026-01-20T00:33:06.740730249Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 20 00:33:06.742275 containerd[1475]: time="2026-01-20T00:33:06.742230387Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:06.743729 containerd[1475]: time="2026-01-20T00:33:06.743499056Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:06.744486 containerd[1475]: time="2026-01-20T00:33:06.744427278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:33:06.745719 containerd[1475]: time="2026-01-20T00:33:06.745549787Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:06.746498 containerd[1475]: time="2026-01-20T00:33:06.746400994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:33:06.751121 containerd[1475]: time="2026-01-20T00:33:06.750940929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:06.759845 containerd[1475]: time="2026-01-20T00:33:06.759657652Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 470.765492ms" Jan 20 00:33:06.760272 containerd[1475]: time="2026-01-20T00:33:06.760209373Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 507.948135ms" Jan 20 00:33:06.766274 containerd[1475]: time="2026-01-20T00:33:06.766196190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 467.685436ms" Jan 20 00:33:06.772400 kubelet[2156]: E0120 00:33:06.772296 2156 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:33:07.122986 kubelet[2156]: E0120 00:33:07.121245 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="1.6s" Jan 20 00:33:07.122986 kubelet[2156]: E0120 00:33:07.121286 2156 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:33:07.124163 kubelet[2156]: E0120 00:33:07.124123 2156 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:33:07.250865 kubelet[2156]: E0120 00:33:07.250145 2156 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:33:07.345839 containerd[1475]: time="2026-01-20T00:33:07.343882692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:07.345839 containerd[1475]: time="2026-01-20T00:33:07.344893526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:07.345839 containerd[1475]: time="2026-01-20T00:33:07.344922239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:07.382007 containerd[1475]: time="2026-01-20T00:33:07.380496149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:07.409586 kubelet[2156]: I0120 00:33:07.408712 2156 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:07.409873 kubelet[2156]: E0120 00:33:07.409788 2156 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" Jan 20 00:33:07.427811 containerd[1475]: time="2026-01-20T00:33:07.427517976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:07.429460 containerd[1475]: time="2026-01-20T00:33:07.429300442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:07.429460 containerd[1475]: time="2026-01-20T00:33:07.429375392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:07.429460 containerd[1475]: time="2026-01-20T00:33:07.429405426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:07.430045 containerd[1475]: time="2026-01-20T00:33:07.429951854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:07.430045 containerd[1475]: time="2026-01-20T00:33:07.429239387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:07.430138 containerd[1475]: time="2026-01-20T00:33:07.430061624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:07.433775 containerd[1475]: time="2026-01-20T00:33:07.432849374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:07.473241 systemd[1]: Started cri-containerd-ebe590ed2c38aac920b62d7f4204017801abe3a2af19f1d310dc903fbdf1811c.scope - libcontainer container ebe590ed2c38aac920b62d7f4204017801abe3a2af19f1d310dc903fbdf1811c. Jan 20 00:33:07.474508 kubelet[2156]: E0120 00:33:07.474371 2156 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 00:33:07.521338 systemd[1]: Started cri-containerd-226f065ad2959ab47e6e2e7ca2e67321f9fe3291d79d9f934ef928d66633f94a.scope - libcontainer container 226f065ad2959ab47e6e2e7ca2e67321f9fe3291d79d9f934ef928d66633f94a. Jan 20 00:33:07.591595 systemd[1]: Started cri-containerd-05840f6cbdafd82f8dbbb5afdee5cfb6d2dba940493a31dbfcfef3e7d50ce80e.scope - libcontainer container 05840f6cbdafd82f8dbbb5afdee5cfb6d2dba940493a31dbfcfef3e7d50ce80e. 
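The certificate_manager.go:596 failures above are the kubelet's client-certificate bootstrap: it builds a CSR for a client cert with subject CN "system:node:<nodeName>" in group "system:nodes" and POSTs it to /apis/certificates.k8s.io/v1/certificatesigningrequests, which is what keeps getting connection refused here. A stdlib sketch of the local CSR generation half (the node name is taken from the log; the submission step is omitted):

```go
// Generate a kubelet-style client bootstrap CSR with the standard node subject.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"os"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	template := x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "system:node:localhost",    // node name from the log
			Organization: []string{"system:nodes"},   // node bootstrap group
		},
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, &template, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der}); err != nil {
		panic(err)
	}
}
```

Once the apiserver is up and the CSR is approved, the rotation noted earlier ("Client rotation is on") keeps renewing this cert in the background.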
Jan 20 00:33:07.697762 containerd[1475]: time="2026-01-20T00:33:07.695980421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"226f065ad2959ab47e6e2e7ca2e67321f9fe3291d79d9f934ef928d66633f94a\"" Jan 20 00:33:07.699154 containerd[1475]: time="2026-01-20T00:33:07.698784752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a90743ced4d9ab1e6ff838b04e0b2aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebe590ed2c38aac920b62d7f4204017801abe3a2af19f1d310dc903fbdf1811c\"" Jan 20 00:33:07.700959 kubelet[2156]: E0120 00:33:07.700244 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:07.702955 kubelet[2156]: E0120 00:33:07.702659 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:07.713039 containerd[1475]: time="2026-01-20T00:33:07.712989387Z" level=info msg="CreateContainer within sandbox \"226f065ad2959ab47e6e2e7ca2e67321f9fe3291d79d9f934ef928d66633f94a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 00:33:07.715267 containerd[1475]: time="2026-01-20T00:33:07.715214601Z" level=info msg="CreateContainer within sandbox \"ebe590ed2c38aac920b62d7f4204017801abe3a2af19f1d310dc903fbdf1811c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 00:33:07.727444 containerd[1475]: time="2026-01-20T00:33:07.727358666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"05840f6cbdafd82f8dbbb5afdee5cfb6d2dba940493a31dbfcfef3e7d50ce80e\"" Jan 20 00:33:07.729344 kubelet[2156]: E0120 00:33:07.729263 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:07.737462 containerd[1475]: time="2026-01-20T00:33:07.737399101Z" level=info msg="CreateContainer within sandbox \"05840f6cbdafd82f8dbbb5afdee5cfb6d2dba940493a31dbfcfef3e7d50ce80e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 00:33:07.764871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195672350.mount: Deactivated successfully. 
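The recurring dns.go:153 "Nameserver limits exceeded" events reflect a hard cap: the glibc resolver (and hence the kubelet) honors at most three nameservers, so extra resolv.conf entries are dropped and the applied line becomes the first three, "1.1.1.1 1.0.0.1 8.8.8.8" here. A simplified sketch of that truncation; the fourth nameserver below is a made-up placeholder, since the log does not show which entries were omitted:

```go
// Apply the three-nameserver limit to a resolv.conf, as kubelet's dns.go does.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func applyNameserverLimit(resolvConf string) (applied []string, truncated bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			applied = append(applied, fields[1])
		}
	}
	if len(applied) > maxNameservers {
		return applied[:maxNameservers], true // extras are silently dropped, with a warning event
	}
	return applied, false
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	applied, truncated := applyNameserverLimit(conf)
	fmt.Println("applied:", strings.Join(applied, " "), "truncated:", truncated)
}
```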
Jan 20 00:33:07.794832 containerd[1475]: time="2026-01-20T00:33:07.794462075Z" level=info msg="CreateContainer within sandbox \"ebe590ed2c38aac920b62d7f4204017801abe3a2af19f1d310dc903fbdf1811c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"66e409aaeb7c033682f00e2e35e6a9fff902e5064bda7405ae7c7c5be122caee\"" Jan 20 00:33:07.882863 containerd[1475]: time="2026-01-20T00:33:07.880148822Z" level=info msg="CreateContainer within sandbox \"226f065ad2959ab47e6e2e7ca2e67321f9fe3291d79d9f934ef928d66633f94a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a0cab6a5f5ecceb694f1ac1fec7996618f7ef9b420cf793db272ca2edcf6af0\"" Jan 20 00:33:07.924212 containerd[1475]: time="2026-01-20T00:33:07.924028139Z" level=info msg="StartContainer for \"66e409aaeb7c033682f00e2e35e6a9fff902e5064bda7405ae7c7c5be122caee\"" Jan 20 00:33:07.924866 containerd[1475]: time="2026-01-20T00:33:07.924350634Z" level=info msg="StartContainer for \"4a0cab6a5f5ecceb694f1ac1fec7996618f7ef9b420cf793db272ca2edcf6af0\"" Jan 20 00:33:07.932692 containerd[1475]: time="2026-01-20T00:33:07.932480491Z" level=info msg="CreateContainer within sandbox \"05840f6cbdafd82f8dbbb5afdee5cfb6d2dba940493a31dbfcfef3e7d50ce80e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"78c764a5db61f80dd04ae0ee41883b06894016f0d6c5b92b629a1ef41a2cb939\"" Jan 20 00:33:07.935363 containerd[1475]: time="2026-01-20T00:33:07.935332045Z" level=info msg="StartContainer for \"78c764a5db61f80dd04ae0ee41883b06894016f0d6c5b92b629a1ef41a2cb939\"" Jan 20 00:33:07.998102 systemd[1]: Started cri-containerd-78c764a5db61f80dd04ae0ee41883b06894016f0d6c5b92b629a1ef41a2cb939.scope - libcontainer container 78c764a5db61f80dd04ae0ee41883b06894016f0d6c5b92b629a1ef41a2cb939. Jan 20 00:33:08.010922 systemd[1]: Started cri-containerd-4a0cab6a5f5ecceb694f1ac1fec7996618f7ef9b420cf793db272ca2edcf6af0.scope - libcontainer container 4a0cab6a5f5ecceb694f1ac1fec7996618f7ef9b420cf793db272ca2edcf6af0. Jan 20 00:33:08.012830 systemd[1]: Started cri-containerd-66e409aaeb7c033682f00e2e35e6a9fff902e5064bda7405ae7c7c5be122caee.scope - libcontainer container 66e409aaeb7c033682f00e2e35e6a9fff902e5064bda7405ae7c7c5be122caee. 
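Each cri-containerd-<id>.scope started above is a transient systemd unit nested under the pod slice created earlier, which is how these containers land in the cgroup v2 hierarchy (the nodeConfig above shows "CgroupVersion":2 with the systemd driver). A sketch of the assumed path layout; the paths are illustrative, not read from this host:

```go
// Compose the cgroupfs path for a container scope under the systemd driver.
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func containerCgroupPath(qos, podUID, containerID string) string {
	podSlice := "kubepods-" + qos + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
	return filepath.Join(
		"/sys/fs/cgroup",
		"kubepods.slice",
		"kubepods-"+qos+".slice",
		podSlice,
		"cri-containerd-"+containerID+".scope",
	)
}

func main() {
	// kube-apiserver pod UID and container id from the log:
	fmt.Println(containerCgroupPath("burstable",
		"0a90743ced4d9ab1e6ff838b04e0b2aa",
		"66e409aaeb7c033682f00e2e35e6a9fff902e5064bda7405ae7c7c5be122caee"))
}
```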
Jan 20 00:33:08.142639 containerd[1475]: time="2026-01-20T00:33:08.141382068Z" level=info msg="StartContainer for \"4a0cab6a5f5ecceb694f1ac1fec7996618f7ef9b420cf793db272ca2edcf6af0\" returns successfully" Jan 20 00:33:08.167421 containerd[1475]: time="2026-01-20T00:33:08.167120068Z" level=info msg="StartContainer for \"78c764a5db61f80dd04ae0ee41883b06894016f0d6c5b92b629a1ef41a2cb939\" returns successfully" Jan 20 00:33:08.201533 containerd[1475]: time="2026-01-20T00:33:08.201396691Z" level=info msg="StartContainer for \"66e409aaeb7c033682f00e2e35e6a9fff902e5064bda7405ae7c7c5be122caee\" returns successfully" Jan 20 00:33:08.954033 kubelet[2156]: E0120 00:33:08.953951 2156 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:08.954636 kubelet[2156]: E0120 00:33:08.954164 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:08.957485 kubelet[2156]: E0120 00:33:08.957409 2156 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:08.957834 kubelet[2156]: E0120 00:33:08.957763 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:08.967567 kubelet[2156]: E0120 00:33:08.967482 2156 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:08.967852 kubelet[2156]: E0120 00:33:08.967804 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:09.012731 kubelet[2156]: I0120 00:33:09.012571 2156 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:09.969312 kubelet[2156]: E0120 00:33:09.968966 2156 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:09.969312 kubelet[2156]: E0120 00:33:09.969103 2156 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:09.969312 kubelet[2156]: E0120 00:33:09.969146 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:09.969312 kubelet[2156]: E0120 00:33:09.969189 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:09.970005 kubelet[2156]: E0120 00:33:09.969501 2156 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:09.970005 kubelet[2156]: E0120 00:33:09.969716 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:10.828267 kubelet[2156]: E0120 00:33:10.828068 2156 nodelease.go:49] 
"Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 00:33:10.940065 kubelet[2156]: I0120 00:33:10.939952 2156 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:33:10.940065 kubelet[2156]: E0120 00:33:10.940040 2156 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 00:33:10.977945 kubelet[2156]: I0120 00:33:10.977814 2156 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:10.978620 kubelet[2156]: I0120 00:33:10.978118 2156 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:10.995410 kubelet[2156]: E0120 00:33:10.995289 2156 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:10.995637 kubelet[2156]: E0120 00:33:10.995600 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:10.995981 kubelet[2156]: E0120 00:33:10.995923 2156 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:10.996756 kubelet[2156]: E0120 00:33:10.996083 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:11.001172 kubelet[2156]: I0120 00:33:11.000138 2156 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:11.005890 kubelet[2156]: E0120 00:33:11.005818 2156 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:11.005890 kubelet[2156]: I0120 00:33:11.005875 2156 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:11.008383 kubelet[2156]: E0120 00:33:11.008317 2156 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:11.008383 kubelet[2156]: I0120 00:33:11.008352 2156 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:11.011387 kubelet[2156]: E0120 00:33:11.011200 2156 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:11.687395 kubelet[2156]: I0120 00:33:11.686264 2156 apiserver.go:52] "Watching apiserver" Jan 20 00:33:11.701251 kubelet[2156]: I0120 00:33:11.701109 2156 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:33:13.804556 systemd[1]: Reloading requested from client PID 2443 ('systemctl') (unit session-7.scope)... Jan 20 00:33:13.804589 systemd[1]: Reloading... 
Jan 20 00:33:13.947734 zram_generator::config[2482]: No configuration found. Jan 20 00:33:14.079423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:33:14.203723 systemd[1]: Reloading finished in 398 ms. Jan 20 00:33:14.265602 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:14.287414 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:33:14.288154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:14.288251 systemd[1]: kubelet.service: Consumed 3.087s CPU time, 133.2M memory peak, 0B memory swap peak. Jan 20 00:33:14.301215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:14.508163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:14.515005 (kubelet)[2527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:33:14.584017 kubelet[2527]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:33:14.584017 kubelet[2527]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:33:14.584017 kubelet[2527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:33:14.584868 kubelet[2527]: I0120 00:33:14.584011 2527 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:33:14.593981 kubelet[2527]: I0120 00:33:14.593928 2527 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 00:33:14.593981 kubelet[2527]: I0120 00:33:14.593975 2527 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:33:14.594393 kubelet[2527]: I0120 00:33:14.594335 2527 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:33:14.599576 kubelet[2527]: I0120 00:33:14.599522 2527 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 00:33:14.603209 kubelet[2527]: I0120 00:33:14.603151 2527 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:33:14.610850 kubelet[2527]: E0120 00:33:14.610783 2527 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:33:14.610992 kubelet[2527]: I0120 00:33:14.610830 2527 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:33:14.616027 kubelet[2527]: I0120 00:33:14.615931 2527 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:33:14.616213 kubelet[2527]: I0120 00:33:14.616169 2527 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:33:14.616340 kubelet[2527]: I0120 00:33:14.616207 2527 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:33:14.616516 kubelet[2527]: I0120 00:33:14.616349 2527 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:33:14.616516 kubelet[2527]: I0120 00:33:14.616358 2527 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 00:33:14.616516 kubelet[2527]: I0120 00:33:14.616402 2527 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:33:14.616615 kubelet[2527]: I0120 00:33:14.616609 2527 kubelet.go:480] "Attempting to sync node with API server" Jan 20 00:33:14.616659 kubelet[2527]: I0120 00:33:14.616620 2527 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:33:14.616659 kubelet[2527]: I0120 00:33:14.616641 2527 kubelet.go:386] "Adding apiserver pod source" Jan 20 00:33:14.616659 kubelet[2527]: I0120 00:33:14.616655 2527 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:33:14.618753 kubelet[2527]: I0120 00:33:14.618719 2527 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:33:14.619508 kubelet[2527]: I0120 00:33:14.619406 2527 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 00:33:14.623540 kubelet[2527]: I0120 00:33:14.623489 2527 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:33:14.623597 kubelet[2527]: I0120 00:33:14.623580 2527 server.go:1289] "Started kubelet" Jan 20 00:33:14.626076 kubelet[2527]: I0120 00:33:14.625975 2527 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:33:14.627852 kubelet[2527]: I0120 
00:33:14.627562 2527 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:33:14.640114 kubelet[2527]: I0120 00:33:14.640047 2527 server.go:317] "Adding debug handlers to kubelet server" Jan 20 00:33:14.640320 kubelet[2527]: I0120 00:33:14.640248 2527 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:33:14.643106 kubelet[2527]: I0120 00:33:14.642118 2527 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:33:14.649758 kubelet[2527]: I0120 00:33:14.647608 2527 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:33:14.649758 kubelet[2527]: E0120 00:33:14.649179 2527 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:33:14.649758 kubelet[2527]: I0120 00:33:14.649215 2527 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:33:14.649758 kubelet[2527]: I0120 00:33:14.649317 2527 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:33:14.651770 kubelet[2527]: I0120 00:33:14.651533 2527 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:33:14.652764 kubelet[2527]: I0120 00:33:14.652743 2527 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:33:14.653367 kubelet[2527]: I0120 00:33:14.653340 2527 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:33:14.655570 kubelet[2527]: I0120 00:33:14.655510 2527 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:33:14.655991 kubelet[2527]: E0120 00:33:14.655966 2527 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:33:14.676956 kubelet[2527]: I0120 00:33:14.676878 2527 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 00:33:14.679977 kubelet[2527]: I0120 00:33:14.679881 2527 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 00:33:14.679977 kubelet[2527]: I0120 00:33:14.679951 2527 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 00:33:14.680104 kubelet[2527]: I0120 00:33:14.679986 2527 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
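Both kubelet runs log "Systemd watchdog is not enabled" (watchdog_linux.go). Under Type=notify with WatchdogSec= set, systemd exports WATCHDOG_USEC to the service; when the variable is absent or invalid, watchdog health checking stays off, which is exactly what is reported here. A stdlib-only sketch of that check:

```go
// Detect the systemd service watchdog the way the log's watchdog_linux lines imply.
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

func watchdogInterval() (time.Duration, bool) {
	v := os.Getenv("WATCHDOG_USEC")
	if v == "" {
		return 0, false // watchdog not enabled for this service
	}
	usec, err := strconv.ParseInt(v, 10, 64)
	if err != nil || usec <= 0 {
		return 0, false // invalid interval: health checking will not be started
	}
	return time.Duration(usec) * time.Microsecond, true
}

func main() {
	if d, ok := watchdogInterval(); ok {
		fmt.Println("watchdog enabled, send keep-alives every", d/2) // pet at half the timeout
	} else {
		fmt.Println("Systemd watchdog is not enabled")
	}
}
```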
Jan 20 00:33:14.680104 kubelet[2527]: I0120 00:33:14.679998 2527 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 00:33:14.680104 kubelet[2527]: E0120 00:33:14.680055 2527 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:33:14.714869 kubelet[2527]: I0120 00:33:14.714831 2527 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:33:14.714869 kubelet[2527]: I0120 00:33:14.714862 2527 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:33:14.714869 kubelet[2527]: I0120 00:33:14.714885 2527 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:33:14.715073 kubelet[2527]: I0120 00:33:14.715007 2527 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 00:33:14.715073 kubelet[2527]: I0120 00:33:14.715017 2527 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 00:33:14.715073 kubelet[2527]: I0120 00:33:14.715032 2527 policy_none.go:49] "None policy: Start" Jan 20 00:33:14.715073 kubelet[2527]: I0120 00:33:14.715042 2527 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:33:14.715073 kubelet[2527]: I0120 00:33:14.715052 2527 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:33:14.715321 kubelet[2527]: I0120 00:33:14.715282 2527 state_mem.go:75] "Updated machine memory state" Jan 20 00:33:14.723362 kubelet[2527]: E0120 00:33:14.723234 2527 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:33:14.723769 kubelet[2527]: I0120 00:33:14.723753 2527 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:33:14.723937 kubelet[2527]: I0120 00:33:14.723841 2527 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:33:14.724294 kubelet[2527]: I0120 00:33:14.724237 2527 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:33:14.725632 kubelet[2527]: E0120 00:33:14.725189 2527 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 00:33:14.781390 kubelet[2527]: I0120 00:33:14.781233 2527 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:14.781390 kubelet[2527]: I0120 00:33:14.781376 2527 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:14.784100 kubelet[2527]: I0120 00:33:14.784074 2527 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:14.804585 sudo[2568]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 20 00:33:14.805033 sudo[2568]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 20 00:33:14.831833 kubelet[2527]: I0120 00:33:14.831326 2527 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:14.843252 kubelet[2527]: I0120 00:33:14.843186 2527 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 00:33:14.843378 kubelet[2527]: I0120 00:33:14.843278 2527 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:33:14.853543 kubelet[2527]: I0120 00:33:14.853155 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a90743ced4d9ab1e6ff838b04e0b2aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a90743ced4d9ab1e6ff838b04e0b2aa\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:14.853543 kubelet[2527]: I0120 00:33:14.853211 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:14.853543 kubelet[2527]: I0120 00:33:14.853240 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:14.853543 kubelet[2527]: I0120 00:33:14.853266 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:14.853543 kubelet[2527]: I0120 00:33:14.853303 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a90743ced4d9ab1e6ff838b04e0b2aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a90743ced4d9ab1e6ff838b04e0b2aa\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:14.853882 kubelet[2527]: I0120 00:33:14.853328 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a90743ced4d9ab1e6ff838b04e0b2aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a90743ced4d9ab1e6ff838b04e0b2aa\") " 
pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:14.853882 kubelet[2527]: I0120 00:33:14.853350 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:14.853882 kubelet[2527]: I0120 00:33:14.853381 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:14.853882 kubelet[2527]: I0120 00:33:14.853409 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:15.088497 kubelet[2527]: E0120 00:33:15.088417 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:15.092355 kubelet[2527]: E0120 00:33:15.092251 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:15.093946 kubelet[2527]: E0120 00:33:15.093767 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:15.348212 sudo[2568]: pam_unix(sudo:session): session closed for user root Jan 20 00:33:15.618919 kubelet[2527]: I0120 00:33:15.618413 2527 apiserver.go:52] "Watching apiserver" Jan 20 00:33:15.647175 kubelet[2527]: I0120 00:33:15.647110 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.647014321 podStartE2EDuration="1.647014321s" podCreationTimestamp="2026-01-20 00:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:15.64675435 +0000 UTC m=+1.125126958" watchObservedRunningTime="2026-01-20 00:33:15.647014321 +0000 UTC m=+1.125386931" Jan 20 00:33:15.650383 kubelet[2527]: I0120 00:33:15.650336 2527 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:33:15.670540 kubelet[2527]: I0120 00:33:15.670087 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.670065412 podStartE2EDuration="1.670065412s" podCreationTimestamp="2026-01-20 00:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:15.659866231 +0000 UTC m=+1.138238840" watchObservedRunningTime="2026-01-20 00:33:15.670065412 +0000 UTC m=+1.148438020" Jan 20 00:33:15.670540 kubelet[2527]: I0120 00:33:15.670227 2527 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.670217503 podStartE2EDuration="1.670217503s" podCreationTimestamp="2026-01-20 00:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:15.66987873 +0000 UTC m=+1.148251339" watchObservedRunningTime="2026-01-20 00:33:15.670217503 +0000 UTC m=+1.148590132" Jan 20 00:33:15.700576 kubelet[2527]: I0120 00:33:15.700429 2527 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:15.701117 kubelet[2527]: I0120 00:33:15.701049 2527 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:15.703583 kubelet[2527]: E0120 00:33:15.703415 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:15.723771 kubelet[2527]: E0120 00:33:15.722994 2527 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:15.723771 kubelet[2527]: E0120 00:33:15.723000 2527 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:15.723771 kubelet[2527]: E0120 00:33:15.723247 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:15.723771 kubelet[2527]: E0120 00:33:15.723258 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:16.703095 kubelet[2527]: E0120 00:33:16.702727 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:16.703095 kubelet[2527]: E0120 00:33:16.702839 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:16.703095 kubelet[2527]: E0120 00:33:16.702865 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:17.110268 sudo[1653]: pam_unix(sudo:session): session closed for user root Jan 20 00:33:17.113893 sshd[1648]: pam_unix(sshd:session): session closed for user core Jan 20 00:33:17.118824 systemd[1]: sshd@6-10.0.0.11:22-10.0.0.1:35126.service: Deactivated successfully. Jan 20 00:33:17.121045 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 00:33:17.121266 systemd[1]: session-7.scope: Consumed 9.079s CPU time, 162.3M memory peak, 0B memory swap peak. Jan 20 00:33:17.122066 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. Jan 20 00:33:17.124112 systemd-logind[1468]: Removed session 7. 
Jan 20 00:33:18.940731 kubelet[2527]: I0120 00:33:18.940567 2527 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 00:33:18.941585 containerd[1475]: time="2026-01-20T00:33:18.941540443Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 00:33:18.942070 kubelet[2527]: I0120 00:33:18.941838 2527 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 00:33:19.955460 systemd[1]: Created slice kubepods-besteffort-poda3b30ac3_9722_4a26_9298_151ce6b5935f.slice - libcontainer container kubepods-besteffort-poda3b30ac3_9722_4a26_9298_151ce6b5935f.slice. Jan 20 00:33:19.960833 systemd[1]: Created slice kubepods-burstable-pod34d1696e_aef8_4a5f_8c36_9efadfc0cd0b.slice - libcontainer container kubepods-burstable-pod34d1696e_aef8_4a5f_8c36_9efadfc0cd0b.slice. Jan 20 00:33:20.021024 kubelet[2527]: I0120 00:33:20.020911 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-lib-modules\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.021024 kubelet[2527]: I0120 00:33:20.020974 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a3b30ac3-9722-4a26-9298-151ce6b5935f-kube-proxy\") pod \"kube-proxy-p89zk\" (UID: \"a3b30ac3-9722-4a26-9298-151ce6b5935f\") " pod="kube-system/kube-proxy-p89zk" Jan 20 00:33:20.021024 kubelet[2527]: I0120 00:33:20.021001 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbjvz\" (UniqueName: \"kubernetes.io/projected/a3b30ac3-9722-4a26-9298-151ce6b5935f-kube-api-access-qbjvz\") pod \"kube-proxy-p89zk\" (UID: \"a3b30ac3-9722-4a26-9298-151ce6b5935f\") " pod="kube-system/kube-proxy-p89zk" Jan 20 00:33:20.021024 kubelet[2527]: I0120 00:33:20.021017 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-etc-cni-netd\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.021024 kubelet[2527]: I0120 00:33:20.021030 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-config-path\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022037 kubelet[2527]: I0120 00:33:20.021043 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-host-proc-sys-net\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022037 kubelet[2527]: I0120 00:33:20.021058 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-xtables-lock\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " 
pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022037 kubelet[2527]: I0120 00:33:20.021129 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-clustermesh-secrets\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022037 kubelet[2527]: I0120 00:33:20.021208 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-hubble-tls\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022037 kubelet[2527]: I0120 00:33:20.021232 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k9pb\" (UniqueName: \"kubernetes.io/projected/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-kube-api-access-8k9pb\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022037 kubelet[2527]: I0120 00:33:20.021326 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-cgroup\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022243 kubelet[2527]: I0120 00:33:20.021352 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-host-proc-sys-kernel\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022243 kubelet[2527]: I0120 00:33:20.021371 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3b30ac3-9722-4a26-9298-151ce6b5935f-xtables-lock\") pod \"kube-proxy-p89zk\" (UID: \"a3b30ac3-9722-4a26-9298-151ce6b5935f\") " pod="kube-system/kube-proxy-p89zk" Jan 20 00:33:20.022243 kubelet[2527]: I0120 00:33:20.021385 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3b30ac3-9722-4a26-9298-151ce6b5935f-lib-modules\") pod \"kube-proxy-p89zk\" (UID: \"a3b30ac3-9722-4a26-9298-151ce6b5935f\") " pod="kube-system/kube-proxy-p89zk" Jan 20 00:33:20.022243 kubelet[2527]: I0120 00:33:20.021400 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-run\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022243 kubelet[2527]: I0120 00:33:20.021463 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-bpf-maps\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022243 kubelet[2527]: I0120 00:33:20.021481 2527 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-hostproc\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.022491 kubelet[2527]: I0120 00:33:20.021519 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cni-path\") pod \"cilium-jm5f7\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " pod="kube-system/cilium-jm5f7" Jan 20 00:33:20.170351 systemd[1]: Created slice kubepods-besteffort-pod9364a2e5_d61f_4cd6_8271_f18840a19c20.slice - libcontainer container kubepods-besteffort-pod9364a2e5_d61f_4cd6_8271_f18840a19c20.slice. Jan 20 00:33:20.223081 kubelet[2527]: I0120 00:33:20.222861 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqz77\" (UniqueName: \"kubernetes.io/projected/9364a2e5-d61f-4cd6-8271-f18840a19c20-kube-api-access-cqz77\") pod \"cilium-operator-6c4d7847fc-tctvd\" (UID: \"9364a2e5-d61f-4cd6-8271-f18840a19c20\") " pod="kube-system/cilium-operator-6c4d7847fc-tctvd" Jan 20 00:33:20.223081 kubelet[2527]: I0120 00:33:20.222944 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9364a2e5-d61f-4cd6-8271-f18840a19c20-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tctvd\" (UID: \"9364a2e5-d61f-4cd6-8271-f18840a19c20\") " pod="kube-system/cilium-operator-6c4d7847fc-tctvd" Jan 20 00:33:20.273906 kubelet[2527]: E0120 00:33:20.273148 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:20.274344 kubelet[2527]: E0120 00:33:20.274040 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:20.276329 containerd[1475]: time="2026-01-20T00:33:20.276223967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p89zk,Uid:a3b30ac3-9722-4a26-9298-151ce6b5935f,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:20.276329 containerd[1475]: time="2026-01-20T00:33:20.276281853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jm5f7,Uid:34d1696e-aef8-4a5f-8c36-9efadfc0cd0b,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:20.329729 containerd[1475]: time="2026-01-20T00:33:20.328326642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:20.329729 containerd[1475]: time="2026-01-20T00:33:20.328883417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:20.329729 containerd[1475]: time="2026-01-20T00:33:20.328979024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:20.329729 containerd[1475]: time="2026-01-20T00:33:20.329346066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:20.346120 containerd[1475]: time="2026-01-20T00:33:20.341371567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:20.346120 containerd[1475]: time="2026-01-20T00:33:20.341609440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:20.346120 containerd[1475]: time="2026-01-20T00:33:20.341638082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:20.346120 containerd[1475]: time="2026-01-20T00:33:20.341871175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:20.370936 systemd[1]: Started cri-containerd-4d31c1cdf35f67ab46c6f9fe877f29342c297a39bf7c3885a75c1da9ee8c44fe.scope - libcontainer container 4d31c1cdf35f67ab46c6f9fe877f29342c297a39bf7c3885a75c1da9ee8c44fe. Jan 20 00:33:20.376636 systemd[1]: Started cri-containerd-996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4.scope - libcontainer container 996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4. Jan 20 00:33:20.405000 containerd[1475]: time="2026-01-20T00:33:20.404885717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p89zk,Uid:a3b30ac3-9722-4a26-9298-151ce6b5935f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d31c1cdf35f67ab46c6f9fe877f29342c297a39bf7c3885a75c1da9ee8c44fe\"" Jan 20 00:33:20.405914 kubelet[2527]: E0120 00:33:20.405834 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:20.414721 containerd[1475]: time="2026-01-20T00:33:20.414261166Z" level=info msg="CreateContainer within sandbox \"4d31c1cdf35f67ab46c6f9fe877f29342c297a39bf7c3885a75c1da9ee8c44fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:33:20.441924 containerd[1475]: time="2026-01-20T00:33:20.441789770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jm5f7,Uid:34d1696e-aef8-4a5f-8c36-9efadfc0cd0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\"" Jan 20 00:33:20.443780 kubelet[2527]: E0120 00:33:20.443710 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:20.446354 containerd[1475]: time="2026-01-20T00:33:20.446328358Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 20 00:33:20.471299 containerd[1475]: time="2026-01-20T00:33:20.471186687Z" level=info msg="CreateContainer within sandbox \"4d31c1cdf35f67ab46c6f9fe877f29342c297a39bf7c3885a75c1da9ee8c44fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92d97d87090aab11ad7511ffe743d0d894a76d1ceb91fa0bd7a1758d81805acb\"" Jan 20 00:33:20.472199 containerd[1475]: time="2026-01-20T00:33:20.472111775Z" level=info msg="StartContainer for \"92d97d87090aab11ad7511ffe743d0d894a76d1ceb91fa0bd7a1758d81805acb\"" Jan 20 00:33:20.475494 kubelet[2527]: E0120 00:33:20.475255 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:20.476525 containerd[1475]: time="2026-01-20T00:33:20.475788213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tctvd,Uid:9364a2e5-d61f-4cd6-8271-f18840a19c20,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:20.503857 update_engine[1469]: I20260120 00:33:20.501739 1469 update_attempter.cc:509] Updating boot flags... Jan 20 00:33:20.530872 systemd[1]: Started cri-containerd-92d97d87090aab11ad7511ffe743d0d894a76d1ceb91fa0bd7a1758d81805acb.scope - libcontainer container 92d97d87090aab11ad7511ffe743d0d894a76d1ceb91fa0bd7a1758d81805acb. Jan 20 00:33:20.541020 containerd[1475]: time="2026-01-20T00:33:20.536147669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:20.541020 containerd[1475]: time="2026-01-20T00:33:20.536234190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:20.541020 containerd[1475]: time="2026-01-20T00:33:20.536246282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:20.541020 containerd[1475]: time="2026-01-20T00:33:20.536375321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:20.571813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2745) Jan 20 00:33:20.620046 systemd[1]: Started cri-containerd-89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6.scope - libcontainer container 89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6. 
Jan 20 00:33:20.646746 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2746) Jan 20 00:33:20.744778 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2746) Jan 20 00:33:20.767780 containerd[1475]: time="2026-01-20T00:33:20.767659497Z" level=info msg="StartContainer for \"92d97d87090aab11ad7511ffe743d0d894a76d1ceb91fa0bd7a1758d81805acb\" returns successfully" Jan 20 00:33:20.856339 containerd[1475]: time="2026-01-20T00:33:20.856224097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tctvd,Uid:9364a2e5-d61f-4cd6-8271-f18840a19c20,Namespace:kube-system,Attempt:0,} returns sandbox id \"89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6\"" Jan 20 00:33:20.859561 kubelet[2527]: E0120 00:33:20.859337 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:21.815817 kubelet[2527]: E0120 00:33:21.815627 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:22.799705 kubelet[2527]: E0120 00:33:22.799581 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:24.500189 kubelet[2527]: E0120 00:33:24.498025 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:24.605597 kubelet[2527]: I0120 00:33:24.605441 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p89zk" podStartSLOduration=5.605368886 podStartE2EDuration="5.605368886s" podCreationTimestamp="2026-01-20 00:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:21.863987196 +0000 UTC m=+7.342359805" watchObservedRunningTime="2026-01-20 00:33:24.605368886 +0000 UTC m=+10.083741495" Jan 20 00:33:24.631713 kubelet[2527]: E0120 00:33:24.631499 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:24.828328 kubelet[2527]: E0120 00:33:24.825657 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:24.993488 kubelet[2527]: E0120 00:33:24.993022 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:24.996620 kubelet[2527]: E0120 00:33:24.995627 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:25.941603 kubelet[2527]: E0120 00:33:25.939188 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:25.941603 kubelet[2527]: E0120 00:33:25.939532 2527 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:39.466259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263612224.mount: Deactivated successfully. Jan 20 00:33:47.589338 containerd[1475]: time="2026-01-20T00:33:47.589001000Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:47.597599 containerd[1475]: time="2026-01-20T00:33:47.597047427Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 20 00:33:47.604636 containerd[1475]: time="2026-01-20T00:33:47.604549661Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:47.610350 containerd[1475]: time="2026-01-20T00:33:47.610270982Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 27.163906746s" Jan 20 00:33:47.610350 containerd[1475]: time="2026-01-20T00:33:47.610337336Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 20 00:33:47.620135 containerd[1475]: time="2026-01-20T00:33:47.619878463Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 20 00:33:47.652070 containerd[1475]: time="2026-01-20T00:33:47.651749338Z" level=info msg="CreateContainer within sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 00:33:47.776272 containerd[1475]: time="2026-01-20T00:33:47.775967609Z" level=info msg="CreateContainer within sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\"" Jan 20 00:33:47.781644 containerd[1475]: time="2026-01-20T00:33:47.780150974Z" level=info msg="StartContainer for \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\"" Jan 20 00:33:47.933802 systemd[1]: Started cri-containerd-c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f.scope - libcontainer container c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f. Jan 20 00:33:48.091097 containerd[1475]: time="2026-01-20T00:33:48.090885788Z" level=info msg="StartContainer for \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\" returns successfully" Jan 20 00:33:48.158782 systemd[1]: cri-containerd-c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f.scope: Deactivated successfully. 
Jan 20 00:33:48.206756 kubelet[2527]: E0120 00:33:48.205879 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:48.528372 containerd[1475]: time="2026-01-20T00:33:48.521846087Z" level=info msg="shim disconnected" id=c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f namespace=k8s.io Jan 20 00:33:48.528372 containerd[1475]: time="2026-01-20T00:33:48.527221973Z" level=warning msg="cleaning up after shim disconnected" id=c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f namespace=k8s.io Jan 20 00:33:48.528372 containerd[1475]: time="2026-01-20T00:33:48.527242592Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:33:48.722525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f-rootfs.mount: Deactivated successfully. Jan 20 00:33:49.215351 kubelet[2527]: E0120 00:33:49.214724 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:49.236732 containerd[1475]: time="2026-01-20T00:33:49.236532355Z" level=info msg="CreateContainer within sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 00:33:49.290751 containerd[1475]: time="2026-01-20T00:33:49.288600829Z" level=info msg="CreateContainer within sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\"" Jan 20 00:33:49.290751 containerd[1475]: time="2026-01-20T00:33:49.289938053Z" level=info msg="StartContainer for \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\"" Jan 20 00:33:49.360967 systemd[1]: Started cri-containerd-d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7.scope - libcontainer container d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7. Jan 20 00:33:49.418552 containerd[1475]: time="2026-01-20T00:33:49.418442789Z" level=info msg="StartContainer for \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\" returns successfully" Jan 20 00:33:49.444158 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:33:49.444941 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:33:49.445050 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:33:49.460659 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:33:49.461176 systemd[1]: cri-containerd-d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7.scope: Deactivated successfully. Jan 20 00:33:49.505982 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 20 00:33:49.541947 containerd[1475]: time="2026-01-20T00:33:49.541824839Z" level=info msg="shim disconnected" id=d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7 namespace=k8s.io Jan 20 00:33:49.541947 containerd[1475]: time="2026-01-20T00:33:49.541903285Z" level=warning msg="cleaning up after shim disconnected" id=d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7 namespace=k8s.io Jan 20 00:33:49.541947 containerd[1475]: time="2026-01-20T00:33:49.541919415Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:33:49.723202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7-rootfs.mount: Deactivated successfully. Jan 20 00:33:50.234753 kubelet[2527]: E0120 00:33:50.234557 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:50.260210 containerd[1475]: time="2026-01-20T00:33:50.259555420Z" level=info msg="CreateContainer within sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 00:33:50.306940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445130886.mount: Deactivated successfully. Jan 20 00:33:50.453751 containerd[1475]: time="2026-01-20T00:33:50.453530233Z" level=info msg="CreateContainer within sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\"" Jan 20 00:33:50.459638 containerd[1475]: time="2026-01-20T00:33:50.458972205Z" level=info msg="StartContainer for \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\"" Jan 20 00:33:50.571013 systemd[1]: Started cri-containerd-386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306.scope - libcontainer container 386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306. Jan 20 00:33:50.691997 containerd[1475]: time="2026-01-20T00:33:50.691822586Z" level=info msg="StartContainer for \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\" returns successfully" Jan 20 00:33:50.709745 systemd[1]: cri-containerd-386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306.scope: Deactivated successfully. Jan 20 00:33:50.809477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306-rootfs.mount: Deactivated successfully. 
Jan 20 00:33:50.865643 containerd[1475]: time="2026-01-20T00:33:50.864194483Z" level=info msg="shim disconnected" id=386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306 namespace=k8s.io Jan 20 00:33:50.865643 containerd[1475]: time="2026-01-20T00:33:50.864307574Z" level=warning msg="cleaning up after shim disconnected" id=386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306 namespace=k8s.io Jan 20 00:33:50.865643 containerd[1475]: time="2026-01-20T00:33:50.864321879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:33:51.274964 kubelet[2527]: E0120 00:33:51.274013 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:51.298648 containerd[1475]: time="2026-01-20T00:33:51.298074898Z" level=info msg="CreateContainer within sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 00:33:51.373990 containerd[1475]: time="2026-01-20T00:33:51.373607803Z" level=info msg="CreateContainer within sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\"" Jan 20 00:33:51.375875 containerd[1475]: time="2026-01-20T00:33:51.375313605Z" level=info msg="StartContainer for \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\"" Jan 20 00:33:51.463307 systemd[1]: Started cri-containerd-10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571.scope - libcontainer container 10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571. Jan 20 00:33:51.541349 containerd[1475]: time="2026-01-20T00:33:51.541215703Z" level=info msg="StartContainer for \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\" returns successfully" Jan 20 00:33:51.541774 systemd[1]: cri-containerd-10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571.scope: Deactivated successfully. Jan 20 00:33:51.650085 containerd[1475]: time="2026-01-20T00:33:51.649775229Z" level=info msg="shim disconnected" id=10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571 namespace=k8s.io Jan 20 00:33:51.650085 containerd[1475]: time="2026-01-20T00:33:51.649842755Z" level=warning msg="cleaning up after shim disconnected" id=10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571 namespace=k8s.io Jan 20 00:33:51.650085 containerd[1475]: time="2026-01-20T00:33:51.649858204Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:33:51.727537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571-rootfs.mount: Deactivated successfully. 
Jan 20 00:33:52.298829 kubelet[2527]: E0120 00:33:52.298592 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:52.327216 containerd[1475]: time="2026-01-20T00:33:52.326517146Z" level=info msg="CreateContainer within sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 00:33:52.408643 containerd[1475]: time="2026-01-20T00:33:52.408447161Z" level=info msg="CreateContainer within sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\"" Jan 20 00:33:52.414510 containerd[1475]: time="2026-01-20T00:33:52.411972519Z" level=info msg="StartContainer for \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\"" Jan 20 00:33:52.539765 systemd[1]: Started cri-containerd-15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe.scope - libcontainer container 15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe. Jan 20 00:33:52.699243 containerd[1475]: time="2026-01-20T00:33:52.699062229Z" level=info msg="StartContainer for \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\" returns successfully" Jan 20 00:33:52.722618 containerd[1475]: time="2026-01-20T00:33:52.717184470Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:52.723128 containerd[1475]: time="2026-01-20T00:33:52.722923858Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 20 00:33:52.725023 containerd[1475]: time="2026-01-20T00:33:52.724853568Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:52.745899 containerd[1475]: time="2026-01-20T00:33:52.745270801Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.125311256s" Jan 20 00:33:52.745899 containerd[1475]: time="2026-01-20T00:33:52.745325734Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 20 00:33:52.757821 systemd[1]: run-containerd-runc-k8s.io-15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe-runc.lhvQIZ.mount: Deactivated successfully. 
Jan 20 00:33:52.766485 containerd[1475]: time="2026-01-20T00:33:52.763551827Z" level=info msg="CreateContainer within sandbox \"89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 20 00:33:52.854879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2448670593.mount: Deactivated successfully. Jan 20 00:33:52.926203 containerd[1475]: time="2026-01-20T00:33:52.926114210Z" level=info msg="CreateContainer within sandbox \"89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\"" Jan 20 00:33:52.933799 containerd[1475]: time="2026-01-20T00:33:52.930790242Z" level=info msg="StartContainer for \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\"" Jan 20 00:33:52.956296 kubelet[2527]: I0120 00:33:52.956113 2527 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:33:53.051468 systemd[1]: Started cri-containerd-cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c.scope - libcontainer container cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c. Jan 20 00:33:53.099818 kubelet[2527]: I0120 00:33:53.099774 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bs46\" (UniqueName: \"kubernetes.io/projected/d4bfb3c5-7ac2-4ffc-aa10-d30d709680b0-kube-api-access-7bs46\") pod \"coredns-674b8bbfcf-7m5cj\" (UID: \"d4bfb3c5-7ac2-4ffc-aa10-d30d709680b0\") " pod="kube-system/coredns-674b8bbfcf-7m5cj" Jan 20 00:33:53.099970 kubelet[2527]: I0120 00:33:53.099844 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a2df800-00bf-48d8-a18d-1437de552d26-config-volume\") pod \"coredns-674b8bbfcf-j46c9\" (UID: \"5a2df800-00bf-48d8-a18d-1437de552d26\") " pod="kube-system/coredns-674b8bbfcf-j46c9" Jan 20 00:33:53.099970 kubelet[2527]: I0120 00:33:53.099891 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntwxp\" (UniqueName: \"kubernetes.io/projected/5a2df800-00bf-48d8-a18d-1437de552d26-kube-api-access-ntwxp\") pod \"coredns-674b8bbfcf-j46c9\" (UID: \"5a2df800-00bf-48d8-a18d-1437de552d26\") " pod="kube-system/coredns-674b8bbfcf-j46c9" Jan 20 00:33:53.099970 kubelet[2527]: I0120 00:33:53.099924 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4bfb3c5-7ac2-4ffc-aa10-d30d709680b0-config-volume\") pod \"coredns-674b8bbfcf-7m5cj\" (UID: \"d4bfb3c5-7ac2-4ffc-aa10-d30d709680b0\") " pod="kube-system/coredns-674b8bbfcf-7m5cj" Jan 20 00:33:53.106590 systemd[1]: Created slice kubepods-burstable-pod5a2df800_00bf_48d8_a18d_1437de552d26.slice - libcontainer container kubepods-burstable-pod5a2df800_00bf_48d8_a18d_1437de552d26.slice. Jan 20 00:33:53.139242 systemd[1]: Created slice kubepods-burstable-podd4bfb3c5_7ac2_4ffc_aa10_d30d709680b0.slice - libcontainer container kubepods-burstable-podd4bfb3c5_7ac2_4ffc_aa10_d30d709680b0.slice. 
Jan 20 00:33:53.204526 containerd[1475]: time="2026-01-20T00:33:53.203615633Z" level=info msg="StartContainer for \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\" returns successfully" Jan 20 00:33:53.311817 kubelet[2527]: E0120 00:33:53.309203 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:53.339887 kubelet[2527]: E0120 00:33:53.339715 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:53.414347 kubelet[2527]: I0120 00:33:53.413344 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tctvd" podStartSLOduration=1.531052164 podStartE2EDuration="33.413323928s" podCreationTimestamp="2026-01-20 00:33:20 +0000 UTC" firstStartedPulling="2026-01-20 00:33:20.865875524 +0000 UTC m=+6.344248132" lastFinishedPulling="2026-01-20 00:33:52.748147277 +0000 UTC m=+38.226519896" observedRunningTime="2026-01-20 00:33:53.41254269 +0000 UTC m=+38.890915360" watchObservedRunningTime="2026-01-20 00:33:53.413323928 +0000 UTC m=+38.891696537" Jan 20 00:33:53.420384 kubelet[2527]: E0120 00:33:53.417002 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:53.422607 containerd[1475]: time="2026-01-20T00:33:53.421474846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j46c9,Uid:5a2df800-00bf-48d8-a18d-1437de552d26,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:53.452389 kubelet[2527]: E0120 00:33:53.452248 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:53.455822 containerd[1475]: time="2026-01-20T00:33:53.455608547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7m5cj,Uid:d4bfb3c5-7ac2-4ffc-aa10-d30d709680b0,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:54.344734 kubelet[2527]: E0120 00:33:54.344474 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:54.346233 kubelet[2527]: E0120 00:33:54.343848 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:55.345266 kubelet[2527]: E0120 00:33:55.345049 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:57.128944 systemd-networkd[1406]: cilium_host: Link UP Jan 20 00:33:57.129181 systemd-networkd[1406]: cilium_net: Link UP Jan 20 00:33:57.129186 systemd-networkd[1406]: cilium_net: Gained carrier Jan 20 00:33:57.129466 systemd-networkd[1406]: cilium_host: Gained carrier Jan 20 00:33:57.132385 systemd-networkd[1406]: cilium_host: Gained IPv6LL Jan 20 00:33:57.547077 systemd-networkd[1406]: cilium_vxlan: Link UP Jan 20 00:33:57.547090 systemd-networkd[1406]: cilium_vxlan: Gained carrier Jan 20 00:33:57.868547 systemd-networkd[1406]: cilium_net: Gained IPv6LL Jan 20 
00:33:58.513381 kernel: NET: Registered PF_ALG protocol family Jan 20 00:33:59.081400 systemd-networkd[1406]: cilium_vxlan: Gained IPv6LL Jan 20 00:34:00.418022 systemd-networkd[1406]: lxc_health: Link UP Jan 20 00:34:00.454521 systemd-networkd[1406]: lxc_health: Gained carrier Jan 20 00:34:00.716147 systemd-networkd[1406]: lxcb2deb300fb71: Link UP Jan 20 00:34:00.727950 kernel: eth0: renamed from tmpaaba6 Jan 20 00:34:00.733772 systemd-networkd[1406]: lxcb2deb300fb71: Gained carrier Jan 20 00:34:01.156554 systemd-networkd[1406]: lxc593eec210ab8: Link UP Jan 20 00:34:01.179466 kernel: eth0: renamed from tmp816ee Jan 20 00:34:01.188987 systemd-networkd[1406]: lxc593eec210ab8: Gained carrier Jan 20 00:34:02.150114 systemd-networkd[1406]: lxc_health: Gained IPv6LL Jan 20 00:34:02.277479 kubelet[2527]: E0120 00:34:02.277020 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:02.278912 systemd-networkd[1406]: lxcb2deb300fb71: Gained IPv6LL Jan 20 00:34:02.337859 kubelet[2527]: I0120 00:34:02.335808 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jm5f7" podStartSLOduration=16.16468162 podStartE2EDuration="43.335790486s" podCreationTimestamp="2026-01-20 00:33:19 +0000 UTC" firstStartedPulling="2026-01-20 00:33:20.445827628 +0000 UTC m=+5.924200238" lastFinishedPulling="2026-01-20 00:33:47.616936495 +0000 UTC m=+33.095309104" observedRunningTime="2026-01-20 00:33:53.514609969 +0000 UTC m=+38.992982597" watchObservedRunningTime="2026-01-20 00:34:02.335790486 +0000 UTC m=+47.814163095" Jan 20 00:34:02.434207 kubelet[2527]: E0120 00:34:02.432388 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:03.046115 systemd-networkd[1406]: lxc593eec210ab8: Gained IPv6LL Jan 20 00:34:06.054046 containerd[1475]: time="2026-01-20T00:34:06.053009671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:06.054046 containerd[1475]: time="2026-01-20T00:34:06.053926713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:06.054046 containerd[1475]: time="2026-01-20T00:34:06.053943063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:06.055612 containerd[1475]: time="2026-01-20T00:34:06.054019937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:06.057389 containerd[1475]: time="2026-01-20T00:34:06.056110871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:06.057389 containerd[1475]: time="2026-01-20T00:34:06.056379874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:06.057389 containerd[1475]: time="2026-01-20T00:34:06.056402917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:06.057389 containerd[1475]: time="2026-01-20T00:34:06.057311473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:06.103029 systemd[1]: Started cri-containerd-aaba6a4e2750baaa224702b700ccae1f3975df1fca951e2de5d7286c99b7b4a7.scope - libcontainer container aaba6a4e2750baaa224702b700ccae1f3975df1fca951e2de5d7286c99b7b4a7. Jan 20 00:34:06.107134 systemd[1]: Started cri-containerd-816ee1540aa4aec5c3afcb451fb54f7c5b79c90e3b87962c5c4d0ba90e25a73b.scope - libcontainer container 816ee1540aa4aec5c3afcb451fb54f7c5b79c90e3b87962c5c4d0ba90e25a73b. Jan 20 00:34:06.118717 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:06.125329 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:06.150634 containerd[1475]: time="2026-01-20T00:34:06.150536971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7m5cj,Uid:d4bfb3c5-7ac2-4ffc-aa10-d30d709680b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaba6a4e2750baaa224702b700ccae1f3975df1fca951e2de5d7286c99b7b4a7\"" Jan 20 00:34:06.153520 kubelet[2527]: E0120 00:34:06.153016 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:06.182333 containerd[1475]: time="2026-01-20T00:34:06.182244356Z" level=info msg="CreateContainer within sandbox \"aaba6a4e2750baaa224702b700ccae1f3975df1fca951e2de5d7286c99b7b4a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:34:06.182843 containerd[1475]: time="2026-01-20T00:34:06.182740708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j46c9,Uid:5a2df800-00bf-48d8-a18d-1437de552d26,Namespace:kube-system,Attempt:0,} returns sandbox id \"816ee1540aa4aec5c3afcb451fb54f7c5b79c90e3b87962c5c4d0ba90e25a73b\"" Jan 20 00:34:06.183715 kubelet[2527]: E0120 00:34:06.183620 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:06.191750 containerd[1475]: time="2026-01-20T00:34:06.191565902Z" level=info msg="CreateContainer within sandbox \"816ee1540aa4aec5c3afcb451fb54f7c5b79c90e3b87962c5c4d0ba90e25a73b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:34:06.210126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488526451.mount: Deactivated successfully. 
Jan 20 00:34:06.214540 containerd[1475]: time="2026-01-20T00:34:06.214504210Z" level=info msg="CreateContainer within sandbox \"aaba6a4e2750baaa224702b700ccae1f3975df1fca951e2de5d7286c99b7b4a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e148f93d733b806ec31fa18798bd7b463d1e03129f67d3a8f12ea4b56cb87a07\"" Jan 20 00:34:06.215820 containerd[1475]: time="2026-01-20T00:34:06.215738088Z" level=info msg="StartContainer for \"e148f93d733b806ec31fa18798bd7b463d1e03129f67d3a8f12ea4b56cb87a07\"" Jan 20 00:34:06.223722 containerd[1475]: time="2026-01-20T00:34:06.223112739Z" level=info msg="CreateContainer within sandbox \"816ee1540aa4aec5c3afcb451fb54f7c5b79c90e3b87962c5c4d0ba90e25a73b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b7bc8c02dda226d4bd8cc968cc8e50ba756859aa57879a080efc7a21881d2162\"" Jan 20 00:34:06.224076 containerd[1475]: time="2026-01-20T00:34:06.223997093Z" level=info msg="StartContainer for \"b7bc8c02dda226d4bd8cc968cc8e50ba756859aa57879a080efc7a21881d2162\"" Jan 20 00:34:06.259390 systemd[1]: Started cri-containerd-e148f93d733b806ec31fa18798bd7b463d1e03129f67d3a8f12ea4b56cb87a07.scope - libcontainer container e148f93d733b806ec31fa18798bd7b463d1e03129f67d3a8f12ea4b56cb87a07. Jan 20 00:34:06.276904 systemd[1]: Started cri-containerd-b7bc8c02dda226d4bd8cc968cc8e50ba756859aa57879a080efc7a21881d2162.scope - libcontainer container b7bc8c02dda226d4bd8cc968cc8e50ba756859aa57879a080efc7a21881d2162. Jan 20 00:34:06.319159 containerd[1475]: time="2026-01-20T00:34:06.318079256Z" level=info msg="StartContainer for \"e148f93d733b806ec31fa18798bd7b463d1e03129f67d3a8f12ea4b56cb87a07\" returns successfully" Jan 20 00:34:06.328175 containerd[1475]: time="2026-01-20T00:34:06.328100184Z" level=info msg="StartContainer for \"b7bc8c02dda226d4bd8cc968cc8e50ba756859aa57879a080efc7a21881d2162\" returns successfully" Jan 20 00:34:06.450076 kubelet[2527]: E0120 00:34:06.449947 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:06.455186 kubelet[2527]: E0120 00:34:06.454617 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:06.511014 kubelet[2527]: I0120 00:34:06.510539 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7m5cj" podStartSLOduration=46.510520408 podStartE2EDuration="46.510520408s" podCreationTimestamp="2026-01-20 00:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:34:06.487949212 +0000 UTC m=+51.966321820" watchObservedRunningTime="2026-01-20 00:34:06.510520408 +0000 UTC m=+51.988893028" Jan 20 00:34:07.460415 kubelet[2527]: E0120 00:34:07.458285 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:07.461723 kubelet[2527]: E0120 00:34:07.461630 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:07.480926 kubelet[2527]: I0120 00:34:07.480811 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-674b8bbfcf-j46c9" podStartSLOduration=47.480786155 podStartE2EDuration="47.480786155s" podCreationTimestamp="2026-01-20 00:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:34:06.520394713 +0000 UTC m=+51.998767342" watchObservedRunningTime="2026-01-20 00:34:07.480786155 +0000 UTC m=+52.959158764" Jan 20 00:34:08.473653 kubelet[2527]: E0120 00:34:08.473516 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:08.473653 kubelet[2527]: E0120 00:34:08.473525 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:26.681566 kubelet[2527]: E0120 00:34:26.681417 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:29.135567 systemd[1]: Started sshd@7-10.0.0.11:22-10.0.0.1:33364.service - OpenSSH per-connection server daemon (10.0.0.1:33364). Jan 20 00:34:29.209838 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 33364 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:29.212187 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:29.219102 systemd-logind[1468]: New session 8 of user core. Jan 20 00:34:29.226909 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 00:34:29.725276 sshd[3961]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:29.731137 systemd[1]: sshd@7-10.0.0.11:22-10.0.0.1:33364.service: Deactivated successfully. Jan 20 00:34:29.733629 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 00:34:29.735324 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. Jan 20 00:34:29.738882 systemd-logind[1468]: Removed session 8. Jan 20 00:34:31.683335 kubelet[2527]: E0120 00:34:31.683202 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:34.742195 systemd[1]: Started sshd@8-10.0.0.11:22-10.0.0.1:47676.service - OpenSSH per-connection server daemon (10.0.0.1:47676). Jan 20 00:34:34.811523 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 47676 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:34.814096 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:34.820559 systemd-logind[1468]: New session 9 of user core. Jan 20 00:34:34.834426 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 00:34:34.972503 sshd[3976]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:34.977209 systemd[1]: sshd@8-10.0.0.11:22-10.0.0.1:47676.service: Deactivated successfully. Jan 20 00:34:34.979770 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 00:34:34.980876 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. Jan 20 00:34:34.982325 systemd-logind[1468]: Removed session 9. Jan 20 00:34:39.993844 systemd[1]: Started sshd@9-10.0.0.11:22-10.0.0.1:47688.service - OpenSSH per-connection server daemon (10.0.0.1:47688). 
Jan 20 00:34:40.036640 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 47688 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:40.039014 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:40.045607 systemd-logind[1468]: New session 10 of user core. Jan 20 00:34:40.052896 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 00:34:40.363194 sshd[3992]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:40.369155 systemd[1]: sshd@9-10.0.0.11:22-10.0.0.1:47688.service: Deactivated successfully. Jan 20 00:34:40.373140 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 00:34:40.374380 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. Jan 20 00:34:40.376412 systemd-logind[1468]: Removed session 10. Jan 20 00:34:41.684589 kubelet[2527]: E0120 00:34:41.684205 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:45.385153 systemd[1]: Started sshd@10-10.0.0.11:22-10.0.0.1:51000.service - OpenSSH per-connection server daemon (10.0.0.1:51000). Jan 20 00:34:45.431191 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 51000 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:45.433508 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:45.439385 systemd-logind[1468]: New session 11 of user core. Jan 20 00:34:45.451012 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 00:34:45.595947 sshd[4007]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:45.600818 systemd[1]: sshd@10-10.0.0.11:22-10.0.0.1:51000.service: Deactivated successfully. Jan 20 00:34:45.603060 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 00:34:45.604207 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. Jan 20 00:34:45.605632 systemd-logind[1468]: Removed session 11. Jan 20 00:34:48.681948 kubelet[2527]: E0120 00:34:48.681866 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:50.610958 systemd[1]: Started sshd@11-10.0.0.11:22-10.0.0.1:51014.service - OpenSSH per-connection server daemon (10.0.0.1:51014). Jan 20 00:34:50.697241 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 51014 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:50.699457 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:50.707099 systemd-logind[1468]: New session 12 of user core. Jan 20 00:34:50.713903 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 00:34:50.889753 sshd[4022]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:50.895280 systemd[1]: sshd@11-10.0.0.11:22-10.0.0.1:51014.service: Deactivated successfully. Jan 20 00:34:50.898358 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 00:34:50.900146 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. Jan 20 00:34:50.901754 systemd-logind[1468]: Removed session 12. Jan 20 00:34:55.906569 systemd[1]: Started sshd@12-10.0.0.11:22-10.0.0.1:49926.service - OpenSSH per-connection server daemon (10.0.0.1:49926). 
Jan 20 00:34:55.977862 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 49926 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:55.981414 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:55.991345 systemd-logind[1468]: New session 13 of user core. Jan 20 00:34:56.007012 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 00:34:56.197440 sshd[4039]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:56.204312 systemd[1]: sshd@12-10.0.0.11:22-10.0.0.1:49926.service: Deactivated successfully. Jan 20 00:34:56.224211 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 00:34:56.232933 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. Jan 20 00:34:56.235864 systemd-logind[1468]: Removed session 13. Jan 20 00:35:01.210964 systemd[1]: Started sshd@13-10.0.0.11:22-10.0.0.1:49936.service - OpenSSH per-connection server daemon (10.0.0.1:49936). Jan 20 00:35:01.256117 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 49936 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:01.258548 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:01.266305 systemd-logind[1468]: New session 14 of user core. Jan 20 00:35:01.273995 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 00:35:01.426946 sshd[4055]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:01.433193 systemd[1]: sshd@13-10.0.0.11:22-10.0.0.1:49936.service: Deactivated successfully. Jan 20 00:35:01.436002 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 00:35:01.437764 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. Jan 20 00:35:01.439658 systemd-logind[1468]: Removed session 14. Jan 20 00:35:06.520926 systemd[1]: Started sshd@14-10.0.0.11:22-10.0.0.1:54726.service - OpenSSH per-connection server daemon (10.0.0.1:54726). Jan 20 00:35:06.567492 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 54726 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:06.571812 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:06.583180 systemd-logind[1468]: New session 15 of user core. Jan 20 00:35:06.594275 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 00:35:06.880575 sshd[4071]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:06.890333 systemd[1]: sshd@14-10.0.0.11:22-10.0.0.1:54726.service: Deactivated successfully. Jan 20 00:35:06.892478 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 00:35:06.893848 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. Jan 20 00:35:06.903160 systemd[1]: Started sshd@15-10.0.0.11:22-10.0.0.1:54730.service - OpenSSH per-connection server daemon (10.0.0.1:54730). Jan 20 00:35:06.904795 systemd-logind[1468]: Removed session 15. Jan 20 00:35:06.940757 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 54730 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:06.942823 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:06.949252 systemd-logind[1468]: New session 16 of user core. Jan 20 00:35:06.963091 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 00:35:07.206739 sshd[4087]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:07.218722 systemd[1]: sshd@15-10.0.0.11:22-10.0.0.1:54730.service: Deactivated successfully. Jan 20 00:35:07.221200 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 00:35:07.225867 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. Jan 20 00:35:07.234114 systemd[1]: Started sshd@16-10.0.0.11:22-10.0.0.1:54746.service - OpenSSH per-connection server daemon (10.0.0.1:54746). Jan 20 00:35:07.236481 systemd-logind[1468]: Removed session 16. Jan 20 00:35:07.276061 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 54746 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:07.278950 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:07.285171 systemd-logind[1468]: New session 17 of user core. Jan 20 00:35:07.292923 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 00:35:07.516494 sshd[4099]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:07.522083 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. Jan 20 00:35:07.522874 systemd[1]: sshd@16-10.0.0.11:22-10.0.0.1:54746.service: Deactivated successfully. Jan 20 00:35:07.525883 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 00:35:07.527170 systemd-logind[1468]: Removed session 17. Jan 20 00:35:08.681858 kubelet[2527]: E0120 00:35:08.681541 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:10.682239 kubelet[2527]: E0120 00:35:10.682091 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:12.531855 systemd[1]: Started sshd@17-10.0.0.11:22-10.0.0.1:53062.service - OpenSSH per-connection server daemon (10.0.0.1:53062). Jan 20 00:35:12.605423 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 53062 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:12.607637 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:12.614173 systemd-logind[1468]: New session 18 of user core. Jan 20 00:35:12.627846 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 00:35:12.758850 sshd[4114]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:12.764912 systemd[1]: sshd@17-10.0.0.11:22-10.0.0.1:53062.service: Deactivated successfully. Jan 20 00:35:12.767989 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 00:35:12.771204 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. Jan 20 00:35:12.779290 systemd-logind[1468]: Removed session 18. Jan 20 00:35:13.682138 kubelet[2527]: E0120 00:35:13.682044 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:17.779777 systemd[1]: Started sshd@18-10.0.0.11:22-10.0.0.1:53066.service - OpenSSH per-connection server daemon (10.0.0.1:53066). 
Jan 20 00:35:17.821040 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 53066 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:17.823080 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:17.829386 systemd-logind[1468]: New session 19 of user core. Jan 20 00:35:17.838902 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 00:35:17.965101 sshd[4130]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:17.969594 systemd[1]: sshd@18-10.0.0.11:22-10.0.0.1:53066.service: Deactivated successfully. Jan 20 00:35:17.972234 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 00:35:17.973263 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. Jan 20 00:35:17.975173 systemd-logind[1468]: Removed session 19. Jan 20 00:35:22.979152 systemd[1]: Started sshd@19-10.0.0.11:22-10.0.0.1:44274.service - OpenSSH per-connection server daemon (10.0.0.1:44274). Jan 20 00:35:23.022831 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 44274 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:23.024972 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:23.032299 systemd-logind[1468]: New session 20 of user core. Jan 20 00:35:23.043007 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 00:35:23.192631 sshd[4147]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:23.197747 systemd[1]: sshd@19-10.0.0.11:22-10.0.0.1:44274.service: Deactivated successfully. Jan 20 00:35:23.200398 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 00:35:23.202238 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. Jan 20 00:35:23.204054 systemd-logind[1468]: Removed session 20. Jan 20 00:35:28.211597 systemd[1]: Started sshd@20-10.0.0.11:22-10.0.0.1:44282.service - OpenSSH per-connection server daemon (10.0.0.1:44282). Jan 20 00:35:28.258446 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 44282 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:28.260492 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:28.267218 systemd-logind[1468]: New session 21 of user core. Jan 20 00:35:28.279941 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 00:35:28.414829 sshd[4162]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:28.427108 systemd[1]: sshd@20-10.0.0.11:22-10.0.0.1:44282.service: Deactivated successfully. Jan 20 00:35:28.429004 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 00:35:28.430617 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. Jan 20 00:35:28.439022 systemd[1]: Started sshd@21-10.0.0.11:22-10.0.0.1:44284.service - OpenSSH per-connection server daemon (10.0.0.1:44284). Jan 20 00:35:28.440479 systemd-logind[1468]: Removed session 21. Jan 20 00:35:28.473360 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 44284 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:28.475189 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:28.480327 systemd-logind[1468]: New session 22 of user core. Jan 20 00:35:28.489948 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 00:35:28.803000 sshd[4176]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:28.813389 systemd[1]: sshd@21-10.0.0.11:22-10.0.0.1:44284.service: Deactivated successfully. Jan 20 00:35:28.815294 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 00:35:28.817305 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. Jan 20 00:35:28.827089 systemd[1]: Started sshd@22-10.0.0.11:22-10.0.0.1:44300.service - OpenSSH per-connection server daemon (10.0.0.1:44300). Jan 20 00:35:28.829005 systemd-logind[1468]: Removed session 22. Jan 20 00:35:28.871139 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 44300 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:28.873237 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:28.879646 systemd-logind[1468]: New session 23 of user core. Jan 20 00:35:28.890904 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 00:35:29.495576 sshd[4190]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:29.502625 systemd[1]: sshd@22-10.0.0.11:22-10.0.0.1:44300.service: Deactivated successfully. Jan 20 00:35:29.504255 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 00:35:29.507916 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. Jan 20 00:35:29.515199 systemd[1]: Started sshd@23-10.0.0.11:22-10.0.0.1:44302.service - OpenSSH per-connection server daemon (10.0.0.1:44302). Jan 20 00:35:29.517973 systemd-logind[1468]: Removed session 23. Jan 20 00:35:29.556644 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 44302 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:29.558931 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:29.564648 systemd-logind[1468]: New session 24 of user core. Jan 20 00:35:29.578003 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 00:35:29.913262 sshd[4209]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:29.918311 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit. Jan 20 00:35:29.918631 systemd[1]: sshd@23-10.0.0.11:22-10.0.0.1:44302.service: Deactivated successfully. Jan 20 00:35:29.920939 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 00:35:29.922270 systemd-logind[1468]: Removed session 24. Jan 20 00:35:29.951072 systemd[1]: Started sshd@24-10.0.0.11:22-10.0.0.1:44318.service - OpenSSH per-connection server daemon (10.0.0.1:44318). Jan 20 00:35:30.072623 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 44318 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:30.084648 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:30.148927 systemd-logind[1468]: New session 25 of user core. Jan 20 00:35:30.174186 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 00:35:30.405097 sshd[4222]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:30.420947 systemd[1]: sshd@24-10.0.0.11:22-10.0.0.1:44318.service: Deactivated successfully. Jan 20 00:35:30.426064 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 00:35:30.428053 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit. Jan 20 00:35:30.437872 systemd-logind[1468]: Removed session 25. 
Jan 20 00:35:35.414619 systemd[1]: Started sshd@25-10.0.0.11:22-10.0.0.1:51250.service - OpenSSH per-connection server daemon (10.0.0.1:51250). Jan 20 00:35:35.472198 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 51250 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:35.474593 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:35.480804 systemd-logind[1468]: New session 26 of user core. Jan 20 00:35:35.491106 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 00:35:35.616326 sshd[4236]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:35.621242 systemd[1]: sshd@25-10.0.0.11:22-10.0.0.1:51250.service: Deactivated successfully. Jan 20 00:35:35.624105 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 00:35:35.625779 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit. Jan 20 00:35:35.627549 systemd-logind[1468]: Removed session 26. Jan 20 00:35:35.682291 kubelet[2527]: E0120 00:35:35.682017 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:36.681456 kubelet[2527]: E0120 00:35:36.681383 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:40.650066 systemd[1]: Started sshd@26-10.0.0.11:22-10.0.0.1:51252.service - OpenSSH per-connection server daemon (10.0.0.1:51252). Jan 20 00:35:40.702943 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 51252 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:40.705243 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:40.712314 systemd-logind[1468]: New session 27 of user core. Jan 20 00:35:40.720912 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 00:35:40.847617 sshd[4252]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:40.852134 systemd[1]: sshd@26-10.0.0.11:22-10.0.0.1:51252.service: Deactivated successfully. Jan 20 00:35:40.855391 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 00:35:40.857844 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit. Jan 20 00:35:40.860039 systemd-logind[1468]: Removed session 27. Jan 20 00:35:45.862405 systemd[1]: Started sshd@27-10.0.0.11:22-10.0.0.1:34568.service - OpenSSH per-connection server daemon (10.0.0.1:34568). Jan 20 00:35:45.905866 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 34568 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:45.908101 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:45.914508 systemd-logind[1468]: New session 28 of user core. Jan 20 00:35:45.924020 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 00:35:46.051542 sshd[4268]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:46.056063 systemd[1]: sshd@27-10.0.0.11:22-10.0.0.1:34568.service: Deactivated successfully. Jan 20 00:35:46.059023 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 00:35:46.061383 systemd-logind[1468]: Session 28 logged out. Waiting for processes to exit. Jan 20 00:35:46.063632 systemd-logind[1468]: Removed session 28. 
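
Sessions 8 through 28 above all follow the same choreography: publickey accepted, pam_unix opens the session, systemd-logind registers session-N.scope, and the steps unwind in reverse on logout. When auditing a long journal like this one it can help to pair the open and close events mechanically; a small stdin-driven sketch (hypothetical tooling, not part of the system being logged):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// sessionEvent matches the pam_unix lines sshd emits in this journal,
// e.g. `sshd[3961]: pam_unix(sshd:session): session opened for user
// core...` and the matching "session closed" line; the PID in brackets
// ties an open to its close.
var sessionEvent = regexp.MustCompile(
	`sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	open := map[string]bool{} // sshd PID -> session currently open
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		// A physical line may hold several entries, so scan for all matches.
		for _, m := range sessionEvent.FindAllStringSubmatch(sc.Text(), -1) {
			open[m[1]] = m[2] == "opened"
		}
	}
	for pid, o := range open {
		if o {
			fmt.Println("session still open for sshd PID", pid)
		}
	}
}
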
Jan 20 00:35:47.680995 kubelet[2527]: E0120 00:35:47.680890 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:49.682475 kubelet[2527]: E0120 00:35:49.682314 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:50.682262 kubelet[2527]: E0120 00:35:50.682117 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:51.070736 systemd[1]: Started sshd@28-10.0.0.11:22-10.0.0.1:34570.service - OpenSSH per-connection server daemon (10.0.0.1:34570). Jan 20 00:35:51.127159 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 34570 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:51.133861 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:51.146746 systemd-logind[1468]: New session 29 of user core. Jan 20 00:35:51.159264 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 20 00:35:51.313375 sshd[4285]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:51.326555 systemd[1]: sshd@28-10.0.0.11:22-10.0.0.1:34570.service: Deactivated successfully. Jan 20 00:35:51.329577 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 00:35:51.332527 systemd-logind[1468]: Session 29 logged out. Waiting for processes to exit. Jan 20 00:35:51.338154 systemd[1]: Started sshd@29-10.0.0.11:22-10.0.0.1:34576.service - OpenSSH per-connection server daemon (10.0.0.1:34576). Jan 20 00:35:51.339930 systemd-logind[1468]: Removed session 29. Jan 20 00:35:51.381239 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 34576 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:51.383522 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:51.389278 systemd-logind[1468]: New session 30 of user core. Jan 20 00:35:51.402122 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 00:35:53.249167 containerd[1475]: time="2026-01-20T00:35:53.249121986Z" level=info msg="StopContainer for \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\" with timeout 30 (s)" Jan 20 00:35:53.251336 containerd[1475]: time="2026-01-20T00:35:53.251310181Z" level=info msg="Stop container \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\" with signal terminated" Jan 20 00:35:53.309235 systemd[1]: cri-containerd-cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c.scope: Deactivated successfully. Jan 20 00:35:53.309877 systemd[1]: cri-containerd-cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c.scope: Consumed 1.354s CPU time. Jan 20 00:35:53.337444 containerd[1475]: time="2026-01-20T00:35:53.337114315Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:35:53.364581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c-rootfs.mount: Deactivated successfully. 
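
The StopContainer entry above shows CRI stop semantics: containerd delivers the container's stop signal (SIGTERM, hence "signal terminated"), and the 30 seconds are the grace period before the runtime would fall back to SIGKILL. The same pattern in miniature, driving an ordinary child process rather than a real container:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sketches the SIGTERM-then-SIGKILL escalation seen in
// the log: signal, wait up to the grace period, then kill. It manages a
// plain process, not a container.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL fallback
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithTimeout(cmd, 30*time.Second))
}
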
Jan 20 00:35:53.388014 containerd[1475]: time="2026-01-20T00:35:53.387745780Z" level=info msg="shim disconnected" id=cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c namespace=k8s.io Jan 20 00:35:53.388014 containerd[1475]: time="2026-01-20T00:35:53.387811502Z" level=warning msg="cleaning up after shim disconnected" id=cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c namespace=k8s.io Jan 20 00:35:53.388014 containerd[1475]: time="2026-01-20T00:35:53.387823805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:35:53.389105 containerd[1475]: time="2026-01-20T00:35:53.389020941Z" level=info msg="StopContainer for \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\" with timeout 2 (s)" Jan 20 00:35:53.389786 containerd[1475]: time="2026-01-20T00:35:53.389713384Z" level=info msg="Stop container \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\" with signal terminated" Jan 20 00:35:53.415082 systemd-networkd[1406]: lxc_health: Link DOWN Jan 20 00:35:53.415118 systemd-networkd[1406]: lxc_health: Lost carrier Jan 20 00:35:53.445993 containerd[1475]: time="2026-01-20T00:35:53.445827560Z" level=info msg="StopContainer for \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\" returns successfully" Jan 20 00:35:53.449009 containerd[1475]: time="2026-01-20T00:35:53.447550912Z" level=info msg="StopPodSandbox for \"89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6\"" Jan 20 00:35:53.449009 containerd[1475]: time="2026-01-20T00:35:53.448296550Z" level=info msg="Container to stop \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:35:53.448357 systemd[1]: cri-containerd-15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe.scope: Deactivated successfully. Jan 20 00:35:53.450172 systemd[1]: cri-containerd-15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe.scope: Consumed 14.003s CPU time. Jan 20 00:35:53.453508 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6-shm.mount: Deactivated successfully. Jan 20 00:35:53.473938 systemd[1]: cri-containerd-89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6.scope: Deactivated successfully. Jan 20 00:35:53.499537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe-rootfs.mount: Deactivated successfully. Jan 20 00:35:53.513839 containerd[1475]: time="2026-01-20T00:35:53.513477258Z" level=info msg="shim disconnected" id=15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe namespace=k8s.io Jan 20 00:35:53.513839 containerd[1475]: time="2026-01-20T00:35:53.513548391Z" level=warning msg="cleaning up after shim disconnected" id=15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe namespace=k8s.io Jan 20 00:35:53.513839 containerd[1475]: time="2026-01-20T00:35:53.513564531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:35:53.528270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6-rootfs.mount: Deactivated successfully. 
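
Once 05-cilium.conf is removed (the REMOVE filesystem event logged above), containerd's CNI reload finds no network definition left in /etc/cni/net.d, and kubelet later reports "Container runtime network not ready ... cni plugin not initialized". A rough sketch of that emptiness check; the accepted extensions follow the CNI loader's usual convention and should be treated as an assumption:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether a CNI config directory still holds a
// network definition; the "no network config found in /etc/cni/net.d"
// error above corresponds to a scan like this coming up empty.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions the CNI loader accepts
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/cni/net.d")
	fmt.Println(ok, err)
}
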
Jan 20 00:35:53.545930 containerd[1475]: time="2026-01-20T00:35:53.545818245Z" level=info msg="shim disconnected" id=89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6 namespace=k8s.io Jan 20 00:35:53.545930 containerd[1475]: time="2026-01-20T00:35:53.545906560Z" level=warning msg="cleaning up after shim disconnected" id=89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6 namespace=k8s.io Jan 20 00:35:53.545930 containerd[1475]: time="2026-01-20T00:35:53.545924373Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:35:53.581757 containerd[1475]: time="2026-01-20T00:35:53.581326455Z" level=warning msg="cleanup warnings time=\"2026-01-20T00:35:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 20 00:35:53.589962 containerd[1475]: time="2026-01-20T00:35:53.589773629Z" level=info msg="StopContainer for \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\" returns successfully" Jan 20 00:35:53.591751 containerd[1475]: time="2026-01-20T00:35:53.591122988Z" level=info msg="StopPodSandbox for \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\"" Jan 20 00:35:53.591751 containerd[1475]: time="2026-01-20T00:35:53.591161239Z" level=info msg="Container to stop \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:35:53.591751 containerd[1475]: time="2026-01-20T00:35:53.591176918Z" level=info msg="Container to stop \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:35:53.591751 containerd[1475]: time="2026-01-20T00:35:53.591190103Z" level=info msg="Container to stop \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:35:53.591751 containerd[1475]: time="2026-01-20T00:35:53.591205021Z" level=info msg="Container to stop \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:35:53.591751 containerd[1475]: time="2026-01-20T00:35:53.591218356Z" level=info msg="Container to stop \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:35:53.604069 containerd[1475]: time="2026-01-20T00:35:53.602728158Z" level=info msg="TearDown network for sandbox \"89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6\" successfully" Jan 20 00:35:53.604069 containerd[1475]: time="2026-01-20T00:35:53.602780637Z" level=info msg="StopPodSandbox for \"89695397ea4d9afeaa0afdf8d66a01fb7b8fde6e9d43fd8341628d804f95cfa6\" returns successfully" Jan 20 00:35:53.612103 systemd[1]: cri-containerd-996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4.scope: Deactivated successfully. 
Jan 20 00:35:53.690956 containerd[1475]: time="2026-01-20T00:35:53.690847862Z" level=info msg="shim disconnected" id=996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4 namespace=k8s.io Jan 20 00:35:53.690956 containerd[1475]: time="2026-01-20T00:35:53.690920667Z" level=warning msg="cleaning up after shim disconnected" id=996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4 namespace=k8s.io Jan 20 00:35:53.690956 containerd[1475]: time="2026-01-20T00:35:53.690935724Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:35:53.713060 containerd[1475]: time="2026-01-20T00:35:53.712914535Z" level=warning msg="cleanup warnings time=\"2026-01-20T00:35:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 20 00:35:53.716795 containerd[1475]: time="2026-01-20T00:35:53.716730541Z" level=info msg="TearDown network for sandbox \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" successfully" Jan 20 00:35:53.716795 containerd[1475]: time="2026-01-20T00:35:53.716787648Z" level=info msg="StopPodSandbox for \"996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4\" returns successfully" Jan 20 00:35:53.805548 kubelet[2527]: I0120 00:35:53.805337 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9364a2e5-d61f-4cd6-8271-f18840a19c20-cilium-config-path\") pod \"9364a2e5-d61f-4cd6-8271-f18840a19c20\" (UID: \"9364a2e5-d61f-4cd6-8271-f18840a19c20\") " Jan 20 00:35:53.805548 kubelet[2527]: I0120 00:35:53.805437 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqz77\" (UniqueName: \"kubernetes.io/projected/9364a2e5-d61f-4cd6-8271-f18840a19c20-kube-api-access-cqz77\") pod \"9364a2e5-d61f-4cd6-8271-f18840a19c20\" (UID: \"9364a2e5-d61f-4cd6-8271-f18840a19c20\") " Jan 20 00:35:53.810864 kubelet[2527]: I0120 00:35:53.810796 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9364a2e5-d61f-4cd6-8271-f18840a19c20-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9364a2e5-d61f-4cd6-8271-f18840a19c20" (UID: "9364a2e5-d61f-4cd6-8271-f18840a19c20"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:35:53.813927 kubelet[2527]: I0120 00:35:53.813567 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9364a2e5-d61f-4cd6-8271-f18840a19c20-kube-api-access-cqz77" (OuterVolumeSpecName: "kube-api-access-cqz77") pod "9364a2e5-d61f-4cd6-8271-f18840a19c20" (UID: "9364a2e5-d61f-4cd6-8271-f18840a19c20"). InnerVolumeSpecName "kube-api-access-cqz77". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:35:53.907009 kubelet[2527]: I0120 00:35:53.906869 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-xtables-lock\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907009 kubelet[2527]: I0120 00:35:53.906960 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-config-path\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907009 kubelet[2527]: I0120 00:35:53.906993 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-clustermesh-secrets\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907009 kubelet[2527]: I0120 00:35:53.907014 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cni-path\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907319 kubelet[2527]: I0120 00:35:53.907038 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-cgroup\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907319 kubelet[2527]: I0120 00:35:53.907059 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-hostproc\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907319 kubelet[2527]: I0120 00:35:53.907082 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-host-proc-sys-net\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907319 kubelet[2527]: I0120 00:35:53.907131 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-run\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907319 kubelet[2527]: I0120 00:35:53.907154 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-lib-modules\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907319 kubelet[2527]: I0120 00:35:53.907173 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-host-proc-sys-kernel\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") "
Jan 20 00:35:53.907538 kubelet[2527]: I0120 00:35:53.907198 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-hubble-tls\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907538 kubelet[2527]: I0120 00:35:53.907204 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cni-path" (OuterVolumeSpecName: "cni-path") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:35:53.907538 kubelet[2527]: I0120 00:35:53.907219 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-bpf-maps\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907538 kubelet[2527]: I0120 00:35:53.907292 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k9pb\" (UniqueName: \"kubernetes.io/projected/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-kube-api-access-8k9pb\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907538 kubelet[2527]: I0120 00:35:53.907315 2527 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-etc-cni-netd\") pod \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\" (UID: \"34d1696e-aef8-4a5f-8c36-9efadfc0cd0b\") " Jan 20 00:35:53.907538 kubelet[2527]: I0120 00:35:53.907377 2527 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9364a2e5-d61f-4cd6-8271-f18840a19c20-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:53.907884 kubelet[2527]: I0120 00:35:53.907392 2527 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cqz77\" (UniqueName: \"kubernetes.io/projected/9364a2e5-d61f-4cd6-8271-f18840a19c20-kube-api-access-cqz77\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:53.907884 kubelet[2527]: I0120 00:35:53.907405 2527 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:53.907884 kubelet[2527]: I0120 00:35:53.907459 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:35:53.907884 kubelet[2527]: I0120 00:35:53.907546 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:35:53.907884 kubelet[2527]: I0120 00:35:53.907575 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-hostproc" (OuterVolumeSpecName: "hostproc") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:35:53.908070 kubelet[2527]: I0120 00:35:53.907596 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:35:53.908070 kubelet[2527]: I0120 00:35:53.907651 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:35:53.908070 kubelet[2527]: I0120 00:35:53.907733 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:35:53.908070 kubelet[2527]: I0120 00:35:53.907757 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:35:53.908070 kubelet[2527]: I0120 00:35:53.907894 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:35:53.908235 kubelet[2527]: I0120 00:35:53.907942 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:35:53.914918 kubelet[2527]: I0120 00:35:53.914745 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-kube-api-access-8k9pb" (OuterVolumeSpecName: "kube-api-access-8k9pb") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "kube-api-access-8k9pb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:35:53.915360 kubelet[2527]: I0120 00:35:53.915315 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:35:53.916747 kubelet[2527]: I0120 00:35:53.916523 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:35:53.916929 kubelet[2527]: I0120 00:35:53.916887 2527 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" (UID: "34d1696e-aef8-4a5f-8c36-9efadfc0cd0b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 00:35:54.008426 kubelet[2527]: I0120 00:35:54.008328 2527 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008426 kubelet[2527]: I0120 00:35:54.008394 2527 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008426 kubelet[2527]: I0120 00:35:54.008417 2527 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008426 kubelet[2527]: I0120 00:35:54.008432 2527 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008426 kubelet[2527]: I0120 00:35:54.008444 2527 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008943 kubelet[2527]: I0120 00:35:54.008458 2527 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008943 kubelet[2527]: I0120 00:35:54.008470 2527 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008943 kubelet[2527]: I0120 00:35:54.008481 2527 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008943 kubelet[2527]: I0120 00:35:54.008494 2527 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008943 kubelet[2527]: I0120 00:35:54.008505 2527 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008943 kubelet[2527]: I0120 00:35:54.008517 2527 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008943 kubelet[2527]: I0120 00:35:54.008529 2527 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8k9pb\" (UniqueName: \"kubernetes.io/projected/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-kube-api-access-8k9pb\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.008943 kubelet[2527]: I0120 00:35:54.008543 2527 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 20 00:35:54.036755 kubelet[2527]: I0120 00:35:54.036592 2527 scope.go:117] "RemoveContainer" containerID="15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe" Jan 20 00:35:54.040854 containerd[1475]: time="2026-01-20T00:35:54.040321106Z" level=info msg="RemoveContainer for \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\"" Jan 20 00:35:54.047522 systemd[1]: Removed slice kubepods-burstable-pod34d1696e_aef8_4a5f_8c36_9efadfc0cd0b.slice - libcontainer container kubepods-burstable-pod34d1696e_aef8_4a5f_8c36_9efadfc0cd0b.slice. Jan 20 00:35:54.047793 systemd[1]: kubepods-burstable-pod34d1696e_aef8_4a5f_8c36_9efadfc0cd0b.slice: Consumed 14.233s CPU time. Jan 20 00:35:54.051409 systemd[1]: Removed slice kubepods-besteffort-pod9364a2e5_d61f_4cd6_8271_f18840a19c20.slice - libcontainer container kubepods-besteffort-pod9364a2e5_d61f_4cd6_8271_f18840a19c20.slice. Jan 20 00:35:54.051564 systemd[1]: kubepods-besteffort-pod9364a2e5_d61f_4cd6_8271_f18840a19c20.slice: Consumed 1.413s CPU time.
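
The "Consumed 14.233s CPU time" and "Consumed 1.413s CPU time" figures systemd prints while removing the kubepods slices come from per-unit CPU accounting; on a cgroup v2 host that is the usage_usec counter in the unit's cpu.stat file. A small reader, with the cgroup path as an example only:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// cpuSeconds reads usage_usec from a cgroup v2 cpu.stat file, the same
// accounting behind systemd's "Consumed N s CPU time" messages above.
func cpuSeconds(cgroupDir string) (float64, error) {
	data, err := os.ReadFile(cgroupDir + "/cpu.stat")
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "usage_usec" {
			usec, err := strconv.ParseFloat(fields[1], 64)
			return usec / 1e6, err // microseconds -> seconds
		}
	}
	return 0, fmt.Errorf("usage_usec not found in %s/cpu.stat", cgroupDir)
}

func main() {
	s, err := cpuSeconds("/sys/fs/cgroup/system.slice") // example path
	fmt.Println(s, err)
}
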
Jan 20 00:35:54.123105 containerd[1475]: time="2026-01-20T00:35:54.123020060Z" level=info msg="RemoveContainer for \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\" returns successfully" Jan 20 00:35:54.123498 kubelet[2527]: I0120 00:35:54.123437 2527 scope.go:117] "RemoveContainer" containerID="10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571" Jan 20 00:35:54.126646 containerd[1475]: time="2026-01-20T00:35:54.126212658Z" level=info msg="RemoveContainer for \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\"" Jan 20 00:35:54.131992 containerd[1475]: time="2026-01-20T00:35:54.131945757Z" level=info msg="RemoveContainer for \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\" returns successfully" Jan 20 00:35:54.132776 kubelet[2527]: I0120 00:35:54.132584 2527 scope.go:117] "RemoveContainer" containerID="386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306" Jan 20 00:35:54.134885 containerd[1475]: time="2026-01-20T00:35:54.134775969Z" level=info msg="RemoveContainer for \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\"" Jan 20 00:35:54.139454 containerd[1475]: time="2026-01-20T00:35:54.139366337Z" level=info msg="RemoveContainer for \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\" returns successfully" Jan 20 00:35:54.139916 kubelet[2527]: I0120 00:35:54.139829 2527 scope.go:117] "RemoveContainer" containerID="d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7" Jan 20 00:35:54.143087 containerd[1475]: time="2026-01-20T00:35:54.142958755Z" level=info msg="RemoveContainer for \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\"" Jan 20 00:35:54.148317 containerd[1475]: time="2026-01-20T00:35:54.148229591Z" level=info msg="RemoveContainer for \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\" returns successfully" Jan 20 00:35:54.148571 kubelet[2527]: I0120 00:35:54.148520 2527 scope.go:117] "RemoveContainer" containerID="c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f" Jan 20 00:35:54.150860 containerd[1475]: time="2026-01-20T00:35:54.150769576Z" level=info msg="RemoveContainer for \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\"" Jan 20 00:35:54.156315 containerd[1475]: time="2026-01-20T00:35:54.155846780Z" level=info msg="RemoveContainer for \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\" returns successfully" Jan 20 00:35:54.156528 kubelet[2527]: I0120 00:35:54.156363 2527 scope.go:117] "RemoveContainer" containerID="15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe" Jan 20 00:35:54.162335 containerd[1475]: time="2026-01-20T00:35:54.162165077Z" level=error msg="ContainerStatus for \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\": not found" Jan 20 00:35:54.162529 kubelet[2527]: E0120 00:35:54.162439 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\": not found" containerID="15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe" Jan 20 00:35:54.162529 kubelet[2527]: I0120 00:35:54.162478 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe"} err="failed to get container status \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"15a501663f4744fdf606156ac7745cfd07e5a6facfe1a9acdaad3007a8a7a0fe\": not found" Jan 20 00:35:54.162896 kubelet[2527]: I0120 00:35:54.162834 2527 scope.go:117] "RemoveContainer" containerID="10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571" Jan 20 00:35:54.163760 containerd[1475]: time="2026-01-20T00:35:54.163114028Z" level=error msg="ContainerStatus for \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\": not found" Jan 20 00:35:54.163891 kubelet[2527]: E0120 00:35:54.163301 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\": not found" containerID="10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571" Jan 20 00:35:54.163891 kubelet[2527]: I0120 00:35:54.163373 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571"} err="failed to get container status \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\": rpc error: code = NotFound desc = an error occurred when try to find container \"10a0064d76948273a825dc76e87d020d0ea719f41c0943a2fc75d4887c8c3571\": not found" Jan 20 00:35:54.163891 kubelet[2527]: I0120 00:35:54.163413 2527 scope.go:117] "RemoveContainer" containerID="386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306" Jan 20 00:35:54.164230 containerd[1475]: time="2026-01-20T00:35:54.164004020Z" level=error msg="ContainerStatus for \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\": not found" Jan 20 00:35:54.164597 kubelet[2527]: E0120 00:35:54.164390 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\": not found" containerID="386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306" Jan 20 00:35:54.164597 kubelet[2527]: I0120 00:35:54.164481 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306"} err="failed to get container status \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\": rpc error: code = NotFound desc = an error occurred when try to find container \"386c78669968f801a65d9f6996b0601c196905e97c52527f4bd4ddee8bfe8306\": not found" Jan 20 00:35:54.164597 kubelet[2527]: I0120 00:35:54.164517 2527 scope.go:117] "RemoveContainer" containerID="d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7" Jan 20 00:35:54.164939 containerd[1475]: time="2026-01-20T00:35:54.164869796Z" level=error msg="ContainerStatus for \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\": not found" Jan 20 00:35:54.165140 kubelet[2527]: E0120 00:35:54.164991 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\": not found" containerID="d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7" Jan 20 00:35:54.165140 kubelet[2527]: I0120 00:35:54.165117 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7"} err="failed to get container status \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5dce5ec2356672e25f725031505a448058efbf00e9e2ad68a134883826980c7\": not found" Jan 20 00:35:54.165140 kubelet[2527]: I0120 00:35:54.165133 2527 scope.go:117] "RemoveContainer" containerID="c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f" Jan 20 00:35:54.165401 containerd[1475]: time="2026-01-20T00:35:54.165308124Z" level=error msg="ContainerStatus for \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\": not found" Jan 20 00:35:54.165592 kubelet[2527]: E0120 00:35:54.165442 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\": not found" containerID="c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f" Jan 20 00:35:54.165592 kubelet[2527]: I0120 00:35:54.165458 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f"} err="failed to get container status \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1478a973e8dc321326942c249e0d92434e9d57d5a6d77347365388a0050543f\": not found" Jan 20 00:35:54.165797 kubelet[2527]: I0120 00:35:54.165636 2527 scope.go:117] "RemoveContainer" containerID="cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c" Jan 20 00:35:54.166990 containerd[1475]: time="2026-01-20T00:35:54.166944891Z" level=info msg="RemoveContainer for \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\"" Jan 20 00:35:54.172406 containerd[1475]: time="2026-01-20T00:35:54.172304539Z" level=info msg="RemoveContainer for \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\" returns successfully" Jan 20 00:35:54.172760 kubelet[2527]: I0120 00:35:54.172733 2527 scope.go:117] "RemoveContainer" containerID="cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c" Jan 20 00:35:54.173262 containerd[1475]: time="2026-01-20T00:35:54.173184597Z" level=error msg="ContainerStatus for \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\": not found" Jan 20 00:35:54.173512 kubelet[2527]: E0120 00:35:54.173408 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\": not found" containerID="cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c" Jan 20 00:35:54.173512 kubelet[2527]: I0120 00:35:54.173453 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c"} err="failed to get container status \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\": rpc error: code = NotFound desc = an error occurred when try to find container \"cda74177dd79db04aee2cc9bdc7ff2ab1ded00961ae410d2a5e549f12e89134c\": not found" Jan 20 00:35:54.305984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4-rootfs.mount: Deactivated successfully. Jan 20 00:35:54.306171 systemd[1]: var-lib-kubelet-pods-9364a2e5\x2dd61f\x2d4cd6\x2d8271\x2df18840a19c20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqz77.mount: Deactivated successfully. Jan 20 00:35:54.306261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-996c2ffbb4da30ae08c1a5a9be75c505907754e26681a8d5dc901f82071b4bc4-shm.mount: Deactivated successfully. Jan 20 00:35:54.306333 systemd[1]: var-lib-kubelet-pods-34d1696e\x2daef8\x2d4a5f\x2d8c36\x2d9efadfc0cd0b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 20 00:35:54.306405 systemd[1]: var-lib-kubelet-pods-34d1696e\x2daef8\x2d4a5f\x2d8c36\x2d9efadfc0cd0b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8k9pb.mount: Deactivated successfully. Jan 20 00:35:54.306475 systemd[1]: var-lib-kubelet-pods-34d1696e\x2daef8\x2d4a5f\x2d8c36\x2d9efadfc0cd0b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 20 00:35:54.687766 kubelet[2527]: I0120 00:35:54.687491 2527 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34d1696e-aef8-4a5f-8c36-9efadfc0cd0b" path="/var/lib/kubelet/pods/34d1696e-aef8-4a5f-8c36-9efadfc0cd0b/volumes" Jan 20 00:35:54.689040 kubelet[2527]: I0120 00:35:54.688956 2527 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9364a2e5-d61f-4cd6-8271-f18840a19c20" path="/var/lib/kubelet/pods/9364a2e5-d61f-4cd6-8271-f18840a19c20/volumes" Jan 20 00:35:54.835558 kubelet[2527]: E0120 00:35:54.835319 2527 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 00:35:55.115490 sshd[4301]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:55.133107 systemd[1]: sshd@29-10.0.0.11:22-10.0.0.1:34576.service: Deactivated successfully. Jan 20 00:35:55.135590 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 00:35:55.136115 systemd[1]: session-30.scope: Consumed 1.106s CPU time. Jan 20 00:35:55.138503 systemd-logind[1468]: Session 30 logged out. Waiting for processes to exit. Jan 20 00:35:55.145324 systemd[1]: Started sshd@30-10.0.0.11:22-10.0.0.1:38458.service - OpenSSH per-connection server daemon (10.0.0.1:38458). Jan 20 00:35:55.147202 systemd-logind[1468]: Removed session 30.
Jan 20 00:35:55.204068 sshd[4460]: Accepted publickey for core from 10.0.0.1 port 38458 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:35:55.206120 sshd[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:35:55.212808 systemd-logind[1468]: New session 31 of user core.
Jan 20 00:35:55.222963 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 20 00:35:55.938512 sshd[4460]: pam_unix(sshd:session): session closed for user core
Jan 20 00:35:55.952291 systemd[1]: sshd@30-10.0.0.11:22-10.0.0.1:38458.service: Deactivated successfully.
Jan 20 00:35:55.958078 systemd[1]: session-31.scope: Deactivated successfully.
Jan 20 00:35:55.962082 systemd-logind[1468]: Session 31 logged out. Waiting for processes to exit.
Jan 20 00:35:55.973603 systemd[1]: Started sshd@31-10.0.0.11:22-10.0.0.1:38464.service - OpenSSH per-connection server daemon (10.0.0.1:38464).
Jan 20 00:35:55.983278 systemd-logind[1468]: Removed session 31.
Jan 20 00:35:55.992774 systemd[1]: Created slice kubepods-burstable-pod00a2f726_180d_4638_95b8_cf3d1834de6f.slice - libcontainer container kubepods-burstable-pod00a2f726_180d_4638_95b8_cf3d1834de6f.slice.
Jan 20 00:35:56.018448 sshd[4473]: Accepted publickey for core from 10.0.0.1 port 38464 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:35:56.021085 sshd[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:35:56.030222 systemd-logind[1468]: New session 32 of user core.
Jan 20 00:35:56.045251 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 20 00:35:56.110990 sshd[4473]: pam_unix(sshd:session): session closed for user core
Jan 20 00:35:56.123392 systemd[1]: sshd@31-10.0.0.11:22-10.0.0.1:38464.service: Deactivated successfully.
Jan 20 00:35:56.125872 systemd[1]: session-32.scope: Deactivated successfully.
Jan 20 00:35:56.128277 systemd-logind[1468]: Session 32 logged out. Waiting for processes to exit.
Jan 20 00:35:56.128575 kubelet[2527]: I0120 00:35:56.128443 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00a2f726-180d-4638-95b8-cf3d1834de6f-host-proc-sys-net\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129124 kubelet[2527]: I0120 00:35:56.128565 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00a2f726-180d-4638-95b8-cf3d1834de6f-hubble-tls\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129124 kubelet[2527]: I0120 00:35:56.128731 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r559l\" (UniqueName: \"kubernetes.io/projected/00a2f726-180d-4638-95b8-cf3d1834de6f-kube-api-access-r559l\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129124 kubelet[2527]: I0120 00:35:56.128755 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00a2f726-180d-4638-95b8-cf3d1834de6f-host-proc-sys-kernel\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129124 kubelet[2527]: I0120 00:35:56.128776 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00a2f726-180d-4638-95b8-cf3d1834de6f-cni-path\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129124 kubelet[2527]: I0120 00:35:56.128797 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00a2f726-180d-4638-95b8-cf3d1834de6f-lib-modules\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129124 kubelet[2527]: I0120 00:35:56.128821 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00a2f726-180d-4638-95b8-cf3d1834de6f-xtables-lock\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129318 kubelet[2527]: I0120 00:35:56.128844 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00a2f726-180d-4638-95b8-cf3d1834de6f-cilium-config-path\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129318 kubelet[2527]: I0120 00:35:56.128864 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00a2f726-180d-4638-95b8-cf3d1834de6f-clustermesh-secrets\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129318 kubelet[2527]: I0120 00:35:56.128885 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/00a2f726-180d-4638-95b8-cf3d1834de6f-cilium-ipsec-secrets\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129318 kubelet[2527]: I0120 00:35:56.128908 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00a2f726-180d-4638-95b8-cf3d1834de6f-bpf-maps\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129318 kubelet[2527]: I0120 00:35:56.128983 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00a2f726-180d-4638-95b8-cf3d1834de6f-hostproc\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129318 kubelet[2527]: I0120 00:35:56.129049 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00a2f726-180d-4638-95b8-cf3d1834de6f-etc-cni-netd\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129491 kubelet[2527]: I0120 00:35:56.129078 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00a2f726-180d-4638-95b8-cf3d1834de6f-cilium-run\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.129491 kubelet[2527]: I0120 00:35:56.129107 2527 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00a2f726-180d-4638-95b8-cf3d1834de6f-cilium-cgroup\") pod \"cilium-kqvrz\" (UID: \"00a2f726-180d-4638-95b8-cf3d1834de6f\") " pod="kube-system/cilium-kqvrz"
Jan 20 00:35:56.137157 systemd[1]: Started sshd@32-10.0.0.11:22-10.0.0.1:38476.service - OpenSSH per-connection server daemon (10.0.0.1:38476).
Jan 20 00:35:56.138992 systemd-logind[1468]: Removed session 32.
Jan 20 00:35:56.178333 sshd[4481]: Accepted publickey for core from 10.0.0.1 port 38476 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:35:56.180834 sshd[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:35:56.186848 systemd-logind[1468]: New session 33 of user core.
Jan 20 00:35:56.191830 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 20 00:35:56.303390 kubelet[2527]: E0120 00:35:56.303291 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:35:56.304346 containerd[1475]: time="2026-01-20T00:35:56.304293989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqvrz,Uid:00a2f726-180d-4638-95b8-cf3d1834de6f,Namespace:kube-system,Attempt:0,}"
Jan 20 00:35:56.339893 containerd[1475]: time="2026-01-20T00:35:56.339433461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:35:56.339893 containerd[1475]: time="2026-01-20T00:35:56.339554377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:35:56.339893 containerd[1475]: time="2026-01-20T00:35:56.339577039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:35:56.339893 containerd[1475]: time="2026-01-20T00:35:56.339808550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:35:56.369906 systemd[1]: Started cri-containerd-fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426.scope - libcontainer container fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426.
Jan 20 00:35:56.400463 containerd[1475]: time="2026-01-20T00:35:56.400386266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqvrz,Uid:00a2f726-180d-4638-95b8-cf3d1834de6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\""
Jan 20 00:35:56.401554 kubelet[2527]: E0120 00:35:56.401418 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:35:56.407534 containerd[1475]: time="2026-01-20T00:35:56.407505646Z" level=info msg="CreateContainer within sandbox \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 20 00:35:56.430550 containerd[1475]: time="2026-01-20T00:35:56.430385930Z" level=info msg="CreateContainer within sandbox \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f15339cc055c06913735b8148bc7b995b4c391bba4ae20f8880db492c6fe6f6\""
Jan 20 00:35:56.431564 containerd[1475]: time="2026-01-20T00:35:56.431525622Z" level=info msg="StartContainer for \"5f15339cc055c06913735b8148bc7b995b4c391bba4ae20f8880db492c6fe6f6\""
Jan 20 00:35:56.473895 systemd[1]: Started cri-containerd-5f15339cc055c06913735b8148bc7b995b4c391bba4ae20f8880db492c6fe6f6.scope - libcontainer container 5f15339cc055c06913735b8148bc7b995b4c391bba4ae20f8880db492c6fe6f6.
Jan 20 00:35:56.509329 containerd[1475]: time="2026-01-20T00:35:56.509135721Z" level=info msg="StartContainer for \"5f15339cc055c06913735b8148bc7b995b4c391bba4ae20f8880db492c6fe6f6\" returns successfully"
Jan 20 00:35:56.524446 systemd[1]: cri-containerd-5f15339cc055c06913735b8148bc7b995b4c391bba4ae20f8880db492c6fe6f6.scope: Deactivated successfully.
Jan 20 00:35:56.583954 containerd[1475]: time="2026-01-20T00:35:56.583820065Z" level=info msg="shim disconnected" id=5f15339cc055c06913735b8148bc7b995b4c391bba4ae20f8880db492c6fe6f6 namespace=k8s.io
Jan 20 00:35:56.583954 containerd[1475]: time="2026-01-20T00:35:56.583886770Z" level=warning msg="cleaning up after shim disconnected" id=5f15339cc055c06913735b8148bc7b995b4c391bba4ae20f8880db492c6fe6f6 namespace=k8s.io
Jan 20 00:35:56.583954 containerd[1475]: time="2026-01-20T00:35:56.583900024Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:35:57.056180 kubelet[2527]: E0120 00:35:57.056020 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:35:57.065718 containerd[1475]: time="2026-01-20T00:35:57.063514357Z" level=info msg="CreateContainer within sandbox \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 20 00:35:57.079835 containerd[1475]: time="2026-01-20T00:35:57.079751654Z" level=info msg="CreateContainer within sandbox \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7633a777647b4b387b01a498fc9265dd35811c63ce0c812080d788d0f8c977e0\""
Jan 20 00:35:57.080883 containerd[1475]: time="2026-01-20T00:35:57.080786861Z" level=info msg="StartContainer for \"7633a777647b4b387b01a498fc9265dd35811c63ce0c812080d788d0f8c977e0\""
Jan 20 00:35:57.117970 systemd[1]: Started cri-containerd-7633a777647b4b387b01a498fc9265dd35811c63ce0c812080d788d0f8c977e0.scope - libcontainer container 7633a777647b4b387b01a498fc9265dd35811c63ce0c812080d788d0f8c977e0.
Jan 20 00:35:57.155749 containerd[1475]: time="2026-01-20T00:35:57.154831488Z" level=info msg="StartContainer for \"7633a777647b4b387b01a498fc9265dd35811c63ce0c812080d788d0f8c977e0\" returns successfully"
Jan 20 00:35:57.167481 systemd[1]: cri-containerd-7633a777647b4b387b01a498fc9265dd35811c63ce0c812080d788d0f8c977e0.scope: Deactivated successfully.
Jan 20 00:35:57.196836 containerd[1475]: time="2026-01-20T00:35:57.196600826Z" level=info msg="shim disconnected" id=7633a777647b4b387b01a498fc9265dd35811c63ce0c812080d788d0f8c977e0 namespace=k8s.io
Jan 20 00:35:57.196836 containerd[1475]: time="2026-01-20T00:35:57.196818292Z" level=warning msg="cleaning up after shim disconnected" id=7633a777647b4b387b01a498fc9265dd35811c63ce0c812080d788d0f8c977e0 namespace=k8s.io
Jan 20 00:35:57.196836 containerd[1475]: time="2026-01-20T00:35:57.196832148Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:35:58.062838 kubelet[2527]: E0120 00:35:58.062528 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:35:58.070938 containerd[1475]: time="2026-01-20T00:35:58.070893554Z" level=info msg="CreateContainer within sandbox \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 20 00:35:58.144918 containerd[1475]: time="2026-01-20T00:35:58.144841495Z" level=info msg="CreateContainer within sandbox \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a476d76475161633e7bc670edbd6c42dbf0f21ef6f8cc7f141011336caad0230\""
Jan 20 00:35:58.145940 containerd[1475]: time="2026-01-20T00:35:58.145879420Z" level=info msg="StartContainer for \"a476d76475161633e7bc670edbd6c42dbf0f21ef6f8cc7f141011336caad0230\""
Jan 20 00:35:58.187460 systemd[1]: Started cri-containerd-a476d76475161633e7bc670edbd6c42dbf0f21ef6f8cc7f141011336caad0230.scope - libcontainer container a476d76475161633e7bc670edbd6c42dbf0f21ef6f8cc7f141011336caad0230.
Jan 20 00:35:58.231602 containerd[1475]: time="2026-01-20T00:35:58.231474976Z" level=info msg="StartContainer for \"a476d76475161633e7bc670edbd6c42dbf0f21ef6f8cc7f141011336caad0230\" returns successfully"
Jan 20 00:35:58.233112 systemd[1]: cri-containerd-a476d76475161633e7bc670edbd6c42dbf0f21ef6f8cc7f141011336caad0230.scope: Deactivated successfully.
Jan 20 00:35:58.260351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a476d76475161633e7bc670edbd6c42dbf0f21ef6f8cc7f141011336caad0230-rootfs.mount: Deactivated successfully.
Jan 20 00:35:58.269810 containerd[1475]: time="2026-01-20T00:35:58.269732531Z" level=info msg="shim disconnected" id=a476d76475161633e7bc670edbd6c42dbf0f21ef6f8cc7f141011336caad0230 namespace=k8s.io
Jan 20 00:35:58.269949 containerd[1475]: time="2026-01-20T00:35:58.269809154Z" level=warning msg="cleaning up after shim disconnected" id=a476d76475161633e7bc670edbd6c42dbf0f21ef6f8cc7f141011336caad0230 namespace=k8s.io
Jan 20 00:35:58.269949 containerd[1475]: time="2026-01-20T00:35:58.269829642Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:35:58.466159 kubelet[2527]: I0120 00:35:58.466019 2527 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T00:35:58Z","lastTransitionTime":"2026-01-20T00:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 20 00:35:59.067612 kubelet[2527]: E0120 00:35:59.067566 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:35:59.075439 containerd[1475]: time="2026-01-20T00:35:59.075243676Z" level=info msg="CreateContainer within sandbox \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 00:35:59.096702 containerd[1475]: time="2026-01-20T00:35:59.096582068Z" level=info msg="CreateContainer within sandbox \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6c0cdf83c832096f280c9110dacbccaddd879b7c4465edc14333ad9f6705e1f3\""
Jan 20 00:35:59.097457 containerd[1475]: time="2026-01-20T00:35:59.097417607Z" level=info msg="StartContainer for \"6c0cdf83c832096f280c9110dacbccaddd879b7c4465edc14333ad9f6705e1f3\""
Jan 20 00:35:59.128864 systemd[1]: Started cri-containerd-6c0cdf83c832096f280c9110dacbccaddd879b7c4465edc14333ad9f6705e1f3.scope - libcontainer container 6c0cdf83c832096f280c9110dacbccaddd879b7c4465edc14333ad9f6705e1f3.
Jan 20 00:35:59.159940 systemd[1]: cri-containerd-6c0cdf83c832096f280c9110dacbccaddd879b7c4465edc14333ad9f6705e1f3.scope: Deactivated successfully.
Jan 20 00:35:59.162037 containerd[1475]: time="2026-01-20T00:35:59.161997805Z" level=info msg="StartContainer for \"6c0cdf83c832096f280c9110dacbccaddd879b7c4465edc14333ad9f6705e1f3\" returns successfully"
Jan 20 00:35:59.195945 containerd[1475]: time="2026-01-20T00:35:59.195550175Z" level=info msg="shim disconnected" id=6c0cdf83c832096f280c9110dacbccaddd879b7c4465edc14333ad9f6705e1f3 namespace=k8s.io
Jan 20 00:35:59.195945 containerd[1475]: time="2026-01-20T00:35:59.195652446Z" level=warning msg="cleaning up after shim disconnected" id=6c0cdf83c832096f280c9110dacbccaddd879b7c4465edc14333ad9f6705e1f3 namespace=k8s.io
Jan 20 00:35:59.195945 containerd[1475]: time="2026-01-20T00:35:59.195720654Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:35:59.237107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c0cdf83c832096f280c9110dacbccaddd879b7c4465edc14333ad9f6705e1f3-rootfs.mount: Deactivated successfully.
Jan 20 00:35:59.836754 kubelet[2527]: E0120 00:35:59.836617 2527 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 00:36:00.075369 kubelet[2527]: E0120 00:36:00.075319 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:00.083336 containerd[1475]: time="2026-01-20T00:36:00.083215370Z" level=info msg="CreateContainer within sandbox \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 00:36:00.106267 containerd[1475]: time="2026-01-20T00:36:00.106053987Z" level=info msg="CreateContainer within sandbox \"fcb89e546a1565fe5335d96edd14b181800a5222447dfb71d4dbf8e35ec1d426\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d65d2de8537a6a4b3dd6bc4c12bf6134f8146bfc314b6050d1e8adf001add3cb\""
Jan 20 00:36:00.107148 containerd[1475]: time="2026-01-20T00:36:00.106990427Z" level=info msg="StartContainer for \"d65d2de8537a6a4b3dd6bc4c12bf6134f8146bfc314b6050d1e8adf001add3cb\""
Jan 20 00:36:00.145909 systemd[1]: Started cri-containerd-d65d2de8537a6a4b3dd6bc4c12bf6134f8146bfc314b6050d1e8adf001add3cb.scope - libcontainer container d65d2de8537a6a4b3dd6bc4c12bf6134f8146bfc314b6050d1e8adf001add3cb.
Jan 20 00:36:00.185479 containerd[1475]: time="2026-01-20T00:36:00.185424866Z" level=info msg="StartContainer for \"d65d2de8537a6a4b3dd6bc4c12bf6134f8146bfc314b6050d1e8adf001add3cb\" returns successfully"
Jan 20 00:36:00.704740 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 20 00:36:01.080534 kubelet[2527]: E0120 00:36:01.080487 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:01.114802 kubelet[2527]: I0120 00:36:01.114703 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kqvrz" podStartSLOduration=6.114584076 podStartE2EDuration="6.114584076s" podCreationTimestamp="2026-01-20 00:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:36:01.111076589 +0000 UTC m=+166.589449198" watchObservedRunningTime="2026-01-20 00:36:01.114584076 +0000 UTC m=+166.592956696"
Jan 20 00:36:02.304573 kubelet[2527]: E0120 00:36:02.304466 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:05.263390 systemd-networkd[1406]: lxc_health: Link UP
Jan 20 00:36:05.268918 systemd-networkd[1406]: lxc_health: Gained carrier
Jan 20 00:36:06.307130 kubelet[2527]: E0120 00:36:06.305345 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:06.310078 systemd-networkd[1406]: lxc_health: Gained IPv6LL
Jan 20 00:36:07.125537 kubelet[2527]: E0120 00:36:07.125407 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:08.127456 kubelet[2527]: E0120 00:36:08.127386 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:11.388848 sshd[4481]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:11.393008 systemd[1]: sshd@32-10.0.0.11:22-10.0.0.1:38476.service: Deactivated successfully.
Jan 20 00:36:11.395208 systemd[1]: session-33.scope: Deactivated successfully.
Jan 20 00:36:11.397263 systemd-logind[1468]: Session 33 logged out. Waiting for processes to exit.
Jan 20 00:36:11.399867 systemd-logind[1468]: Removed session 33.