Jan 17 00:30:29.677557 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:30:29.677588 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:30:29.677604 kernel: BIOS-provided physical RAM map: Jan 17 00:30:29.677614 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 00:30:29.677623 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 17 00:30:29.677632 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 17 00:30:29.677643 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 17 00:30:29.677653 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 17 00:30:29.677662 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 17 00:30:29.677672 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 17 00:30:29.677685 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 17 00:30:29.677695 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 17 00:30:29.677726 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 17 00:30:29.677737 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 17 00:30:29.677749 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 17 00:30:29.677759 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 17 00:30:29.677773 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 
17 00:30:29.677783 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 17 00:30:29.677793 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 17 00:30:29.677803 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 00:30:29.677814 kernel: NX (Execute Disable) protection: active Jan 17 00:30:29.677824 kernel: APIC: Static calls initialized Jan 17 00:30:29.677834 kernel: efi: EFI v2.7 by EDK II Jan 17 00:30:29.677844 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 17 00:30:29.677854 kernel: SMBIOS 2.8 present. Jan 17 00:30:29.677864 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 17 00:30:29.677875 kernel: Hypervisor detected: KVM Jan 17 00:30:29.677888 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:30:29.677898 kernel: kvm-clock: using sched offset of 20359202526 cycles Jan 17 00:30:29.677909 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:30:29.677920 kernel: tsc: Detected 2445.424 MHz processor Jan 17 00:30:29.677930 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:30:29.677941 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:30:29.677951 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 17 00:30:29.677962 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 00:30:29.677972 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:30:29.677987 kernel: Using GB pages for direct mapping Jan 17 00:30:29.677997 kernel: Secure boot disabled Jan 17 00:30:29.678007 kernel: ACPI: Early table checksum verification disabled Jan 17 00:30:29.678018 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 17 00:30:29.678034 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 17 00:30:29.678046 kernel: 
ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:30:29.678057 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:30:29.678073 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 17 00:30:29.678084 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:30:29.678113 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:30:29.678123 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:30:29.678133 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:30:29.678144 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 17 00:30:29.678155 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 17 00:30:29.678170 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 17 00:30:29.679675 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 17 00:30:29.679698 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 17 00:30:29.679711 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 17 00:30:29.679722 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 17 00:30:29.679733 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 17 00:30:29.679744 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 17 00:30:29.679755 kernel: No NUMA configuration found Jan 17 00:30:29.679792 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 17 00:30:29.679811 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 17 00:30:29.679822 kernel: Zone ranges: Jan 17 00:30:29.679833 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:30:29.679844 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 17 
00:30:29.679855 kernel: Normal empty Jan 17 00:30:29.679892 kernel: Movable zone start for each node Jan 17 00:30:29.679905 kernel: Early memory node ranges Jan 17 00:30:29.679915 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 00:30:29.679927 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 17 00:30:29.679937 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 17 00:30:29.679953 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 17 00:30:29.679964 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 17 00:30:29.679975 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 17 00:30:29.679986 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 17 00:30:29.679997 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:30:29.680008 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 00:30:29.680019 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 17 00:30:29.680030 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:30:29.680041 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 17 00:30:29.680056 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 17 00:30:29.680068 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 17 00:30:29.680079 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 00:30:29.680090 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:30:29.680100 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:30:29.680111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 00:30:29.680122 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:30:29.680134 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:30:29.680145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:30:29.680161 kernel: ACPI: INT_SRC_OVR (bus 0 
bus_irq 11 global_irq 11 high level) Jan 17 00:30:29.680172 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:30:29.680225 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 00:30:29.680236 kernel: TSC deadline timer available Jan 17 00:30:29.680246 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 00:30:29.680258 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 00:30:29.680269 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 00:30:29.680280 kernel: kvm-guest: setup PV sched yield Jan 17 00:30:29.680290 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 00:30:29.680307 kernel: Booting paravirtualized kernel on KVM Jan 17 00:30:29.680318 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:30:29.680380 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 00:30:29.680393 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 17 00:30:29.680406 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 17 00:30:29.680417 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 00:30:29.680426 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:30:29.680435 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:30:29.680449 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:30:29.680494 kernel: random: crng init done Jan 17 00:30:29.680506 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:30:29.680517 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) 
Jan 17 00:30:29.680528 kernel: Fallback order for Node 0: 0 Jan 17 00:30:29.680539 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 17 00:30:29.680550 kernel: Policy zone: DMA32 Jan 17 00:30:29.680561 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:30:29.680573 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved) Jan 17 00:30:29.680590 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 00:30:29.680600 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:30:29.680611 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:30:29.680623 kernel: Dynamic Preempt: voluntary Jan 17 00:30:29.680634 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:30:29.680659 kernel: rcu: RCU event tracing is enabled. Jan 17 00:30:29.680675 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 00:30:29.680687 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:30:29.680699 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:30:29.680710 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:30:29.680722 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:30:29.680734 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 00:30:29.680750 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 00:30:29.680761 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 17 00:30:29.680773 kernel: Console: colour dummy device 80x25 Jan 17 00:30:29.680785 kernel: printk: console [ttyS0] enabled Jan 17 00:30:29.680796 kernel: ACPI: Core revision 20230628 Jan 17 00:30:29.680812 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 00:30:29.680823 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:30:29.680834 kernel: x2apic enabled Jan 17 00:30:29.680846 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:30:29.680858 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 00:30:29.680869 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 00:30:29.680880 kernel: kvm-guest: setup PV IPIs Jan 17 00:30:29.680892 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 00:30:29.680904 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 00:30:29.680921 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 17 00:30:29.680932 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 00:30:29.680944 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 00:30:29.680956 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 00:30:29.680968 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:30:29.681857 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 00:30:29.681870 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:30:29.681883 kernel: Speculative Store Bypass: Vulnerable Jan 17 00:30:29.681894 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 00:30:29.681914 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 17 00:30:29.681926 kernel: active return thunk: srso_alias_return_thunk Jan 17 00:30:29.681937 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 00:30:29.681949 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 17 00:30:29.681985 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:30:29.681998 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:30:29.682010 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:30:29.682021 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:30:29.682038 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:30:29.682050 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 00:30:29.682061 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:30:29.682072 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:30:29.682084 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:30:29.682096 kernel: landlock: Up and running. Jan 17 00:30:29.682107 kernel: SELinux: Initializing. Jan 17 00:30:29.682119 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:30:29.682130 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:30:29.682169 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 17 00:30:29.682219 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:30:29.682231 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:30:29.682241 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jan 17 00:30:29.682253 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 17 00:30:29.682265 kernel: signal: max sigframe size: 1776 Jan 17 00:30:29.682276 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:30:29.682288 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:30:29.682299 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:30:29.682316 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:30:29.682384 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:30:29.682401 kernel: .... node #0, CPUs: #1 #2 #3 Jan 17 00:30:29.682414 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 00:30:29.682424 kernel: smpboot: Max logical packages: 1 Jan 17 00:30:29.682434 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 17 00:30:29.682445 kernel: devtmpfs: initialized Jan 17 00:30:29.682459 kernel: x86/mm: Memory block size: 128MB Jan 17 00:30:29.682469 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 17 00:30:29.682488 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 17 00:30:29.682500 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 17 00:30:29.682512 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 17 00:30:29.682523 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 17 00:30:29.682535 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:30:29.682547 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 00:30:29.682559 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:30:29.682570 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:30:29.682587 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:30:29.682599 kernel: audit: 
type=2000 audit(1768609826.359:1): state=initialized audit_enabled=0 res=1 Jan 17 00:30:29.682610 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:30:29.682622 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:30:29.682633 kernel: cpuidle: using governor menu Jan 17 00:30:29.682645 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:30:29.682656 kernel: dca service started, version 1.12.1 Jan 17 00:30:29.682668 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 00:30:29.682680 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 00:30:29.682695 kernel: PCI: Using configuration type 1 for base access Jan 17 00:30:29.682707 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 00:30:29.682719 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:30:29.682730 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:30:29.682742 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:30:29.682754 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:30:29.682765 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:30:29.682777 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:30:29.682788 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:30:29.682804 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:30:29.682816 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:30:29.682828 kernel: ACPI: Interpreter enabled Jan 17 00:30:29.682839 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:30:29.682851 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:30:29.682863 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:30:29.682874 kernel: PCI: Using E820 reservations for host 
bridge windows Jan 17 00:30:29.682886 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 00:30:29.682897 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:30:29.683643 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:30:29.683903 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 00:30:29.684115 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 00:30:29.684132 kernel: PCI host bridge to bus 0000:00 Jan 17 00:30:29.684565 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:30:29.684764 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 00:30:29.684990 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:30:29.685260 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 00:30:29.685564 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 00:30:29.685792 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 17 00:30:29.686032 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:30:29.686515 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 00:30:29.686777 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 00:30:29.687486 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 17 00:30:29.687862 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 17 00:30:29.688254 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 17 00:30:29.688528 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 17 00:30:29.688737 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 00:30:29.689045 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 00:30:29.689304 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] 
Jan 17 00:30:29.689680 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 17 00:30:29.689884 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 17 00:30:29.690386 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 00:30:29.690699 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 17 00:30:29.690911 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 17 00:30:29.691123 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 17 00:30:29.691618 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:30:29.691970 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 17 00:30:29.692161 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 17 00:30:29.692458 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 17 00:30:29.692658 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 17 00:30:29.692972 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 00:30:29.693269 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 00:30:29.693636 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 00:30:29.693852 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 17 00:30:29.694056 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 17 00:30:29.694436 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 00:30:29.694656 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 17 00:30:29.694675 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:30:29.694687 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:30:29.694698 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 00:30:29.694717 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:30:29.694729 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 
Jan 17 00:30:29.694774 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 00:30:29.694786 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 00:30:29.694798 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 00:30:29.694811 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 00:30:29.694846 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 00:30:29.694858 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 00:30:29.694869 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 00:30:29.694886 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 00:30:29.694898 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 00:30:29.694910 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 00:30:29.694921 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 00:30:29.694933 kernel: iommu: Default domain type: Translated Jan 17 00:30:29.694945 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:30:29.694957 kernel: efivars: Registered efivars operations Jan 17 00:30:29.694969 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:30:29.694980 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:30:29.694997 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 17 00:30:29.695009 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 17 00:30:29.695020 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 17 00:30:29.695032 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 17 00:30:29.695288 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 00:30:29.695605 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 00:30:29.695812 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 00:30:29.695830 kernel: vgaarb: loaded Jan 17 00:30:29.695849 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Jan 17 00:30:29.695861 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 00:30:29.695873 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:30:29.695884 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:30:29.695896 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:30:29.695908 kernel: pnp: PnP ACPI init Jan 17 00:30:29.696282 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 00:30:29.696306 kernel: pnp: PnP ACPI: found 6 devices Jan 17 00:30:29.696317 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:30:29.696392 kernel: NET: Registered PF_INET protocol family Jan 17 00:30:29.696407 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:30:29.696418 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:30:29.696428 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:30:29.696440 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:30:29.696454 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:30:29.696464 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:30:29.696475 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:30:29.696493 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:30:29.696505 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:30:29.696517 kernel: NET: Registered PF_XDP protocol family Jan 17 00:30:29.696722 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 17 00:30:29.696917 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 17 00:30:29.697098 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 
window] Jan 17 00:30:29.697325 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:30:29.697568 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:30:29.697750 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 00:30:29.697924 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 00:30:29.698096 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 17 00:30:29.698110 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:30:29.698122 kernel: Initialise system trusted keyrings Jan 17 00:30:29.698134 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:30:29.698145 kernel: Key type asymmetric registered Jan 17 00:30:29.698157 kernel: Asymmetric key parser 'x509' registered Jan 17 00:30:29.698167 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:30:29.698235 kernel: io scheduler mq-deadline registered Jan 17 00:30:29.698247 kernel: io scheduler kyber registered Jan 17 00:30:29.698259 kernel: io scheduler bfq registered Jan 17 00:30:29.698270 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:30:29.698282 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 00:30:29.698293 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 00:30:29.698304 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 00:30:29.698316 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:30:29.698374 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:30:29.698394 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:30:29.698408 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:30:29.698419 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:30:29.698707 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 00:30:29.698726 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Jan 17 00:30:29.698906 kernel: rtc_cmos 00:04: registered as rtc0 Jan 17 00:30:29.699082 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:30:28 UTC (1768609828) Jan 17 00:30:29.699318 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 00:30:29.699403 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 00:30:29.699416 kernel: efifb: probing for efifb Jan 17 00:30:29.699427 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 17 00:30:29.699438 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 17 00:30:29.699450 kernel: efifb: scrolling: redraw Jan 17 00:30:29.699463 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 17 00:30:29.699473 kernel: Console: switching to colour frame buffer device 100x37 Jan 17 00:30:29.699485 kernel: fb0: EFI VGA frame buffer device Jan 17 00:30:29.699497 kernel: pstore: Using crash dump compression: deflate Jan 17 00:30:29.699513 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:30:29.699525 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:30:29.699541 kernel: Segment Routing with IPv6 Jan 17 00:30:29.699553 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:30:29.699564 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:30:29.699575 kernel: Key type dns_resolver registered Jan 17 00:30:29.699586 kernel: IPI shorthand broadcast: enabled Jan 17 00:30:29.699624 kernel: sched_clock: Marking stable (2188023235, 942430122)->(3781337981, -650884624) Jan 17 00:30:29.699640 kernel: registered taskstats version 1 Jan 17 00:30:29.699655 kernel: Loading compiled-in X.509 certificates Jan 17 00:30:29.699666 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:30:29.699677 kernel: Key type .fscrypt registered Jan 17 00:30:29.699688 kernel: Key type fscrypt-provisioning 
registered Jan 17 00:30:29.699699 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 00:30:29.699711 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:30:29.699723 kernel: ima: No architecture policies found Jan 17 00:30:29.699735 kernel: clk: Disabling unused clocks Jan 17 00:30:29.699747 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:30:29.699762 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:30:29.699773 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:30:29.699784 kernel: Run /init as init process Jan 17 00:30:29.699795 kernel: with arguments: Jan 17 00:30:29.699808 kernel: /init Jan 17 00:30:29.699819 kernel: with environment: Jan 17 00:30:29.699831 kernel: HOME=/ Jan 17 00:30:29.699842 kernel: TERM=linux Jan 17 00:30:29.699855 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:30:29.699873 systemd[1]: Detected virtualization kvm. Jan 17 00:30:29.699885 systemd[1]: Detected architecture x86-64. Jan 17 00:30:29.699896 systemd[1]: Running in initrd. Jan 17 00:30:29.699908 systemd[1]: No hostname configured, using default hostname. Jan 17 00:30:29.699920 systemd[1]: Hostname set to . Jan 17 00:30:29.699932 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:30:29.699947 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:30:29.699959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:30:29.699971 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 17 00:30:29.699984 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:30:29.700023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:30:29.700037 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:30:29.700054 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:30:29.700068 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:30:29.700080 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:30:29.700093 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:30:29.700105 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:30:29.700117 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:30:29.700133 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:30:29.700144 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:30:29.700156 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:30:29.700167 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:30:29.700180 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:30:29.700232 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:30:29.700245 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:30:29.700257 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:30:29.700269 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:30:29.700286 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:30:29.700297 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:30:29.700309 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:30:29.700321 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:30:29.700383 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:30:29.700397 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:30:29.700412 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:30:29.700423 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:30:29.700434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:30:29.700487 systemd-journald[194]: Collecting audit messages is disabled.
Jan 17 00:30:29.700514 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:30:29.700526 systemd-journald[194]: Journal started
Jan 17 00:30:29.700554 systemd-journald[194]: Runtime Journal (/run/log/journal/6e46d38ae95c4fe59bd251db090dc940) is 6.0M, max 48.3M, 42.2M free.
Jan 17 00:30:29.705467 systemd-modules-load[195]: Inserted module 'overlay'
Jan 17 00:30:29.718993 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:30:29.730577 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:30:29.735131 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:30:29.756698 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:30:29.766424 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:30:29.786789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:30:29.801798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:30:29.809478 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:30:29.819177 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:30:29.830523 kernel: Bridge firewalling registered
Jan 17 00:30:29.835406 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 17 00:30:29.838987 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:30:29.848849 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:30:29.875571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:30:29.877090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:30:29.893381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:30:29.920631 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:30:29.925641 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:30:29.964456 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:30:29.982514 dracut-cmdline[226]: dracut-dracut-053
Jan 17 00:30:30.000624 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:30:30.027620 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:30:30.126570 systemd-resolved[239]: Positive Trust Anchors:
Jan 17 00:30:30.126622 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:30:30.126668 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:30:30.170813 systemd-resolved[239]: Defaulting to hostname 'linux'.
Jan 17 00:30:30.173477 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:30:30.178024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:30:30.221437 kernel: SCSI subsystem initialized
Jan 17 00:30:30.235492 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:30:30.258020 kernel: iscsi: registered transport (tcp)
Jan 17 00:30:30.294279 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:30:30.294455 kernel: QLogic iSCSI HBA Driver
Jan 17 00:30:30.382675 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:30:30.402987 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:30:30.459015 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:30:30.459094 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:30:30.464664 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:30:30.542285 kernel: raid6: avx2x4 gen() 26835 MB/s
Jan 17 00:30:30.561360 kernel: raid6: avx2x2 gen() 17418 MB/s
Jan 17 00:30:30.581286 kernel: raid6: avx2x1 gen() 12785 MB/s
Jan 17 00:30:30.581469 kernel: raid6: using algorithm avx2x4 gen() 26835 MB/s
Jan 17 00:30:30.602261 kernel: raid6: .... xor() 4543 MB/s, rmw enabled
Jan 17 00:30:30.602442 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:30:30.637555 kernel: xor: automatically using best checksumming function avx
Jan 17 00:30:30.954318 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:30:30.985146 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:30:31.018823 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:30:31.054995 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 17 00:30:31.069449 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:30:31.105736 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:30:31.147318 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Jan 17 00:30:31.233548 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:30:31.252644 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:30:31.407761 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:30:31.432653 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:30:31.475569 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:30:31.484286 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:30:31.498382 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:30:31.505042 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:30:31.526795 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 17 00:30:31.531752 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:30:31.551852 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:30:31.572156 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 00:30:31.570075 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:30:31.614848 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:30:31.614947 kernel: GPT:9289727 != 19775487
Jan 17 00:30:31.614991 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:30:31.615030 kernel: GPT:9289727 != 19775487
Jan 17 00:30:31.615047 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:30:31.615063 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:30:31.593979 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:30:31.594135 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:30:31.598968 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:30:31.601414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:30:31.601762 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:30:31.602079 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:30:31.645888 kernel: libata version 3.00 loaded.
Jan 17 00:30:31.654160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:30:31.676808 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:30:31.680435 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:30:31.693415 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 00:30:31.693732 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 00:30:31.706408 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Jan 17 00:30:31.706473 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 00:30:31.713238 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 00:30:31.721792 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:30:31.744778 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (469)
Jan 17 00:30:31.744810 kernel: scsi host0: ahci
Jan 17 00:30:31.745589 kernel: scsi host1: ahci
Jan 17 00:30:31.745877 kernel: scsi host2: ahci
Jan 17 00:30:31.745962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:30:31.759811 kernel: scsi host3: ahci
Jan 17 00:30:31.762536 kernel: scsi host4: ahci
Jan 17 00:30:31.771374 kernel: scsi host5: ahci
Jan 17 00:30:31.771760 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 17 00:30:31.771778 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 17 00:30:31.775245 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 00:30:31.804594 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 17 00:30:31.804628 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 17 00:30:31.804647 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 17 00:30:31.804663 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 17 00:30:31.807770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 00:30:31.831698 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 00:30:31.842711 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 00:30:31.887670 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:30:31.898926 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:30:31.909905 disk-uuid[570]: Primary Header is updated.
Jan 17 00:30:31.909905 disk-uuid[570]: Secondary Entries is updated.
Jan 17 00:30:31.909905 disk-uuid[570]: Secondary Header is updated.
Jan 17 00:30:31.928024 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:30:31.928068 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:30:31.975904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:30:32.101955 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 17 00:30:32.102010 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 17 00:30:32.102023 kernel: ata3.00: applying bridge limits
Jan 17 00:30:32.106833 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 00:30:32.115506 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 00:30:32.121444 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 17 00:30:32.126509 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 00:30:32.134642 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 00:30:32.138662 kernel: ata3.00: configured for UDMA/100
Jan 17 00:30:32.149640 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 00:30:32.224546 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 17 00:30:32.224926 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:30:32.241406 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 17 00:30:32.931565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:30:32.934598 disk-uuid[571]: The operation has completed successfully.
Jan 17 00:30:33.018617 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:30:33.019022 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:30:33.055632 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:30:33.069130 sh[595]: Success
Jan 17 00:30:33.108435 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 17 00:30:33.186902 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:30:33.194732 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:30:33.228731 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:30:33.262913 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:30:33.262972 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:30:33.268456 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:30:33.272855 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:30:33.276620 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:30:33.301631 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:30:33.313957 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:30:33.333458 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:30:33.346834 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:30:33.378279 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:30:33.378308 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:30:33.378326 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:30:33.389442 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:30:33.415283 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:30:33.424425 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:30:33.447837 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:30:33.465729 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:30:33.641090 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:30:33.659868 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:30:33.669849 ignition[684]: Ignition 2.19.0
Jan 17 00:30:33.672394 ignition[684]: Stage: fetch-offline
Jan 17 00:30:33.672469 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:30:33.672484 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:30:33.672647 ignition[684]: parsed url from cmdline: ""
Jan 17 00:30:33.672654 ignition[684]: no config URL provided
Jan 17 00:30:33.672663 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:30:33.695267 systemd-networkd[782]: lo: Link UP
Jan 17 00:30:33.672677 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:30:33.695273 systemd-networkd[782]: lo: Gained carrier
Jan 17 00:30:33.672712 ignition[684]: op(1): [started] loading QEMU firmware config module
Jan 17 00:30:33.699470 systemd-networkd[782]: Enumeration completed
Jan 17 00:30:33.672720 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 00:30:33.699576 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:30:33.701694 ignition[684]: op(1): [finished] loading QEMU firmware config module
Jan 17 00:30:33.701050 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:30:33.701058 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:30:33.701940 systemd[1]: Reached target network.target - Network.
Jan 17 00:30:33.711410 systemd-networkd[782]: eth0: Link UP
Jan 17 00:30:33.711416 systemd-networkd[782]: eth0: Gained carrier
Jan 17 00:30:33.711428 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:30:33.761635 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 00:30:34.043111 ignition[684]: parsing config with SHA512: eece84468496284bcecf0e1bd562039c8757eed38ce85ce83201db59629ed5942cf38e019d7ae3bd603965762a91ccf68cd27de629bde258a7d6d30571db89c1
Jan 17 00:30:34.065722 unknown[684]: fetched base config from "system"
Jan 17 00:30:34.065738 unknown[684]: fetched user config from "qemu"
Jan 17 00:30:34.066315 ignition[684]: fetch-offline: fetch-offline passed
Jan 17 00:30:34.074952 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:30:34.066482 ignition[684]: Ignition finished successfully
Jan 17 00:30:34.074963 systemd-resolved[239]: Detected conflict on linux IN A 10.0.0.79
Jan 17 00:30:34.074979 systemd-resolved[239]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Jan 17 00:30:34.089655 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 00:30:34.107564 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:30:34.155919 ignition[788]: Ignition 2.19.0
Jan 17 00:30:34.155959 ignition[788]: Stage: kargs
Jan 17 00:30:34.156266 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:30:34.156286 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:30:34.159667 ignition[788]: kargs: kargs passed
Jan 17 00:30:34.159741 ignition[788]: Ignition finished successfully
Jan 17 00:30:34.182937 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:30:34.199662 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:30:34.250376 ignition[795]: Ignition 2.19.0
Jan 17 00:30:34.250628 ignition[795]: Stage: disks
Jan 17 00:30:34.256057 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:30:34.250840 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:30:34.295575 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:30:34.250856 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:30:34.307082 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:30:34.253652 ignition[795]: disks: disks passed
Jan 17 00:30:34.367420 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:30:34.253714 ignition[795]: Ignition finished successfully
Jan 17 00:30:34.374623 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:30:34.398570 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:30:34.443593 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:30:34.513146 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:30:34.523771 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:30:34.571896 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:30:34.834795 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:30:34.837492 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:30:34.838460 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:30:34.860624 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:30:34.875539 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:30:34.901985 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Jan 17 00:30:34.902712 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:30:34.879233 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:30:34.879301 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:30:34.953151 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:30:34.956016 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:30:34.956036 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:30:34.879380 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:30:34.950462 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:30:34.959077 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:30:35.016525 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:30:38.856751 systemd-networkd[782]: eth0: Gained IPv6LL
Jan 17 00:30:38.974475 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:30:39.004035 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:30:39.067305 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:30:39.108729 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:30:39.648616 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:30:39.679064 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:30:39.687555 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:30:39.752375 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:30:39.753779 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:30:39.848302 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:30:39.896394 ignition[928]: INFO : Ignition 2.19.0
Jan 17 00:30:39.896394 ignition[928]: INFO : Stage: mount
Jan 17 00:30:39.905784 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:30:39.905784 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:30:39.905784 ignition[928]: INFO : mount: mount passed
Jan 17 00:30:39.905784 ignition[928]: INFO : Ignition finished successfully
Jan 17 00:30:39.917764 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:30:39.968808 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:30:39.995921 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:30:40.045726 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Jan 17 00:30:40.056094 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:30:40.056163 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:30:40.061240 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:30:40.078656 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:30:40.090563 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:30:40.186470 ignition[958]: INFO : Ignition 2.19.0
Jan 17 00:30:40.186470 ignition[958]: INFO : Stage: files
Jan 17 00:30:40.186470 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:30:40.186470 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:30:40.225435 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:30:40.225435 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:30:40.225435 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:30:40.225435 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:30:40.272721 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:30:40.272721 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:30:40.272721 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:30:40.272721 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 00:30:40.229819 unknown[958]: wrote ssh authorized keys file for user: core
Jan 17 00:30:40.541565 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:30:40.747697 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:30:40.747697 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:30:40.780852 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 17 00:30:40.873253 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 00:30:41.423883 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:30:41.423883 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:30:41.449809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 17 00:30:42.598937 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 00:30:47.693325 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:30:47.716757 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 17 00:30:47.740324 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:30:47.740324 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:30:47.740324 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 17 00:30:47.740324 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 17 00:30:47.740324 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 00:30:47.740324 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 00:30:47.740324 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 17 00:30:47.740324 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 17 00:30:47.842001 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 00:30:47.866573 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 00:30:47.866573 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 17 00:30:47.866573 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:30:47.866573 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:30:47.866573 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:30:47.866573 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:30:47.944663 ignition[958]: INFO : files: files passed
Jan 17 00:30:47.944663 ignition[958]: INFO : Ignition finished successfully
Jan 17 00:30:47.960825 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:30:47.982655 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:30:47.990790 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:30:48.007054 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:30:48.007272 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:30:48.039101 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 17 00:30:48.048571 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:30:48.048571 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:30:48.075787 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:30:48.076885 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:30:48.097747 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:30:48.121709 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:30:48.189247 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:30:48.189978 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:30:48.207790 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:30:48.212115 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:30:48.232016 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:30:48.252664 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:30:48.334300 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:30:48.356169 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:30:48.402942 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:30:48.432000 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:30:48.466636 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:30:48.531791 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:30:48.533683 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:30:48.556142 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:30:48.563488 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:30:48.592267 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:30:48.634799 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:30:48.659282 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:30:48.669408 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:30:48.680251 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:30:48.713109 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:30:48.772479 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:30:48.791431 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:30:48.808417 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:30:48.808807 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:30:48.854861 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:30:48.877802 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:30:48.903713 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:30:48.915540 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:30:48.944508 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:30:48.944693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:30:48.981071 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:30:48.981438 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:30:48.995068 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:30:49.001074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:30:49.006710 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:30:49.097010 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:30:49.103730 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:30:49.118766 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:30:49.119624 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:30:49.169000 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:30:49.173836 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:30:49.202632 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:30:49.202841 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:30:49.213766 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:30:49.213926 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:30:49.280638 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:30:49.287242 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:30:49.287487 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:30:49.342274 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:30:49.351932 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:30:49.352551 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:30:49.368451 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:30:49.368654 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:30:49.410608 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:30:49.432468 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:30:49.433103 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:30:49.464858 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:30:49.466478 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:30:51.498532 ignition[1012]: INFO : Ignition 2.19.0
Jan 17 00:30:51.498532 ignition[1012]: INFO : Stage: umount
Jan 17 00:30:51.498532 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:30:51.498532 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:30:51.498532 ignition[1012]: INFO : umount: umount passed
Jan 17 00:30:51.498532 ignition[1012]: INFO : Ignition finished successfully
Jan 17 00:30:51.504978 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:30:51.505263 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:30:51.514079 systemd[1]: Stopped target network.target - Network.
Jan 17 00:30:51.524896 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:30:51.527809 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:30:51.527971 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:30:51.528044 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:30:51.528136 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:30:51.528196 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:30:51.530094 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:30:51.530154 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:30:51.530806 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:30:51.530864 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:30:51.538421 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:30:51.538669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:30:51.588635 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:30:51.588935 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:30:51.592911 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:30:51.593010 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:30:51.622191 systemd-networkd[782]: eth0: DHCPv6 lease lost
Jan 17 00:30:51.647014 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:30:51.647458 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:30:51.691611 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:30:51.691693 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:30:51.830110 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:30:51.834467 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:30:51.834574 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:30:51.869974 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:30:51.870124 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:30:51.891513 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:30:51.891670 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:30:51.910781 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:30:51.974067 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:30:51.974433 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:30:51.995415 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:30:51.995946 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:30:52.012730 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:30:52.012826 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:30:52.038820 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:30:52.038894 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:30:52.051443 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:30:52.051657 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:30:52.074983 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:30:52.075085 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:30:52.100643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:30:52.100804 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:30:52.180410 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:30:52.183728 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:30:52.183832 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:30:52.249790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:30:52.249917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:30:52.255519 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:30:52.255690 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:30:52.282750 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:30:52.350862 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:30:52.466558 systemd[1]: Switching root.
Jan 17 00:30:52.563759 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:30:52.563849 systemd-journald[194]: Journal stopped
Jan 17 00:30:56.472855 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:30:56.472954 kernel: SELinux: policy capability open_perms=1
Jan 17 00:30:56.472974 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:30:56.472991 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:30:56.473017 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:30:56.473046 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:30:56.473065 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:30:56.473091 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:30:56.473120 kernel: audit: type=1403 audit(1768609852.956:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:30:56.473150 systemd[1]: Successfully loaded SELinux policy in 102.535ms.
Jan 17 00:30:56.473183 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 117.317ms.
Jan 17 00:30:56.473281 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:30:56.473310 systemd[1]: Detected virtualization kvm.
Jan 17 00:30:56.473420 systemd[1]: Detected architecture x86-64.
Jan 17 00:30:56.473446 systemd[1]: Detected first boot.
Jan 17 00:30:56.473468 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:30:56.473500 zram_generator::config[1056]: No configuration found.
Jan 17 00:30:56.473523 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:30:56.473586 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:30:56.473608 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:30:56.473629 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:30:56.473650 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:30:56.474418 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:30:56.474441 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:30:56.474471 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:30:56.474493 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:30:56.474513 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:30:56.474533 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:30:56.474552 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:30:56.474574 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:30:56.474595 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:30:56.474616 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:30:56.474636 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:30:56.474791 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:30:56.474815 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:30:56.474837 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:30:56.474856 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:30:56.474878 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:30:56.474898 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:30:56.474918 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:30:56.474938 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:30:56.474966 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:30:56.474987 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:30:56.475008 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:30:56.475028 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:30:56.475047 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:30:56.475067 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:30:56.475088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:30:56.475108 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:30:56.475128 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:30:56.475155 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:30:56.475177 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:30:56.475198 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:30:56.475272 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:30:56.475294 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:30:56.475314 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:30:56.475393 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:30:56.475418 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:30:56.475442 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:30:56.475470 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:30:56.475491 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:30:56.475511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:30:56.475531 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:30:56.475551 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:30:56.475572 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:30:56.475591 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:30:56.475612 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:30:56.475637 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:30:56.475773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:30:56.475798 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:30:56.475819 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:30:56.475839 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:30:56.475860 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:30:56.475880 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:30:56.475900 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:30:56.475920 kernel: loop: module loaded
Jan 17 00:30:56.475947 kernel: fuse: init (API version 7.39)
Jan 17 00:30:56.475968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:30:56.475989 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:30:56.476011 kernel: ACPI: bus type drm_connector registered
Jan 17 00:30:56.476032 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:30:56.476051 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:30:56.476071 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:30:56.476091 systemd[1]: Stopped verity-setup.service.
Jan 17 00:30:56.476110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:30:56.476138 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:30:56.476159 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:30:56.476256 systemd-journald[1128]: Collecting audit messages is disabled.
Jan 17 00:30:56.476299 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:30:56.476321 systemd-journald[1128]: Journal started
Jan 17 00:30:56.476422 systemd-journald[1128]: Runtime Journal (/run/log/journal/6e46d38ae95c4fe59bd251db090dc940) is 6.0M, max 48.3M, 42.2M free.
Jan 17 00:30:55.147656 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:30:55.199948 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 00:30:55.202445 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:30:55.203116 systemd[1]: systemd-journald.service: Consumed 1.822s CPU time.
Jan 17 00:30:56.488243 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:30:56.490880 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:30:56.495930 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:30:56.501632 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:30:56.506634 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:30:56.512657 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:30:56.533482 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:30:56.534057 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:30:56.540569 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:30:56.540889 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:30:56.550846 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:30:56.552005 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:30:56.556974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:30:56.557314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:30:56.562685 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:30:56.563324 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:30:56.568280 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:30:56.568716 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:30:56.572803 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:30:56.576797 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:30:56.581474 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:30:56.607706 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:30:56.635510 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:30:56.664439 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:30:56.673404 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:30:56.673473 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:30:56.685900 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:30:56.749848 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:30:56.798136 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:30:56.844820 kernel: hrtimer: interrupt took 11909829 ns
Jan 17 00:30:56.845019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:30:56.909101 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:30:56.965807 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:30:56.971164 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:30:56.981119 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:30:56.989015 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:30:57.013095 systemd-journald[1128]: Time spent on flushing to /var/log/journal/6e46d38ae95c4fe59bd251db090dc940 is 35.947ms for 985 entries.
Jan 17 00:30:57.013095 systemd-journald[1128]: System Journal (/var/log/journal/6e46d38ae95c4fe59bd251db090dc940) is 8.0M, max 195.6M, 187.6M free.
Jan 17 00:30:57.102904 systemd-journald[1128]: Received client request to flush runtime journal.
Jan 17 00:30:57.102975 kernel: loop0: detected capacity change from 0 to 219144
Jan 17 00:30:57.026914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:30:57.047715 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:30:57.073160 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:30:57.090759 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:30:57.100375 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:30:57.105613 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:30:57.112565 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:30:57.260381 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:30:57.370076 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:30:57.394495 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:30:57.422889 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:30:57.436866 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:30:57.445178 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:30:57.476429 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:30:57.675981 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:30:57.691532 kernel: loop1: detected capacity change from 0 to 140768
Jan 17 00:30:57.696623 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:30:57.697783 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:30:57.708764 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:30:57.732928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:30:58.006326 kernel: loop2: detected capacity change from 0 to 142488
Jan 17 00:30:58.066702 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 17 00:30:58.066729 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 17 00:30:58.098111 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:30:58.443460 kernel: loop3: detected capacity change from 0 to 219144
Jan 17 00:30:58.536574 kernel: loop4: detected capacity change from 0 to 140768
Jan 17 00:30:58.750426 kernel: loop5: detected capacity change from 0 to 142488
Jan 17 00:30:58.954042 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 17 00:30:58.955319 (sd-merge)[1195]: Merged extensions into '/usr'.
Jan 17 00:30:58.969763 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:30:58.969789 systemd[1]: Reloading...
Jan 17 00:30:59.481429 zram_generator::config[1217]: No configuration found.
Jan 17 00:31:00.067264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:31:00.354808 systemd[1]: Reloading finished in 1384 ms.
Jan 17 00:31:00.355263 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:31:00.741476 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:31:00.748895 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:31:00.780853 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:31:00.810913 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:31:00.895579 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:31:00.895604 systemd[1]: Reloading...
Jan 17 00:31:01.158292 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:31:01.159149 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:31:01.166027 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:31:01.167286 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 17 00:31:01.167557 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 17 00:31:01.179410 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:31:01.180699 systemd-tmpfiles[1259]: Skipping /boot
Jan 17 00:31:01.199599 zram_generator::config[1291]: No configuration found.
Jan 17 00:31:01.210480 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:31:01.210521 systemd-tmpfiles[1259]: Skipping /boot Jan 17 00:31:02.084669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:31:02.402525 systemd[1]: Reloading finished in 1506 ms. Jan 17 00:31:02.457449 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:31:02.473392 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:31:02.509667 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:31:02.554838 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:31:02.579055 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:31:02.603973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:31:02.644729 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:31:02.662627 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:31:02.700778 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:31:02.719722 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Jan 17 00:31:02.724099 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:31:02.725104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:31:02.744577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:31:02.759601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 17 00:31:02.786467 augenrules[1347]: No rules Jan 17 00:31:02.783733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:31:02.797155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:31:02.802127 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:31:02.808514 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:31:02.836873 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:31:02.849542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:31:02.849810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:31:02.870914 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:31:02.883884 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:31:02.895986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:31:02.896614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:31:02.912684 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:31:02.931061 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:31:02.931455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:31:02.941591 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:31:02.973319 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:31:02.973816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 17 00:31:02.982627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:31:03.003611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:31:03.018045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:31:03.036678 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1372) Jan 17 00:31:03.039306 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:31:03.044721 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:31:03.054276 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:31:03.157097 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:31:03.164606 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:31:03.164651 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:31:03.165911 systemd[1]: Finished ensure-sysext.service. Jan 17 00:31:03.170797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:31:03.171093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:31:03.184102 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:31:03.184506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:31:03.190630 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:31:03.191159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:31:03.197400 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 17 00:31:03.197716 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:31:03.238246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:31:03.238504 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:31:03.243027 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:31:03.438538 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:31:03.445851 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:31:03.456093 systemd-resolved[1334]: Positive Trust Anchors: Jan 17 00:31:03.456160 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:31:03.456249 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:31:03.470120 systemd-resolved[1334]: Defaulting to hostname 'linux'. Jan 17 00:31:03.478429 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:31:03.482952 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 17 00:31:03.641433 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 00:31:03.660437 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:31:03.786050 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:31:03.813202 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 00:31:03.829981 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 00:31:03.835877 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 00:31:03.836702 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 00:31:03.848021 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:31:03.892697 systemd-networkd[1391]: lo: Link UP Jan 17 00:31:03.892734 systemd-networkd[1391]: lo: Gained carrier Jan 17 00:31:03.902482 systemd-networkd[1391]: Enumeration completed Jan 17 00:31:03.902711 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:31:03.903943 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:31:03.903993 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:31:03.907434 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 00:31:03.912458 systemd[1]: Reached target network.target - Network. Jan 17 00:31:03.913614 systemd-networkd[1391]: eth0: Link UP Jan 17 00:31:03.913657 systemd-networkd[1391]: eth0: Gained carrier Jan 17 00:31:03.913686 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:31:03.938951 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 17 00:31:03.965431 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:31:03.969652 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jan 17 00:31:03.969694 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:31:03.987717 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:31:05.035525 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 00:31:05.035679 systemd-timesyncd[1402]: Initial clock synchronization to Sat 2026-01-17 00:31:05.035096 UTC. Jan 17 00:31:05.035770 systemd-resolved[1334]: Clock change detected. Flushing caches. Jan 17 00:31:05.039431 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:31:05.046412 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:31:05.278910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:31:05.511615 kernel: kvm_amd: TSC scaling supported Jan 17 00:31:05.511734 kernel: kvm_amd: Nested Virtualization enabled Jan 17 00:31:05.511803 kernel: kvm_amd: Nested Paging enabled Jan 17 00:31:05.514596 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 00:31:05.516448 kernel: kvm_amd: PMU virtualization is disabled Jan 17 00:31:05.574371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:31:05.662483 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:31:05.724366 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:31:05.739962 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:31:05.776649 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:31:05.824452 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jan 17 00:31:05.834147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:31:05.842055 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:31:05.847689 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:31:05.853805 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:31:05.860620 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:31:05.865299 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:31:05.870250 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:31:05.876708 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:31:05.876775 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:31:05.880444 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:31:05.888508 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:31:05.901158 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:31:05.921999 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:31:05.931569 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:31:05.936888 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:31:05.942119 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:31:05.951037 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:31:05.955671 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:31:05.955737 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 17 00:31:05.964248 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:31:05.974404 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:31:05.985365 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:31:05.993383 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:31:06.008499 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:31:06.013436 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:31:06.016406 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:31:06.019310 jq[1430]: false Jan 17 00:31:06.035965 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:31:06.048954 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 17 00:31:06.060470 extend-filesystems[1431]: Found loop3 Jan 17 00:31:06.063880 extend-filesystems[1431]: Found loop4 Jan 17 00:31:06.063880 extend-filesystems[1431]: Found loop5 Jan 17 00:31:06.063880 extend-filesystems[1431]: Found sr0 Jan 17 00:31:06.063880 extend-filesystems[1431]: Found vda Jan 17 00:31:06.063880 extend-filesystems[1431]: Found vda1 Jan 17 00:31:06.063880 extend-filesystems[1431]: Found vda2 Jan 17 00:31:06.063880 extend-filesystems[1431]: Found vda3 Jan 17 00:31:06.063880 extend-filesystems[1431]: Found usr Jan 17 00:31:06.063880 extend-filesystems[1431]: Found vda4 Jan 17 00:31:06.063880 extend-filesystems[1431]: Found vda6 Jan 17 00:31:06.063880 extend-filesystems[1431]: Found vda7 Jan 17 00:31:06.063880 extend-filesystems[1431]: Found vda9 Jan 17 00:31:06.063880 extend-filesystems[1431]: Checking size of /dev/vda9 Jan 17 00:31:06.122711 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1384) Jan 17 00:31:06.122748 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 00:31:06.073871 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:31:06.065896 dbus-daemon[1429]: [system] SELinux support is enabled Jan 17 00:31:06.123373 extend-filesystems[1431]: Resized partition /dev/vda9 Jan 17 00:31:06.122270 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:31:06.126748 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:31:06.138809 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:31:06.141016 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:31:06.143705 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 17 00:31:06.153155 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:31:06.164583 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:31:06.183548 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:31:06.189718 jq[1453]: true Jan 17 00:31:06.207759 update_engine[1452]: I20260117 00:31:06.198772 1452 main.cc:92] Flatcar Update Engine starting Jan 17 00:31:06.207759 update_engine[1452]: I20260117 00:31:06.203127 1452 update_check_scheduler.cc:74] Next update check in 7m31s Jan 17 00:31:06.201513 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:31:06.201830 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:31:06.202479 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:31:06.202775 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:31:06.210681 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:31:06.210992 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:31:06.259401 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 00:31:06.268391 jq[1456]: true Jan 17 00:31:06.274004 tar[1455]: linux-amd64/LICENSE Jan 17 00:31:06.270733 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:31:06.292042 tar[1455]: linux-amd64/helm Jan 17 00:31:06.290527 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:31:06.295642 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:31:06.295642 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 00:31:06.295642 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jan 17 00:31:06.299548 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:31:06.324276 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Jan 17 00:31:06.299863 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:31:06.301410 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:31:06.301448 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:31:06.304459 systemd-logind[1449]: New seat seat0. Jan 17 00:31:06.335069 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:31:06.351108 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:31:06.353617 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:31:06.368969 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:31:06.369784 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:31:06.395977 bash[1483]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:31:06.414041 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:31:06.422560 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:31:06.431055 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 17 00:31:06.500139 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:31:06.667909 containerd[1457]: time="2026-01-17T00:31:06.667595238Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:31:06.709699 containerd[1457]: time="2026-01-17T00:31:06.709636551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:06.719412 containerd[1457]: time="2026-01-17T00:31:06.719305686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:31:06.719563 containerd[1457]: time="2026-01-17T00:31:06.719537330Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:31:06.719647 containerd[1457]: time="2026-01-17T00:31:06.719630473Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:31:06.719997 containerd[1457]: time="2026-01-17T00:31:06.719933990Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:31:06.720092 containerd[1457]: time="2026-01-17T00:31:06.720070775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:06.721427 containerd[1457]: time="2026-01-17T00:31:06.721388847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:31:06.721517 containerd[1457]: time="2026-01-17T00:31:06.721495506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:31:06.721917 containerd[1457]: time="2026-01-17T00:31:06.721890624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:31:06.722055 containerd[1457]: time="2026-01-17T00:31:06.722033651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:06.722138 containerd[1457]: time="2026-01-17T00:31:06.722118579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:31:06.722254 containerd[1457]: time="2026-01-17T00:31:06.722180886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:06.722555 containerd[1457]: time="2026-01-17T00:31:06.722528384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:06.723021 containerd[1457]: time="2026-01-17T00:31:06.722994715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:31:06.727061 containerd[1457]: time="2026-01-17T00:31:06.726404542Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:31:06.727061 containerd[1457]: time="2026-01-17T00:31:06.726434037Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 17 00:31:06.727061 containerd[1457]: time="2026-01-17T00:31:06.726578287Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:31:06.727061 containerd[1457]: time="2026-01-17T00:31:06.726654690Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:31:06.732462 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:31:06.747379 containerd[1457]: time="2026-01-17T00:31:06.745276030Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:31:06.747379 containerd[1457]: time="2026-01-17T00:31:06.745459704Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:31:06.747379 containerd[1457]: time="2026-01-17T00:31:06.745488798Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:31:06.747379 containerd[1457]: time="2026-01-17T00:31:06.745513384Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:31:06.747379 containerd[1457]: time="2026-01-17T00:31:06.745534403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:31:06.747379 containerd[1457]: time="2026-01-17T00:31:06.745764232Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:31:06.747379 containerd[1457]: time="2026-01-17T00:31:06.746059774Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747401940Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747426205Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747442947Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747464797Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747482260Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747500134Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747517807Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747535630Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747551630Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747566538Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747581245Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747606412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747623333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.747665 containerd[1457]: time="2026-01-17T00:31:06.747640726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747661284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747677856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747703414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747724402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747744149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747762794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747785617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747804151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747823147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747843926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747866648Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747898067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747918635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748031 containerd[1457]: time="2026-01-17T00:31:06.747938132Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:31:06.748540 containerd[1457]: time="2026-01-17T00:31:06.748000889Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:31:06.748540 containerd[1457]: time="2026-01-17T00:31:06.748027669Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:31:06.748540 containerd[1457]: time="2026-01-17T00:31:06.748046283Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:31:06.748540 containerd[1457]: time="2026-01-17T00:31:06.748068495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:31:06.748540 containerd[1457]: time="2026-01-17T00:31:06.748084936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.748540 containerd[1457]: time="2026-01-17T00:31:06.748104943Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:31:06.748540 containerd[1457]: time="2026-01-17T00:31:06.748130020Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:31:06.748540 containerd[1457]: time="2026-01-17T00:31:06.748151420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:31:06.758305 containerd[1457]: time="2026-01-17T00:31:06.749395814Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:31:06.758305 containerd[1457]: time="2026-01-17T00:31:06.749619211Z" level=info msg="Connect containerd service" Jan 17 00:31:06.758305 containerd[1457]: time="2026-01-17T00:31:06.749732042Z" level=info msg="using legacy CRI server" Jan 17 00:31:06.758305 containerd[1457]: time="2026-01-17T00:31:06.749744254Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:31:06.758305 containerd[1457]: time="2026-01-17T00:31:06.756837650Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:31:06.758305 containerd[1457]: 
time="2026-01-17T00:31:06.757931563Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:31:06.758815 containerd[1457]: time="2026-01-17T00:31:06.758491458Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:31:06.758815 containerd[1457]: time="2026-01-17T00:31:06.758568953Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:31:06.758815 containerd[1457]: time="2026-01-17T00:31:06.758630699Z" level=info msg="Start subscribing containerd event" Jan 17 00:31:06.758815 containerd[1457]: time="2026-01-17T00:31:06.758678728Z" level=info msg="Start recovering state" Jan 17 00:31:06.758815 containerd[1457]: time="2026-01-17T00:31:06.758774737Z" level=info msg="Start event monitor" Jan 17 00:31:06.758815 containerd[1457]: time="2026-01-17T00:31:06.758791439Z" level=info msg="Start snapshots syncer" Jan 17 00:31:06.758815 containerd[1457]: time="2026-01-17T00:31:06.758809081Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:31:06.758999 containerd[1457]: time="2026-01-17T00:31:06.758819912Z" level=info msg="Start streaming server" Jan 17 00:31:06.758999 containerd[1457]: time="2026-01-17T00:31:06.758907565Z" level=info msg="containerd successfully booted in 0.092739s" Jan 17 00:31:06.761053 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:31:06.821525 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:31:06.860061 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:31:06.883501 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:31:06.886036 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:31:06.912063 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 17 00:31:06.947888 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:31:06.950592 systemd-networkd[1391]: eth0: Gained IPv6LL Jan 17 00:31:06.970754 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:31:06.998789 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:31:07.005847 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:31:07.013161 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:31:07.024599 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:31:07.052655 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 00:31:07.060908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:31:07.070866 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:31:07.147069 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:31:07.147788 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:31:07.155649 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:31:07.167038 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:31:07.411183 tar[1455]: linux-amd64/README.md Jan 17 00:31:07.445635 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:31:08.449723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:31:08.454948 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:31:08.466289 systemd[1]: Startup finished in 2.451s (kernel) + 23.964s (initrd) + 14.573s (userspace) = 40.988s. 
Jan 17 00:31:08.478729 (kubelet)[1541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:31:09.741643 kubelet[1541]: E0117 00:31:09.740299 1541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:31:09.748078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:31:09.749892 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:31:09.751611 systemd[1]: kubelet.service: Consumed 1.581s CPU time. Jan 17 00:31:15.966473 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:31:15.984972 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:36330.service - OpenSSH per-connection server daemon (10.0.0.1:36330). Jan 17 00:31:16.099140 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 36330 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:16.104857 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:16.129876 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:31:16.142857 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:31:16.161636 systemd-logind[1449]: New session 1 of user core. Jan 17 00:31:16.181106 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:31:16.198933 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 17 00:31:16.204966 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:31:16.492969 systemd[1559]: Queued start job for default target default.target. Jan 17 00:31:16.503974 systemd[1559]: Created slice app.slice - User Application Slice. Jan 17 00:31:16.504051 systemd[1559]: Reached target paths.target - Paths. Jan 17 00:31:16.504074 systemd[1559]: Reached target timers.target - Timers. Jan 17 00:31:16.508402 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:31:16.541455 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:31:16.541723 systemd[1559]: Reached target sockets.target - Sockets. Jan 17 00:31:16.541784 systemd[1559]: Reached target basic.target - Basic System. Jan 17 00:31:16.541846 systemd[1559]: Reached target default.target - Main User Target. Jan 17 00:31:16.541909 systemd[1559]: Startup finished in 308ms. Jan 17 00:31:16.543416 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:31:16.570573 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:31:16.672670 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:36346.service - OpenSSH per-connection server daemon (10.0.0.1:36346). Jan 17 00:31:16.758121 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 36346 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:16.773846 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:16.793571 systemd-logind[1449]: New session 2 of user core. Jan 17 00:31:16.803623 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:31:16.895061 sshd[1570]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:16.906735 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:36346.service: Deactivated successfully. Jan 17 00:31:16.910891 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 17 00:31:16.917120 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:31:16.942707 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:36362.service - OpenSSH per-connection server daemon (10.0.0.1:36362). Jan 17 00:31:16.953019 systemd-logind[1449]: Removed session 2. Jan 17 00:31:16.999432 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 36362 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:17.002195 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:17.015987 systemd-logind[1449]: New session 3 of user core. Jan 17 00:31:17.025649 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:31:17.089063 sshd[1577]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:17.103179 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:36362.service: Deactivated successfully. Jan 17 00:31:17.107062 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:31:17.112593 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:31:17.125268 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:36374.service - OpenSSH per-connection server daemon (10.0.0.1:36374). Jan 17 00:31:17.127161 systemd-logind[1449]: Removed session 3. Jan 17 00:31:17.194419 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 36374 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:17.197607 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:17.207820 systemd-logind[1449]: New session 4 of user core. Jan 17 00:31:17.216702 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:31:17.297013 sshd[1584]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:17.309049 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:36374.service: Deactivated successfully. Jan 17 00:31:17.312092 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 17 00:31:17.314180 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:31:17.327802 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:36384.service - OpenSSH per-connection server daemon (10.0.0.1:36384). Jan 17 00:31:17.331374 systemd-logind[1449]: Removed session 4. Jan 17 00:31:17.382723 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 36384 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:17.386630 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:17.412606 systemd-logind[1449]: New session 5 of user core. Jan 17 00:31:17.427381 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:31:17.525516 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:31:17.526051 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:31:17.552025 sudo[1594]: pam_unix(sudo:session): session closed for user root Jan 17 00:31:17.561783 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:17.572147 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:36384.service: Deactivated successfully. Jan 17 00:31:17.576034 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:31:17.581495 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:31:17.599176 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:36396.service - OpenSSH per-connection server daemon (10.0.0.1:36396). Jan 17 00:31:17.601633 systemd-logind[1449]: Removed session 5. Jan 17 00:31:17.647306 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 36396 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:17.650255 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:17.665456 systemd-logind[1449]: New session 6 of user core. 
Jan 17 00:31:17.674955 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:31:17.745654 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:31:17.746279 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:31:17.761301 sudo[1603]: pam_unix(sudo:session): session closed for user root Jan 17 00:31:17.772895 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:31:17.775563 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:31:17.816679 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:31:17.821182 auditctl[1606]: No rules Jan 17 00:31:17.823277 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:31:17.823681 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:31:17.842402 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:31:17.904677 augenrules[1624]: No rules Jan 17 00:31:17.906772 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:31:17.908893 sudo[1602]: pam_unix(sudo:session): session closed for user root Jan 17 00:31:17.915263 sshd[1599]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:17.924251 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:36396.service: Deactivated successfully. Jan 17 00:31:17.929197 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:31:17.932622 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:31:17.944795 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:36400.service - OpenSSH per-connection server daemon (10.0.0.1:36400). Jan 17 00:31:17.947638 systemd-logind[1449]: Removed session 6. 
Jan 17 00:31:18.003492 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 36400 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:31:18.006124 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:18.017662 systemd-logind[1449]: New session 7 of user core. Jan 17 00:31:18.033386 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:31:18.112305 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:31:18.117018 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:31:19.578737 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:31:19.579296 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:31:19.997870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:31:20.025013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:31:20.845562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:31:20.846528 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:31:21.429971 kubelet[1668]: E0117 00:31:21.429069 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:31:21.443422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:31:21.569238 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:31:21.839974 dockerd[1655]: time="2026-01-17T00:31:21.839657990Z" level=info msg="Starting up" Jan 17 00:31:22.713901 dockerd[1655]: time="2026-01-17T00:31:22.713006998Z" level=info msg="Loading containers: start." Jan 17 00:31:23.701393 kernel: Initializing XFRM netlink socket Jan 17 00:31:24.169980 systemd-networkd[1391]: docker0: Link UP Jan 17 00:31:24.273536 dockerd[1655]: time="2026-01-17T00:31:24.272296369Z" level=info msg="Loading containers: done." Jan 17 00:31:24.473109 dockerd[1655]: time="2026-01-17T00:31:24.437044586Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:31:24.481055 dockerd[1655]: time="2026-01-17T00:31:24.477256536Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:31:24.481055 dockerd[1655]: time="2026-01-17T00:31:24.477998051Z" level=info msg="Daemon has completed initialization" Jan 17 00:31:24.668805 dockerd[1655]: time="2026-01-17T00:31:24.668241111Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:31:24.670357 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:31:28.387288 containerd[1457]: time="2026-01-17T00:31:28.385464348Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 17 00:31:29.985543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158812637.mount: Deactivated successfully. Jan 17 00:31:31.735136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:31:31.765012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:31:32.214559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:31:32.233084 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:31:32.562045 kubelet[1858]: E0117 00:31:32.561715 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:31:32.566059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:31:32.566418 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:31:36.315813 containerd[1457]: time="2026-01-17T00:31:36.313973238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:36.333195 containerd[1457]: time="2026-01-17T00:31:36.318401855Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 17 00:31:36.341184 containerd[1457]: time="2026-01-17T00:31:36.340851713Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:36.396488 containerd[1457]: time="2026-01-17T00:31:36.387048471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:36.401307 containerd[1457]: time="2026-01-17T00:31:36.400742367Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 8.015218698s" Jan 17 00:31:36.401307 containerd[1457]: time="2026-01-17T00:31:36.400901844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 17 00:31:36.445287 containerd[1457]: time="2026-01-17T00:31:36.441641217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 17 00:31:39.984299 containerd[1457]: time="2026-01-17T00:31:39.982026348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:39.993066 containerd[1457]: time="2026-01-17T00:31:39.991275315Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 17 00:31:40.001091 containerd[1457]: time="2026-01-17T00:31:39.999381258Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:40.008584 containerd[1457]: time="2026-01-17T00:31:40.008467042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:40.011695 containerd[1457]: time="2026-01-17T00:31:40.011619695Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" 
in 3.569760093s" Jan 17 00:31:40.011769 containerd[1457]: time="2026-01-17T00:31:40.011703339Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 17 00:31:40.018523 containerd[1457]: time="2026-01-17T00:31:40.018180129Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 00:31:41.611259 containerd[1457]: time="2026-01-17T00:31:41.610936101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:41.614570 containerd[1457]: time="2026-01-17T00:31:41.614470807Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 17 00:31:41.619728 containerd[1457]: time="2026-01-17T00:31:41.619560200Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:41.627013 containerd[1457]: time="2026-01-17T00:31:41.626536058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:41.628022 containerd[1457]: time="2026-01-17T00:31:41.627765222Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.609525079s" Jan 17 00:31:41.628022 containerd[1457]: time="2026-01-17T00:31:41.627829332Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference 
\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 17 00:31:41.629275 containerd[1457]: time="2026-01-17T00:31:41.628536784Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 00:31:42.742014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:31:42.760417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:31:43.003186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:31:43.018378 (kubelet)[1916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:31:43.105048 kubelet[1916]: E0117 00:31:43.104908 1916 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:31:43.111365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:31:43.111697 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:31:43.215868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1283837787.mount: Deactivated successfully. 
Jan 17 00:31:44.492121 containerd[1457]: time="2026-01-17T00:31:44.490812333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:44.495547 containerd[1457]: time="2026-01-17T00:31:44.495254984Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 17 00:31:44.500070 containerd[1457]: time="2026-01-17T00:31:44.497604904Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:44.506589 containerd[1457]: time="2026-01-17T00:31:44.503962924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:44.506589 containerd[1457]: time="2026-01-17T00:31:44.505043428Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.876457509s" Jan 17 00:31:44.506589 containerd[1457]: time="2026-01-17T00:31:44.505073921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 17 00:31:44.506589 containerd[1457]: time="2026-01-17T00:31:44.506135204Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 00:31:45.207477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553218253.mount: Deactivated successfully. 
Jan 17 00:31:47.486806 containerd[1457]: time="2026-01-17T00:31:47.486651143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:47.490256 containerd[1457]: time="2026-01-17T00:31:47.489677080Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 17 00:31:47.495406 containerd[1457]: time="2026-01-17T00:31:47.492181642Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:47.501500 containerd[1457]: time="2026-01-17T00:31:47.501423744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:47.503372 containerd[1457]: time="2026-01-17T00:31:47.503262265Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.997092593s" Jan 17 00:31:47.503372 containerd[1457]: time="2026-01-17T00:31:47.503303169Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 17 00:31:47.505178 containerd[1457]: time="2026-01-17T00:31:47.505112938Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 00:31:47.953157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3385531533.mount: Deactivated successfully. 
Jan 17 00:31:47.975533 containerd[1457]: time="2026-01-17T00:31:47.975407974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:47.978559 containerd[1457]: time="2026-01-17T00:31:47.978442904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 17 00:31:47.980604 containerd[1457]: time="2026-01-17T00:31:47.980489236Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:47.992710 containerd[1457]: time="2026-01-17T00:31:47.990792920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:47.992710 containerd[1457]: time="2026-01-17T00:31:47.992012624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 486.832869ms" Jan 17 00:31:47.992710 containerd[1457]: time="2026-01-17T00:31:47.992048377Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 17 00:31:47.996067 containerd[1457]: time="2026-01-17T00:31:47.995411923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 00:31:48.588474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377545156.mount: Deactivated successfully. Jan 17 00:31:51.486094 update_engine[1452]: I20260117 00:31:51.484266 1452 update_attempter.cc:509] Updating boot flags... 
Jan 17 00:31:51.580470 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2044) Jan 17 00:31:51.640442 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2045) Jan 17 00:31:51.723378 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2045) Jan 17 00:31:53.183003 containerd[1457]: time="2026-01-17T00:31:53.182875656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:53.184507 containerd[1457]: time="2026-01-17T00:31:53.184459769Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 17 00:31:53.185780 containerd[1457]: time="2026-01-17T00:31:53.185675662Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:53.191885 containerd[1457]: time="2026-01-17T00:31:53.191808506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:53.194211 containerd[1457]: time="2026-01-17T00:31:53.194149724Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 5.198692843s" Jan 17 00:31:53.194211 containerd[1457]: time="2026-01-17T00:31:53.194207169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 17 00:31:53.242029 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:31:53.251650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:31:53.466871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:31:53.473143 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:31:53.572006 kubelet[2070]: E0117 00:31:53.571877 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:31:53.577617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:31:53.577979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:31:59.243632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:31:59.257825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:31:59.321208 systemd[1]: Reloading requested from client PID 2102 ('systemctl') (unit session-7.scope)... Jan 17 00:31:59.321287 systemd[1]: Reloading... Jan 17 00:31:59.505420 zram_generator::config[2146]: No configuration found. Jan 17 00:31:59.698098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:31:59.832479 systemd[1]: Reloading finished in 510 ms. Jan 17 00:31:59.943673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:31:59.944034 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:31:59.946696 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:31:59.947512 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:31:59.947902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:31:59.953598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:32:00.193440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:32:00.214896 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:32:00.288484 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:32:00.288484 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:32:00.289002 kubelet[2192]: I0117 00:32:00.288543 2192 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:32:01.599186 kubelet[2192]: I0117 00:32:01.598773 2192 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:32:01.599186 kubelet[2192]: I0117 00:32:01.598820 2192 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:32:01.602735 kubelet[2192]: I0117 00:32:01.602685 2192 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:32:01.602735 kubelet[2192]: I0117 00:32:01.602729 2192 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:32:01.603016 kubelet[2192]: I0117 00:32:01.602960 2192 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:32:01.698930 kubelet[2192]: I0117 00:32:01.696653 2192 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:32:01.698930 kubelet[2192]: E0117 00:32:01.698445 2192 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:32:01.730444 kubelet[2192]: E0117 00:32:01.729565 2192 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:32:01.730444 kubelet[2192]: I0117 00:32:01.729679 2192 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Jan 17 00:32:01.768656 kubelet[2192]: I0117 00:32:01.767906 2192 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 17 00:32:01.775735 kubelet[2192]: I0117 00:32:01.774723 2192 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:32:01.775735 kubelet[2192]: I0117 00:32:01.775390 2192 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 
00:32:01.775735 kubelet[2192]: I0117 00:32:01.776197 2192 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:32:01.775735 kubelet[2192]: I0117 00:32:01.776216 2192 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:32:01.781895 kubelet[2192]: I0117 00:32:01.776870 2192 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:32:01.791746 kubelet[2192]: I0117 00:32:01.790825 2192 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:32:01.799748 kubelet[2192]: I0117 00:32:01.798429 2192 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:32:01.799748 kubelet[2192]: I0117 00:32:01.798541 2192 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:32:01.799748 kubelet[2192]: I0117 00:32:01.798810 2192 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:32:01.801907 kubelet[2192]: I0117 00:32:01.800953 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:32:01.801907 kubelet[2192]: E0117 00:32:01.801724 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:32:01.802556 kubelet[2192]: E0117 00:32:01.802517 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:32:01.815812 kubelet[2192]: I0117 00:32:01.813582 2192 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.21" apiVersion="v1" Jan 17 00:32:01.818195 kubelet[2192]: I0117 00:32:01.817360 2192 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:32:01.818195 kubelet[2192]: I0117 00:32:01.817622 2192 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:32:01.818195 kubelet[2192]: W0117 00:32:01.818097 2192 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:32:01.864188 kubelet[2192]: I0117 00:32:01.861972 2192 server.go:1262] "Started kubelet" Jan 17 00:32:01.864188 kubelet[2192]: I0117 00:32:01.863292 2192 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:32:01.864188 kubelet[2192]: I0117 00:32:01.863492 2192 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:32:01.871426 kubelet[2192]: I0117 00:32:01.866816 2192 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:32:01.876460 kubelet[2192]: I0117 00:32:01.873194 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:32:01.876460 kubelet[2192]: I0117 00:32:01.873243 2192 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:32:01.879011 kubelet[2192]: I0117 00:32:01.878972 2192 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:32:01.890236 kubelet[2192]: I0117 00:32:01.881273 2192 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:32:01.893391 kubelet[2192]: I0117 00:32:01.893245 2192 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:32:01.894849 kubelet[2192]: 
E0117 00:32:01.891302 2192 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5d5a7a96a804 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:32:01.85818522 +0000 UTC m=+1.634069716,LastTimestamp:2026-01-17 00:32:01.85818522 +0000 UTC m=+1.634069716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:32:01.897541 kubelet[2192]: E0117 00:32:01.896042 2192 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:32:01.897664 kubelet[2192]: I0117 00:32:01.897539 2192 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:32:01.897949 kubelet[2192]: I0117 00:32:01.897873 2192 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:32:01.898230 kubelet[2192]: E0117 00:32:01.898112 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms" Jan 17 00:32:01.899539 kubelet[2192]: E0117 00:32:01.899437 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:32:01.899539 
kubelet[2192]: I0117 00:32:01.899507 2192 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:32:01.901040 kubelet[2192]: E0117 00:32:01.900922 2192 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:32:01.902871 kubelet[2192]: I0117 00:32:01.902748 2192 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:32:01.902871 kubelet[2192]: I0117 00:32:01.902863 2192 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:32:01.997928 kubelet[2192]: E0117 00:32:01.997460 2192 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:32:02.028823 kubelet[2192]: I0117 00:32:02.028651 2192 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:32:02.028823 kubelet[2192]: I0117 00:32:02.028706 2192 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:32:02.028823 kubelet[2192]: I0117 00:32:02.028733 2192 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:32:02.068566 kubelet[2192]: I0117 00:32:02.049705 2192 policy_none.go:49] "None policy: Start" Jan 17 00:32:02.068566 kubelet[2192]: I0117 00:32:02.049927 2192 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:32:02.068566 kubelet[2192]: I0117 00:32:02.049997 2192 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:32:02.075219 kubelet[2192]: I0117 00:32:02.075062 2192 policy_none.go:47] "Start" Jan 17 00:32:02.079116 kubelet[2192]: I0117 00:32:02.079053 2192 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 17 00:32:02.085082 kubelet[2192]: I0117 00:32:02.083486 2192 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:32:02.085082 kubelet[2192]: I0117 00:32:02.083659 2192 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:32:02.085082 kubelet[2192]: I0117 00:32:02.083706 2192 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:32:02.085082 kubelet[2192]: E0117 00:32:02.083777 2192 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:32:02.085762 kubelet[2192]: E0117 00:32:02.085696 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:32:02.104441 kubelet[2192]: E0117 00:32:02.098826 2192 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:32:02.123396 kubelet[2192]: E0117 00:32:02.119997 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms" Jan 17 00:32:02.121395 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 17 00:32:02.191064 kubelet[2192]: E0117 00:32:02.188667 2192 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:32:02.201386 kubelet[2192]: E0117 00:32:02.200170 2192 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:32:02.206766 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:32:02.302867 kubelet[2192]: E0117 00:32:02.302190 2192 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:32:02.309401 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:32:02.383483 kubelet[2192]: E0117 00:32:02.378056 2192 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:32:02.383483 kubelet[2192]: I0117 00:32:02.379033 2192 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:32:02.383483 kubelet[2192]: I0117 00:32:02.379277 2192 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:32:02.386183 kubelet[2192]: I0117 00:32:02.383944 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:32:02.390809 kubelet[2192]: E0117 00:32:02.390673 2192 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:32:02.390809 kubelet[2192]: E0117 00:32:02.390837 2192 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:32:02.403021 kubelet[2192]: I0117 00:32:02.402908 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c00788923f37704537bf76b30065372e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c00788923f37704537bf76b30065372e\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:02.403021 kubelet[2192]: I0117 00:32:02.402948 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c00788923f37704537bf76b30065372e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c00788923f37704537bf76b30065372e\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:02.403570 kubelet[2192]: I0117 00:32:02.403208 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c00788923f37704537bf76b30065372e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c00788923f37704537bf76b30065372e\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:02.426637 systemd[1]: Created slice kubepods-burstable-podc00788923f37704537bf76b30065372e.slice - libcontainer container kubepods-burstable-podc00788923f37704537bf76b30065372e.slice. 
Jan 17 00:32:02.486686 kubelet[2192]: I0117 00:32:02.485937 2192 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:32:02.486686 kubelet[2192]: E0117 00:32:02.486641 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:02.487581 kubelet[2192]: E0117 00:32:02.487434 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 17 00:32:02.510052 kubelet[2192]: I0117 00:32:02.506740 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:02.510052 kubelet[2192]: I0117 00:32:02.506900 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:02.510052 kubelet[2192]: I0117 00:32:02.507065 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:02.510052 kubelet[2192]: I0117 00:32:02.509573 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:02.510052 kubelet[2192]: I0117 00:32:02.509687 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:02.508900 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 17 00:32:02.511607 kubelet[2192]: I0117 00:32:02.509795 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:32:02.522730 kubelet[2192]: E0117 00:32:02.522462 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms" Jan 17 00:32:02.525053 kubelet[2192]: E0117 00:32:02.524986 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:02.536920 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 17 00:32:02.580378 kubelet[2192]: E0117 00:32:02.580029 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:02.700168 kubelet[2192]: I0117 00:32:02.698985 2192 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:32:02.706261 kubelet[2192]: E0117 00:32:02.702295 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 17 00:32:02.816673 kubelet[2192]: E0117 00:32:02.816004 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:02.826396 containerd[1457]: time="2026-01-17T00:32:02.825143358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c00788923f37704537bf76b30065372e,Namespace:kube-system,Attempt:0,}" Jan 17 00:32:02.844820 kubelet[2192]: E0117 00:32:02.844218 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:02.846797 containerd[1457]: time="2026-01-17T00:32:02.846661525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 17 00:32:02.898760 kubelet[2192]: E0117 00:32:02.898046 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:02.900960 containerd[1457]: time="2026-01-17T00:32:02.900864661Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 17 00:32:03.036951 kubelet[2192]: E0117 00:32:03.035669 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:32:03.129133 kubelet[2192]: E0117 00:32:03.128375 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:32:03.129133 kubelet[2192]: I0117 00:32:03.128834 2192 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:32:03.138569 kubelet[2192]: E0117 00:32:03.130001 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 17 00:32:03.316155 kubelet[2192]: E0117 00:32:03.313655 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:32:03.326295 kubelet[2192]: E0117 00:32:03.324933 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" 
interval="1.6s" Jan 17 00:32:03.433844 kubelet[2192]: E0117 00:32:03.432656 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:32:03.776966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount977658811.mount: Deactivated successfully. Jan 17 00:32:03.815804 containerd[1457]: time="2026-01-17T00:32:03.814825902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:32:03.830684 containerd[1457]: time="2026-01-17T00:32:03.829940923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:32:03.836837 containerd[1457]: time="2026-01-17T00:32:03.834377943Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:32:03.843954 containerd[1457]: time="2026-01-17T00:32:03.842759248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:32:03.848457 containerd[1457]: time="2026-01-17T00:32:03.848140400Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:32:03.876686 containerd[1457]: time="2026-01-17T00:32:03.875698327Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" 
Jan 17 00:32:03.882290 containerd[1457]: time="2026-01-17T00:32:03.880577306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:32:03.886510 kubelet[2192]: E0117 00:32:03.886422 2192 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:32:03.893553 containerd[1457]: time="2026-01-17T00:32:03.893247271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:32:03.897025 containerd[1457]: time="2026-01-17T00:32:03.896385960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.069928444s" Jan 17 00:32:03.902923 containerd[1457]: time="2026-01-17T00:32:03.901024130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.054247093s" Jan 17 00:32:03.904821 containerd[1457]: time="2026-01-17T00:32:03.904053992Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.003069561s" Jan 17 00:32:03.942033 kubelet[2192]: I0117 00:32:03.941526 2192 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:32:03.949699 kubelet[2192]: E0117 00:32:03.942877 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 17 00:32:04.341798 containerd[1457]: time="2026-01-17T00:32:04.339549979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:32:04.341798 containerd[1457]: time="2026-01-17T00:32:04.339702949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:32:04.341798 containerd[1457]: time="2026-01-17T00:32:04.339879655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:04.341798 containerd[1457]: time="2026-01-17T00:32:04.341121993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:04.345783 containerd[1457]: time="2026-01-17T00:32:04.343820360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:32:04.345783 containerd[1457]: time="2026-01-17T00:32:04.344099813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:32:04.345783 containerd[1457]: time="2026-01-17T00:32:04.344129769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:04.347213 containerd[1457]: time="2026-01-17T00:32:04.346926633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:04.401698 containerd[1457]: time="2026-01-17T00:32:04.400222981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:32:04.401698 containerd[1457]: time="2026-01-17T00:32:04.401647738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:32:04.401698 containerd[1457]: time="2026-01-17T00:32:04.401792002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:04.403632 containerd[1457]: time="2026-01-17T00:32:04.402677808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:04.602993 systemd[1]: Started cri-containerd-90e10c2ae5665bbf650c2ff1087af461f6916aebb887e3ce1ae4584177f28d23.scope - libcontainer container 90e10c2ae5665bbf650c2ff1087af461f6916aebb887e3ce1ae4584177f28d23. Jan 17 00:32:04.639925 systemd[1]: Started cri-containerd-b8a5c45be65e2115a8a7c4d5575b738422ceb9f09f284bd6cad0f52414d9477a.scope - libcontainer container b8a5c45be65e2115a8a7c4d5575b738422ceb9f09f284bd6cad0f52414d9477a. Jan 17 00:32:04.704011 systemd[1]: Started cri-containerd-beac76a410181c09a6cfe362189e7c03d582f03f79fc64ee1244eab7dd35cf84.scope - libcontainer container beac76a410181c09a6cfe362189e7c03d582f03f79fc64ee1244eab7dd35cf84. 
Jan 17 00:32:04.779771 kubelet[2192]: E0117 00:32:04.779124 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:32:04.928631 kubelet[2192]: E0117 00:32:04.927782 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="3.2s" Jan 17 00:32:04.940216 containerd[1457]: time="2026-01-17T00:32:04.938694350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8a5c45be65e2115a8a7c4d5575b738422ceb9f09f284bd6cad0f52414d9477a\"" Jan 17 00:32:04.979418 kubelet[2192]: E0117 00:32:04.977831 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:04.983525 containerd[1457]: time="2026-01-17T00:32:04.982185646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"90e10c2ae5665bbf650c2ff1087af461f6916aebb887e3ce1ae4584177f28d23\"" Jan 17 00:32:04.987812 kubelet[2192]: E0117 00:32:04.987772 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:05.012153 containerd[1457]: time="2026-01-17T00:32:05.000010905Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c00788923f37704537bf76b30065372e,Namespace:kube-system,Attempt:0,} returns sandbox id \"beac76a410181c09a6cfe362189e7c03d582f03f79fc64ee1244eab7dd35cf84\"" Jan 17 00:32:05.015122 kubelet[2192]: E0117 00:32:05.012490 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:05.040730 containerd[1457]: time="2026-01-17T00:32:05.040171193Z" level=info msg="CreateContainer within sandbox \"beac76a410181c09a6cfe362189e7c03d582f03f79fc64ee1244eab7dd35cf84\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:32:05.042177 containerd[1457]: time="2026-01-17T00:32:05.040558717Z" level=info msg="CreateContainer within sandbox \"90e10c2ae5665bbf650c2ff1087af461f6916aebb887e3ce1ae4584177f28d23\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:32:05.048738 containerd[1457]: time="2026-01-17T00:32:05.048699851Z" level=info msg="CreateContainer within sandbox \"b8a5c45be65e2115a8a7c4d5575b738422ceb9f09f284bd6cad0f52414d9477a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:32:05.234539 containerd[1457]: time="2026-01-17T00:32:05.232701904Z" level=info msg="CreateContainer within sandbox \"beac76a410181c09a6cfe362189e7c03d582f03f79fc64ee1244eab7dd35cf84\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1e73670808a9921b30f103bc0a553fd7df6ffbc830398f4be281bae518916067\"" Jan 17 00:32:05.238554 containerd[1457]: time="2026-01-17T00:32:05.237633476Z" level=info msg="StartContainer for \"1e73670808a9921b30f103bc0a553fd7df6ffbc830398f4be281bae518916067\"" Jan 17 00:32:05.286076 containerd[1457]: time="2026-01-17T00:32:05.284581808Z" level=info msg="CreateContainer within sandbox \"90e10c2ae5665bbf650c2ff1087af461f6916aebb887e3ce1ae4584177f28d23\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44089cd2165ce11d6205fa46f50845e1b8d19e58aa0bdb2ddf2d1ae06c353b08\"" Jan 17 00:32:05.295145 containerd[1457]: time="2026-01-17T00:32:05.292247063Z" level=info msg="CreateContainer within sandbox \"b8a5c45be65e2115a8a7c4d5575b738422ceb9f09f284bd6cad0f52414d9477a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cd04ff0def5030f2071cb19bc25fcee29a8c24ad283a595a4aedd74008d16039\"" Jan 17 00:32:05.295145 containerd[1457]: time="2026-01-17T00:32:05.292837076Z" level=info msg="StartContainer for \"44089cd2165ce11d6205fa46f50845e1b8d19e58aa0bdb2ddf2d1ae06c353b08\"" Jan 17 00:32:05.299286 containerd[1457]: time="2026-01-17T00:32:05.297713304Z" level=info msg="StartContainer for \"cd04ff0def5030f2071cb19bc25fcee29a8c24ad283a595a4aedd74008d16039\"" Jan 17 00:32:05.414563 systemd[1]: Started cri-containerd-1e73670808a9921b30f103bc0a553fd7df6ffbc830398f4be281bae518916067.scope - libcontainer container 1e73670808a9921b30f103bc0a553fd7df6ffbc830398f4be281bae518916067. Jan 17 00:32:05.479959 systemd[1]: Started cri-containerd-44089cd2165ce11d6205fa46f50845e1b8d19e58aa0bdb2ddf2d1ae06c353b08.scope - libcontainer container 44089cd2165ce11d6205fa46f50845e1b8d19e58aa0bdb2ddf2d1ae06c353b08. Jan 17 00:32:05.534817 systemd[1]: Started cri-containerd-cd04ff0def5030f2071cb19bc25fcee29a8c24ad283a595a4aedd74008d16039.scope - libcontainer container cd04ff0def5030f2071cb19bc25fcee29a8c24ad283a595a4aedd74008d16039. 
Jan 17 00:32:05.570835 kubelet[2192]: I0117 00:32:05.549781 2192 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:32:05.584275 kubelet[2192]: E0117 00:32:05.584065 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jan 17 00:32:05.705513 kubelet[2192]: E0117 00:32:05.704751 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:32:05.718941 containerd[1457]: time="2026-01-17T00:32:05.717279729Z" level=info msg="StartContainer for \"1e73670808a9921b30f103bc0a553fd7df6ffbc830398f4be281bae518916067\" returns successfully" Jan 17 00:32:05.770500 containerd[1457]: time="2026-01-17T00:32:05.753915937Z" level=info msg="StartContainer for \"44089cd2165ce11d6205fa46f50845e1b8d19e58aa0bdb2ddf2d1ae06c353b08\" returns successfully" Jan 17 00:32:05.804627 containerd[1457]: time="2026-01-17T00:32:05.802752433Z" level=info msg="StartContainer for \"cd04ff0def5030f2071cb19bc25fcee29a8c24ad283a595a4aedd74008d16039\" returns successfully" Jan 17 00:32:06.011641 kubelet[2192]: E0117 00:32:06.010901 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:32:06.112380 kubelet[2192]: E0117 00:32:06.085701 2192 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:32:06.330551 kubelet[2192]: E0117 00:32:06.329882 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:06.330551 kubelet[2192]: E0117 00:32:06.330547 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:06.392399 kubelet[2192]: E0117 00:32:06.379565 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:06.407119 kubelet[2192]: E0117 00:32:06.407052 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:06.420243 kubelet[2192]: E0117 00:32:06.419727 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:06.423287 kubelet[2192]: E0117 00:32:06.423189 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:07.427111 kubelet[2192]: E0117 00:32:07.426538 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:07.427111 kubelet[2192]: E0117 00:32:07.427458 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:07.452198 kubelet[2192]: E0117 00:32:07.427853 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:07.452198 kubelet[2192]: E0117 00:32:07.428044 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:07.452198 kubelet[2192]: E0117 00:32:07.430060 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:07.452198 kubelet[2192]: E0117 00:32:07.432812 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:08.493789 kubelet[2192]: E0117 00:32:08.492830 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:08.511169 kubelet[2192]: E0117 00:32:08.494534 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:08.511169 kubelet[2192]: E0117 00:32:08.495036 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:08.511169 kubelet[2192]: E0117 00:32:08.496835 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:08.825121 kubelet[2192]: I0117 00:32:08.801307 2192 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Jan 17 00:32:09.477900 kubelet[2192]: E0117 00:32:09.476142 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:09.480082 kubelet[2192]: E0117 00:32:09.479616 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:12.418986 kubelet[2192]: E0117 00:32:12.406717 2192 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:32:13.589526 kubelet[2192]: E0117 00:32:13.583561 2192 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:32:13.589526 kubelet[2192]: E0117 00:32:13.584084 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:14.145582 kubelet[2192]: E0117 00:32:14.145019 2192 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 00:32:14.384108 kubelet[2192]: E0117 00:32:14.381233 2192 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5d5a7a96a804 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:32:01.85818522 +0000 UTC m=+1.634069716,LastTimestamp:2026-01-17 00:32:01.85818522 +0000 UTC m=+1.634069716,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:32:14.490585 kubelet[2192]: I0117 00:32:14.489220 2192 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:32:14.498779 kubelet[2192]: I0117 00:32:14.497973 2192 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:14.520304 kubelet[2192]: E0117 00:32:14.519628 2192 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5d5a7d224324 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:32:01.900888868 +0000 UTC m=+1.676773344,LastTimestamp:2026-01-17 00:32:01.900888868 +0000 UTC m=+1.676773344,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:32:14.589286 kubelet[2192]: E0117 00:32:14.588893 2192 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:14.593600 kubelet[2192]: I0117 00:32:14.593034 2192 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:14.598606 kubelet[2192]: E0117 00:32:14.598575 2192 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:14.602711 kubelet[2192]: I0117 
00:32:14.600052 2192 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:32:14.602711 kubelet[2192]: E0117 00:32:14.602478 2192 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 17 00:32:14.899290 kubelet[2192]: I0117 00:32:14.898552 2192 apiserver.go:52] "Watching apiserver" Jan 17 00:32:15.002773 kubelet[2192]: I0117 00:32:15.001643 2192 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:32:15.429901 kubelet[2192]: I0117 00:32:15.428588 2192 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:15.546725 kubelet[2192]: E0117 00:32:15.546079 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:15.598647 kubelet[2192]: E0117 00:32:15.596658 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:18.620968 kubelet[2192]: I0117 00:32:18.617479 2192 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:32:18.688547 kubelet[2192]: E0117 00:32:18.687812 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:19.728648 kubelet[2192]: E0117 00:32:19.728394 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:22.707302 kubelet[2192]: I0117 00:32:22.700221 2192 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.700127976 podStartE2EDuration="7.700127976s" podCreationTimestamp="2026-01-17 00:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:32:18.752082359 +0000 UTC m=+18.527966856" watchObservedRunningTime="2026-01-17 00:32:22.700127976 +0000 UTC m=+22.476012462" Jan 17 00:32:23.588807 kubelet[2192]: I0117 00:32:23.588452 2192 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:23.611877 kubelet[2192]: I0117 00:32:23.611172 2192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.611152249 podStartE2EDuration="5.611152249s" podCreationTimestamp="2026-01-17 00:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:32:22.728437074 +0000 UTC m=+22.504321570" watchObservedRunningTime="2026-01-17 00:32:23.611152249 +0000 UTC m=+23.387036725" Jan 17 00:32:23.612243 kubelet[2192]: E0117 00:32:23.612168 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:23.836129 kubelet[2192]: E0117 00:32:23.835994 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:25.445072 kubelet[2192]: E0117 00:32:25.439421 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:25.587457 kubelet[2192]: I0117 
00:32:25.583596 2192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.583572895 podStartE2EDuration="2.583572895s" podCreationTimestamp="2026-01-17 00:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:32:25.581121179 +0000 UTC m=+25.357005655" watchObservedRunningTime="2026-01-17 00:32:25.583572895 +0000 UTC m=+25.359457391" Jan 17 00:32:25.807744 kubelet[2192]: E0117 00:32:25.807431 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:26.192206 systemd[1]: Reloading requested from client PID 2494 ('systemctl') (unit session-7.scope)... Jan 17 00:32:26.192656 systemd[1]: Reloading... Jan 17 00:32:26.212291 kubelet[2192]: E0117 00:32:26.209300 2192 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:27.148252 zram_generator::config[2539]: No configuration found. Jan 17 00:32:27.514494 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:32:27.865794 systemd[1]: Reloading finished in 1662 ms. Jan 17 00:32:27.981979 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:32:28.016133 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:32:28.018674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:32:28.019292 systemd[1]: kubelet.service: Consumed 8.213s CPU time, 132.3M memory peak, 0B memory swap peak. 
Jan 17 00:32:28.040419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:32:28.714180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:32:28.763876 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:32:28.954274 kubelet[2577]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:32:28.954274 kubelet[2577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:32:28.963392 kubelet[2577]: I0117 00:32:28.958066 2577 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:32:28.979613 kubelet[2577]: I0117 00:32:28.977863 2577 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:32:28.979613 kubelet[2577]: I0117 00:32:28.977890 2577 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:32:28.979613 kubelet[2577]: I0117 00:32:28.977924 2577 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:32:28.979613 kubelet[2577]: I0117 00:32:28.977939 2577 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:32:28.979613 kubelet[2577]: I0117 00:32:28.978782 2577 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:32:28.981541 kubelet[2577]: I0117 00:32:28.981374 2577 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:32:28.984292 kubelet[2577]: I0117 00:32:28.984097 2577 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:32:29.009602 kubelet[2577]: E0117 00:32:29.008174 2577 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:32:29.009602 kubelet[2577]: I0117 00:32:29.008279 2577 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:32:29.034060 kubelet[2577]: I0117 00:32:29.028876 2577 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:32:29.034060 kubelet[2577]: I0117 00:32:29.029276 2577 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:32:29.034060 kubelet[2577]: I0117 00:32:29.029367 2577 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:32:29.034060 kubelet[2577]: I0117 00:32:29.029565 2577 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:32:29.034873 
kubelet[2577]: I0117 00:32:29.029578 2577 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:32:29.034873 kubelet[2577]: I0117 00:32:29.029911 2577 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:32:29.034873 kubelet[2577]: I0117 00:32:29.034400 2577 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:32:29.034873 kubelet[2577]: I0117 00:32:29.034829 2577 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:32:29.034873 kubelet[2577]: I0117 00:32:29.034858 2577 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:32:29.035075 kubelet[2577]: I0117 00:32:29.034892 2577 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:32:29.035075 kubelet[2577]: I0117 00:32:29.034927 2577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:32:29.040855 kubelet[2577]: I0117 00:32:29.039854 2577 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:32:29.044997 kubelet[2577]: I0117 00:32:29.044963 2577 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:32:29.049292 kubelet[2577]: I0117 00:32:29.045009 2577 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:32:29.077401 kubelet[2577]: I0117 00:32:29.071004 2577 server.go:1262] "Started kubelet" Jan 17 00:32:29.077401 kubelet[2577]: I0117 00:32:29.071277 2577 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:32:29.077401 kubelet[2577]: I0117 00:32:29.076075 2577 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:32:29.077743 kubelet[2577]: I0117 00:32:29.077665 2577 server.go:310] 
"Adding debug handlers to kubelet server" Jan 17 00:32:29.084426 kubelet[2577]: I0117 00:32:29.083246 2577 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:32:29.084426 kubelet[2577]: I0117 00:32:29.078118 2577 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:32:29.093468 kubelet[2577]: I0117 00:32:29.091448 2577 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:32:29.111912 kubelet[2577]: I0117 00:32:29.094764 2577 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:32:29.111912 kubelet[2577]: I0117 00:32:29.096207 2577 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:32:29.111912 kubelet[2577]: I0117 00:32:29.102442 2577 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:32:29.111912 kubelet[2577]: I0117 00:32:29.102845 2577 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:32:29.112286 kubelet[2577]: I0117 00:32:29.112180 2577 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:32:29.113779 kubelet[2577]: I0117 00:32:29.112389 2577 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:32:29.121784 kubelet[2577]: E0117 00:32:29.119289 2577 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:32:29.121784 kubelet[2577]: I0117 00:32:29.120522 2577 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:32:29.160131 kubelet[2577]: I0117 00:32:29.160021 2577 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:32:29.224425 kubelet[2577]: I0117 00:32:29.224298 2577 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:32:29.224425 kubelet[2577]: I0117 00:32:29.224404 2577 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:32:29.224425 kubelet[2577]: I0117 00:32:29.224431 2577 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:32:29.224751 kubelet[2577]: E0117 00:32:29.224496 2577 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:32:29.296663 kubelet[2577]: I0117 00:32:29.295892 2577 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:32:29.296663 kubelet[2577]: I0117 00:32:29.295914 2577 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:32:29.296663 kubelet[2577]: I0117 00:32:29.295937 2577 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:32:29.296663 kubelet[2577]: I0117 00:32:29.296090 2577 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:32:29.296663 kubelet[2577]: I0117 00:32:29.296103 2577 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:32:29.296663 kubelet[2577]: I0117 00:32:29.296123 2577 policy_none.go:49] "None policy: Start" Jan 17 00:32:29.296663 kubelet[2577]: I0117 00:32:29.296165 2577 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:32:29.296663 kubelet[2577]: I0117 00:32:29.296180 2577 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:32:29.296663 kubelet[2577]: I0117 00:32:29.296411 2577 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 17 00:32:29.296663 kubelet[2577]: I0117 00:32:29.296555 2577 policy_none.go:47] "Start" Jan 17 00:32:29.313423 kubelet[2577]: E0117 00:32:29.310570 2577 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:32:29.313423 kubelet[2577]: I0117 00:32:29.310829 2577 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:32:29.313423 kubelet[2577]: I0117 00:32:29.310843 2577 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:32:29.313423 kubelet[2577]: I0117 00:32:29.312021 2577 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:32:29.315027 sudo[2617]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 00:32:29.316437 sudo[2617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 00:32:29.317485 kubelet[2577]: E0117 00:32:29.317253 2577 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:32:29.327822 kubelet[2577]: I0117 00:32:29.327621 2577 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:32:29.332250 kubelet[2577]: I0117 00:32:29.332136 2577 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:29.337886 kubelet[2577]: I0117 00:32:29.337848 2577 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:29.367777 kubelet[2577]: E0117 00:32:29.367106 2577 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:29.367913 kubelet[2577]: E0117 00:32:29.367812 2577 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 00:32:29.373878 kubelet[2577]: E0117 00:32:29.373817 2577 kubelet.go:3221] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:29.403992 kubelet[2577]: I0117 00:32:29.403823 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c00788923f37704537bf76b30065372e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c00788923f37704537bf76b30065372e\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:29.403992 kubelet[2577]: I0117 00:32:29.403901 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c00788923f37704537bf76b30065372e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c00788923f37704537bf76b30065372e\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:29.404407 kubelet[2577]: I0117 00:32:29.404016 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:29.404407 kubelet[2577]: I0117 00:32:29.404076 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:29.404407 kubelet[2577]: I0117 00:32:29.404141 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c00788923f37704537bf76b30065372e-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"c00788923f37704537bf76b30065372e\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:29.404407 kubelet[2577]: I0117 00:32:29.404172 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:29.404407 kubelet[2577]: I0117 00:32:29.404226 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:29.409966 kubelet[2577]: I0117 00:32:29.404377 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:32:29.409966 kubelet[2577]: I0117 00:32:29.404475 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:32:29.426099 kubelet[2577]: I0117 00:32:29.425986 2577 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:32:29.475761 kubelet[2577]: I0117 00:32:29.473007 2577 kubelet_node_status.go:124] "Node was previously registered" 
node="localhost" Jan 17 00:32:29.475761 kubelet[2577]: I0117 00:32:29.473138 2577 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:32:29.672011 kubelet[2577]: E0117 00:32:29.671302 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:29.672011 kubelet[2577]: E0117 00:32:29.671855 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:29.674995 kubelet[2577]: E0117 00:32:29.674972 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:30.039372 kubelet[2577]: I0117 00:32:30.038214 2577 apiserver.go:52] "Watching apiserver" Jan 17 00:32:30.103278 kubelet[2577]: I0117 00:32:30.103213 2577 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:32:30.284446 kubelet[2577]: E0117 00:32:30.280524 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:30.287885 kubelet[2577]: I0117 00:32:30.286814 2577 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:32:30.287885 kubelet[2577]: E0117 00:32:30.287552 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:30.467431 kubelet[2577]: E0117 00:32:30.446642 2577 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 
00:32:30.467431 kubelet[2577]: E0117 00:32:30.466009 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:30.643912 kubelet[2577]: I0117 00:32:30.643183 2577 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:32:30.645423 containerd[1457]: time="2026-01-17T00:32:30.644996751Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:32:30.646543 kubelet[2577]: I0117 00:32:30.645665 2577 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:32:31.292991 kubelet[2577]: E0117 00:32:31.288288 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:31.292991 kubelet[2577]: E0117 00:32:31.288898 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:31.436402 sudo[2617]: pam_unix(sudo:session): session closed for user root Jan 17 00:32:32.083840 kubelet[2577]: I0117 00:32:32.083473 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/618252d5-5715-4ff8-b4ca-210aa040ffbb-xtables-lock\") pod \"kube-proxy-9xt6k\" (UID: \"618252d5-5715-4ff8-b4ca-210aa040ffbb\") " pod="kube-system/kube-proxy-9xt6k" Jan 17 00:32:32.089014 kubelet[2577]: I0117 00:32:32.083946 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxckf\" (UniqueName: \"kubernetes.io/projected/618252d5-5715-4ff8-b4ca-210aa040ffbb-kube-api-access-nxckf\") pod \"kube-proxy-9xt6k\" (UID: 
\"618252d5-5715-4ff8-b4ca-210aa040ffbb\") " pod="kube-system/kube-proxy-9xt6k" Jan 17 00:32:32.089014 kubelet[2577]: I0117 00:32:32.084135 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/618252d5-5715-4ff8-b4ca-210aa040ffbb-kube-proxy\") pod \"kube-proxy-9xt6k\" (UID: \"618252d5-5715-4ff8-b4ca-210aa040ffbb\") " pod="kube-system/kube-proxy-9xt6k" Jan 17 00:32:32.089014 kubelet[2577]: I0117 00:32:32.084291 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/618252d5-5715-4ff8-b4ca-210aa040ffbb-lib-modules\") pod \"kube-proxy-9xt6k\" (UID: \"618252d5-5715-4ff8-b4ca-210aa040ffbb\") " pod="kube-system/kube-proxy-9xt6k" Jan 17 00:32:32.093977 systemd[1]: Created slice kubepods-besteffort-pod618252d5_5715_4ff8_b4ca_210aa040ffbb.slice - libcontainer container kubepods-besteffort-pod618252d5_5715_4ff8_b4ca_210aa040ffbb.slice. Jan 17 00:32:32.299736 kubelet[2577]: E0117 00:32:32.299655 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:32.439818 kubelet[2577]: E0117 00:32:32.439024 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:32.448307 containerd[1457]: time="2026-01-17T00:32:32.448091771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xt6k,Uid:618252d5-5715-4ff8-b4ca-210aa040ffbb,Namespace:kube-system,Attempt:0,}" Jan 17 00:32:32.586953 containerd[1457]: time="2026-01-17T00:32:32.586097095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:32:32.586953 containerd[1457]: time="2026-01-17T00:32:32.586230575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:32:32.586953 containerd[1457]: time="2026-01-17T00:32:32.586629578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:32.589442 containerd[1457]: time="2026-01-17T00:32:32.589384436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:32.695742 systemd[1]: Started cri-containerd-622adabbd710fb096a22f455e4038794d5fef9ecf576847aea52bc81f92a2749.scope - libcontainer container 622adabbd710fb096a22f455e4038794d5fef9ecf576847aea52bc81f92a2749. Jan 17 00:32:32.778009 containerd[1457]: time="2026-01-17T00:32:32.775941337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xt6k,Uid:618252d5-5715-4ff8-b4ca-210aa040ffbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"622adabbd710fb096a22f455e4038794d5fef9ecf576847aea52bc81f92a2749\"" Jan 17 00:32:32.778216 kubelet[2577]: E0117 00:32:32.777201 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:32.818192 containerd[1457]: time="2026-01-17T00:32:32.817981418Z" level=info msg="CreateContainer within sandbox \"622adabbd710fb096a22f455e4038794d5fef9ecf576847aea52bc81f92a2749\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:32:32.916159 containerd[1457]: time="2026-01-17T00:32:32.897421303Z" level=info msg="CreateContainer within sandbox \"622adabbd710fb096a22f455e4038794d5fef9ecf576847aea52bc81f92a2749\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"d991417cf58e062813355763552c4f5c3fc2bd5f5ac89753eb63be5c7de86853\"" Jan 17 00:32:32.916159 containerd[1457]: time="2026-01-17T00:32:32.910220629Z" level=info msg="StartContainer for \"d991417cf58e062813355763552c4f5c3fc2bd5f5ac89753eb63be5c7de86853\"" Jan 17 00:32:33.147979 systemd[1]: Started cri-containerd-d991417cf58e062813355763552c4f5c3fc2bd5f5ac89753eb63be5c7de86853.scope - libcontainer container d991417cf58e062813355763552c4f5c3fc2bd5f5ac89753eb63be5c7de86853. Jan 17 00:32:33.238195 containerd[1457]: time="2026-01-17T00:32:33.237153033Z" level=info msg="StartContainer for \"d991417cf58e062813355763552c4f5c3fc2bd5f5ac89753eb63be5c7de86853\" returns successfully" Jan 17 00:32:33.306195 kubelet[2577]: E0117 00:32:33.305633 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:33.306195 kubelet[2577]: E0117 00:32:33.305964 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:33.609922 kubelet[2577]: I0117 00:32:33.608163 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9xt6k" podStartSLOduration=2.6081426800000003 podStartE2EDuration="2.60814268s" podCreationTimestamp="2026-01-17 00:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:32:33.368249166 +0000 UTC m=+4.596840123" watchObservedRunningTime="2026-01-17 00:32:33.60814268 +0000 UTC m=+4.836733606" Jan 17 00:32:33.636288 systemd[1]: Created slice kubepods-burstable-pod8a21c1ba_2a81_43e1_9d1d_c330429b6ab7.slice - libcontainer container kubepods-burstable-pod8a21c1ba_2a81_43e1_9d1d_c330429b6ab7.slice. 
Jan 17 00:32:33.698707 systemd[1]: Created slice kubepods-besteffort-pod2ee0f3e2_aaad_4f77_9dd2_79d79a23ffdb.slice - libcontainer container kubepods-besteffort-pod2ee0f3e2_aaad_4f77_9dd2_79d79a23ffdb.slice. Jan 17 00:32:33.727558 kubelet[2577]: I0117 00:32:33.727417 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-host-proc-sys-kernel\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.727558 kubelet[2577]: I0117 00:32:33.727497 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cni-path\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.727558 kubelet[2577]: I0117 00:32:33.727527 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-lib-modules\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.727558 kubelet[2577]: I0117 00:32:33.727551 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-etc-cni-netd\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.727918 kubelet[2577]: I0117 00:32:33.727604 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79nlc\" (UniqueName: \"kubernetes.io/projected/2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb-kube-api-access-79nlc\") pod 
\"cilium-operator-6f9c7c5859-zhxbz\" (UID: \"2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb\") " pod="kube-system/cilium-operator-6f9c7c5859-zhxbz" Jan 17 00:32:33.727918 kubelet[2577]: I0117 00:32:33.727632 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-hostproc\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.727918 kubelet[2577]: I0117 00:32:33.727654 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-host-proc-sys-net\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.727918 kubelet[2577]: I0117 00:32:33.727723 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzsgw\" (UniqueName: \"kubernetes.io/projected/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-kube-api-access-tzsgw\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.727918 kubelet[2577]: I0117 00:32:33.727750 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-run\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.728201 kubelet[2577]: I0117 00:32:33.727774 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-xtables-lock\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " 
pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.728201 kubelet[2577]: I0117 00:32:33.727796 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-hubble-tls\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.728201 kubelet[2577]: I0117 00:32:33.727819 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-bpf-maps\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.728201 kubelet[2577]: I0117 00:32:33.727841 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-cgroup\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.728201 kubelet[2577]: I0117 00:32:33.727863 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-clustermesh-secrets\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.728201 kubelet[2577]: I0117 00:32:33.727895 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-config-path\") pod \"cilium-n9qnx\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") " pod="kube-system/cilium-n9qnx" Jan 17 00:32:33.728569 kubelet[2577]: I0117 00:32:33.727920 2577 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-zhxbz\" (UID: \"2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb\") " pod="kube-system/cilium-operator-6f9c7c5859-zhxbz" Jan 17 00:32:33.967772 kubelet[2577]: E0117 00:32:33.965247 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:33.982054 containerd[1457]: time="2026-01-17T00:32:33.981800931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n9qnx,Uid:8a21c1ba-2a81-43e1-9d1d-c330429b6ab7,Namespace:kube-system,Attempt:0,}" Jan 17 00:32:34.015949 kubelet[2577]: E0117 00:32:34.015829 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:34.017111 containerd[1457]: time="2026-01-17T00:32:34.016852578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-zhxbz,Uid:2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb,Namespace:kube-system,Attempt:0,}" Jan 17 00:32:34.075812 containerd[1457]: time="2026-01-17T00:32:34.074889043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:32:34.075812 containerd[1457]: time="2026-01-17T00:32:34.074978680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:32:34.075812 containerd[1457]: time="2026-01-17T00:32:34.075043941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:34.075812 containerd[1457]: time="2026-01-17T00:32:34.075235999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:34.134742 systemd[1]: Started cri-containerd-d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba.scope - libcontainer container d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba. Jan 17 00:32:34.142624 containerd[1457]: time="2026-01-17T00:32:34.140037770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:32:34.142624 containerd[1457]: time="2026-01-17T00:32:34.142065107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:32:34.142624 containerd[1457]: time="2026-01-17T00:32:34.142083961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:34.142624 containerd[1457]: time="2026-01-17T00:32:34.142240844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:32:34.182475 systemd[1]: Started cri-containerd-05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464.scope - libcontainer container 05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464. 
Jan 17 00:32:34.203444 containerd[1457]: time="2026-01-17T00:32:34.203293043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n9qnx,Uid:8a21c1ba-2a81-43e1-9d1d-c330429b6ab7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\"" Jan 17 00:32:34.205086 kubelet[2577]: E0117 00:32:34.204999 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:34.209196 containerd[1457]: time="2026-01-17T00:32:34.208062387Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:32:34.284399 containerd[1457]: time="2026-01-17T00:32:34.284151364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-zhxbz,Uid:2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\"" Jan 17 00:32:34.286125 kubelet[2577]: E0117 00:32:34.286022 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:34.756942 kubelet[2577]: E0117 00:32:34.755262 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:35.314698 kubelet[2577]: E0117 00:32:35.314532 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:36.322841 kubelet[2577]: E0117 00:32:36.322745 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:37.009790 kubelet[2577]: E0117 00:32:37.007442 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:37.339413 kubelet[2577]: E0117 00:32:37.337885 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:32:59.818458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084938294.mount: Deactivated successfully. Jan 17 00:33:08.104911 containerd[1457]: time="2026-01-17T00:33:08.104760733Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:33:08.112176 containerd[1457]: time="2026-01-17T00:33:08.111750011Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 00:33:08.114579 containerd[1457]: time="2026-01-17T00:33:08.114478124Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:33:08.121455 containerd[1457]: time="2026-01-17T00:33:08.121399565Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 33.91326828s" Jan 17 00:33:08.122253 containerd[1457]: time="2026-01-17T00:33:08.121667176Z" level=info 
msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 00:33:08.133859 containerd[1457]: time="2026-01-17T00:33:08.133454317Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:33:08.150691 containerd[1457]: time="2026-01-17T00:33:08.150511417Z" level=info msg="CreateContainer within sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:33:08.192195 containerd[1457]: time="2026-01-17T00:33:08.192097201Z" level=info msg="CreateContainer within sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b\"" Jan 17 00:33:08.195134 containerd[1457]: time="2026-01-17T00:33:08.193089890Z" level=info msg="StartContainer for \"b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b\"" Jan 17 00:33:08.288262 systemd[1]: Started cri-containerd-b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b.scope - libcontainer container b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b. Jan 17 00:33:08.413303 containerd[1457]: time="2026-01-17T00:33:08.411678310Z" level=info msg="StartContainer for \"b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b\" returns successfully" Jan 17 00:33:08.461297 systemd[1]: cri-containerd-b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b.scope: Deactivated successfully. 
Jan 17 00:33:08.537766 kubelet[2577]: E0117 00:33:08.537224 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:08.698161 containerd[1457]: time="2026-01-17T00:33:08.697760009Z" level=info msg="shim disconnected" id=b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b namespace=k8s.io
Jan 17 00:33:08.698161 containerd[1457]: time="2026-01-17T00:33:08.697947339Z" level=warning msg="cleaning up after shim disconnected" id=b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b namespace=k8s.io
Jan 17 00:33:08.698161 containerd[1457]: time="2026-01-17T00:33:08.698029563Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:33:09.178797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b-rootfs.mount: Deactivated successfully.
Jan 17 00:33:09.252970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077901648.mount: Deactivated successfully.
Jan 17 00:33:09.548232 kubelet[2577]: E0117 00:33:09.547830 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:09.597960 containerd[1457]: time="2026-01-17T00:33:09.597798098Z" level=info msg="CreateContainer within sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 00:33:09.686644 containerd[1457]: time="2026-01-17T00:33:09.686464324Z" level=info msg="CreateContainer within sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057\""
Jan 17 00:33:09.690460 containerd[1457]: time="2026-01-17T00:33:09.690276361Z" level=info msg="StartContainer for \"c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057\""
Jan 17 00:33:09.796695 systemd[1]: Started cri-containerd-c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057.scope - libcontainer container c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057.
Jan 17 00:33:09.890081 containerd[1457]: time="2026-01-17T00:33:09.889572268Z" level=info msg="StartContainer for \"c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057\" returns successfully"
Jan 17 00:33:09.953807 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:33:09.954969 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:33:09.955082 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:33:09.965952 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:33:09.973210 systemd[1]: cri-containerd-c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057.scope: Deactivated successfully.
Jan 17 00:33:10.034977 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:33:10.113755 containerd[1457]: time="2026-01-17T00:33:10.111646654Z" level=info msg="shim disconnected" id=c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057 namespace=k8s.io
Jan 17 00:33:10.113755 containerd[1457]: time="2026-01-17T00:33:10.111715333Z" level=warning msg="cleaning up after shim disconnected" id=c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057 namespace=k8s.io
Jan 17 00:33:10.113755 containerd[1457]: time="2026-01-17T00:33:10.111728277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:33:10.577513 kubelet[2577]: E0117 00:33:10.574745 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:10.600691 containerd[1457]: time="2026-01-17T00:33:10.600264173Z" level=info msg="CreateContainer within sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 00:33:10.669720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2421752725.mount: Deactivated successfully.
Jan 17 00:33:10.694514 containerd[1457]: time="2026-01-17T00:33:10.694425046Z" level=info msg="CreateContainer within sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38\""
Jan 17 00:33:10.695465 containerd[1457]: time="2026-01-17T00:33:10.695434716Z" level=info msg="StartContainer for \"a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38\""
Jan 17 00:33:10.835982 systemd[1]: Started cri-containerd-a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38.scope - libcontainer container a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38.
Jan 17 00:33:11.058874 systemd[1]: cri-containerd-a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38.scope: Deactivated successfully.
Jan 17 00:33:11.060921 containerd[1457]: time="2026-01-17T00:33:11.060412284Z" level=info msg="StartContainer for \"a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38\" returns successfully"
Jan 17 00:33:11.177938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38-rootfs.mount: Deactivated successfully.
Jan 17 00:33:11.263018 containerd[1457]: time="2026-01-17T00:33:11.260758218Z" level=info msg="shim disconnected" id=a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38 namespace=k8s.io
Jan 17 00:33:11.263018 containerd[1457]: time="2026-01-17T00:33:11.260823140Z" level=warning msg="cleaning up after shim disconnected" id=a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38 namespace=k8s.io
Jan 17 00:33:11.263018 containerd[1457]: time="2026-01-17T00:33:11.260836665Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:33:11.435436 containerd[1457]: time="2026-01-17T00:33:11.435171996Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:33:11.439800 containerd[1457]: time="2026-01-17T00:33:11.439487446Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 17 00:33:11.442920 containerd[1457]: time="2026-01-17T00:33:11.441946338Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:33:11.448031 containerd[1457]: time="2026-01-17T00:33:11.446120545Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.312599313s"
Jan 17 00:33:11.448031 containerd[1457]: time="2026-01-17T00:33:11.446159268Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 17 00:33:11.469002 containerd[1457]: time="2026-01-17T00:33:11.465979202Z" level=info msg="CreateContainer within sandbox \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 17 00:33:11.521761 containerd[1457]: time="2026-01-17T00:33:11.521471182Z" level=info msg="CreateContainer within sandbox \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\""
Jan 17 00:33:11.529017 containerd[1457]: time="2026-01-17T00:33:11.527018265Z" level=info msg="StartContainer for \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\""
Jan 17 00:33:11.610585 systemd[1]: Started cri-containerd-47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5.scope - libcontainer container 47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5.
Jan 17 00:33:11.613834 kubelet[2577]: E0117 00:33:11.611552 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:11.710815 containerd[1457]: time="2026-01-17T00:33:11.702990540Z" level=info msg="CreateContainer within sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:33:11.921760 containerd[1457]: time="2026-01-17T00:33:11.921654903Z" level=info msg="CreateContainer within sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809\""
Jan 17 00:33:11.927036 containerd[1457]: time="2026-01-17T00:33:11.925405025Z" level=info msg="StartContainer for \"a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809\""
Jan 17 00:33:11.987592 containerd[1457]: time="2026-01-17T00:33:11.987410529Z" level=info msg="StartContainer for \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\" returns successfully"
Jan 17 00:33:12.029835 systemd[1]: Started cri-containerd-a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809.scope - libcontainer container a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809.
Jan 17 00:33:12.194866 systemd[1]: cri-containerd-a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809.scope: Deactivated successfully.
Jan 17 00:33:12.204011 containerd[1457]: time="2026-01-17T00:33:12.203877164Z" level=info msg="StartContainer for \"a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809\" returns successfully"
Jan 17 00:33:12.289233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809-rootfs.mount: Deactivated successfully.
Jan 17 00:33:12.343885 containerd[1457]: time="2026-01-17T00:33:12.343566757Z" level=info msg="shim disconnected" id=a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809 namespace=k8s.io
Jan 17 00:33:12.343885 containerd[1457]: time="2026-01-17T00:33:12.343953941Z" level=warning msg="cleaning up after shim disconnected" id=a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809 namespace=k8s.io
Jan 17 00:33:12.343885 containerd[1457]: time="2026-01-17T00:33:12.343971474Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:33:12.641408 kubelet[2577]: E0117 00:33:12.619593 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:12.652124 kubelet[2577]: E0117 00:33:12.651697 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:12.686077 containerd[1457]: time="2026-01-17T00:33:12.683779452Z" level=info msg="CreateContainer within sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:33:12.827218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589027594.mount: Deactivated successfully.
Jan 17 00:33:12.847003 kubelet[2577]: I0117 00:33:12.846357 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-zhxbz" podStartSLOduration=2.683485632 podStartE2EDuration="39.846291282s" podCreationTimestamp="2026-01-17 00:32:33 +0000 UTC" firstStartedPulling="2026-01-17 00:32:34.286962393 +0000 UTC m=+5.515553319" lastFinishedPulling="2026-01-17 00:33:11.449768043 +0000 UTC m=+42.678358969" observedRunningTime="2026-01-17 00:33:12.834388764 +0000 UTC m=+44.062979710" watchObservedRunningTime="2026-01-17 00:33:12.846291282 +0000 UTC m=+44.074882208"
Jan 17 00:33:12.920297 containerd[1457]: time="2026-01-17T00:33:12.920128268Z" level=info msg="CreateContainer within sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab\""
Jan 17 00:33:12.927431 containerd[1457]: time="2026-01-17T00:33:12.922630097Z" level=info msg="StartContainer for \"f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab\""
Jan 17 00:33:13.103623 systemd[1]: Started cri-containerd-f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab.scope - libcontainer container f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab.
Jan 17 00:33:13.294184 containerd[1457]: time="2026-01-17T00:33:13.292797712Z" level=info msg="StartContainer for \"f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab\" returns successfully"
Jan 17 00:33:13.671765 kubelet[2577]: E0117 00:33:13.668029 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:13.671765 kubelet[2577]: E0117 00:33:13.668991 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:13.838249 kubelet[2577]: I0117 00:33:13.836872 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n9qnx" podStartSLOduration=6.917537486 podStartE2EDuration="40.836850303s" podCreationTimestamp="2026-01-17 00:32:33 +0000 UTC" firstStartedPulling="2026-01-17 00:32:34.206632924 +0000 UTC m=+5.435223851" lastFinishedPulling="2026-01-17 00:33:08.125945742 +0000 UTC m=+39.354536668" observedRunningTime="2026-01-17 00:33:13.835084943 +0000 UTC m=+45.063675879" watchObservedRunningTime="2026-01-17 00:33:13.836850303 +0000 UTC m=+45.065441259"
Jan 17 00:33:13.855043 kubelet[2577]: I0117 00:33:13.854976 2577 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 17 00:33:14.277230 systemd[1]: Created slice kubepods-burstable-podd9607691_7b5f_49b6_8fad_ab102047a26d.slice - libcontainer container kubepods-burstable-podd9607691_7b5f_49b6_8fad_ab102047a26d.slice.
Jan 17 00:33:14.292492 kubelet[2577]: I0117 00:33:14.290134 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl446\" (UniqueName: \"kubernetes.io/projected/d9607691-7b5f-49b6-8fad-ab102047a26d-kube-api-access-dl446\") pod \"coredns-66bc5c9577-x6dqb\" (UID: \"d9607691-7b5f-49b6-8fad-ab102047a26d\") " pod="kube-system/coredns-66bc5c9577-x6dqb"
Jan 17 00:33:14.292492 kubelet[2577]: I0117 00:33:14.290203 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9607691-7b5f-49b6-8fad-ab102047a26d-config-volume\") pod \"coredns-66bc5c9577-x6dqb\" (UID: \"d9607691-7b5f-49b6-8fad-ab102047a26d\") " pod="kube-system/coredns-66bc5c9577-x6dqb"
Jan 17 00:33:14.298933 systemd[1]: Created slice kubepods-burstable-poda9c25bfb_62c5_4050_9d8a_3f93af9f67a8.slice - libcontainer container kubepods-burstable-poda9c25bfb_62c5_4050_9d8a_3f93af9f67a8.slice.
Jan 17 00:33:14.393020 kubelet[2577]: I0117 00:33:14.392968 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m8v7\" (UniqueName: \"kubernetes.io/projected/a9c25bfb-62c5-4050-9d8a-3f93af9f67a8-kube-api-access-9m8v7\") pod \"coredns-66bc5c9577-5qhn2\" (UID: \"a9c25bfb-62c5-4050-9d8a-3f93af9f67a8\") " pod="kube-system/coredns-66bc5c9577-5qhn2"
Jan 17 00:33:14.396546 kubelet[2577]: I0117 00:33:14.393671 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9c25bfb-62c5-4050-9d8a-3f93af9f67a8-config-volume\") pod \"coredns-66bc5c9577-5qhn2\" (UID: \"a9c25bfb-62c5-4050-9d8a-3f93af9f67a8\") " pod="kube-system/coredns-66bc5c9577-5qhn2"
Jan 17 00:33:14.590742 kubelet[2577]: E0117 00:33:14.590198 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:14.592246 containerd[1457]: time="2026-01-17T00:33:14.591199053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x6dqb,Uid:d9607691-7b5f-49b6-8fad-ab102047a26d,Namespace:kube-system,Attempt:0,}"
Jan 17 00:33:14.619723 kubelet[2577]: E0117 00:33:14.619256 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:14.620752 containerd[1457]: time="2026-01-17T00:33:14.620654851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5qhn2,Uid:a9c25bfb-62c5-4050-9d8a-3f93af9f67a8,Namespace:kube-system,Attempt:0,}"
Jan 17 00:33:14.672721 kubelet[2577]: E0117 00:33:14.672684 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:15.684716 kubelet[2577]: E0117 00:33:15.681299 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:17.288983 systemd-networkd[1391]: cilium_host: Link UP
Jan 17 00:33:17.289268 systemd-networkd[1391]: cilium_net: Link UP
Jan 17 00:33:17.289693 systemd-networkd[1391]: cilium_net: Gained carrier
Jan 17 00:33:17.290001 systemd-networkd[1391]: cilium_host: Gained carrier
Jan 17 00:33:17.685647 systemd-networkd[1391]: cilium_vxlan: Link UP
Jan 17 00:33:17.685660 systemd-networkd[1391]: cilium_vxlan: Gained carrier
Jan 17 00:33:17.958835 systemd-networkd[1391]: cilium_net: Gained IPv6LL
Jan 17 00:33:18.019584 systemd-networkd[1391]: cilium_host: Gained IPv6LL
Jan 17 00:33:18.244376 kernel: NET: Registered PF_ALG protocol family
Jan 17 00:33:19.165279 systemd[1]: run-containerd-runc-k8s.io-f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab-runc.iycDY4.mount: Deactivated successfully.
Jan 17 00:33:19.177059 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL
Jan 17 00:33:20.472493 systemd-networkd[1391]: lxc_health: Link UP
Jan 17 00:33:20.515537 systemd-networkd[1391]: lxc_health: Gained carrier
Jan 17 00:33:20.877418 systemd-networkd[1391]: lxc864f742c6294: Link UP
Jan 17 00:33:20.924362 kernel: eth0: renamed from tmpc099f
Jan 17 00:33:20.943154 systemd-networkd[1391]: lxc864f742c6294: Gained carrier
Jan 17 00:33:20.943844 systemd-networkd[1391]: lxc435d4ada2479: Link UP
Jan 17 00:33:20.966426 kernel: eth0: renamed from tmpf85d6
Jan 17 00:33:20.977151 systemd-networkd[1391]: lxc435d4ada2479: Gained carrier
Jan 17 00:33:21.599257 systemd[1]: run-containerd-runc-k8s.io-f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab-runc.iEofuc.mount: Deactivated successfully.
Jan 17 00:33:21.795309 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Jan 17 00:33:21.951529 kubelet[2577]: E0117 00:33:21.950108 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:22.374466 systemd-networkd[1391]: lxc864f742c6294: Gained IPv6LL
Jan 17 00:33:22.733405 kubelet[2577]: E0117 00:33:22.728917 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:22.820522 systemd-networkd[1391]: lxc435d4ada2479: Gained IPv6LL
Jan 17 00:33:26.374574 systemd[1]: run-containerd-runc-k8s.io-f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab-runc.Y8O36e.mount: Deactivated successfully.
Jan 17 00:33:28.726096 sudo[1636]: pam_unix(sudo:session): session closed for user root
Jan 17 00:33:28.733451 sshd[1632]: pam_unix(sshd:session): session closed for user core
Jan 17 00:33:28.741421 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:33:28.741656 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:36400.service: Deactivated successfully.
Jan 17 00:33:28.746301 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:33:28.746608 systemd[1]: session-7.scope: Consumed 16.690s CPU time, 163.6M memory peak, 0B memory swap peak.
Jan 17 00:33:28.749919 systemd-logind[1449]: Removed session 7.
Jan 17 00:33:30.475763 containerd[1457]: time="2026-01-17T00:33:30.475588790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:33:30.475763 containerd[1457]: time="2026-01-17T00:33:30.475701570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:33:30.476841 containerd[1457]: time="2026-01-17T00:33:30.475743578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:33:30.476841 containerd[1457]: time="2026-01-17T00:33:30.475822175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:33:30.476841 containerd[1457]: time="2026-01-17T00:33:30.475844167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:33:30.476841 containerd[1457]: time="2026-01-17T00:33:30.475964982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:33:30.476841 containerd[1457]: time="2026-01-17T00:33:30.475747717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:33:30.476841 containerd[1457]: time="2026-01-17T00:33:30.475892819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:33:30.520943 systemd[1]: Started cri-containerd-c099f2cf5711b6168b211b69042707a295a64c0d3f78435d3096297e41d1bff1.scope - libcontainer container c099f2cf5711b6168b211b69042707a295a64c0d3f78435d3096297e41d1bff1.
Jan 17 00:33:30.525458 systemd[1]: Started cri-containerd-f85d69e648dc81f1dbb9e41c238a57dc33626102b5f395066c96d12f02aa6a14.scope - libcontainer container f85d69e648dc81f1dbb9e41c238a57dc33626102b5f395066c96d12f02aa6a14.
Jan 17 00:33:30.553690 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 17 00:33:30.559368 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 17 00:33:30.615641 containerd[1457]: time="2026-01-17T00:33:30.615246887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5qhn2,Uid:a9c25bfb-62c5-4050-9d8a-3f93af9f67a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f85d69e648dc81f1dbb9e41c238a57dc33626102b5f395066c96d12f02aa6a14\""
Jan 17 00:33:30.616899 kubelet[2577]: E0117 00:33:30.616777 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:30.633411 containerd[1457]: time="2026-01-17T00:33:30.632507106Z" level=info msg="CreateContainer within sandbox \"f85d69e648dc81f1dbb9e41c238a57dc33626102b5f395066c96d12f02aa6a14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 00:33:30.651864 containerd[1457]: time="2026-01-17T00:33:30.651078923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x6dqb,Uid:d9607691-7b5f-49b6-8fad-ab102047a26d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c099f2cf5711b6168b211b69042707a295a64c0d3f78435d3096297e41d1bff1\""
Jan 17 00:33:30.652515 kubelet[2577]: E0117 00:33:30.652441 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:30.670112 containerd[1457]: time="2026-01-17T00:33:30.669933629Z" level=info msg="CreateContainer within sandbox \"c099f2cf5711b6168b211b69042707a295a64c0d3f78435d3096297e41d1bff1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 00:33:30.712659 containerd[1457]: time="2026-01-17T00:33:30.711657455Z" level=info msg="CreateContainer within sandbox \"c099f2cf5711b6168b211b69042707a295a64c0d3f78435d3096297e41d1bff1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c3c7fad5139ab7a9722b99dd3d94ad33c96a3ca596e92698705a9ea004b01b0\""
Jan 17 00:33:30.713156 containerd[1457]: time="2026-01-17T00:33:30.713127615Z" level=info msg="StartContainer for \"1c3c7fad5139ab7a9722b99dd3d94ad33c96a3ca596e92698705a9ea004b01b0\""
Jan 17 00:33:30.720970 containerd[1457]: time="2026-01-17T00:33:30.720424624Z" level=info msg="CreateContainer within sandbox \"f85d69e648dc81f1dbb9e41c238a57dc33626102b5f395066c96d12f02aa6a14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e40aab35e1ec5e4a43e2f579518a0bff3c7efd0872ea3d6da566de8583ce73fe\""
Jan 17 00:33:30.722808 containerd[1457]: time="2026-01-17T00:33:30.722714486Z" level=info msg="StartContainer for \"e40aab35e1ec5e4a43e2f579518a0bff3c7efd0872ea3d6da566de8583ce73fe\""
Jan 17 00:33:30.804600 systemd[1]: Started cri-containerd-e40aab35e1ec5e4a43e2f579518a0bff3c7efd0872ea3d6da566de8583ce73fe.scope - libcontainer container e40aab35e1ec5e4a43e2f579518a0bff3c7efd0872ea3d6da566de8583ce73fe.
Jan 17 00:33:30.814837 systemd[1]: Started cri-containerd-1c3c7fad5139ab7a9722b99dd3d94ad33c96a3ca596e92698705a9ea004b01b0.scope - libcontainer container 1c3c7fad5139ab7a9722b99dd3d94ad33c96a3ca596e92698705a9ea004b01b0.
Jan 17 00:33:30.887834 containerd[1457]: time="2026-01-17T00:33:30.887253101Z" level=info msg="StartContainer for \"e40aab35e1ec5e4a43e2f579518a0bff3c7efd0872ea3d6da566de8583ce73fe\" returns successfully"
Jan 17 00:33:30.902522 containerd[1457]: time="2026-01-17T00:33:30.899826229Z" level=info msg="StartContainer for \"1c3c7fad5139ab7a9722b99dd3d94ad33c96a3ca596e92698705a9ea004b01b0\" returns successfully"
Jan 17 00:33:31.320782 kubelet[2577]: E0117 00:33:31.320703 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:31.333183 kubelet[2577]: E0117 00:33:31.330778 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:31.434200 kubelet[2577]: I0117 00:33:31.433796 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x6dqb" podStartSLOduration=60.433773005 podStartE2EDuration="1m0.433773005s" podCreationTimestamp="2026-01-17 00:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:33:31.372202456 +0000 UTC m=+62.600793412" watchObservedRunningTime="2026-01-17 00:33:31.433773005 +0000 UTC m=+62.662363941"
Jan 17 00:33:31.492827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368240165.mount: Deactivated successfully.
Jan 17 00:33:32.335277 kubelet[2577]: E0117 00:33:32.334550 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:32.337272 kubelet[2577]: E0117 00:33:32.337191 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:32.385901 kubelet[2577]: I0117 00:33:32.385674 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5qhn2" podStartSLOduration=61.385654995 podStartE2EDuration="1m1.385654995s" podCreationTimestamp="2026-01-17 00:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:33:31.434161659 +0000 UTC m=+62.662752615" watchObservedRunningTime="2026-01-17 00:33:32.385654995 +0000 UTC m=+63.614245942"
Jan 17 00:33:33.340106 kubelet[2577]: E0117 00:33:33.335767 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:33.340106 kubelet[2577]: E0117 00:33:33.338211 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:44.234861 kubelet[2577]: E0117 00:33:44.234202 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:33:48.229171 kubelet[2577]: E0117 00:33:48.227264 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:34:02.239713 kubelet[2577]: E0117 00:34:02.232511 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:34:04.231430 kubelet[2577]: E0117 00:34:04.229594 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:34:36.225393 kubelet[2577]: E0117 00:34:36.225181 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:34:42.228082 kubelet[2577]: E0117 00:34:42.227452 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:34:46.234573 kubelet[2577]: E0117 00:34:46.233450 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:34:53.222002 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:54736.service - OpenSSH per-connection server daemon (10.0.0.1:54736).
Jan 17 00:34:53.414151 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 54736 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:34:53.408048 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:34:53.422578 systemd-logind[1449]: New session 8 of user core.
Jan 17 00:34:53.431872 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 00:34:53.859541 sshd[4116]: pam_unix(sshd:session): session closed for user core
Jan 17 00:34:53.867941 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:54736.service: Deactivated successfully.
Jan 17 00:34:53.871195 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:34:53.880406 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:34:53.888280 systemd-logind[1449]: Removed session 8. Jan 17 00:34:57.229250 kubelet[2577]: E0117 00:34:57.229058 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:34:58.235482 kubelet[2577]: E0117 00:34:58.229808 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:34:58.904151 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:54740.service - OpenSSH per-connection server daemon (10.0.0.1:54740). Jan 17 00:34:59.048568 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 54740 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:34:59.069492 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:34:59.090117 systemd-logind[1449]: New session 9 of user core. Jan 17 00:34:59.097737 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:34:59.433273 sshd[4132]: pam_unix(sshd:session): session closed for user core Jan 17 00:34:59.454585 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:54740.service: Deactivated successfully. Jan 17 00:34:59.470697 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:34:59.474751 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:34:59.478119 systemd-logind[1449]: Removed session 9. Jan 17 00:35:04.482738 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:51686.service - OpenSSH per-connection server daemon (10.0.0.1:51686). 
Jan 17 00:35:04.665216 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 51686 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:04.668968 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:04.691383 systemd-logind[1449]: New session 10 of user core. Jan 17 00:35:04.710825 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:35:05.004573 sshd[4150]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:05.015010 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:51686.service: Deactivated successfully. Jan 17 00:35:05.025532 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:35:05.026865 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:35:05.033625 systemd-logind[1449]: Removed session 10. Jan 17 00:35:08.228404 kubelet[2577]: E0117 00:35:08.226928 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:35:10.053271 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:51700.service - OpenSSH per-connection server daemon (10.0.0.1:51700). Jan 17 00:35:10.136988 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 51700 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:10.141083 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:10.162629 systemd-logind[1449]: New session 11 of user core. Jan 17 00:35:10.180540 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:35:10.497735 sshd[4166]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:10.507156 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:51700.service: Deactivated successfully. Jan 17 00:35:10.511228 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 17 00:35:10.513715 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:35:10.517526 systemd-logind[1449]: Removed session 11. Jan 17 00:35:11.231476 kubelet[2577]: E0117 00:35:11.231012 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:35:15.551249 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:55350.service - OpenSSH per-connection server daemon (10.0.0.1:55350). Jan 17 00:35:15.629530 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 55350 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:15.640482 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:15.665885 systemd-logind[1449]: New session 12 of user core. Jan 17 00:35:15.678210 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:35:15.993168 sshd[4181]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:16.016689 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:55350.service: Deactivated successfully. Jan 17 00:35:16.026724 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:35:16.033042 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:35:16.035485 systemd-logind[1449]: Removed session 12. Jan 17 00:35:21.055964 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:55366.service - OpenSSH per-connection server daemon (10.0.0.1:55366). Jan 17 00:35:21.143861 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 55366 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:21.147911 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:21.171388 systemd-logind[1449]: New session 13 of user core. Jan 17 00:35:21.204635 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 17 00:35:21.459838 sshd[4196]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:21.467510 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:55366.service: Deactivated successfully. Jan 17 00:35:21.470002 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:35:21.475225 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:35:21.488473 systemd-logind[1449]: Removed session 13. Jan 17 00:35:26.536551 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:37850.service - OpenSSH per-connection server daemon (10.0.0.1:37850). Jan 17 00:35:26.651371 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 37850 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:26.653658 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:26.706677 systemd-logind[1449]: New session 14 of user core. Jan 17 00:35:26.730447 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:35:27.193455 sshd[4212]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:27.199198 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:37850.service: Deactivated successfully. Jan 17 00:35:27.200233 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:35:27.206559 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:35:27.211210 systemd-logind[1449]: Removed session 14. Jan 17 00:35:31.227252 kubelet[2577]: E0117 00:35:31.227062 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:35:32.255534 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:37858.service - OpenSSH per-connection server daemon (10.0.0.1:37858). 
Jan 17 00:35:32.371925 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 37858 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:32.375207 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:32.406117 systemd-logind[1449]: New session 15 of user core. Jan 17 00:35:32.424889 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:35:32.764663 sshd[4229]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:32.789071 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:37858.service: Deactivated successfully. Jan 17 00:35:32.800852 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:35:32.803616 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:35:32.809866 systemd-logind[1449]: Removed session 15. Jan 17 00:35:37.822806 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:37130.service - OpenSSH per-connection server daemon (10.0.0.1:37130). Jan 17 00:35:37.921475 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 37130 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:37.928465 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:37.956812 systemd-logind[1449]: New session 16 of user core. Jan 17 00:35:37.979848 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:35:38.486531 sshd[4246]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:38.501701 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:37130.service: Deactivated successfully. Jan 17 00:35:38.506042 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:35:38.508221 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:35:38.523140 systemd-logind[1449]: Removed session 16. 
Jan 17 00:35:43.522794 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:38274.service - OpenSSH per-connection server daemon (10.0.0.1:38274). Jan 17 00:35:43.639353 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 38274 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:43.642621 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:43.675861 systemd-logind[1449]: New session 17 of user core. Jan 17 00:35:43.681675 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:35:43.944044 sshd[4261]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:43.974300 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:38274.service: Deactivated successfully. Jan 17 00:35:43.978788 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:35:43.986819 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:35:43.995766 systemd-logind[1449]: Removed session 17. Jan 17 00:35:48.230249 kubelet[2577]: E0117 00:35:48.226806 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:35:49.014913 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:38278.service - OpenSSH per-connection server daemon (10.0.0.1:38278). Jan 17 00:35:49.136176 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 38278 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:49.134063 sshd[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:49.176027 systemd-logind[1449]: New session 18 of user core. Jan 17 00:35:49.194693 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:35:49.542990 sshd[4276]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:49.577893 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:38278.service: Deactivated successfully. 
Jan 17 00:35:49.583598 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:35:49.592734 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:35:49.618903 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:38288.service - OpenSSH per-connection server daemon (10.0.0.1:38288). Jan 17 00:35:49.621645 systemd-logind[1449]: Removed session 18. Jan 17 00:35:49.690053 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 38288 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:49.695824 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:49.715516 systemd-logind[1449]: New session 19 of user core. Jan 17 00:35:49.724799 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:35:50.132089 sshd[4292]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:50.155514 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:38288.service: Deactivated successfully. Jan 17 00:35:50.164119 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:35:50.173406 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:35:50.188465 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:38302.service - OpenSSH per-connection server daemon (10.0.0.1:38302). Jan 17 00:35:50.196697 systemd-logind[1449]: Removed session 19. Jan 17 00:35:50.292629 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 38302 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:50.296610 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:50.311233 systemd-logind[1449]: New session 20 of user core. Jan 17 00:35:50.317546 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 17 00:35:50.593012 sshd[4304]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:50.614624 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:38302.service: Deactivated successfully. Jan 17 00:35:50.619527 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:35:50.626229 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:35:50.628893 systemd-logind[1449]: Removed session 20. Jan 17 00:35:55.642474 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:36958.service - OpenSSH per-connection server daemon (10.0.0.1:36958). Jan 17 00:35:55.703639 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 36958 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:35:55.708245 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:35:55.724236 systemd-logind[1449]: New session 21 of user core. Jan 17 00:35:55.738763 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:35:56.030834 sshd[4319]: pam_unix(sshd:session): session closed for user core Jan 17 00:35:56.038042 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:36958.service: Deactivated successfully. Jan 17 00:35:56.041627 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:35:56.047177 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:35:56.052244 systemd-logind[1449]: Removed session 21. 
Jan 17 00:35:57.237970 kubelet[2577]: E0117 00:35:57.237249 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:35:58.226040 kubelet[2577]: E0117 00:35:58.225384 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:36:01.062831 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:36960.service - OpenSSH per-connection server daemon (10.0.0.1:36960). Jan 17 00:36:01.174652 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 36960 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:01.175528 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:01.193682 systemd-logind[1449]: New session 22 of user core. Jan 17 00:36:01.204244 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:36:01.480071 sshd[4335]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:01.487454 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:36960.service: Deactivated successfully. Jan 17 00:36:01.492497 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:36:01.494893 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:36:01.497375 systemd-logind[1449]: Removed session 22. Jan 17 00:36:06.546509 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:57658.service - OpenSSH per-connection server daemon (10.0.0.1:57658). Jan 17 00:36:06.652153 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 57658 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:06.660473 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:06.708466 systemd-logind[1449]: New session 23 of user core. 
Jan 17 00:36:06.731068 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:36:07.241629 sshd[4351]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:07.264053 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:57658.service: Deactivated successfully. Jan 17 00:36:07.288366 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:36:07.291833 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:36:07.295129 systemd-logind[1449]: Removed session 23. Jan 17 00:36:12.290433 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:57672.service - OpenSSH per-connection server daemon (10.0.0.1:57672). Jan 17 00:36:12.390408 sshd[4368]: Accepted publickey for core from 10.0.0.1 port 57672 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:12.393257 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:12.411785 systemd-logind[1449]: New session 24 of user core. Jan 17 00:36:12.432747 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:36:12.834145 sshd[4368]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:12.853597 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:57672.service: Deactivated successfully. Jan 17 00:36:12.881300 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:36:12.895107 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:36:12.896874 systemd-logind[1449]: Removed session 24. Jan 17 00:36:15.227282 kubelet[2577]: E0117 00:36:15.227171 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:36:17.911040 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:36350.service - OpenSSH per-connection server daemon (10.0.0.1:36350). 
Jan 17 00:36:18.016596 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 36350 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:18.020109 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:18.046792 systemd-logind[1449]: New session 25 of user core. Jan 17 00:36:18.085859 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:36:18.227500 kubelet[2577]: E0117 00:36:18.226198 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:36:18.500063 sshd[4382]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:18.510093 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:36350.service: Deactivated successfully. Jan 17 00:36:18.528110 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:36:18.541575 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:36:18.550030 systemd-logind[1449]: Removed session 25. Jan 17 00:36:21.386870 kubelet[2577]: E0117 00:36:21.386589 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:36:23.562936 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:52070.service - OpenSSH per-connection server daemon (10.0.0.1:52070). Jan 17 00:36:23.632274 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 52070 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:23.634749 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:23.662754 systemd-logind[1449]: New session 26 of user core. Jan 17 00:36:23.673374 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 17 00:36:23.985495 sshd[4397]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:24.000390 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:52070.service: Deactivated successfully. Jan 17 00:36:24.005227 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:36:24.017439 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:36:24.022069 systemd-logind[1449]: Removed session 26. Jan 17 00:36:27.249372 kubelet[2577]: E0117 00:36:27.249239 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:36:29.055196 systemd[1]: Started sshd@26-10.0.0.79:22-10.0.0.1:52086.service - OpenSSH per-connection server daemon (10.0.0.1:52086). Jan 17 00:36:29.228787 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 52086 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:29.227010 sshd[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:29.265256 systemd-logind[1449]: New session 27 of user core. Jan 17 00:36:29.284006 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 00:36:29.627358 sshd[4418]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:29.644751 systemd[1]: sshd@26-10.0.0.79:22-10.0.0.1:52086.service: Deactivated successfully. Jan 17 00:36:29.659686 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:36:29.675441 systemd-logind[1449]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:36:29.692172 systemd[1]: Started sshd@27-10.0.0.79:22-10.0.0.1:52098.service - OpenSSH per-connection server daemon (10.0.0.1:52098). Jan 17 00:36:29.697682 systemd-logind[1449]: Removed session 27. 
Jan 17 00:36:29.773717 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 52098 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:29.777247 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:29.809776 systemd-logind[1449]: New session 28 of user core. Jan 17 00:36:29.819845 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 00:36:30.725591 sshd[4435]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:30.738838 systemd[1]: sshd@27-10.0.0.79:22-10.0.0.1:52098.service: Deactivated successfully. Jan 17 00:36:30.745795 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 00:36:30.761741 systemd-logind[1449]: Session 28 logged out. Waiting for processes to exit. Jan 17 00:36:30.787206 systemd[1]: Started sshd@28-10.0.0.79:22-10.0.0.1:52104.service - OpenSSH per-connection server daemon (10.0.0.1:52104). Jan 17 00:36:30.793479 systemd-logind[1449]: Removed session 28. Jan 17 00:36:30.965834 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 52104 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:30.984635 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:31.013757 systemd-logind[1449]: New session 29 of user core. Jan 17 00:36:31.044033 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 00:36:32.693182 sshd[4447]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:32.718124 systemd[1]: sshd@28-10.0.0.79:22-10.0.0.1:52104.service: Deactivated successfully. Jan 17 00:36:32.720650 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 00:36:32.724575 systemd-logind[1449]: Session 29 logged out. Waiting for processes to exit. Jan 17 00:36:32.748817 systemd[1]: Started sshd@29-10.0.0.79:22-10.0.0.1:41146.service - OpenSSH per-connection server daemon (10.0.0.1:41146). 
Jan 17 00:36:32.757041 systemd-logind[1449]: Removed session 29. Jan 17 00:36:32.867083 sshd[4473]: Accepted publickey for core from 10.0.0.1 port 41146 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:32.872291 sshd[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:32.895358 systemd-logind[1449]: New session 30 of user core. Jan 17 00:36:32.909513 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 17 00:36:33.766207 sshd[4473]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:33.784932 systemd[1]: sshd@29-10.0.0.79:22-10.0.0.1:41146.service: Deactivated successfully. Jan 17 00:36:33.796036 systemd[1]: session-30.scope: Deactivated successfully. Jan 17 00:36:33.799692 systemd-logind[1449]: Session 30 logged out. Waiting for processes to exit. Jan 17 00:36:33.826075 systemd[1]: Started sshd@30-10.0.0.79:22-10.0.0.1:41148.service - OpenSSH per-connection server daemon (10.0.0.1:41148). Jan 17 00:36:33.839409 systemd-logind[1449]: Removed session 30. Jan 17 00:36:33.986293 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 41148 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:33.995210 sshd[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:34.021016 systemd-logind[1449]: New session 31 of user core. Jan 17 00:36:34.035447 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 17 00:36:34.509280 sshd[4488]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:34.526125 systemd[1]: sshd@30-10.0.0.79:22-10.0.0.1:41148.service: Deactivated successfully. Jan 17 00:36:34.529742 systemd[1]: session-31.scope: Deactivated successfully. Jan 17 00:36:34.538438 systemd-logind[1449]: Session 31 logged out. Waiting for processes to exit. Jan 17 00:36:34.546244 systemd-logind[1449]: Removed session 31. 
Jan 17 00:36:39.570793 systemd[1]: Started sshd@31-10.0.0.79:22-10.0.0.1:41164.service - OpenSSH per-connection server daemon (10.0.0.1:41164). Jan 17 00:36:39.728124 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 41164 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:39.731514 sshd[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:39.772497 systemd-logind[1449]: New session 32 of user core. Jan 17 00:36:39.787083 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 17 00:36:40.397174 sshd[4504]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:40.423091 systemd[1]: sshd@31-10.0.0.79:22-10.0.0.1:41164.service: Deactivated successfully. Jan 17 00:36:40.427454 systemd[1]: session-32.scope: Deactivated successfully. Jan 17 00:36:40.439194 systemd-logind[1449]: Session 32 logged out. Waiting for processes to exit. Jan 17 00:36:40.448857 systemd-logind[1449]: Removed session 32. Jan 17 00:36:41.233117 kubelet[2577]: E0117 00:36:41.231259 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:36:45.461380 systemd[1]: Started sshd@32-10.0.0.79:22-10.0.0.1:46430.service - OpenSSH per-connection server daemon (10.0.0.1:46430). Jan 17 00:36:45.581746 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 46430 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:45.588868 sshd[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:45.608646 systemd-logind[1449]: New session 33 of user core. Jan 17 00:36:45.623928 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 17 00:36:45.938830 sshd[4518]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:45.947037 systemd[1]: sshd@32-10.0.0.79:22-10.0.0.1:46430.service: Deactivated successfully. 
Jan 17 00:36:45.958148 systemd[1]: session-33.scope: Deactivated successfully. Jan 17 00:36:45.964721 systemd-logind[1449]: Session 33 logged out. Waiting for processes to exit. Jan 17 00:36:45.969820 systemd-logind[1449]: Removed session 33. Jan 17 00:36:50.979025 systemd[1]: Started sshd@33-10.0.0.79:22-10.0.0.1:46444.service - OpenSSH per-connection server daemon (10.0.0.1:46444). Jan 17 00:36:51.139051 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 46444 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:51.145961 sshd[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:51.200004 systemd-logind[1449]: New session 34 of user core. Jan 17 00:36:51.214467 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 17 00:36:51.571684 sshd[4532]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:51.588962 systemd[1]: sshd@33-10.0.0.79:22-10.0.0.1:46444.service: Deactivated successfully. Jan 17 00:36:51.595310 systemd[1]: session-34.scope: Deactivated successfully. Jan 17 00:36:51.602058 systemd-logind[1449]: Session 34 logged out. Waiting for processes to exit. Jan 17 00:36:51.604697 systemd-logind[1449]: Removed session 34. Jan 17 00:36:56.658801 systemd[1]: Started sshd@34-10.0.0.79:22-10.0.0.1:57394.service - OpenSSH per-connection server daemon (10.0.0.1:57394). Jan 17 00:36:56.874498 sshd[4546]: Accepted publickey for core from 10.0.0.1 port 57394 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:36:56.880512 sshd[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:36:56.906638 systemd-logind[1449]: New session 35 of user core. Jan 17 00:36:56.929990 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 17 00:36:57.196489 sshd[4546]: pam_unix(sshd:session): session closed for user core Jan 17 00:36:57.203602 systemd[1]: sshd@34-10.0.0.79:22-10.0.0.1:57394.service: Deactivated successfully. Jan 17 00:36:57.210240 systemd[1]: session-35.scope: Deactivated successfully. Jan 17 00:36:57.215172 systemd-logind[1449]: Session 35 logged out. Waiting for processes to exit. Jan 17 00:36:57.219591 systemd-logind[1449]: Removed session 35. Jan 17 00:37:01.232180 kubelet[2577]: E0117 00:37:01.226990 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:02.281496 systemd[1]: Started sshd@35-10.0.0.79:22-10.0.0.1:57398.service - OpenSSH per-connection server daemon (10.0.0.1:57398). Jan 17 00:37:02.399110 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 57398 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:37:02.401224 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:37:02.425100 systemd-logind[1449]: New session 36 of user core. Jan 17 00:37:02.445549 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 17 00:37:02.794569 sshd[4560]: pam_unix(sshd:session): session closed for user core Jan 17 00:37:02.807634 systemd[1]: sshd@35-10.0.0.79:22-10.0.0.1:57398.service: Deactivated successfully. Jan 17 00:37:02.811509 systemd[1]: session-36.scope: Deactivated successfully. Jan 17 00:37:02.820091 systemd-logind[1449]: Session 36 logged out. Waiting for processes to exit. Jan 17 00:37:02.824385 systemd-logind[1449]: Removed session 36. Jan 17 00:37:07.841003 systemd[1]: Started sshd@36-10.0.0.79:22-10.0.0.1:36264.service - OpenSSH per-connection server daemon (10.0.0.1:36264). 
Jan 17 00:37:07.895788 sshd[4578]: Accepted publickey for core from 10.0.0.1 port 36264 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:37:07.898174 sshd[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:37:07.910664 systemd-logind[1449]: New session 37 of user core. Jan 17 00:37:07.920695 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 17 00:37:08.132749 sshd[4578]: pam_unix(sshd:session): session closed for user core Jan 17 00:37:08.141142 systemd[1]: sshd@36-10.0.0.79:22-10.0.0.1:36264.service: Deactivated successfully. Jan 17 00:37:08.150283 systemd[1]: session-37.scope: Deactivated successfully. Jan 17 00:37:08.153730 systemd-logind[1449]: Session 37 logged out. Waiting for processes to exit. Jan 17 00:37:08.159519 systemd-logind[1449]: Removed session 37. Jan 17 00:37:09.237961 kubelet[2577]: E0117 00:37:09.234848 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:13.183976 systemd[1]: Started sshd@37-10.0.0.79:22-10.0.0.1:51270.service - OpenSSH per-connection server daemon (10.0.0.1:51270). Jan 17 00:37:13.283109 sshd[4593]: Accepted publickey for core from 10.0.0.1 port 51270 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:37:13.285515 sshd[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:37:13.298796 systemd-logind[1449]: New session 38 of user core. Jan 17 00:37:13.308873 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 17 00:37:13.522371 sshd[4593]: pam_unix(sshd:session): session closed for user core Jan 17 00:37:13.528987 systemd[1]: sshd@37-10.0.0.79:22-10.0.0.1:51270.service: Deactivated successfully. Jan 17 00:37:13.532169 systemd[1]: session-38.scope: Deactivated successfully. 
Jan 17 00:37:13.540770 systemd-logind[1449]: Session 38 logged out. Waiting for processes to exit.
Jan 17 00:37:13.546062 systemd-logind[1449]: Removed session 38.
Jan 17 00:37:18.539081 systemd[1]: Started sshd@38-10.0.0.79:22-10.0.0.1:51286.service - OpenSSH per-connection server daemon (10.0.0.1:51286).
Jan 17 00:37:18.624498 sshd[4607]: Accepted publickey for core from 10.0.0.1 port 51286 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:37:18.630639 sshd[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:37:18.649542 systemd-logind[1449]: New session 39 of user core.
Jan 17 00:37:18.660054 systemd[1]: Started session-39.scope - Session 39 of User core.
Jan 17 00:37:18.940070 sshd[4607]: pam_unix(sshd:session): session closed for user core
Jan 17 00:37:18.968595 systemd[1]: sshd@38-10.0.0.79:22-10.0.0.1:51286.service: Deactivated successfully.
Jan 17 00:37:18.974762 systemd[1]: session-39.scope: Deactivated successfully.
Jan 17 00:37:18.987016 systemd-logind[1449]: Session 39 logged out. Waiting for processes to exit.
Jan 17 00:37:19.003100 systemd[1]: Started sshd@39-10.0.0.79:22-10.0.0.1:51288.service - OpenSSH per-connection server daemon (10.0.0.1:51288).
Jan 17 00:37:19.007652 systemd-logind[1449]: Removed session 39.
Jan 17 00:37:19.060215 sshd[4622]: Accepted publickey for core from 10.0.0.1 port 51288 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:37:19.071546 sshd[4622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:37:19.088757 systemd-logind[1449]: New session 40 of user core.
Jan 17 00:37:19.104441 systemd[1]: Started session-40.scope - Session 40 of User core.
Jan 17 00:37:22.139302 containerd[1457]: time="2026-01-17T00:37:22.138613516Z" level=info msg="StopContainer for \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\" with timeout 30 (s)"
Jan 17 00:37:22.141395 containerd[1457]: time="2026-01-17T00:37:22.139957634Z" level=info msg="Stop container \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\" with signal terminated"
Jan 17 00:37:22.332537 systemd[1]: cri-containerd-47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5.scope: Deactivated successfully.
Jan 17 00:37:22.347498 systemd[1]: cri-containerd-47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5.scope: Consumed 1.876s CPU time.
Jan 17 00:37:22.405289 containerd[1457]: time="2026-01-17T00:37:22.405130485Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:37:22.494207 containerd[1457]: time="2026-01-17T00:37:22.493992281Z" level=info msg="StopContainer for \"f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab\" with timeout 2 (s)"
Jan 17 00:37:22.497217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5-rootfs.mount: Deactivated successfully.
Jan 17 00:37:22.499266 containerd[1457]: time="2026-01-17T00:37:22.499168482Z" level=info msg="Stop container \"f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab\" with signal terminated"
Jan 17 00:37:22.533966 systemd-networkd[1391]: lxc_health: Link DOWN
Jan 17 00:37:22.533979 systemd-networkd[1391]: lxc_health: Lost carrier
Jan 17 00:37:22.575964 containerd[1457]: time="2026-01-17T00:37:22.572776687Z" level=info msg="shim disconnected" id=47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5 namespace=k8s.io
Jan 17 00:37:22.579359 containerd[1457]: time="2026-01-17T00:37:22.579249928Z" level=warning msg="cleaning up after shim disconnected" id=47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5 namespace=k8s.io
Jan 17 00:37:22.579359 containerd[1457]: time="2026-01-17T00:37:22.579299350Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:37:22.629803 systemd[1]: cri-containerd-f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab.scope: Deactivated successfully.
Jan 17 00:37:22.630618 systemd[1]: cri-containerd-f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab.scope: Consumed 18.822s CPU time.
Jan 17 00:37:22.741140 containerd[1457]: time="2026-01-17T00:37:22.739790521Z" level=info msg="StopContainer for \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\" returns successfully"
Jan 17 00:37:22.749016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab-rootfs.mount: Deactivated successfully.
Jan 17 00:37:22.759290 containerd[1457]: time="2026-01-17T00:37:22.757791975Z" level=info msg="StopPodSandbox for \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\""
Jan 17 00:37:22.759290 containerd[1457]: time="2026-01-17T00:37:22.757846417Z" level=info msg="Container to stop \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:37:22.776587 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464-shm.mount: Deactivated successfully.
Jan 17 00:37:22.793975 containerd[1457]: time="2026-01-17T00:37:22.793457281Z" level=info msg="shim disconnected" id=f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab namespace=k8s.io
Jan 17 00:37:22.793975 containerd[1457]: time="2026-01-17T00:37:22.793540656Z" level=warning msg="cleaning up after shim disconnected" id=f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab namespace=k8s.io
Jan 17 00:37:22.793975 containerd[1457]: time="2026-01-17T00:37:22.793556546Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:37:22.807857 systemd[1]: cri-containerd-05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464.scope: Deactivated successfully.
Jan 17 00:37:22.930222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464-rootfs.mount: Deactivated successfully.
Jan 17 00:37:23.037018 containerd[1457]: time="2026-01-17T00:37:23.020202334Z" level=info msg="StopContainer for \"f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab\" returns successfully"
Jan 17 00:37:23.037018 containerd[1457]: time="2026-01-17T00:37:23.020765635Z" level=info msg="StopPodSandbox for \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\""
Jan 17 00:37:23.037018 containerd[1457]: time="2026-01-17T00:37:23.020793597Z" level=info msg="Container to stop \"a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:37:23.037018 containerd[1457]: time="2026-01-17T00:37:23.020809066Z" level=info msg="Container to stop \"a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:37:23.037018 containerd[1457]: time="2026-01-17T00:37:23.020822161Z" level=info msg="Container to stop \"c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:37:23.037018 containerd[1457]: time="2026-01-17T00:37:23.020835466Z" level=info msg="Container to stop \"f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:37:23.037018 containerd[1457]: time="2026-01-17T00:37:23.020850153Z" level=info msg="Container to stop \"b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:37:23.052383 containerd[1457]: time="2026-01-17T00:37:23.052124982Z" level=info msg="shim disconnected" id=05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464 namespace=k8s.io
Jan 17 00:37:23.052682 containerd[1457]: time="2026-01-17T00:37:23.052578579Z" level=warning msg="cleaning up after shim disconnected" id=05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464 namespace=k8s.io
Jan 17 00:37:23.052682 containerd[1457]: time="2026-01-17T00:37:23.052598947Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:37:23.091868 systemd[1]: cri-containerd-d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba.scope: Deactivated successfully.
Jan 17 00:37:23.178027 containerd[1457]: time="2026-01-17T00:37:23.175201826Z" level=info msg="TearDown network for sandbox \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\" successfully"
Jan 17 00:37:23.178027 containerd[1457]: time="2026-01-17T00:37:23.175239256Z" level=info msg="StopPodSandbox for \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\" returns successfully"
Jan 17 00:37:23.231984 kubelet[2577]: E0117 00:37:23.230412 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:37:23.284189 containerd[1457]: time="2026-01-17T00:37:23.283466975Z" level=info msg="shim disconnected" id=d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba namespace=k8s.io
Jan 17 00:37:23.284189 containerd[1457]: time="2026-01-17T00:37:23.283528901Z" level=warning msg="cleaning up after shim disconnected" id=d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba namespace=k8s.io
Jan 17 00:37:23.284189 containerd[1457]: time="2026-01-17T00:37:23.283541384Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:37:23.309717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba-rootfs.mount: Deactivated successfully.
Jan 17 00:37:23.310244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba-shm.mount: Deactivated successfully.
Jan 17 00:37:23.351653 containerd[1457]: time="2026-01-17T00:37:23.350032620Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:37:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:37:23.351793 kubelet[2577]: I0117 00:37:23.351692 2577 scope.go:117] "RemoveContainer" containerID="47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5"
Jan 17 00:37:23.358607 containerd[1457]: time="2026-01-17T00:37:23.358564262Z" level=info msg="TearDown network for sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" successfully"
Jan 17 00:37:23.358763 containerd[1457]: time="2026-01-17T00:37:23.358744218Z" level=info msg="StopPodSandbox for \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" returns successfully"
Jan 17 00:37:23.375219 containerd[1457]: time="2026-01-17T00:37:23.374144530Z" level=info msg="RemoveContainer for \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\""
Jan 17 00:37:23.411823 containerd[1457]: time="2026-01-17T00:37:23.411695433Z" level=info msg="RemoveContainer for \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\" returns successfully"
Jan 17 00:37:23.412216 kubelet[2577]: I0117 00:37:23.412083 2577 scope.go:117] "RemoveContainer" containerID="47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5"
Jan 17 00:37:23.413145 containerd[1457]: time="2026-01-17T00:37:23.413015629Z" level=error msg="ContainerStatus for \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\": not found"
Jan 17 00:37:23.413277 kubelet[2577]: E0117 00:37:23.413211 2577 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\": not found" containerID="47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5"
Jan 17 00:37:23.413382 kubelet[2577]: I0117 00:37:23.413249 2577 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5"} err="failed to get container status \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\": rpc error: code = NotFound desc = an error occurred when try to find container \"47f172c84bf75dac9f346d2341bf50d9336b7eade29125f3f9ef4bc360092bd5\": not found"
Jan 17 00:37:23.431283 kubelet[2577]: I0117 00:37:23.430864 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-lib-modules\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.431283 kubelet[2577]: I0117 00:37:23.430968 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79nlc\" (UniqueName: \"kubernetes.io/projected/2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb-kube-api-access-79nlc\") pod \"2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb\" (UID: \"2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb\") "
Jan 17 00:37:23.431283 kubelet[2577]: I0117 00:37:23.430994 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-host-proc-sys-net\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.431283 kubelet[2577]: I0117 00:37:23.431017 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-hostproc\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.431283 kubelet[2577]: I0117 00:37:23.431036 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-run\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.431283 kubelet[2577]: I0117 00:37:23.431060 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb-cilium-config-path\") pod \"2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb\" (UID: \"2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb\") "
Jan 17 00:37:23.431823 kubelet[2577]: I0117 00:37:23.431078 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cni-path\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.431823 kubelet[2577]: I0117 00:37:23.431097 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-hubble-tls\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.431823 kubelet[2577]: I0117 00:37:23.431118 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-host-proc-sys-kernel\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.431823 kubelet[2577]: I0117 00:37:23.431139 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-config-path\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.431823 kubelet[2577]: I0117 00:37:23.431163 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzsgw\" (UniqueName: \"kubernetes.io/projected/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-kube-api-access-tzsgw\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.431823 kubelet[2577]: I0117 00:37:23.431181 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-bpf-maps\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.432115 kubelet[2577]: I0117 00:37:23.431205 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-xtables-lock\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.432115 kubelet[2577]: I0117 00:37:23.431224 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-cgroup\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.432115 kubelet[2577]: I0117 00:37:23.431245 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-clustermesh-secrets\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.432115 kubelet[2577]: I0117 00:37:23.431266 2577 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-etc-cni-netd\") pod \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\" (UID: \"8a21c1ba-2a81-43e1-9d1d-c330429b6ab7\") "
Jan 17 00:37:23.432115 kubelet[2577]: I0117 00:37:23.431419 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:37:23.432115 kubelet[2577]: I0117 00:37:23.431468 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:37:23.432488 kubelet[2577]: I0117 00:37:23.431964 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:37:23.432488 kubelet[2577]: I0117 00:37:23.431997 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:37:23.432488 kubelet[2577]: I0117 00:37:23.432020 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-hostproc" (OuterVolumeSpecName: "hostproc") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:37:23.432488 kubelet[2577]: I0117 00:37:23.432039 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:37:23.433460 kubelet[2577]: I0117 00:37:23.432694 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cni-path" (OuterVolumeSpecName: "cni-path") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:37:23.435762 kubelet[2577]: I0117 00:37:23.435011 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:37:23.448094 kubelet[2577]: I0117 00:37:23.446101 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:37:23.448094 kubelet[2577]: I0117 00:37:23.446157 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:37:23.474472 kubelet[2577]: I0117 00:37:23.458518 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 17 00:37:23.486662 kubelet[2577]: I0117 00:37:23.477381 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb" (UID: "2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 17 00:37:23.486271 systemd[1]: var-lib-kubelet-pods-2ee0f3e2\x2daaad\x2d4f77\x2d9dd2\x2d79d79a23ffdb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d79nlc.mount: Deactivated successfully.
Jan 17 00:37:23.496486 systemd[1]: var-lib-kubelet-pods-8a21c1ba\x2d2a81\x2d43e1\x2d9d1d\x2dc330429b6ab7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 17 00:37:23.508105 systemd[1]: var-lib-kubelet-pods-8a21c1ba\x2d2a81\x2d43e1\x2d9d1d\x2dc330429b6ab7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtzsgw.mount: Deactivated successfully.
Jan 17 00:37:23.511492 systemd[1]: var-lib-kubelet-pods-8a21c1ba\x2d2a81\x2d43e1\x2d9d1d\x2dc330429b6ab7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 17 00:37:23.514710 kubelet[2577]: I0117 00:37:23.513052 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb-kube-api-access-79nlc" (OuterVolumeSpecName: "kube-api-access-79nlc") pod "2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb" (UID: "2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb"). InnerVolumeSpecName "kube-api-access-79nlc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:37:23.516507 kubelet[2577]: I0117 00:37:23.515466 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:37:23.516507 kubelet[2577]: I0117 00:37:23.515585 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-kube-api-access-tzsgw" (OuterVolumeSpecName: "kube-api-access-tzsgw") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "kube-api-access-tzsgw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:37:23.516507 kubelet[2577]: I0117 00:37:23.515596 2577 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" (UID: "8a21c1ba-2a81-43e1-9d1d-c330429b6ab7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 17 00:37:23.532264 kubelet[2577]: I0117 00:37:23.532216 2577 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.532751 kubelet[2577]: I0117 00:37:23.532481 2577 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.532751 kubelet[2577]: I0117 00:37:23.532530 2577 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-79nlc\" (UniqueName: \"kubernetes.io/projected/2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb-kube-api-access-79nlc\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.532751 kubelet[2577]: I0117 00:37:23.532561 2577 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535546 kubelet[2577]: I0117 00:37:23.533803 2577 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535546 kubelet[2577]: I0117 00:37:23.533849 2577 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535546 kubelet[2577]: I0117 00:37:23.533867 2577 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535546 kubelet[2577]: I0117 00:37:23.533883 2577 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535546 kubelet[2577]: I0117 00:37:23.533940 2577 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535546 kubelet[2577]: I0117 00:37:23.533959 2577 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535546 kubelet[2577]: I0117 00:37:23.533982 2577 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535546 kubelet[2577]: I0117 00:37:23.533995 2577 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tzsgw\" (UniqueName: \"kubernetes.io/projected/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-kube-api-access-tzsgw\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535854 kubelet[2577]: I0117 00:37:23.534009 2577 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535854 kubelet[2577]: I0117 00:37:23.534023 2577 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535854 kubelet[2577]: I0117 00:37:23.534036 2577 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.535854 kubelet[2577]: I0117 00:37:23.534049 2577 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 17 00:37:23.684488 systemd[1]: Removed slice kubepods-besteffort-pod2ee0f3e2_aaad_4f77_9dd2_79d79a23ffdb.slice - libcontainer container kubepods-besteffort-pod2ee0f3e2_aaad_4f77_9dd2_79d79a23ffdb.slice.
Jan 17 00:37:23.684878 systemd[1]: kubepods-besteffort-pod2ee0f3e2_aaad_4f77_9dd2_79d79a23ffdb.slice: Consumed 1.920s CPU time.
Jan 17 00:37:23.734200 sshd[4622]: pam_unix(sshd:session): session closed for user core
Jan 17 00:37:23.752421 systemd[1]: sshd@39-10.0.0.79:22-10.0.0.1:51288.service: Deactivated successfully.
Jan 17 00:37:23.756617 systemd[1]: session-40.scope: Deactivated successfully.
Jan 17 00:37:23.757074 systemd[1]: session-40.scope: Consumed 1.101s CPU time.
Jan 17 00:37:23.789853 systemd-logind[1449]: Session 40 logged out. Waiting for processes to exit.
Jan 17 00:37:23.819446 systemd[1]: Started sshd@40-10.0.0.79:22-10.0.0.1:44446.service - OpenSSH per-connection server daemon (10.0.0.1:44446).
Jan 17 00:37:23.830140 systemd-logind[1449]: Removed session 40.
Jan 17 00:37:24.025024 sshd[4781]: Accepted publickey for core from 10.0.0.1 port 44446 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0
Jan 17 00:37:24.050450 sshd[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:37:24.099786 systemd-logind[1449]: New session 41 of user core.
Jan 17 00:37:24.123843 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 17 00:37:24.448819 kubelet[2577]: I0117 00:37:24.441007 2577 scope.go:117] "RemoveContainer" containerID="f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab" Jan 17 00:37:24.488796 systemd[1]: Removed slice kubepods-burstable-pod8a21c1ba_2a81_43e1_9d1d_c330429b6ab7.slice - libcontainer container kubepods-burstable-pod8a21c1ba_2a81_43e1_9d1d_c330429b6ab7.slice. Jan 17 00:37:24.488979 systemd[1]: kubepods-burstable-pod8a21c1ba_2a81_43e1_9d1d_c330429b6ab7.slice: Consumed 19.204s CPU time. Jan 17 00:37:24.493753 containerd[1457]: time="2026-01-17T00:37:24.493274287Z" level=info msg="RemoveContainer for \"f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab\"" Jan 17 00:37:24.516024 containerd[1457]: time="2026-01-17T00:37:24.515124425Z" level=info msg="RemoveContainer for \"f8ff7a6e71bdd2832822062165f66bafa4dd9f12f9b5bf82975334bdf13cc4ab\" returns successfully" Jan 17 00:37:24.516158 kubelet[2577]: I0117 00:37:24.515616 2577 scope.go:117] "RemoveContainer" containerID="a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809" Jan 17 00:37:24.519006 containerd[1457]: time="2026-01-17T00:37:24.517643918Z" level=info msg="RemoveContainer for \"a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809\"" Jan 17 00:37:24.550015 containerd[1457]: time="2026-01-17T00:37:24.549520882Z" level=info msg="RemoveContainer for \"a17e582878ef6fddc59fd78a0c2d90c86bb21f0ebe704f9b768b2d1105a18809\" returns successfully" Jan 17 00:37:24.560286 kubelet[2577]: I0117 00:37:24.552829 2577 scope.go:117] "RemoveContainer" containerID="a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38" Jan 17 00:37:24.563645 containerd[1457]: time="2026-01-17T00:37:24.563604827Z" level=info msg="RemoveContainer for \"a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38\"" Jan 17 00:37:24.594373 containerd[1457]: time="2026-01-17T00:37:24.594172310Z" level=info msg="RemoveContainer for 
\"a9e32aa8a75bf66bfdf9a2e53c1931a704acf533922057f95df18d95fad12a38\" returns successfully" Jan 17 00:37:24.594609 kubelet[2577]: I0117 00:37:24.594544 2577 scope.go:117] "RemoveContainer" containerID="c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057" Jan 17 00:37:24.596838 containerd[1457]: time="2026-01-17T00:37:24.596502085Z" level=info msg="RemoveContainer for \"c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057\"" Jan 17 00:37:24.609470 containerd[1457]: time="2026-01-17T00:37:24.609422949Z" level=info msg="RemoveContainer for \"c1f7f25198424fcc3897786cb378d17f9bd5171519b61e7e8fa485f122f57057\" returns successfully" Jan 17 00:37:24.609869 kubelet[2577]: I0117 00:37:24.609840 2577 scope.go:117] "RemoveContainer" containerID="b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b" Jan 17 00:37:24.611721 containerd[1457]: time="2026-01-17T00:37:24.611690261Z" level=info msg="RemoveContainer for \"b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b\"" Jan 17 00:37:24.622807 containerd[1457]: time="2026-01-17T00:37:24.622169472Z" level=info msg="RemoveContainer for \"b6861dd24f2c561d443fde64afc7600c8eee47759a80d482e8ecad398c2d435b\" returns successfully" Jan 17 00:37:24.628700 kubelet[2577]: E0117 00:37:24.628642 2577 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:37:25.241022 kubelet[2577]: I0117 00:37:25.240687 2577 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb" path="/var/lib/kubelet/pods/2ee0f3e2-aaad-4f77-9dd2-79d79a23ffdb/volumes" Jan 17 00:37:25.243380 kubelet[2577]: I0117 00:37:25.242089 2577 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a21c1ba-2a81-43e1-9d1d-c330429b6ab7" path="/var/lib/kubelet/pods/8a21c1ba-2a81-43e1-9d1d-c330429b6ab7/volumes" Jan 17 00:37:25.834589 
sshd[4781]: pam_unix(sshd:session): session closed for user core Jan 17 00:37:25.879032 systemd[1]: sshd@40-10.0.0.79:22-10.0.0.1:44446.service: Deactivated successfully. Jan 17 00:37:25.900562 systemd[1]: session-41.scope: Deactivated successfully. Jan 17 00:37:25.914253 systemd-logind[1449]: Session 41 logged out. Waiting for processes to exit. Jan 17 00:37:25.953295 systemd[1]: Started sshd@41-10.0.0.79:22-10.0.0.1:44454.service - OpenSSH per-connection server daemon (10.0.0.1:44454). Jan 17 00:37:25.960829 systemd-logind[1449]: Removed session 41. Jan 17 00:37:26.060472 sshd[4794]: Accepted publickey for core from 10.0.0.1 port 44454 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:37:26.073740 sshd[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:37:26.083764 systemd[1]: Created slice kubepods-burstable-pod89188d9d_38f0_45e0_b668_cdfaafbcde73.slice - libcontainer container kubepods-burstable-pod89188d9d_38f0_45e0_b668_cdfaafbcde73.slice. Jan 17 00:37:26.116452 systemd-logind[1449]: New session 42 of user core. Jan 17 00:37:26.122846 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 17 00:37:26.129637 kubelet[2577]: I0117 00:37:26.129599 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89188d9d-38f0-45e0-b668-cdfaafbcde73-hostproc\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.130203 kubelet[2577]: I0117 00:37:26.130173 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89188d9d-38f0-45e0-b668-cdfaafbcde73-cilium-config-path\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.130304 kubelet[2577]: I0117 00:37:26.130285 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/89188d9d-38f0-45e0-b668-cdfaafbcde73-cilium-ipsec-secrets\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.130488 kubelet[2577]: I0117 00:37:26.130466 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6n7j\" (UniqueName: \"kubernetes.io/projected/89188d9d-38f0-45e0-b668-cdfaafbcde73-kube-api-access-s6n7j\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.133559 kubelet[2577]: I0117 00:37:26.130568 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89188d9d-38f0-45e0-b668-cdfaafbcde73-cilium-run\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.133760 kubelet[2577]: I0117 00:37:26.133735 2577 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89188d9d-38f0-45e0-b668-cdfaafbcde73-cni-path\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.133943 kubelet[2577]: I0117 00:37:26.133889 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89188d9d-38f0-45e0-b668-cdfaafbcde73-lib-modules\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.134064 kubelet[2577]: I0117 00:37:26.134043 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89188d9d-38f0-45e0-b668-cdfaafbcde73-host-proc-sys-net\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.134162 kubelet[2577]: I0117 00:37:26.134144 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89188d9d-38f0-45e0-b668-cdfaafbcde73-etc-cni-netd\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.134307 kubelet[2577]: I0117 00:37:26.134284 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89188d9d-38f0-45e0-b668-cdfaafbcde73-hubble-tls\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.134498 kubelet[2577]: I0117 00:37:26.134477 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/89188d9d-38f0-45e0-b668-cdfaafbcde73-xtables-lock\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.134827 kubelet[2577]: I0117 00:37:26.134797 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89188d9d-38f0-45e0-b668-cdfaafbcde73-host-proc-sys-kernel\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.135034 kubelet[2577]: I0117 00:37:26.135001 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89188d9d-38f0-45e0-b668-cdfaafbcde73-bpf-maps\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.135219 kubelet[2577]: I0117 00:37:26.135041 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89188d9d-38f0-45e0-b668-cdfaafbcde73-cilium-cgroup\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.135219 kubelet[2577]: I0117 00:37:26.135066 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89188d9d-38f0-45e0-b668-cdfaafbcde73-clustermesh-secrets\") pod \"cilium-cxz9b\" (UID: \"89188d9d-38f0-45e0-b668-cdfaafbcde73\") " pod="kube-system/cilium-cxz9b" Jan 17 00:37:26.214174 sshd[4794]: pam_unix(sshd:session): session closed for user core Jan 17 00:37:26.230866 systemd[1]: sshd@41-10.0.0.79:22-10.0.0.1:44454.service: Deactivated successfully. Jan 17 00:37:26.242990 systemd[1]: session-42.scope: Deactivated successfully. 
Jan 17 00:37:26.245682 systemd-logind[1449]: Session 42 logged out. Waiting for processes to exit. Jan 17 00:37:26.263219 systemd[1]: Started sshd@42-10.0.0.79:22-10.0.0.1:44464.service - OpenSSH per-connection server daemon (10.0.0.1:44464). Jan 17 00:37:26.329546 systemd-logind[1449]: Removed session 42. Jan 17 00:37:26.359585 sshd[4805]: Accepted publickey for core from 10.0.0.1 port 44464 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:37:26.368300 sshd[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:37:26.396123 systemd-logind[1449]: New session 43 of user core. Jan 17 00:37:26.417626 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 17 00:37:26.430476 kubelet[2577]: E0117 00:37:26.428543 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:26.430640 containerd[1457]: time="2026-01-17T00:37:26.430455530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxz9b,Uid:89188d9d-38f0-45e0-b668-cdfaafbcde73,Namespace:kube-system,Attempt:0,}" Jan 17 00:37:26.542306 containerd[1457]: time="2026-01-17T00:37:26.539525760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:37:26.542306 containerd[1457]: time="2026-01-17T00:37:26.540843198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:37:26.542306 containerd[1457]: time="2026-01-17T00:37:26.540868405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:37:26.542306 containerd[1457]: time="2026-01-17T00:37:26.541057057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:37:26.647217 systemd[1]: Started cri-containerd-e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01.scope - libcontainer container e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01. Jan 17 00:37:26.711394 kubelet[2577]: I0117 00:37:26.701271 2577 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:37:26Z","lastTransitionTime":"2026-01-17T00:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 17 00:37:26.908415 containerd[1457]: time="2026-01-17T00:37:26.908074569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxz9b,Uid:89188d9d-38f0-45e0-b668-cdfaafbcde73,Namespace:kube-system,Attempt:0,} returns sandbox id \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\"" Jan 17 00:37:26.910260 kubelet[2577]: E0117 00:37:26.909589 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:26.936285 containerd[1457]: time="2026-01-17T00:37:26.936232803Z" level=info msg="CreateContainer within sandbox \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:37:27.018411 containerd[1457]: time="2026-01-17T00:37:27.015272259Z" level=info msg="CreateContainer within sandbox \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1ff85ccc634973753dc8c8a840d56642352feb5edb600b47ed528f7acad6e2e6\"" Jan 17 00:37:27.018411 containerd[1457]: time="2026-01-17T00:37:27.016393301Z" level=info msg="StartContainer for 
\"1ff85ccc634973753dc8c8a840d56642352feb5edb600b47ed528f7acad6e2e6\"" Jan 17 00:37:27.168235 systemd[1]: Started cri-containerd-1ff85ccc634973753dc8c8a840d56642352feb5edb600b47ed528f7acad6e2e6.scope - libcontainer container 1ff85ccc634973753dc8c8a840d56642352feb5edb600b47ed528f7acad6e2e6. Jan 17 00:37:27.398193 containerd[1457]: time="2026-01-17T00:37:27.397875903Z" level=info msg="StartContainer for \"1ff85ccc634973753dc8c8a840d56642352feb5edb600b47ed528f7acad6e2e6\" returns successfully" Jan 17 00:37:27.520130 kubelet[2577]: E0117 00:37:27.514806 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:27.551251 systemd[1]: cri-containerd-1ff85ccc634973753dc8c8a840d56642352feb5edb600b47ed528f7acad6e2e6.scope: Deactivated successfully. Jan 17 00:37:27.706140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ff85ccc634973753dc8c8a840d56642352feb5edb600b47ed528f7acad6e2e6-rootfs.mount: Deactivated successfully. 
Jan 17 00:37:27.777173 containerd[1457]: time="2026-01-17T00:37:27.776686121Z" level=info msg="shim disconnected" id=1ff85ccc634973753dc8c8a840d56642352feb5edb600b47ed528f7acad6e2e6 namespace=k8s.io Jan 17 00:37:27.777173 containerd[1457]: time="2026-01-17T00:37:27.776759688Z" level=warning msg="cleaning up after shim disconnected" id=1ff85ccc634973753dc8c8a840d56642352feb5edb600b47ed528f7acad6e2e6 namespace=k8s.io Jan 17 00:37:27.777173 containerd[1457]: time="2026-01-17T00:37:27.776776078Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:37:28.515462 kubelet[2577]: E0117 00:37:28.514951 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:28.543455 containerd[1457]: time="2026-01-17T00:37:28.543373046Z" level=info msg="CreateContainer within sandbox \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:37:28.594130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280161448.mount: Deactivated successfully. Jan 17 00:37:28.609677 containerd[1457]: time="2026-01-17T00:37:28.609578549Z" level=info msg="CreateContainer within sandbox \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"220fad19342fc14c4f2031832132596ce1919bf46bc2c0af3da13d8dccef4adb\"" Jan 17 00:37:28.612412 containerd[1457]: time="2026-01-17T00:37:28.612103592Z" level=info msg="StartContainer for \"220fad19342fc14c4f2031832132596ce1919bf46bc2c0af3da13d8dccef4adb\"" Jan 17 00:37:28.690759 systemd[1]: Started cri-containerd-220fad19342fc14c4f2031832132596ce1919bf46bc2c0af3da13d8dccef4adb.scope - libcontainer container 220fad19342fc14c4f2031832132596ce1919bf46bc2c0af3da13d8dccef4adb. 
Jan 17 00:37:28.779695 containerd[1457]: time="2026-01-17T00:37:28.775511468Z" level=info msg="StartContainer for \"220fad19342fc14c4f2031832132596ce1919bf46bc2c0af3da13d8dccef4adb\" returns successfully" Jan 17 00:37:28.788862 systemd[1]: cri-containerd-220fad19342fc14c4f2031832132596ce1919bf46bc2c0af3da13d8dccef4adb.scope: Deactivated successfully. Jan 17 00:37:28.837068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-220fad19342fc14c4f2031832132596ce1919bf46bc2c0af3da13d8dccef4adb-rootfs.mount: Deactivated successfully. Jan 17 00:37:28.861605 containerd[1457]: time="2026-01-17T00:37:28.861500769Z" level=info msg="shim disconnected" id=220fad19342fc14c4f2031832132596ce1919bf46bc2c0af3da13d8dccef4adb namespace=k8s.io Jan 17 00:37:28.861605 containerd[1457]: time="2026-01-17T00:37:28.861579657Z" level=warning msg="cleaning up after shim disconnected" id=220fad19342fc14c4f2031832132596ce1919bf46bc2c0af3da13d8dccef4adb namespace=k8s.io Jan 17 00:37:28.861605 containerd[1457]: time="2026-01-17T00:37:28.861595005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:37:29.198397 containerd[1457]: time="2026-01-17T00:37:29.196149606Z" level=info msg="StopPodSandbox for \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\"" Jan 17 00:37:29.198397 containerd[1457]: time="2026-01-17T00:37:29.196298634Z" level=info msg="TearDown network for sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" successfully" Jan 17 00:37:29.198397 containerd[1457]: time="2026-01-17T00:37:29.196374306Z" level=info msg="StopPodSandbox for \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" returns successfully" Jan 17 00:37:29.198397 containerd[1457]: time="2026-01-17T00:37:29.196877084Z" level=info msg="RemovePodSandbox for \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\"" Jan 17 00:37:29.198397 containerd[1457]: time="2026-01-17T00:37:29.196959407Z" level=info msg="Forcibly stopping 
sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\"" Jan 17 00:37:29.198397 containerd[1457]: time="2026-01-17T00:37:29.197041139Z" level=info msg="TearDown network for sandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" successfully" Jan 17 00:37:29.218197 containerd[1457]: time="2026-01-17T00:37:29.215674791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:37:29.218197 containerd[1457]: time="2026-01-17T00:37:29.215782772Z" level=info msg="RemovePodSandbox \"d957678268b9d29653dd4a28ccdc87085b2a4d43c0a4168dc7cb9f4e953175ba\" returns successfully" Jan 17 00:37:29.218197 containerd[1457]: time="2026-01-17T00:37:29.217124514Z" level=info msg="StopPodSandbox for \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\"" Jan 17 00:37:29.218197 containerd[1457]: time="2026-01-17T00:37:29.217533697Z" level=info msg="TearDown network for sandbox \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\" successfully" Jan 17 00:37:29.218197 containerd[1457]: time="2026-01-17T00:37:29.217558684Z" level=info msg="StopPodSandbox for \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\" returns successfully" Jan 17 00:37:29.220801 containerd[1457]: time="2026-01-17T00:37:29.220452555Z" level=info msg="RemovePodSandbox for \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\"" Jan 17 00:37:29.220801 containerd[1457]: time="2026-01-17T00:37:29.220485977Z" level=info msg="Forcibly stopping sandbox \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\"" Jan 17 00:37:29.220801 containerd[1457]: time="2026-01-17T00:37:29.220566418Z" level=info msg="TearDown network for sandbox \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\" 
successfully" Jan 17 00:37:29.237402 containerd[1457]: time="2026-01-17T00:37:29.237175838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:37:29.237402 containerd[1457]: time="2026-01-17T00:37:29.237252782Z" level=info msg="RemovePodSandbox \"05cc24403621bb226d62eb7e0fc0dc9c58b1adf00d27c976aa2a75d2317b3464\" returns successfully" Jan 17 00:37:29.527553 kubelet[2577]: E0117 00:37:29.522224 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:29.535219 containerd[1457]: time="2026-01-17T00:37:29.535145209Z" level=info msg="CreateContainer within sandbox \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:37:29.620297 containerd[1457]: time="2026-01-17T00:37:29.619777465Z" level=info msg="CreateContainer within sandbox \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2ad9c59efad5f3aa76d45bb204499072391e73fc7b0bd9f570a0598f00881d67\"" Jan 17 00:37:29.620963 containerd[1457]: time="2026-01-17T00:37:29.620827047Z" level=info msg="StartContainer for \"2ad9c59efad5f3aa76d45bb204499072391e73fc7b0bd9f570a0598f00881d67\"" Jan 17 00:37:29.633839 kubelet[2577]: E0117 00:37:29.633734 2577 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:37:29.756078 systemd[1]: Started cri-containerd-2ad9c59efad5f3aa76d45bb204499072391e73fc7b0bd9f570a0598f00881d67.scope - libcontainer container 
2ad9c59efad5f3aa76d45bb204499072391e73fc7b0bd9f570a0598f00881d67. Jan 17 00:37:29.877998 containerd[1457]: time="2026-01-17T00:37:29.877469830Z" level=info msg="StartContainer for \"2ad9c59efad5f3aa76d45bb204499072391e73fc7b0bd9f570a0598f00881d67\" returns successfully" Jan 17 00:37:29.891063 systemd[1]: cri-containerd-2ad9c59efad5f3aa76d45bb204499072391e73fc7b0bd9f570a0598f00881d67.scope: Deactivated successfully. Jan 17 00:37:30.027719 containerd[1457]: time="2026-01-17T00:37:30.023362778Z" level=info msg="shim disconnected" id=2ad9c59efad5f3aa76d45bb204499072391e73fc7b0bd9f570a0598f00881d67 namespace=k8s.io Jan 17 00:37:30.027719 containerd[1457]: time="2026-01-17T00:37:30.023423191Z" level=warning msg="cleaning up after shim disconnected" id=2ad9c59efad5f3aa76d45bb204499072391e73fc7b0bd9f570a0598f00881d67 namespace=k8s.io Jan 17 00:37:30.027719 containerd[1457]: time="2026-01-17T00:37:30.023434582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:37:30.544867 kubelet[2577]: E0117 00:37:30.539647 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:30.591637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ad9c59efad5f3aa76d45bb204499072391e73fc7b0bd9f570a0598f00881d67-rootfs.mount: Deactivated successfully. 
Jan 17 00:37:30.616650 containerd[1457]: time="2026-01-17T00:37:30.615965685Z" level=info msg="CreateContainer within sandbox \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:37:30.686249 containerd[1457]: time="2026-01-17T00:37:30.686065449Z" level=info msg="CreateContainer within sandbox \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ff91730509ac5cacd0481b07d74d07771a48dcce320bf7cf78a4508872517f2f\"" Jan 17 00:37:30.696047 containerd[1457]: time="2026-01-17T00:37:30.694577758Z" level=info msg="StartContainer for \"ff91730509ac5cacd0481b07d74d07771a48dcce320bf7cf78a4508872517f2f\"" Jan 17 00:37:30.813761 systemd[1]: Started cri-containerd-ff91730509ac5cacd0481b07d74d07771a48dcce320bf7cf78a4508872517f2f.scope - libcontainer container ff91730509ac5cacd0481b07d74d07771a48dcce320bf7cf78a4508872517f2f. Jan 17 00:37:30.906376 systemd[1]: cri-containerd-ff91730509ac5cacd0481b07d74d07771a48dcce320bf7cf78a4508872517f2f.scope: Deactivated successfully. 
Jan 17 00:37:30.911952 containerd[1457]: time="2026-01-17T00:37:30.910006581Z" level=info msg="StartContainer for \"ff91730509ac5cacd0481b07d74d07771a48dcce320bf7cf78a4508872517f2f\" returns successfully" Jan 17 00:37:30.974193 containerd[1457]: time="2026-01-17T00:37:30.973687555Z" level=info msg="shim disconnected" id=ff91730509ac5cacd0481b07d74d07771a48dcce320bf7cf78a4508872517f2f namespace=k8s.io Jan 17 00:37:30.974193 containerd[1457]: time="2026-01-17T00:37:30.973760781Z" level=warning msg="cleaning up after shim disconnected" id=ff91730509ac5cacd0481b07d74d07771a48dcce320bf7cf78a4508872517f2f namespace=k8s.io Jan 17 00:37:30.974193 containerd[1457]: time="2026-01-17T00:37:30.973772693Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:37:31.551258 kubelet[2577]: E0117 00:37:31.550712 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:31.576388 containerd[1457]: time="2026-01-17T00:37:31.574645340Z" level=info msg="CreateContainer within sandbox \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:37:31.588062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff91730509ac5cacd0481b07d74d07771a48dcce320bf7cf78a4508872517f2f-rootfs.mount: Deactivated successfully. 
Jan 17 00:37:31.651058 containerd[1457]: time="2026-01-17T00:37:31.648195208Z" level=info msg="CreateContainer within sandbox \"e39da13ae1186b7fe21e9deba4ce2cdfd1d4ac9ec9b965dc143231f3c8594f01\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad18d59383483380a3415c6381d7ef292d4b4c83c35f34f8f6d5057822ff9417\"" Jan 17 00:37:31.651058 containerd[1457]: time="2026-01-17T00:37:31.650831117Z" level=info msg="StartContainer for \"ad18d59383483380a3415c6381d7ef292d4b4c83c35f34f8f6d5057822ff9417\"" Jan 17 00:37:31.726951 systemd[1]: Started cri-containerd-ad18d59383483380a3415c6381d7ef292d4b4c83c35f34f8f6d5057822ff9417.scope - libcontainer container ad18d59383483380a3415c6381d7ef292d4b4c83c35f34f8f6d5057822ff9417. Jan 17 00:37:31.862192 containerd[1457]: time="2026-01-17T00:37:31.861985012Z" level=info msg="StartContainer for \"ad18d59383483380a3415c6381d7ef292d4b4c83c35f34f8f6d5057822ff9417\" returns successfully" Jan 17 00:37:32.610979 kubelet[2577]: E0117 00:37:32.607525 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:32.706220 kubelet[2577]: I0117 00:37:32.704240 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cxz9b" podStartSLOduration=7.704217349 podStartE2EDuration="7.704217349s" podCreationTimestamp="2026-01-17 00:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:37:32.687293601 +0000 UTC m=+303.915884557" watchObservedRunningTime="2026-01-17 00:37:32.704217349 +0000 UTC m=+303.932808305" Jan 17 00:37:33.266629 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 17 00:37:34.430458 kubelet[2577]: E0117 00:37:34.430155 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:35.267098 kubelet[2577]: E0117 00:37:35.266970 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:35.267098 kubelet[2577]: E0117 00:37:35.266759 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:40.476100 systemd[1]: run-containerd-runc-k8s.io-ad18d59383483380a3415c6381d7ef292d4b4c83c35f34f8f6d5057822ff9417-runc.LxEQzx.mount: Deactivated successfully. Jan 17 00:37:40.704812 systemd-networkd[1391]: lxc_health: Link UP Jan 17 00:37:40.717990 systemd-networkd[1391]: lxc_health: Gained carrier Jan 17 00:37:42.275702 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jan 17 00:37:42.431737 kubelet[2577]: E0117 00:37:42.430479 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:42.690839 kubelet[2577]: E0117 00:37:42.686399 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:43.688127 kubelet[2577]: E0117 00:37:43.687307 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:45.233759 kubelet[2577]: E0117 00:37:45.233206 2577 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:37:50.158726 sshd[4805]: pam_unix(sshd:session): session closed for user core Jan 17 00:37:50.173295 
systemd[1]: sshd@42-10.0.0.79:22-10.0.0.1:44464.service: Deactivated successfully. Jan 17 00:37:50.175779 systemd[1]: session-43.scope: Deactivated successfully. Jan 17 00:37:50.178649 systemd-logind[1449]: Session 43 logged out. Waiting for processes to exit. Jan 17 00:37:50.182873 systemd-logind[1449]: Removed session 43.